36 Commits

Author SHA1 Message Date
Mikhail Chusavitin
a0a57e0969 Redesign pricelist detail: differentiated layout by source type
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 13:14:14 +03:00
Mikhail Chusavitin
b3003c4858 Redesign pricing tab: split into purchase/sale tables with unit prices
- Split into two sections: Цена покупки and Цена продажи
- All price cells show unit price (per 1 pcs); totals only in footer
- Added note "Цены указаны за 1 шт." next to each table heading
- Buy table: Своя цена redistributes proportionally with green/red coloring vs estimate; footer shows % diff
- Sale table: configurable uplift (default 1.3) applied to estimate; Склад/Конкуренты fixed at ×1.3
- Footer Склад/Конкуренты marked red with asterisk tooltip when coverage is partial
- CSV export updated: all 8 columns, SPEC-BUY/SPEC-SALE suffix, no % annotation in totals

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:55:17 +03:00
Mikhail Chusavitin
e2da8b4253 Fix competitor price display and pricelist item deduplication
- Render competitor prices in Pricing tab (all three row branches)
- Add footer total accumulation for competitor column
- Deduplicate local_pricelist_items via migration + unique index
- Use ON CONFLICT DO NOTHING in SaveLocalPricelistItems to prevent duplicates on concurrent sync

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 10:33:04 +03:00
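The deduplication-plus-guard scheme from this commit can be sketched against an in-memory SQLite database (illustrative Python; the real migration lives in the Go client, and any details beyond the documented column names are assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE local_pricelist_items (
    id INTEGER PRIMARY KEY,
    pricelist_id INTEGER,
    lot_name TEXT,
    price REAL
);
-- simulate duplicate rows produced by a concurrent sync
INSERT INTO local_pricelist_items (pricelist_id, lot_name, price) VALUES
    (1, 'CPU_X', 100), (1, 'CPU_X', 100), (1, 'MEM_Y', 50);
""")

# Migration step: keep the lowest id per (pricelist_id, lot_name), drop the rest.
con.execute("""
DELETE FROM local_pricelist_items
WHERE id NOT IN (
    SELECT MIN(id) FROM local_pricelist_items GROUP BY pricelist_id, lot_name
)
""")

# The unique index makes the invariant permanent ...
con.execute("""
CREATE UNIQUE INDEX ux_items_plist_lot
ON local_pricelist_items (pricelist_id, lot_name)
""")

# ... and ON CONFLICT DO NOTHING lets concurrent syncs race safely.
con.execute("""
INSERT INTO local_pricelist_items (pricelist_id, lot_name, price)
VALUES (1, 'CPU_X', 100)
ON CONFLICT (pricelist_id, lot_name) DO NOTHING
""")

count = con.execute("SELECT COUNT(*) FROM local_pricelist_items").fetchone()[0]
print(count)  # 2: one CPU_X row and one MEM_Y row remain
```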
Mikhail Chusavitin
06397a6bd1 Local-first runtime cleanup and recovery hardening 2026-03-07 23:18:07 +03:00
Mikhail Chusavitin
4e977737ee Document legacy BOM tables 2026-03-07 21:13:08 +03:00
Mikhail Chusavitin
7c3752f110 Add vendor workspace import and pricing export workflow 2026-03-07 21:03:40 +03:00
Mikhail Chusavitin
08ecfd0826 Merge branch 'feature/vendor-spec-import' 2026-03-06 10:54:05 +03:00
Mikhail Chusavitin
42458455f7 Fix article generator producing 1xINTEL in GPU segment
MB_ lots (e.g. MB_INTEL_..._GPU8) are incorrectly categorized as GPU
in the pricelist. Two fixes:
- Skip MB_ lots in buildGPUSegment regardless of pricelist category
- Add INTEL to vendor token skip list in parseGPUModel (was missing,
  unlike AMD/NV/NVIDIA which were already skipped)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-06 10:53:31 +03:00
Mikhail Chusavitin
8663a87d28 Fix article generator producing 1xINTEL in GPU segment
MB_ lots (e.g. MB_INTEL_..._GPU8) are incorrectly categorized as GPU
in the pricelist. Two fixes:
- Skip MB_ lots in buildGPUSegment regardless of pricelist category
- Add INTEL to vendor token skip list in parseGPUModel (was missing,
  unlike AMD/NV/NVIDIA which were already skipped)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-06 10:52:22 +03:00
2f0957ae4e Fix price levels returning empty in offline mode
CalculatePriceLevels now falls back to localDB when pricelistRepo is nil
(offline mode) to resolve the latest pricelist ID per source. Previously
all price lookups were skipped, resulting in empty prices on the pricing tab.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-05 12:47:32 +03:00
65db9b37ea Update bible submodule to latest
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 12:37:18 +03:00
ed0ef04d10 Merge feature/vendor-spec-import into main (v1.4) 2026-03-04 12:35:40 +03:00
2e0faf4aec Rename vendor price to project price, expand pricing CSV export
- Rename "Цена вендора" to "Цена проектная" in pricing tab table header and comments
- Expand pricing CSV export to include: Lot, P/N вендора, Описание, Кол-во, Цена проектная

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-04 12:27:34 +03:00
4b0879779a Update bible submodule to latest
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:27:45 +03:00
2b175a3d1e Update bible paths kit/ → rules/
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 16:57:50 +03:00
5732c75b85 Update bible submodule to latest 2026-03-01 16:41:42 +03:00
eb7c3739ce Add shared bible submodule, rename local bible to bible-local
- Add bible.git as submodule at bible/
- Rename bible/ → bible-local/ (project-specific architecture)
- Update CLAUDE.md to reference both bible/ and bible-local/
- Add AGENTS.md for Codex with same structure

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 16:41:14 +03:00
Mikhail Chusavitin
6e0335af7c Fix pricing tab warehouse totals and guard custom price DOM access 2026-02-27 16:53:34 +03:00
Mikhail Chusavitin
a42a80beb8 fix(bom): preserve local vendor spec on config import 2026-02-27 10:11:20 +03:00
Mikhail Chusavitin
586114c79c refactor(bom): enforce canonical lot_mappings persistence 2026-02-27 09:47:46 +03:00
Mikhail Chusavitin
e9230c0e58 feat(bom): canonical lot mappings and updated vendor spec docs 2026-02-25 19:07:27 +03:00
Mikhail Chusavitin
aa65fc8156 Fix project line numbering and reorder bootstrap 2026-02-25 17:19:26 +03:00
Mikhail Chusavitin
b22e961656 feat(projects): compact table layout for dates and names 2026-02-25 17:19:26 +03:00
Mikhail Chusavitin
af83818564 fix(pricelists): tolerate restricted DB grants and use embedded assets only 2026-02-25 17:19:26 +03:00
Mikhail Chusavitin
8a138327a3 fix(sync): backfill missing items for existing local pricelists 2026-02-25 17:18:57 +03:00
Mikhail Chusavitin
d1f65f6684 feat(projects): compact table layout for dates and names 2026-02-24 15:42:04 +03:00
Mikhail Chusavitin
7b371add10 Merge branch 'stable'
# Conflicts:
#	bible/03-database.md
2026-02-24 15:13:41 +03:00
d0400b18a3 feat(vendor-spec): BOM import, LOT autocomplete, pricing, partnumber_seen push
- BOM paste: auto-detect columns by content (price, qty, PN, description);
  handles $5,114.00 and European comma-decimal formats
- LOT input: HTML5 datalist rebuilt on each renderBOMTable from allComponents;
  oninput updates data only (no re-render), onchange validates+resolves
- BOM persistence: PUT handler explicitly marshals VendorSpec to JSON string
  (GORM Update does not reliably call driver.Valuer for custom types)
- BOM autosave after every resolveBOM() call
- Pricing tab: async renderPricingTab() calls /api/quote/price-levels for all
  resolved LOTs directly — Estimate prices shown even before cart apply
- Unresolved PNs pushed to qt_vendor_partnumber_seen via POST
  /api/sync/partnumber-seen (fire-and-forget from JS)
- sync.PushPartnumberSeen(): upsert with ON DUPLICATE KEY UPDATE last_seen_at
- partnumber_books: pull ALL books (not only is_active=1); re-pull items when
  header exists but item count is 0; fallback for missing description column
- partnumber_books UI: collapsible snapshot section (collapsed by default),
  pagination (10/page), sync button always visible in header
- vendorSpec handlers: use GetConfigurationByUUID + IsActive check (removed
  original_username from WHERE — GetUsername returns "" without JWT)
- bible/09-vendor-spec.md: updated with all architectural decisions

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 22:21:13 +03:00
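The tolerant price parsing this commit describes for BOM paste (US `$5,114.00` and European comma-decimal forms) can be approximated like this (illustrative Python sketch, not the project's Go code; the lone-comma heuristic is an assumption, not the project's exact rule):

```python
def parse_price(raw: str) -> float:
    """Parse '$5,114.00', '5 114,00', '1.234,56', '1234.56' into a float."""
    s = raw.strip().lstrip("$€").replace(" ", "").replace("\u00a0", "")
    if "," in s and "." in s:
        # Whichever separator comes last is the decimal point.
        if s.rfind(",") > s.rfind("."):
            s = s.replace(".", "").replace(",", ".")  # 1.234,56 -> 1234.56
        else:
            s = s.replace(",", "")                    # 5,114.00 -> 5114.00
    elif "," in s:
        # Heuristic: a lone comma is treated as a decimal separator.
        s = s.replace(",", ".")
    return float(s)

print(parse_price("$5,114.00"))  # 5114.0
print(parse_price("1.234,56"))   # 1234.56
```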
d3f1a838eb feat: add Партномера nav item and summary page
- Top nav: link to /partnumber-books
- Page: summary cards (active version, unique LOTs, total PN, primary PN)
  + searchable items table for active book
  + collapsible history of all snapshots

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:19:40 +03:00
c6086ac03a ui: simplify BOM paste to fixed positional column order
Format: PN | qty | [description] | [price]. Remove heuristic
column-type detection. Update hint text accordingly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:16:57 +03:00
a127ebea82 ui: add clear BOM button with server-side reset
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:15:13 +03:00
347599e06b ui: add format hint to BOM vendor paste area
Show supported column formats and auto-detection rules so users
know what to copy from Excel.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:13:49 +03:00
4a44d48366 docs(bible): fix and clarify SQLite migration mechanism in 03-database.md
Previous description was wrong: migrations/*.sql are MariaDB-only.
Document the actual 3-level SQLite migration flow:
1. GORM AutoMigrate (primary, runs on every start)
2. runLocalMigrations Go functions (data backfill, index creation)
3. Centralized remote migrations via qt_client_local_migrations

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:09:45 +03:00
23882637b5 fix: use AutoMigrate for new SQLite tables instead of hardcoded migrations
LocalPartnumberBook and LocalPartnumberBookItem added to AutoMigrate list
in localdb.go — consistent with all other local tables. Removed incorrectly
added addPartnumberBooks/addVendorSpecColumn functions from migrations.go
(vendor_spec column is handled by AutoMigrate via the LocalConfiguration model field).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 17:07:44 +03:00
5e56f386cc feat: implement vendor spec BOM import and PN→LOT resolution (Phase 1)
- Migration 029: local_partnumber_books, local_partnumber_book_items,
  vendor_spec TEXT column on local_configurations
- Models: LocalPartnumberBook, LocalPartnumberBookItem, VendorSpec,
  VendorSpecItem with JSON Valuer/Scanner
- Repository: PartnumberBookRepository (GetActiveBook, FindLotByPartnumber,
  SaveBook/Items, ListBooks, CountBookItems)
- Service: VendorSpecResolver 3-step resolution (book → manual suggestion
  → unresolved) + AggregateLOTs with is_primary_pn qty logic
- Sync: PullPartnumberBooks append-only pull from qt_partnumber_books
- Handlers: VendorSpecHandler (GET/PUT/resolve/apply), PartnumberBooksHandler
- Routes: /api/configs/:uuid/vendor-spec*, /api/partnumber-books,
  /api/sync/partnumber-books, /partnumber-books page
- UI: 3 top-level tabs [Estimate][BOM вендора][Ценообразование]; Excel paste,
  PN resolution, inline LOT autocomplete, pricing table
- Bible: 03-database.md updated, 09-vendor-spec.md added

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-02-21 10:22:22 +03:00
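The three-step resolution order named above (book, then manual suggestion, then unresolved) reduces to a priority lookup; a minimal sketch, with hypothetical data shapes (illustrative Python, not the project's Go service):

```python
def resolve_pn(pn, book, manual_suggestions):
    """Resolve one vendor PN to LOT mappings in documented priority order."""
    if pn in book:                        # 1. active partnumber book
        return {"status": "resolved", "lots": book[pn]}
    if pn in manual_suggestions:          # 2. user-entered manual suggestion
        return {"status": "suggested", "lots": manual_suggestions[pn]}
    return {"status": "unresolved", "lots": []}  # 3. left for review

book = {"P001": [{"lot_name": "CPU_X", "qty": 2}]}
manual = {"P002": [{"lot_name": "MEM_Y", "qty": 8}]}

print(resolve_pn("P001", book, manual)["status"])  # resolved
print(resolve_pn("P404", book, manual)["status"])  # unresolved
```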
e5b6902c9e Implement persistent Line ordering for project specs and update bible 2026-02-21 07:09:38 +03:00
84 changed files with 9130 additions and 2705 deletions

.gitmodules (vendored, new file, +3)

@@ -0,0 +1,3 @@
[submodule "bible"]
path = bible
url = https://git.mchus.pro/mchus/bible.git

AGENTS.md (new file, +11)

@@ -0,0 +1,11 @@
# QuoteForge — Instructions for Codex
## Shared Engineering Rules
Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
Start with `bible/rules/patterns/` for specific contracts.
## Project Architecture
Read `bible-local/` — QuoteForge specific architecture.
Read order: `bible-local/README.md` → relevant files for the task.
Every architectural decision specific to this project must be recorded in `bible-local/`.


@@ -1,24 +1,17 @@
-# QuoteForge - Claude Code Instructions
+# QuoteForge Instructions for Claude
-## Bible
+## Shared Engineering Rules
+Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
+Start with `bible/rules/patterns/` for specific contracts.
-The **[bible/](bible/README.md)** is the single source of truth for this project's architecture, schemas, patterns, and rules. Read it before making any changes.
+## Project Architecture
+Read `bible-local/` — QuoteForge specific architecture.
+Read order: `bible-local/README.md` → relevant files for the task.
-**Rules:**
-- Every architectural decision must be recorded in `bible/` in the same commit as the code.
-- Bible files are written and updated in **English only**.
-- Before working on the codebase, check `releases/memory/` for the latest release notes.
-## Quick Reference
+Every architectural decision specific to this project must be recorded in `bible-local/`.
 ```bash
-# Verify build
-go build ./cmd/qfs && go vet ./...
-# Run
-go run ./cmd/qfs
-make run
-# Build
-make build-release
+go build ./cmd/qfs && go vet ./... # verify
+go run ./cmd/qfs # run
+make build-release # release build
 ```

acc_lot_log_import.sql (new file, +33)

@@ -0,0 +1,33 @@
-- Generated from /Users/mchusavitin/Downloads/acc.csv
-- Unambiguous rows only. Rows from headers without a date were skipped.
INSERT INTO lot_log (`lot`, `supplier`, `date`, `price`, `quality`, `comments`) VALUES
('ACC_RMK_L_Type', '', '2024-04-01', 19, NULL, 'header supplier missing in source (45383)'),
('ACC_RMK_SLIDE', '', '2024-04-01', 31, NULL, 'header supplier missing in source (45383)'),
('NVLINK_2S_Bridge', '', '2023-01-01', 431, NULL, 'header supplier missing in source (44927)'),
('NVLINK_2S_Bridge', 'Jevy Yang', '2025-01-15', 139, NULL, NULL),
('NVLINK_2S_Bridge', 'Wendy', '2025-01-15', 143, NULL, NULL),
('NVLINK_2S_Bridge', 'HONCH (Darian)', '2025-05-06', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'HONCH (Sunny)', '2025-06-17', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Wendy', '2025-07-02', 145, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Sunny)', '2025-07-10', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Yan)', '2025-08-07', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Jevy', '2025-09-09', 155, NULL, NULL),
('NVLINK_2S_Bridge', 'Honch (Darian)', '2025-11-17', 102, NULL, NULL),
('NVLINK_2W_Bridge(H200)', '', '2023-01-01', 405, NULL, 'header supplier missing in source (44927)'),
('NVLINK_2W_Bridge(H200)', 'network logic / Stephen', '2025-02-10', 305, NULL, NULL),
('NVLINK_2W_Bridge(H200)', 'JEVY', '2025-02-18', 411, NULL, NULL),
('NVLINK_4W_Bridge(H200)', '', '2023-01-01', 820, NULL, 'header supplier missing in source (44927)'),
('NVLINK_4W_Bridge(H200)', 'network logic / Stephen', '2025-02-10', 610, NULL, NULL),
('NVLINK_4W_Bridge(H200)', 'JEVY', '2025-02-18', 754, NULL, NULL),
('25G_SFP28_MMA2P00-AS', 'HONCH (Doris)', '2025-02-19', 65, NULL, NULL),
('ACC_SuperCap', '', '2024-04-01', 59, NULL, 'header supplier missing in source (45383)'),
('ACC_SuperCap', 'Chiphome', '2025-02-28', 48, NULL, NULL);
-- Skipped source values due to missing date in header:
-- lot=ACC_RMK_L_Type; header=FOB; price=19; reason=header has supplier but no date
-- lot=ACC_RMK_SLIDE; header=FOB; price=31; reason=header has supplier but no date
-- lot=NVLINK_2S_Bridge; header=FOB; price=155; reason=header has supplier but no date
-- lot=NVLINK_2W_Bridge(H200); header=FOB; price=405; reason=header has supplier but no date
-- lot=NVLINK_4W_Bridge(H200); header=FOB; price=754; reason=header has supplier but no date
-- lot=25G_SFP28_MMA2P00-AS; header=FOB; price=65; reason=header has supplier but no date
-- lot=ACC_SuperCap; header=FOB; price=48; reason=header has supplier but no date

bible (submodule, +1)

Submodule bible added at 5a69e0bba8


@@ -3,7 +3,7 @@
## What is QuoteForge
A corporate server configuration and quotation tool.
-Operates in **strict local-first** mode: all user operations go through local SQLite; MariaDB is used only for synchronization.
+Operates in **strict local-first** mode: all user operations go through local SQLite; MariaDB is used only by synchronization and dedicated setup/migration tooling.
---
@@ -18,14 +18,14 @@ Operates in **strict local-first** mode: all user operations go through local SQ
- Full offline operation — continue working without network, sync later
- Guarded synchronization — sync is blocked by preflight check if local schema is not ready
-### User Roles
+### Local Client Security Model
-| Role | Permissions |
-|------|-------------|
-| `viewer` | View, create quotes, export |
-| `editor` | + save configurations |
-| `pricing_admin` | + manage prices and alerts |
-| `admin` | Full access, user management |
+QuoteForge is currently a **single-user thick client** bound to `localhost`.
+- The local HTTP/UI layer is not treated as a multi-user security boundary.
+- RBAC is not part of the active product contract for the local client.
+- The authoritative authentication boundary is the remote sync server and its DB credentials captured during setup.
+- If the app is ever exposed beyond `localhost`, auth/RBAC must be reintroduced as an enforced perimeter before release.
### Price Freshness Indicators
@@ -45,7 +45,7 @@ Operates in **strict local-first** mode: all user operations go through local SQ
| Backend | Go 1.22+, Gin, GORM |
| Frontend | HTML, Tailwind CSS, htmx |
| Local DB | SQLite (`qfs.db`) |
-| Server DB | MariaDB 11+ (sync + server admin) |
+| Server DB | MariaDB 11+ (sync transport only for app runtime) |
| Export | encoding/csv, excelize (XLSX) |
---
@@ -82,7 +82,6 @@ quoteforge/
│ ├── db/ # DB initialization
│ ├── handlers/ # HTTP handlers
│ ├── localdb/ # SQLite layer
-│ ├── lotmatch/ # Lot matching logic
│ ├── middleware/ # Auth, CORS, etc.
│ ├── models/ # GORM models
│ ├── repository/ # Repository layer
@@ -101,19 +100,31 @@ quoteforge/
## Integration with Existing DB
-QuoteForge integrates with the existing `RFQ_LOG` database:
+QuoteForge integrates with the existing `RFQ_LOG` database.
Hard boundary:
- normal runtime HTTP handlers, UI flows, pricing, export, BOM resolution, and project/config CRUD must use SQLite only;
- MariaDB access is allowed only inside `internal/services/sync/*` and dedicated setup/migration tools under `cmd/`;
- any new direct MariaDB query in non-sync runtime code is an architectural violation.
**Read-only:**
- `lot` — component catalog
- `qt_lot_metadata` — extended component data
- `qt_categories` — categories
- `qt_pricelists`, `qt_pricelist_items` — pricelists
- `stock_log` — stock quantities consumed during sync enrichment
- `qt_partnumber_books`, `qt_partnumber_book_items` — partnumber book snapshots consumed during sync pull
**Read + Write:**
- `qt_configurations` — configurations
- `qt_projects` — projects
**Sync service tables:**
- `qt_client_local_migrations` — migration catalog (SELECT only)
-- `qt_client_schema_state` — applied migrations state
+- `qt_client_schema_state` — applied migrations state and operational client status per device (`username + hostname`)
Fields written by QuoteForge:
`app_version`, `last_sync_at`, `last_sync_status`,
`pending_changes_count`, `pending_errors_count`, `configurations_count`, `projects_count`,
`estimate_pricelist_version`, `warehouse_pricelist_version`, `competitor_pricelist_version`,
`last_sync_error_code`, `last_sync_error_text`, `last_checked_at`, `updated_at`
- `qt_pricelist_sync_status` — pricelist sync status


@@ -21,6 +21,36 @@ MariaDB (RFQ_LOG) ← pull/push only
- If MariaDB is unavailable → local work continues without restrictions
- Changes are queued in `pending_changes` and pushed on next sync
## MariaDB Boundary
MariaDB is not part of the runtime read/write path for user features.
Hard rules:
- HTTP handlers, web pages, quote calculation, export, vendor BOM resolution, pricelist browsing, project browsing, and configuration CRUD must read/write SQLite only.
- MariaDB access from the app runtime is allowed only inside the sync subsystem (`internal/services/sync/*`) for explicit pull/push work.
- Dedicated tooling under `cmd/migrate` and `cmd/migrate_ops_projects` may access MariaDB for operator-run schema/data migration tasks.
- Setup may test/store connection settings, but after setup the application must treat MariaDB as sync transport only.
- Any new repository/service/handler that issues MariaDB queries outside sync is a regression and must be rejected in review.
- Local SQLite migrations are code-defined only (`AutoMigrate` + `runLocalMigrations`); there is no server-driven client migration registry.
- Read-only local sync caches are disposable. If a local cache table cannot be migrated safely at startup, the client may quarantine/reset that cache and continue booting.
Forbidden patterns:
- calling `connMgr.GetDB()` from non-sync runtime business code;
- constructing MariaDB-backed repositories in handlers for normal user requests;
- using MariaDB as online fallback for reads when local SQLite already contains the synced dataset;
- adding UI/API features that depend on live MariaDB availability.
## Local Client Boundary
The running app is a localhost-only thick client.
- Browser/UI requests on the local machine are treated as part of the same trusted user session.
- Local routes are not modeled as a hardened multi-user API perimeter.
- Authorization to the central server happens through the saved MariaDB connection configured during setup.
- Any future deployment that binds beyond `127.0.0.1` must add enforced auth/RBAC before exposure.
---
## Synchronization
@@ -61,6 +91,7 @@ pending_changes pending_changes
| Projects | Client ↔ Server ↔ Other Clients |
| Pricelists | Server → Clients only (no push) |
| Components | Server → Clients only |
| Partnumber books | Server → Clients only |
Local pricelists not present on the server and not referenced by active configurations are deleted automatically on sync.
@@ -75,8 +106,7 @@ Configurations and projects are **never hard-deleted**. Deletion is archive via
Before every push/pull, a preflight check runs:
1. Is the server (MariaDB) reachable?
-2. Can centralized local DB migrations be applied?
-3. Does the application version satisfy `min_app_version` of pending migrations?
+2. Is the local client schema initialized and writable?
**If the check fails:**
- Local CRUD continues without restriction
@@ -91,6 +121,7 @@ Before every push/pull, a preflight check runs:
**Prices come only from `local_pricelist_items`.**
Components (`local_components`) are metadata-only — they contain no pricing information.
Stock enrichment for pricelist rows is persisted into `local_pricelist_items` during sync; UI/runtime must not resolve it live from MariaDB.
### Lookup Pattern
@@ -144,7 +175,7 @@ This prevents selecting empty/incomplete snapshots and removes nondeterministic
### Principle
-Append-only: every save creates an immutable snapshot in `local_configuration_versions`.
+Append-only for **spec+price** changes: immutable snapshots are stored in `local_configuration_versions`.
```
local_configurations
@@ -153,9 +184,11 @@ local_configurations
local_configuration_versions (v1)
```
-- `version_no = max + 1` on every save
+- `version_no = max + 1` when configuration **spec+price** changes
- Old versions are never modified or deleted in normal flow
- Rollback does **not** rewind history — it creates a **new** version from the snapshot
- Operational updates (`line_no` reorder, server count, project move, rename)
are synced via `pending_changes` but do **not** create a new revision snapshot
### Rollback
@@ -181,6 +214,19 @@ local → pending → synced
---
## Project Specification Ordering (`Line`)
- Each project configuration has persistent `line_no` (`10,20,30...`) in both SQLite and MariaDB.
- Project list ordering is deterministic:
`line_no ASC`, then `created_at DESC`, then `id DESC`.
- Drag-and-drop reorder in project UI updates `line_no` for active project configurations.
- Reorder writes are queued as configuration `update` events in `pending_changes`
without creating new configuration versions.
- Backward compatibility: if remote MariaDB schema does not yet include `line_no`,
sync falls back to create/update without `line_no` instead of failing.
---
## Sync Payload for Versioning
Events in `pending_changes` for configurations contain:

bible-local/03-database.md (new file, +267)

@@ -0,0 +1,267 @@
# 03 — Database
## SQLite (local, client-side)
File: `qfs.db` in the user-state directory (see [05-config.md](05-config.md)).
### Tables
#### Components and Reference Data
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_components` | Component metadata (NO prices) | `lot_name` (PK), `lot_description`, `category`, `model` |
| `connection_settings` | MariaDB connection settings | key-value store |
| `app_settings` | Application settings | `key` (PK), `value`, `updated_at` |
Read-only cache contract:
- `local_components`, `local_pricelists`, `local_pricelist_items`, `local_partnumber_books`, and `local_partnumber_book_items` are synchronized caches, not user-authored data.
- Startup must prefer application availability over preserving a broken cache schema.
- If one of these tables cannot be migrated safely, the client may quarantine or drop it and recreate it empty; the next sync repopulates it.
#### Pricelists
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_pricelists` | Pricelist headers | `id`, `server_id` (unique), `source`, `version`, `created_at` |
| `local_pricelist_items` | Pricelist line items ← **sole source of prices** | `id`, `pricelist_id` (FK), `lot_name`, `price`, `lot_category` |
#### Partnumber Books (PN → LOT mapping, pull-only from PriceForge)
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_partnumber_books` | Version snapshots of PN→LOT mappings | `id`, `server_id` (unique), `version`, `created_at`, `is_active` |
| `local_partnumber_book_items` | Canonical PN catalog rows | `id`, `partnumber`, `lots_json`, `description` |
Active book: `WHERE is_active=1 ORDER BY created_at DESC, id DESC LIMIT 1`
#### Configurations and Projects
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_configurations` | Saved configurations | `id`, `uuid` (unique), `items` (JSON), `vendor_spec` (JSON: PN/qty/description + canonical `lot_mappings[]`), `line_no`, `pricelist_id`, `warehouse_pricelist_id`, `competitor_pricelist_id`, `current_version_id`, `sync_status` |
| `local_configuration_versions` | Immutable snapshots (revisions) | `id`, `configuration_id` (FK), `version_no`, `data` (JSON), `change_note`, `created_at` |
| `local_projects` | Projects | `id`, `uuid` (unique), `name`, `code`, `sync_status` |
#### Sync
| Table | Purpose |
|-------|---------|
| `pending_changes` | Queue of changes to push to MariaDB |
| `local_schema_migrations` | Applied migrations (idempotency guard) |
---
### Key SQLite Indexes
```sql
-- Pricelists
INDEX local_pricelist_items(pricelist_id)
UNIQUE INDEX local_pricelists(server_id)
INDEX local_pricelists(source, created_at) -- used for "latest by source" queries
-- latest-by-source runtime query also applies deterministic tie-break by id DESC
-- Configurations
INDEX local_configurations(pricelist_id)
INDEX local_configurations(warehouse_pricelist_id)
INDEX local_configurations(competitor_pricelist_id)
INDEX local_configurations(project_uuid, line_no) -- project ordering (Line column)
UNIQUE INDEX local_configurations(uuid)
```
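The latest-by-source lookup with its `id DESC` tie-break can be exercised directly (illustrative Python against in-memory SQLite; the real query lives in the Go client):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE local_pricelists (
    id INTEGER PRIMARY KEY,
    server_id INTEGER UNIQUE,
    source TEXT,
    created_at TEXT
);
INSERT INTO local_pricelists (server_id, source, created_at) VALUES
    (10, 'estimate', '2026-03-01'),
    (11, 'estimate', '2026-03-05'),
    (12, 'estimate', '2026-03-05');  -- same timestamp: id DESC breaks the tie
""")

row = con.execute("""
SELECT id, server_id FROM local_pricelists
WHERE source = ?
ORDER BY created_at DESC, id DESC
LIMIT 1
""", ("estimate",)).fetchone()
print(row)  # (3, 12): newest created_at, highest id on a tie
```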
---
### `items` JSON Structure in Configurations
```json
{
"items": [
{
"lot_name": "CPU_AMD_9654",
"quantity": 2,
"unit_price": 123456.78,
"section": "Processors"
}
]
}
```
Prices are stored inside the `items` JSON field and refreshed from the pricelist on configuration refresh.
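The refresh step can be sketched as a pure rewrite of the `items` JSON from a price lookup (illustrative Python; `refresh_prices` is a hypothetical helper name, not the project's API):

```python
import json

def refresh_prices(items_json: str, price_lookup: dict) -> str:
    """Rewrite unit_price in a configuration's items JSON from the pricelist."""
    doc = json.loads(items_json)
    for item in doc["items"]:
        if item["lot_name"] in price_lookup:
            item["unit_price"] = price_lookup[item["lot_name"]]
    return json.dumps(doc)

stored = ('{"items": [{"lot_name": "CPU_AMD_9654", "quantity": 2, '
          '"unit_price": 100.0, "section": "Processors"}]}')
fresh = json.loads(refresh_prices(stored, {"CPU_AMD_9654": 123456.78}))
print(fresh["items"][0]["unit_price"])  # 123456.78
```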
---
## MariaDB (server-side, sync-only)
Database: `RFQ_LOG`
### Tables and Permissions
| Table | Purpose | Permissions |
|-------|---------|-------------|
| `lot` | Component catalog | SELECT |
| `qt_lot_metadata` | Extended component data | SELECT |
| `qt_categories` | Component categories | SELECT |
| `qt_pricelists` | Pricelists | SELECT |
| `qt_pricelist_items` | Pricelist line items | SELECT |
| `stock_log` | Latest stock qty by partnumber (pricelist enrichment during sync only) | SELECT |
| `qt_configurations` | Saved configurations (includes `line_no`) | SELECT, INSERT, UPDATE |
| `qt_projects` | Projects | SELECT, INSERT, UPDATE |
| `qt_client_schema_state` | Applied migrations state + client operational status per `username + hostname` | SELECT, INSERT, UPDATE |
| `qt_pricelist_sync_status` | Pricelist sync status | SELECT, INSERT, UPDATE |
| `qt_partnumber_books` | Partnumber book headers with snapshot membership in `partnumbers_json` (written by PriceForge) | SELECT |
| `qt_partnumber_book_items` | Canonical PN catalog with `lots_json` composition (written by PriceForge) | SELECT |
| `qt_vendor_partnumber_seen` | Vendor PN tracking for unresolved/ignored BOM rows (`is_ignored`) | INSERT only for new `partnumber`; existing rows must not be modified |
Legacy server tables not used by QuoteForge runtime anymore:
- `qt_bom`
- `qt_lot_bundles`
- `qt_lot_bundle_items`
QuoteForge canonical BOM storage is:
- `qt_configurations.vendor_spec`
- row-level PN -> multiple LOT decomposition in `vendor_spec[].lot_mappings[]`
Partnumber book server read contract:
1. Read active or target book from `qt_partnumber_books`.
2. Parse `partnumbers_json`.
3. Load payloads from `qt_partnumber_book_items WHERE partnumber IN (...)`.
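The three-step read contract above, sketched against a toy copy of the two tables (illustrative Python; sample rows are invented):

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE qt_partnumber_books (id INTEGER PRIMARY KEY, is_active INTEGER,
                                  partnumbers_json TEXT);
CREATE TABLE qt_partnumber_book_items (partnumber TEXT PRIMARY KEY,
                                       lots_json TEXT, description TEXT);
INSERT INTO qt_partnumber_books VALUES (1, 1, '["P001", "P002"]');
INSERT INTO qt_partnumber_book_items VALUES
    ('P001', '[{"lot_name":"CPU_X","qty":2}]', 'CPU kit'),
    ('P002', '[{"lot_name":"MEM_Y","qty":8}]', 'Memory kit'),
    ('P999', '[]', 'not a member of this book');
""")

# 1. read the active book header; 2. parse its membership list
members = json.loads(con.execute(
    "SELECT partnumbers_json FROM qt_partnumber_books WHERE is_active=1"
).fetchone()[0])

# 3. load payloads for exactly those partnumbers
placeholders = ",".join("?" * len(members))
rows = con.execute(
    f"SELECT partnumber, lots_json FROM qt_partnumber_book_items "
    f"WHERE partnumber IN ({placeholders})", members).fetchall()
print(sorted(r[0] for r in rows))  # ['P001', 'P002']
```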
Pricelist stock enrichment contract:
1. Sync pulls base pricelist rows from `qt_pricelist_items`.
2. Sync reads latest stock quantities from `stock_log`.
3. Sync resolves `partnumber -> lot` through the local mirror of `qt_partnumber_book_items` (`local_partnumber_book_items.lots_json`).
4. Sync stores enriched `available_qty` and `partnumbers` into `local_pricelist_items`.
Runtime rule:
- pricelist UI and quote logic read only `local_pricelist_items`;
- runtime code must not query `stock_log`, `qt_pricelist_items`, or `qt_partnumber_book_items` directly outside sync.
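A minimal sketch of the sync-side resolution step, assuming `available_qty` sums stock over all partnumbers mapped to a lot (that aggregation rule is an assumption, not stated by the contract):

```python
import json

# local mirror rows: one row per partnumber, lots_json as stored by sync
book_items = {
    "P001": '[{"lot_name":"CPU_X","qty":2}]',
    "P002": '[{"lot_name":"MEM_Y","qty":8}]',
}
stock_log = {"P001": 14, "P002": 96}  # latest qty per partnumber

def enrich(lot_name):
    """Collect available_qty and contributing partnumbers for one lot."""
    qty, pns = 0, []
    for pn, lots in book_items.items():
        for mapping in json.loads(lots):
            if mapping["lot_name"] == lot_name:
                qty += stock_log.get(pn, 0)
                pns.append(pn)
    return {"available_qty": qty, "partnumbers": pns}

print(enrich("CPU_X"))  # {'available_qty': 14, 'partnumbers': ['P001']}
```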
`qt_partnumber_book_items` no longer contains `book_id` or `lot_name`.
It stores one row per `partnumber` with:
- `partnumber`
- `lots_json` as `[{"lot_name":"CPU_X","qty":2}, ...]`
- `description`
`qt_client_schema_state` current contract:
- identity key: `username + hostname`
- client/runtime state:
`app_version`, `last_checked_at`, `updated_at`
- operational state:
`last_sync_at`, `last_sync_status`
- queue health:
`pending_changes_count`, `pending_errors_count`
- local dataset size:
`configurations_count`, `projects_count`
- price context:
`estimate_pricelist_version`, `warehouse_pricelist_version`, `competitor_pricelist_version`
- last known sync problem:
`last_sync_error_code`, `last_sync_error_text`
`last_sync_error_*` source priority:
1. blocked readiness state from `local_sync_guard_state`
2. latest non-empty `pending_changes.last_error`
3. `NULL` when no known sync problem exists
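The priority chain can be written as a small fallback function (illustrative Python; `PENDING_PUSH_FAILED` is a hypothetical error code, and the newest-first ordering of queue errors is an assumption):

```python
def last_sync_error(guard_state, pending_errors):
    """Resolve last_sync_error_code/text by the documented source priority."""
    # 1. a blocked readiness state from local_sync_guard_state wins
    if guard_state and guard_state.get("blocked"):
        return guard_state["code"], guard_state["text"]
    # 2. otherwise the latest non-empty pending_changes.last_error
    for err in pending_errors:  # assumed ordered newest first
        if err:
            return "PENDING_PUSH_FAILED", err  # hypothetical code
    # 3. no known sync problem
    return None, None

print(last_sync_error(None, ["", "duplicate key", ""]))
# ('PENDING_PUSH_FAILED', 'duplicate key')
```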
### Grant Permissions to Existing User
```sql
GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO '<DB_USER>'@'%';
GRANT INSERT, UPDATE ON RFQ_LOG.qt_vendor_partnumber_seen TO '<DB_USER>'@'%';
FLUSH PRIVILEGES;
```
### Create a New User
```sql
CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';
GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO 'quote_user'@'%';
GRANT INSERT, UPDATE ON RFQ_LOG.qt_vendor_partnumber_seen TO 'quote_user'@'%';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'quote_user'@'%';
```
**Note:** If pricelists sync but stock enrichment is empty, verify `SELECT` on `qt_pricelist_items`, `qt_partnumber_books`, `qt_partnumber_book_items`, and `stock_log`.
**Note:** If you see `Access denied for user ...@'<ip>'`, check for conflicting user entries (user@localhost vs user@'%').
---
## Migrations
### SQLite Migrations (local) — two levels, run on every startup
**1. GORM AutoMigrate** (`internal/localdb/localdb.go`) — the first and primary level.
The list of Go models is passed to `db.AutoMigrate(...)`. GORM creates missing tables and adds new columns; it **never drops** columns or tables.
Local SQLite partnumber book cache contract:
- `local_partnumber_books.partnumbers_json` stores PN membership for a pulled book.
- `local_partnumber_book_items` is a deduplicated local catalog by `partnumber`.
- `local_partnumber_book_items.lots_json` mirrors the server `lots_json` payload.
- SQLite migration `2026_03_07_local_partnumber_book_catalog` rebuilds old `book_id + lot_name` rows into the new local cache shape.
→ To add a new table or column, it is enough to add the model/field and include the model in AutoMigrate.
**2. `runLocalMigrations`** (`internal/localdb/migrations.go`) — the second level, for operations AutoMigrate cannot perform: data backfill, table rebuilds, index creation.
Each function runs exactly once; idempotency is enforced by recording its `id` in `local_schema_migrations`.
QuoteForge does not use centralized server-driven SQLite migrations.
All local SQLite schema/data migrations live in the client codebase.
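A minimal sketch of the second-level idempotency guard, using an in-memory stand-in for `local_schema_migrations` (names are illustrative, not the actual `internal/localdb` code):

```go
package main

import "fmt"

// migration models one second-level entry (the real list lives in
// internal/localdb/migrations.go).
type migration struct {
	ID  string
	Run func() error
}

// applyOnce runs every migration whose ID is not yet in the applied set,
// then records it, mirroring the guard backed by local_schema_migrations.
func applyOnce(applied map[string]bool, migs []migration) ([]string, error) {
	var ran []string
	for _, m := range migs {
		if applied[m.ID] {
			continue // already applied on a previous startup
		}
		if err := m.Run(); err != nil {
			return ran, fmt.Errorf("migration %s: %w", m.ID, err)
		}
		applied[m.ID] = true
		ran = append(ran, m.ID)
	}
	return ran, nil
}

func main() {
	applied := map[string]bool{"2026_03_07_local_partnumber_book_catalog": true}
	migs := []migration{
		{ID: "2026_03_07_local_partnumber_book_catalog", Run: func() error { return nil }},
		{ID: "2026_03_13_example_backfill", Run: func() error { return nil }}, // hypothetical ID
	}
	ran, _ := applyOnce(applied, migs)
	fmt.Println(ran) // only the migration that has not run yet
}
```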
### MariaDB Migrations (server-side)
- Stored in `migrations/` (SQL files)
- Applied via `-migrate` flag
- `min_app_version` — minimum app version required for the migration
---
## DB Debugging
```bash
# Inspect schema
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_components"
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_configurations"
# Check pricelist item count
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM local_pricelist_items"
# Check pending sync queue
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM pending_changes"
```


@@ -57,6 +57,8 @@
| GET | `/api/configs/:uuid/versions` | List versions |
| GET | `/api/configs/:uuid/versions/:version` | Get specific version |
The `line` field in configuration payloads is backed by the persistent `line_no` column in the DB.
### Projects
| Method | Endpoint | Purpose |
@@ -67,6 +69,15 @@
| PUT | `/api/projects/:uuid` | Update project |
| DELETE | `/api/projects/:uuid` | Archive project variant (soft-delete via `is_active=false`; fails if project has no `variant` set — main projects cannot be deleted this way) |
| GET | `/api/projects/:uuid/configs` | Project configurations |
| PATCH | `/api/projects/:uuid/configs/reorder` | Reorder active project configurations (`ordered_uuids`) and persist `line_no` |
| POST | `/api/projects/:uuid/vendor-import` | Import a vendor `CFXML` workspace into the existing project |
`GET /api/projects/:uuid/configs` ordering:
`line ASC`, then `created_at DESC`, then `id DESC`.
`POST /api/projects/:uuid/vendor-import` accepts `multipart/form-data` with one required file field:
- `file` — vendor configurator export in `CFXML` format
### Sync
@@ -83,14 +94,45 @@
| POST | `/api/sync/pricelists` | Pull pricelists | MariaDB → SQLite |
| POST | `/api/sync/all` | Full sync: push + pull + import | bidirectional |
| POST | `/api/sync/repair` | Repair broken entries in pending_changes | SQLite |
| POST | `/api/sync/partnumber-seen` | Report unresolved/ignored vendor PNs for server-side tracking | QuoteForge → MariaDB |
**If sync is blocked by the readiness guard:** all POST sync methods return `423 Locked` with `reason_code` and `reason_text`.
### Vendor Spec (BOM)
| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/configs/:uuid/vendor-spec` | Fetch stored vendor BOM |
| PUT | `/api/configs/:uuid/vendor-spec` | Replace vendor BOM (full update) |
| POST | `/api/configs/:uuid/vendor-spec/resolve` | Resolve PNs → LOTs (read-only) |
| POST | `/api/configs/:uuid/vendor-spec/apply` | Apply resolved LOTs to cart |
Notes:
- `GET` / `PUT /api/configs/:uuid/vendor-spec` exchange normalized BOM rows (`vendor_spec`), not raw pasted Excel layout.
- BOM row contract stores canonical LOT mapping list as seen in BOM UI:
- `lot_mappings[]`
- each mapping contains `lot_name` + `quantity_per_pn`
- `POST /api/configs/:uuid/vendor-spec/apply` rebuilds cart items from explicit BOM mappings:
- all LOTs from `lot_mappings[]`
### Partnumber Books (read-only)
| Method | Endpoint | Purpose |
|--------|----------|---------|
| GET | `/api/partnumber-books` | List local book snapshots |
| GET | `/api/partnumber-books/:id` | Items for a book by `server_id` (`page`, `per_page`, `search`) |
| POST | `/api/sync/partnumber-books` | Pull book snapshots from MariaDB |
See [09-vendor-spec.md](09-vendor-spec.md) for schema and pull logic.
See [09-vendor-spec.md](09-vendor-spec.md) for `vendor_spec` JSON schema and BOM UI mapping contract.
### Export
| Method | Endpoint | Purpose |
|--------|----------|---------|
| POST | `/api/export/csv` | Export configuration to CSV |
| GET | `/api/projects/:uuid/export` | Legacy project CSV export in block BOM format |
| POST | `/api/projects/:uuid/export` | Project CSV export in pricing-tab format with selectable columns (`include_lot`, `include_bom`, `include_estimate`, `include_stock`, `include_competitor`) |
**Export filename format:** `YYYY-MM-DD (ProjectCode) ConfigName Article.csv`
(uses `project.Code`, not `project.Name`)
@@ -108,6 +150,7 @@
| `/projects/:uuid` | Project details |
| `/pricelists` | Pricelist list |
| `/pricelists/:id` | Pricelist details |
| `/partnumber-books` | Partnumber books (active book summary + snapshot history) |
| `/setup` | Connection settings |
---


@@ -23,6 +23,18 @@ Override: `-config <path>` or `QFS_CONFIG_PATH`.
**Important:** `config.yaml` is a runtime user file — it is **not stored in the repository**.
`config.example.yaml` is the only config template in the repo.
### Local encryption key
Saved MariaDB credentials in SQLite are encrypted with:
1. `QUOTEFORGE_ENCRYPTION_KEY` if explicitly provided, otherwise
2. an application-managed random key file stored at `<state dir>/local_encryption.key`.
Rules:
- The key file is created automatically with mode `0600`.
- The key file is not committed and is not included in normal backups.
- Restoring `qfs.db` on another machine requires re-entering DB credentials unless the key file is migrated separately.
---
## config.yaml Structure
@@ -53,12 +65,12 @@ backup:
| `QFS_CONFIG_PATH` | Full path to `config.yaml` | OS-specific user state dir |
| `QFS_BACKUP_DIR` | Root directory for rotating backups | `<db dir>/backups` |
| `QFS_BACKUP_DISABLE` | Disable automatic backups | — |
| `QUOTEFORGE_ENCRYPTION_KEY` | Explicit override for local credential encryption key | app-managed key file |
| `QF_DB_HOST` | MariaDB host | localhost |
| `QF_DB_PORT` | MariaDB port | 3306 |
| `QF_DB_NAME` | Database name | RFQ_LOG |
| `QF_DB_USER` | DB user | — |
| `QF_DB_PASSWORD` | DB password | — |
| `QF_JWT_SECRET` | JWT secret | — |
| `QF_SERVER_PORT` | HTTP server port | 8080 |
`QFS_BACKUP_DISABLE` accepts: `1`, `true`, `yes`.


@@ -34,6 +34,11 @@ backup:
- `QFS_BACKUP_DIR` — backup root directory (default: `<db dir>/backups`)
- `QFS_BACKUP_DISABLE` — disable backups (`1/true/yes`)
**Safety rules:**
- Backup root must resolve outside any git worktree.
- If `qfs.db` is placed inside a repository checkout, default backups are rejected until `QFS_BACKUP_DIR` points outside the repo.
- Backup archives intentionally do **not** include `local_encryption.key`; restored installations on another machine must re-enter DB credentials.
---
## Behavior
@@ -76,6 +81,7 @@ type BackupConfig struct {
- `.period.json` is the marker that prevents duplicate backups within the same period
- Archive filenames contain only the date; uniqueness is ensured by per-period directories + the period marker
- When changing naming or retention: update both the filename logic and the prune logic together
- Git worktree detection is path-based (`.git` ancestor check) and blocks backup creation inside the repo tree
---


@@ -34,7 +34,7 @@ make help # All available commands
## Code Style
- **Formatting:** `gofmt` (mandatory)
- **Logging:** `slog` only (structured logging to the binary's stdout/stderr). No `console.log` or any other logging in browser-side JS — the browser console is never used for diagnostics.
- **Errors:** explicit wrapping with context (`fmt.Errorf("context: %w", err)`)
- **Style:** no unnecessary abstractions; minimum code for the task
@@ -60,6 +60,11 @@ The following components were **intentionally removed** and must not be brought
- Any sync changes must preserve local-first behavior
- Local CRUD must not be blocked when MariaDB is unavailable
- Runtime business code must not query MariaDB directly; all normal reads/writes go through SQLite snapshots
- Direct MariaDB access is allowed only in `internal/services/sync/*` and dedicated setup/migration tools under `cmd/`
- `connMgr.GetDB()` in handlers/services outside sync is a code review failure unless the code is strictly setup or operator tooling
- Local SQLite migrations must be implemented in code; do not add a server-side registry of client SQLite SQL patches
- Read-only local cache tables may be reset during startup recovery if migration fails; do not apply that strategy to user-authored tables like configurations, projects, pending changes, or connection settings
### Formats and UI
@@ -108,7 +113,7 @@ if found && price > 0 {
1. **`CurrentPrice` removed from components** — any code using it will fail to compile
2. **`HasPrice` filter removed** — `component.go ListComponents` no longer supports this filter
3. **Quote calculation:** always SQLite-only; do not add a live MariaDB fallback
4. **Items JSON:** prices are stored in the `items` field of the configuration, not fetched from components
5. **Migrations are additive:** already-applied migrations are skipped (checked by `id` in `local_schema_migrations`)
6. **`SyncedAt` removed:** last component sync time is now in `app_settings` (key=`last_component_sync`)


@@ -0,0 +1,568 @@
# 09 — Vendor Spec (BOM Import)
## Overview
The vendor spec feature allows importing a vendor BOM (Bill of Materials) into a configuration. It maps vendor part numbers (PN) to internal LOT names using an active partnumber book (snapshot pulled from PriceForge), then aggregates quantities to populate the Estimate (cart).
---
## Architecture
### Storage
| Data | Storage | Sync direction |
|------|---------|---------------|
| `vendor_spec` JSON | `local_configurations.vendor_spec` (TEXT, JSON-encoded) | Two-way via `pending_changes` |
| Partnumber book snapshots | `local_partnumber_books` + `local_partnumber_book_items` | Pull-only from PriceForge |
`vendor_spec` is a JSON array of `VendorSpecItem` objects stored inside the configuration row.
It syncs to MariaDB `qt_configurations.vendor_spec` via the existing pending_changes mechanism.
Legacy storage note:
- QuoteForge does not use `qt_bom`
- QuoteForge does not use `qt_lot_bundles`
- QuoteForge does not use `qt_lot_bundle_items`
The only canonical persisted BOM contract for QuoteForge is `qt_configurations.vendor_spec`.
### `vendor_spec` JSON Schema
```json
[
{
"sort_order": 10,
"vendor_partnumber": "ABC-123",
"quantity": 2,
"description": "...",
"unit_price": 4500.00,
"total_price": 9000.00,
"lot_mappings": [
{ "lot_name": "LOT_A", "quantity_per_pn": 1 },
{ "lot_name": "LOT_B", "quantity_per_pn": 2 }
]
}
]
```
`lot_mappings[]` is the canonical persisted LOT mapping list for a BOM row.
Each mapping entry stores:
- `lot_name`
- `quantity_per_pn` (how many units of this LOT are included in **one vendor PN**)
### PN → LOT Mapping Contract (single LOT, multiplier, bundle)
QuoteForge expects the server to return/store BOM rows (`vendor_spec`) using a single canonical mapping list:
- `lot_mappings[]` contains **all** LOT mappings for the PN row (single LOT and bundle cases alike)
- the list stores exactly what the user sees in BOM (LOT + "LOT в 1 PN")
- the DB contract does **not** split mappings into "base LOT" vs "bundle LOTs"
#### Final quantity contribution to Estimate
For one BOM row with vendor PN quantity `pn_qty`:
- each mapping contribution:
- `lot_qty = pn_qty * lot_mappings[i].quantity_per_pn`
#### Example: one PN maps to multiple LOTs
```json
{
"vendor_partnumber": "SYS-821GE-TNHR",
"quantity": 3,
"lot_mappings": [
{ "lot_name": "CHASSIS_X13_8GPU", "quantity_per_pn": 1 },
{ "lot_name": "PS_3000W_Titanium", "quantity_per_pn": 2 },
{ "lot_name": "RAILKIT_X13", "quantity_per_pn": 1 }
]
}
```
This row contributes to Estimate:
- `CHASSIS_X13_8GPU` → `3 * 1 = 3`
- `PS_3000W_Titanium` → `3 * 2 = 6`
- `RAILKIT_X13` → `3 * 1 = 3`
---
## Partnumber Books (Snapshots)
Partnumber books are immutable versioned snapshots of the global PN→LOT mapping table, analogous to pricelists. PriceForge creates new snapshots; QuoteForge only pulls and reads them.
### SQLite (local mirror)
```sql
CREATE TABLE local_partnumber_books (
id INTEGER PRIMARY KEY AUTOINCREMENT,
server_id INTEGER UNIQUE NOT NULL, -- id from qt_partnumber_books
version TEXT NOT NULL, -- format YYYY-MM-DD-NNN
created_at DATETIME NOT NULL,
is_active INTEGER NOT NULL DEFAULT 1
);
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL,
lots_json TEXT NOT NULL,
description TEXT
);
CREATE UNIQUE INDEX idx_local_book_pn ON local_partnumber_book_items(partnumber);
```
**Active book query:** `WHERE is_active=1 ORDER BY created_at DESC, id DESC LIMIT 1`
**Schema creation:** GORM AutoMigrate (not `runLocalMigrations`).
### MariaDB (managed exclusively by PriceForge)
```sql
CREATE TABLE qt_partnumber_books (
id INT AUTO_INCREMENT PRIMARY KEY,
version VARCHAR(50) NOT NULL,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
is_active TINYINT(1) NOT NULL DEFAULT 1,
partnumbers_json LONGTEXT NOT NULL
);
CREATE TABLE qt_partnumber_book_items (
id INT AUTO_INCREMENT PRIMARY KEY,
partnumber VARCHAR(255) NOT NULL,
lots_json LONGTEXT NOT NULL,
description VARCHAR(10000) NULL,
UNIQUE KEY uq_qt_partnumber_book_items_partnumber (partnumber)
);
ALTER TABLE qt_configurations ADD COLUMN vendor_spec JSON NULL;
```
QuoteForge has `SELECT` permission only on `qt_partnumber_books` and `qt_partnumber_book_items`. All writes are managed by PriceForge.
**Grant (add to existing user setup):**
```sql
GRANT SELECT ON RFQ_LOG.qt_partnumber_books TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_partnumber_book_items TO '<DB_USER>'@'%';
```
---
## Resolution Algorithm (3-step)
For each `vendor_partnumber` in the BOM, QuoteForge builds/updates UI-visible LOT mappings:
1. **Active book lookup** — read active `local_partnumber_books`, verify PN membership in `partnumbers_json`, then query `local_partnumber_book_items WHERE partnumber = ?`.
2. **Populate BOM UI** — if a match exists, BOM row gets `lot_mappings[]` from `lots_json` (user can still edit it).
3. **Unresolved** — if no match is found, the row is highlighted red and shows an inline LOT input with strict autocomplete.
Persistence note: the application stores the final user-visible mappings in `lot_mappings[]` (not separate "resolved/manual" persisted fields).
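The lookup in step 1 can be sketched with in-memory stand-ins for the two local tables; the `lot_name`/`qty` field names inside `lots_json` are an assumption based on the aggregation section below:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// bookLot is one entry of lots_json (field names assumed for illustration).
type bookLot struct {
	LotName string `json:"lot_name"`
	Qty     int    `json:"qty"`
}

// LotMapping mirrors one entry of the persisted lot_mappings[] contract.
type LotMapping struct {
	LotName       string `json:"lot_name"`
	QuantityPerPN int    `json:"quantity_per_pn"`
}

// resolvePN checks PN membership in the active book, reads its lots_json
// payload, and converts it into UI-visible lot_mappings[]. A nil result
// means the row stays unresolved (red row in the BOM UI).
func resolvePN(pn string, membership map[string]bool, lotsJSON map[string]string) []LotMapping {
	if !membership[pn] {
		return nil // PN not in the active book snapshot
	}
	raw, ok := lotsJSON[pn]
	if !ok {
		return nil
	}
	var lots []bookLot
	if err := json.Unmarshal([]byte(raw), &lots); err != nil {
		return nil
	}
	mappings := make([]LotMapping, 0, len(lots))
	for _, l := range lots {
		mappings = append(mappings, LotMapping{LotName: l.LotName, QuantityPerPN: l.Qty})
	}
	return mappings
}

func main() {
	membership := map[string]bool{"ABC-123": true}
	lots := map[string]string{"ABC-123": `[{"lot_name":"LOT_A","qty":1},{"lot_name":"LOT_B","qty":2}]`}
	fmt.Println(resolvePN("ABC-123", membership, lots)) // resolved
	fmt.Println(resolvePN("XYZ-999", membership, lots) == nil) // unresolved: manual LOT input
}
```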
---
## CFXML Workspace Import Contract
QuoteForge can import a vendor configurator workspace in `CFXML` format as an update path for an existing project.
This import path must convert one external workspace into one QuoteForge project containing multiple configurations.
### Import Unit Boundaries
- One `CFXML` workspace file = one QuoteForge project import session.
- One top-level configuration group inside the workspace = one QuoteForge configuration.
- Software rows are **not** imported as standalone configurations.
- All software rows must be attached to the configuration group they belong to.
### Configuration Grouping
Top-level `ProductLineItem` rows are grouped by:
- `ProprietaryGroupIdentifier`
This field is the canonical boundary of one imported configuration.
Rules:
1. Read all top-level `ProductLineItem` rows in document order.
2. Group them by `ProprietaryGroupIdentifier`.
3. Preserve document order of groups by the first encountered `ProductLineNumber`.
4. Import each group as exactly one QuoteForge configuration.
`ConfigurationGroupLineNumberReference` is not sufficient for grouping imported configurations because
multiple independent configuration groups may share the same value in one workspace.
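The grouping rules can be sketched as follows (struct fields reduced to what grouping needs; not the actual importer code):

```go
package main

import "fmt"

// ProductLineItem keeps only the fields relevant to grouping.
type ProductLineItem struct {
	ProductLineNumber          string
	ProprietaryGroupIdentifier string
}

// groupByGID groups top-level rows by ProprietaryGroupIdentifier. Because
// rows arrive in document order, first-encounter order of a GID equals the
// order of its first ProductLineNumber, which preserves rule 3.
func groupByGID(rows []ProductLineItem) [][]ProductLineItem {
	byGID := map[string][]ProductLineItem{}
	var order []string
	for _, r := range rows {
		if _, seen := byGID[r.ProprietaryGroupIdentifier]; !seen {
			order = append(order, r.ProprietaryGroupIdentifier)
		}
		byGID[r.ProprietaryGroupIdentifier] = append(byGID[r.ProprietaryGroupIdentifier], r)
	}
	groups := make([][]ProductLineItem, 0, len(order))
	for _, gid := range order {
		groups = append(groups, byGID[gid])
	}
	return groups
}

func main() {
	rows := []ProductLineItem{
		{"1", "G-A"}, {"2", "G-B"}, {"3", "G-A"}, {"4", "G-B"},
	}
	for _, g := range groupByGID(rows) {
		fmt.Println(g[0].ProprietaryGroupIdentifier, len(g))
	}
}
```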
### Primary Row Selection (no SKU hardcode)
The importer must not hardcode vendor, model, or server SKU values.
Within each `ProprietaryGroupIdentifier` group, the importer selects one primary top-level row using
structural rules only:
1. Prefer rows with `ProductTypeCode = Hardware`.
2. If multiple rows match, prefer the row with the largest number of `ProductSubLineItem` children.
3. If there is still a tie, prefer the first row by `ProductLineNumber`.
The primary row provides configuration-level metadata such as:
- configuration name
- server count
- server model / description
- article / support code candidate
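A sketch of the primary-row selection (field names are illustrative; `SubItemCount` stands in for counting `ProductSubLineItem` children):

```go
package main

import "fmt"

// topRow is a reduced top-level row: just what the structural rules need.
type topRow struct {
	ProductLineNumber string
	ProductTypeCode   string
	SubItemCount      int
}

// selectPrimary applies the rules in order: prefer Hardware rows, then the
// row with the most sub-line items; ties keep the earlier row, which is the
// first by ProductLineNumber when the group is in document order.
func selectPrimary(group []topRow) topRow {
	best := group[0]
	for _, r := range group[1:] {
		switch {
		case (r.ProductTypeCode == "Hardware") != (best.ProductTypeCode == "Hardware"):
			if r.ProductTypeCode == "Hardware" {
				best = r // rule 1 dominates regardless of sub-item count
			}
		case r.SubItemCount > best.SubItemCount:
			best = r // rule 2, within the same type class
		}
	}
	return best
}

func main() {
	group := []topRow{
		{"10", "Software", 0},
		{"20", "Hardware", 42},
		{"30", "Hardware", 3},
	}
	fmt.Println(selectPrimary(group).ProductLineNumber) // Hardware row with most children
}
```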
### Software Inclusion Rule
All top-level rows belonging to the same `ProprietaryGroupIdentifier` must be imported into the same
QuoteForge configuration, including:
- `Hardware`
- `Software`
- instruction / service rows represented as software-like items
Effects:
- a workspace never creates a separate configuration made only of software;
- `software1`, `software2`, license rows, and instruction rows stay inside the related configuration;
- the user sees one complete configuration instead of fragmented partial imports.
### Mapping to QuoteForge Project / Configuration
For one imported configuration group:
- QuoteForge configuration `name` <- primary row `ProductName`
- QuoteForge configuration `server_count` <- primary row `Quantity`
- QuoteForge configuration `server_model` <- primary row `ProductDescription`
- QuoteForge configuration `article` or `support_code` <- primary row `ProprietaryProductIdentifier`
- QuoteForge configuration `line` <- stable order by group appearance in the workspace
Project-level fields such as QuoteForge `code`, `name`, and `variant` are not reliably defined by `CFXML`
itself and should come from the existing target project context or explicit user input.
### Mapping to `vendor_spec`
The importer must build one combined `vendor_spec` array per configuration group.
Source rows:
- all `ProductSubLineItem` rows from the primary top-level row;
- all `ProductSubLineItem` rows from every non-primary top-level row in the same group;
- if a top-level row has no `ProductSubLineItem`, the top-level row itself may be converted into one
`vendor_spec` row so that software-only content is not lost.
Each imported row maps into one `VendorSpecItem`:
- `sort_order` <- stable sequence within the group
- `vendor_partnumber` <- `ProprietaryProductIdentifier`
- `quantity` <- `Quantity`
- `description` <- `ProductDescription`
- `unit_price` <- `UnitListPrice.FinancialAmount.MonetaryAmount` when present
- `total_price` <- `quantity * unit_price` when unit price is present
- `lot_mappings` <- resolved immediately from the active partnumber book using `lots_json`
The importer stores vendor-native rows in `vendor_spec`, then immediately runs the same logical flow as BOM
Resolve + Apply:
- resolve vendor PN rows through the active partnumber book
- persist canonical `lot_mappings[]`
- build normalized configuration `items` from `row.quantity * quantity_per_pn`
- fill `items.unit_price` from the latest local `estimate` pricelist
- recalculate configuration `total_price`
### Import Pipeline
Recommended parser pipeline:
1. Parse XML into top-level `ProductLineItem` rows.
2. Group rows by `ProprietaryGroupIdentifier`.
3. Select one primary row per group using structural rules.
4. Build one QuoteForge configuration DTO per group.
5. Merge all hardware/software rows of the group into one `vendor_spec`.
6. Resolve imported PN rows into canonical `lot_mappings[]` using the active partnumber book.
7. Build configuration `items` from resolved `lot_mappings[]`.
8. Price those `items` from the latest local `estimate` pricelist.
9. Save or update the QuoteForge configuration inside the existing project.
### Recommended Internal DTO
```go
type ImportedProject struct {
SourceFormat string
SourceFilePath string
SourceDocID string
Code string
Name string
Variant string
Configurations []ImportedConfiguration
}
type ImportedConfiguration struct {
GroupID string
Name string
Line int
ServerCount int
ServerModel string
Article string
SupportCode string
CurrencyCode string
TopLevelRows []ImportedTopLevelRow
VendorSpec []ImportedVendorRow
}
type ImportedTopLevelRow struct {
ProductLineNumber string
ItemNo string
GroupID string
ProductType string
ProductCode string
ProductName string
Description string
Quantity int
UnitPrice *float64
IsPrimary bool
SubRows []ImportedVendorRow
}
type ImportedVendorRow struct {
SortOrder int
SourceLineNumber string
SourceParentLine string
SourceProductType string
VendorPartnumber string
Description string
Quantity int
UnitPrice *float64
TotalPrice *float64
ProductCharacter string
ProductCharPath string
}
```
### Current Product Assumption
For QuoteForge product behavior, the correct user-facing interpretation is:
- one external project/workspace contains several configurations;
- each configuration contains both hardware and software rows that belong to it;
- the importer must preserve that grouping exactly.
---
## Qty Aggregation Logic
After resolution, qty per LOT is computed from the BOM row quantity multiplied by the matched `lots_json.qty`:
```
qty(lot) = SUM(quantity_of_pn_row * quantity_of_lot_inside_lots_json)
```
Examples (book: PN_X → `[{LOT_A, qty:2}, {LOT_B, qty:1}]`):
- BOM: PN_X ×3 → `LOT_A ×6`, `LOT_B ×3`
- BOM: PN_X ×1 and PN_X ×2 → `LOT_A ×6`, `LOT_B ×3`
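The formula and examples above can be expressed as a small aggregation function (illustrative types, not the actual cart code):

```go
package main

import "fmt"

type lotMapping struct {
	LotName       string
	QuantityPerPN int // "LOT в 1 PN"
}

type bomRow struct {
	Quantity    int // vendor PN quantity
	LotMappings []lotMapping
}

// aggregate computes qty(lot) = SUM(pn_row_qty * quantity_per_pn)
// across all resolved BOM rows.
func aggregate(rows []bomRow) map[string]int {
	out := map[string]int{}
	for _, r := range rows {
		for _, m := range r.LotMappings {
			out[m.LotName] += r.Quantity * m.QuantityPerPN
		}
	}
	return out
}

func main() {
	// book: PN_X → LOT_A×2, LOT_B×1; BOM has PN_X ×1 and PN_X ×2
	book := []lotMapping{{"LOT_A", 2}, {"LOT_B", 1}}
	rows := []bomRow{
		{Quantity: 1, LotMappings: book},
		{Quantity: 2, LotMappings: book},
	}
	fmt.Println(aggregate(rows)) // map[LOT_A:6 LOT_B:3]
}
```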
---
## UI: Three Top-Level Tabs
The configurator (`/configurator`) has three tabs:
1. **Estimate** — existing cart/component configurator (unchanged).
2. **BOM** — paste/import vendor BOM, manual column mapping, LOT matching, bundle decomposition (`1 PN -> multiple LOTs`), "Пересчитать эстимейт", "Очистить".
3. **Ценообразование** — pricing summary table + custom price input.
BOM data is shared between tabs 2 and 3.
### BOM Import UI (raw table, manual column mapping)
After a paste (`Ctrl+V`), QuoteForge renders the pasted data as an editable raw table; nothing is parsed automatically.
- The pasted rows are shown **as-is** (including header rows, if present).
- The user selects a type for each column manually:
- `P/N`
- `Кол-во`
- `Цена`
- `Описание`
- `Не использовать`
- Required mapping:
- exactly one `P/N`
- exactly one `Кол-во`
- Optional mapping:
- `Цена` (0..1)
- `Описание` (0..1)
- Rows can be:
- ignored (UI-only, excluded from `vendor_spec`)
- deleted
- Raw cells are editable inline after paste.
Notes:
- There is **no auto column detection**.
- There is **no auto header-row skip**.
- Raw import layout itself is not stored on server; only normalized `vendor_spec` is stored.
### LOT matching in BOM table
The BOM table adds service columns on the right:
- `LOT`
- `LOT в 1 PN`
- actions (`+`, ignore, delete)
`LOT` behavior:
- The first LOT row shown in the BOM UI is the primary LOT mapping for the PN row.
- Additional LOT rows are added via the `+` action.
- inline LOT input is strict:
- autocomplete source = full local components list (`/api/components?per_page=5000`)
- free text that does not match an existing LOT is rejected
`LOT в 1 PN` behavior:
- quantity multiplier for each visible LOT row in BOM (`quantity_per_pn` in persisted `lot_mappings[]`)
- default = `1`
- editable inline
### Bundle mode (`1 PN -> multiple LOTs`)
The `+` action in BOM rows adds an extra LOT mapping row for the same vendor PN row.
- All visible LOT rows (first + added rows) are persisted uniformly in `lot_mappings[]`
- Each mapping row has:
- LOT
- qty (`LOT in 1 PN` = `quantity_per_pn`)
### BOM restore on config open
On config open, QuoteForge loads `vendor_spec` from server and reconstructs the editable BOM table in normalized form:
- columns restored as: `Qty | P/N | Description | Price`
- column mapping restored as:
- `qty`, `pn`, `description`, `price`
- LOT / `LOT в 1 PN` rows are restored from `vendor_spec.lot_mappings[]`
This restores the BOM editing state, but not the original raw Excel layout (extra columns, ignored rows, original headers).
### Pricing Tab: column order
```
LOT | PN вендора | Описание | Кол-во | Estimate | Цена вендора | Склад | Конк.
```
**If BOM is empty** — the pricing tab still renders, using cart items as rows (PN вендора = "—", Цена вендора = "—").
**Description source priority:** BOM row description → LOT description from `local_components`.
### Pricing Tab: BOM + Estimate merge behavior
When BOM exists, the pricing tab renders:
- BOM-based rows (including rows resolved via manual LOT and bundle mappings)
- plus **Estimate-only LOTs** (rows currently in cart but not covered by BOM mappings)
Estimate-only rows are shown as separate rows with:
- `PN вендора = "—"`
- vendor price = `—`
- description from local components
### Pricing Tab: "Своя цена" input
- Manual entry → proportionally redistributes custom price into "Цена вендора" cells (proportional to each row's Estimate share). Last row absorbs rounding remainder.
- "Проставить цены BOM" button → restores per-row original BOM prices directly (no proportional redistribution). Sets "Своя цена" to their sum.
- Both paths show "Скидка от Estimate: X%" info.
- "Экспорт CSV" button → downloads `pricing_<uuid>.csv` with UTF-8 BOM, same column order as table, plus Итого row.
---
## API Endpoints
| Method | URL | Description |
|--------|-----|-------------|
| GET | `/api/configs/:uuid/vendor-spec` | Fetch stored BOM |
| PUT | `/api/configs/:uuid/vendor-spec` | Replace BOM (full update) |
| POST | `/api/configs/:uuid/vendor-spec/resolve` | Resolve PNs → LOTs (no cart mutation) |
| POST | `/api/configs/:uuid/vendor-spec/apply` | Apply resolved LOTs to cart |
| POST | `/api/projects/:uuid/vendor-import` | Import `CFXML` workspace into an existing project and create grouped configurations |
| GET | `/api/partnumber-books` | List local book snapshots |
| GET | `/api/partnumber-books/:id` | Items for a book by server_id |
| POST | `/api/sync/partnumber-books` | Pull book snapshots from MariaDB |
| POST | `/api/sync/partnumber-seen` | Push unresolved PNs to `qt_vendor_partnumber_seen` on MariaDB |
## Unresolved PN Tracking (`qt_vendor_partnumber_seen`)
After each `resolveBOM()` call, QuoteForge pushes PN rows to `POST /api/sync/partnumber-seen` (fire-and-forget from JS — errors silently ignored):
- unresolved BOM rows (`ignored = false`)
- raw BOM rows explicitly marked as ignored in UI (`ignored = true`) — these rows are **not** saved to `vendor_spec`, but are reported for server-side tracking
The handler calls `sync.PushPartnumberSeen()` which inserts into `qt_vendor_partnumber_seen`.
If a row with the same `partnumber` already exists, QuoteForge must leave it untouched:
- do not update `last_seen_at`
- do not update `is_ignored`
- do not update `description`
Canonical insert behavior:
```sql
INSERT INTO qt_vendor_partnumber_seen (source_type, vendor, partnumber, description, is_ignored, last_seen_at)
VALUES ('manual', '', ?, ?, ?, NOW())
ON DUPLICATE KEY UPDATE
partnumber = partnumber
```
Uniqueness key: `partnumber` only (after PriceForge migration 025). PriceForge uses this table for populating the partnumber book.
Partnumber book sync contract:
- PriceForge writes membership snapshots to `qt_partnumber_books.partnumbers_json`.
- PriceForge writes canonical PN payloads to `qt_partnumber_book_items`.
- QuoteForge syncs book headers first, then pulls PN payloads with:
`SELECT partnumber, lots_json, description FROM qt_partnumber_book_items WHERE partnumber IN (...)`
## BOM Persistence
- `vendor_spec` is saved to server via `PUT /api/configs/:uuid/vendor-spec`.
- `GET` / `PUT` `vendor_spec` must preserve row-level mapping fields used by the UI:
- `lot_mappings[]`
- each item: `lot_name`, `quantity_per_pn`
- `description` is persisted in each BOM row and is used by the Pricing tab when available.
- Ignored raw rows are **not** persisted into `vendor_spec`.
- The PUT handler explicitly marshals `VendorSpec` to JSON string before passing to GORM `Update` (GORM does not reliably call `driver.Valuer` for custom types in `Update(column, value)`).
- BOM is autosaved (debounced) after BOM-changing actions, including:
- `resolveBOM()`
- LOT row qty (`LOT в 1 PN`) changes
- LOT row add/remove (`+` / delete in bundle context)
- "Сохранить BOM" button triggers explicit save.
## Pricing Tab: Estimate Price Source
`renderPricingTab()` is async. It calls `POST /api/quote/price-levels` with LOTs collected from:
- `lot_mappings[]` from BOM rows
- current Estimate/cart LOTs not covered by BOM mappings (to show estimate-only rows)
This ensures Estimate prices appear for:
- manually matched LOTs in the BOM tab
- bundle LOTs
- LOTs already present in Estimate but not mapped from BOM
### Apply to Estimate (`Пересчитать эстимейт`)
When applying BOM to Estimate, QuoteForge builds cart rows from explicit UI mappings stored in `lot_mappings[]`.
For a BOM row with PN qty = `Q`:
- each mapped LOT contributes `Q * quantity_per_pn`
Rows without any valid LOT mapping are skipped.
## Web Route
| Route | Page |
|-------|------|
| `/partnumber-books` | Partnumber books — active book summary (unique LOTs, total PN, primary PN count), searchable items table, collapsible snapshot history |


@@ -1,181 +0,0 @@
# 03 — Database
## SQLite (local, client-side)
File: `qfs.db` in the user-state directory (see [05-config.md](05-config.md)).
### Tables
#### Components and Reference Data
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_components` | Component metadata (NO prices) | `lot_name` (PK), `lot_description`, `category`, `model` |
| `connection_settings` | MariaDB connection settings | key-value store |
| `app_settings` | Application settings | `key` (PK), `value`, `updated_at` |
#### Pricelists
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_pricelists` | Pricelist headers | `id`, `server_id` (unique), `source`, `version`, `created_at` |
| `local_pricelist_items` | Pricelist line items ← **sole source of prices** | `id`, `pricelist_id` (FK), `lot_name`, `price`, `lot_category` |
#### Configurations and Projects
| Table | Purpose | Key Fields |
|-------|---------|------------|
| `local_configurations` | Saved configurations | `id`, `uuid` (unique), `items` (JSON), `pricelist_id`, `warehouse_pricelist_id`, `competitor_pricelist_id`, `current_version_id`, `sync_status` |
| `local_configuration_versions` | Immutable snapshots (revisions) | `id`, `configuration_id` (FK), `version_no`, `data` (JSON), `change_note`, `created_at` |
| `local_projects` | Projects | `id`, `uuid` (unique), `name`, `code`, `sync_status` |
#### Sync
| Table | Purpose |
|-------|---------|
| `pending_changes` | Queue of changes to push to MariaDB |
| `local_schema_migrations` | Applied migrations (idempotency guard) |
---
### Key SQLite Indexes
```sql
-- Pricelists
INDEX local_pricelist_items(pricelist_id)
UNIQUE INDEX local_pricelists(server_id)
INDEX local_pricelists(source, created_at) -- used for "latest by source" queries
-- latest-by-source runtime query also applies deterministic tie-break by id DESC
-- Configurations
INDEX local_configurations(pricelist_id)
INDEX local_configurations(warehouse_pricelist_id)
INDEX local_configurations(competitor_pricelist_id)
UNIQUE INDEX local_configurations(uuid)
```
---
### `items` JSON Structure in Configurations
```json
{
"items": [
{
"lot_name": "CPU_AMD_9654",
"quantity": 2,
"unit_price": 123456.78,
"section": "Processors"
}
]
}
```
Prices are stored inside the `items` JSON field and refreshed from the pricelist on configuration refresh.
---
## MariaDB (server-side, sync-only)
Database: `RFQ_LOG`
### Tables and Permissions
| Table | Purpose | Permissions |
|-------|---------|-------------|
| `lot` | Component catalog | SELECT |
| `qt_lot_metadata` | Extended component data | SELECT |
| `qt_categories` | Component categories | SELECT |
| `qt_pricelists` | Pricelists | SELECT |
| `qt_pricelist_items` | Pricelist line items | SELECT |
| `lot_partnumbers` | Partnumber → lot mapping (pricelist enrichment) | SELECT |
| `stock_log` | Latest stock qty by partnumber (pricelist enrichment) | SELECT |
| `qt_configurations` | Saved configurations | SELECT, INSERT, UPDATE |
| `qt_projects` | Projects | SELECT, INSERT, UPDATE |
| `qt_client_local_migrations` | Migration catalog | SELECT only |
| `qt_client_schema_state` | Applied migrations state | SELECT, INSERT, UPDATE |
| `qt_pricelist_sync_status` | Pricelist sync status | SELECT, INSERT, UPDATE |
### Grant Permissions to Existing User
```sql
GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.lot_partnumbers TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';
FLUSH PRIVILEGES;
```
### Create a New User
```sql
CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';
GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.lot_partnumbers TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.stock_log TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';
FLUSH PRIVILEGES;
SHOW GRANTS FOR 'quote_user'@'%';
```
**Note:** If pricelists sync but show `0` positions (or the logs contain `enriching pricelist items with stock` followed by `SELECT denied`), verify that the user has `SELECT` on `lot_partnumbers` and `stock_log` in addition to `qt_pricelist_items`.
**Note:** If you see `Access denied for user ...@'<ip>'`, check for conflicting user entries (`'user'@'localhost'` vs `'user'@'%'`): MariaDB matches the most specific host first, so a stale localhost entry can shadow the wildcard one.
---
## Migrations
### SQLite Migrations (local)
- Stored in `migrations/` (SQL files)
- Applied via `-migrate` flag or automatically on first run
- Idempotent: each applied migration's `id` is recorded in `local_schema_migrations`
- Already-applied migrations are skipped on subsequent runs
```bash
go run ./cmd/qfs -migrate
```
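The idempotency guard above can be sketched as follows. This is illustrative only (the `migrations` map and DDL are invented for the example, and the real runner reads SQL files from `migrations/`): applied ids are recorded in `local_schema_migrations`, and a second run is a no-op.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE local_schema_migrations ("
    "id TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
)

# Hypothetical migration set; the real app loads SQL files from migrations/.
migrations = {
    "001_init": "CREATE TABLE local_projects (id INTEGER PRIMARY KEY, name TEXT)",
    "002_add_code": "ALTER TABLE local_projects ADD COLUMN code TEXT",
}

def run_migrations(conn, migrations):
    # Collect already-applied ids, then apply the rest in order.
    applied = {r[0] for r in conn.execute("SELECT id FROM local_schema_migrations")}
    ran = []
    for mig_id in sorted(migrations):
        if mig_id in applied:
            continue  # idempotency guard: skip already-applied migrations
        conn.execute(migrations[mig_id])
        conn.execute("INSERT INTO local_schema_migrations (id) VALUES (?)", (mig_id,))
        ran.append(mig_id)
    return ran

print(run_migrations(conn, migrations))  # first run applies both migrations
print(run_migrations(conn, migrations))  # second run is a no-op
```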
### Centralized Migrations (server-side)
- Stored in `qt_client_local_migrations` (MariaDB)
- Applied automatically during the sync readiness check
- `min_app_version` — the minimum app version required before the migration is applied
---
## DB Debugging
```bash
# Inspect schema
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_components"
sqlite3 ~/.local/state/quoteforge/qfs.db ".schema local_configurations"
# Check pricelist item count
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM local_pricelist_items"
# Check pending sync queue
sqlite3 ~/.local/state/quoteforge/qfs.db "SELECT COUNT(*) FROM pending_changes"
```


@@ -6,6 +6,7 @@ import (
"errors"
"flag"
"fmt"
"io"
"io/fs"
"log/slog"
"math"
@@ -31,7 +32,6 @@ import (
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/middleware"
"git.mchus.pro/mchus/quoteforge/internal/models"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"git.mchus.pro/mchus/quoteforge/internal/services"
"git.mchus.pro/mchus/quoteforge/internal/services/sync"
"github.com/gin-gonic/gin"
@@ -150,29 +150,25 @@ func main() {
setupLogger(cfg.Logging)
// Create connection manager and try to connect immediately if settings exist
// Create connection manager. Runtime stays local-first; MariaDB is used on demand by sync/setup only.
connMgr := db.NewConnectionManager(local)
dbUser := local.GetDBUser()
// Try to connect to MariaDB on startup
mariaDB, err := connMgr.GetDB()
if err != nil {
slog.Warn("failed to connect to MariaDB on startup, starting in offline mode", "error", err)
mariaDB = nil
} else {
slog.Info("successfully connected to MariaDB on startup")
}
slog.Info("starting QuoteForge server",
"version", Version,
"host", cfg.Server.Host,
"port", cfg.Server.Port,
"db_user", dbUser,
"online", mariaDB != nil,
"online", false,
)
if *migrate {
mariaDB, err := connMgr.GetDB()
if err != nil {
slog.Error("cannot run migrations: database not available", "error", err)
os.Exit(1)
}
if mariaDB == nil {
slog.Error("cannot run migrations: database not available")
os.Exit(1)
@@ -189,39 +185,10 @@ func main() {
slog.Info("migrations completed")
}
// Always apply SQL migrations on startup when database is available.
// This keeps schema in sync for long-running installations without manual steps.
// If current DB user does not have enough privileges, continue startup in normal mode.
if mariaDB != nil {
sqlMigrationsPath := filepath.Join("migrations")
needsMigrations, err := models.NeedsSQLMigrations(mariaDB, sqlMigrationsPath)
if err != nil {
if models.IsMigrationPermissionError(err) {
slog.Info("startup SQL migrations skipped: insufficient database privileges", "path", sqlMigrationsPath, "error", err)
} else {
slog.Error("startup SQL migrations check failed", "path", sqlMigrationsPath, "error", err)
os.Exit(1)
}
} else if needsMigrations {
if err := models.RunSQLMigrations(mariaDB, sqlMigrationsPath); err != nil {
if models.IsMigrationPermissionError(err) {
slog.Info("startup SQL migrations skipped: insufficient database privileges", "path", sqlMigrationsPath, "error", err)
} else {
slog.Error("startup SQL migrations failed", "path", sqlMigrationsPath, "error", err)
os.Exit(1)
}
} else {
slog.Info("startup SQL migrations applied", "path", sqlMigrationsPath)
}
} else {
slog.Debug("startup SQL migrations not needed", "path", sqlMigrationsPath)
}
}
gin.SetMode(cfg.Server.Mode)
restartSig := make(chan struct{}, 1)
router, syncService, err := setupRouter(cfg, local, connMgr, mariaDB, dbUser, restartSig)
router, syncService, err := setupRouter(cfg, local, connMgr, dbUser, restartSig)
if err != nil {
slog.Error("failed to setup router", "error", err)
os.Exit(1)
@@ -336,8 +303,6 @@ func derefString(value *string) string {
return *value
}
func setConfigDefaults(cfg *config.Config) {
if cfg.Server.Host == "" {
cfg.Server.Host = "127.0.0.1"
@@ -673,46 +638,14 @@ func setupDatabaseFromDSN(dsn string) (*gorm.DB, error) {
return db, nil
}
func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.ConnectionManager, mariaDB *gorm.DB, dbUsername string, restartSig chan struct{}) (*gin.Engine, *sync.Service, error) {
// mariaDB may be nil if we're in offline mode
// Repositories
var componentRepo *repository.ComponentRepository
var categoryRepo *repository.CategoryRepository
var statsRepo *repository.StatsRepository
var pricelistRepo *repository.PricelistRepository
// Only initialize repositories if we have a database connection
if mariaDB != nil {
componentRepo = repository.NewComponentRepository(mariaDB)
categoryRepo = repository.NewCategoryRepository(mariaDB)
statsRepo = repository.NewStatsRepository(mariaDB)
pricelistRepo = repository.NewPricelistRepository(mariaDB)
} else {
// In offline mode, we'll use nil repositories or handle them differently
// This is handled in the sync service and other components
}
// Services
var componentService *services.ComponentService
var quoteService *services.QuoteService
var exportService *services.ExportService
func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.ConnectionManager, dbUsername string, restartSig chan struct{}) (*gin.Engine, *sync.Service, error) {
var syncService *sync.Service
var projectService *services.ProjectService
// Sync service always uses ConnectionManager (works offline and online)
syncService = sync.NewService(connMgr, local)
if mariaDB != nil {
componentService = services.NewComponentService(componentRepo, categoryRepo, statsRepo)
quoteService = services.NewQuoteService(componentRepo, statsRepo, pricelistRepo, local, nil)
exportService = services.NewExportService(cfg.Export, categoryRepo, local)
} else {
// In offline mode, we still need to create services that don't require DB.
componentService = services.NewComponentService(nil, nil, nil)
quoteService = services.NewQuoteService(nil, nil, nil, local, nil)
exportService = services.NewExportService(cfg.Export, nil, local)
}
componentService := services.NewComponentService(nil, nil, nil)
quoteService := services.NewQuoteService(nil, nil, nil, local, nil)
exportService := services.NewExportService(cfg.Export, nil, local)
// isOnline function for local-first architecture
isOnline := func() bool {
@@ -733,16 +666,6 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
if err := local.BackfillConfigurationProjects(dbUsername); err != nil {
slog.Warn("failed to backfill local configuration projects", "error", err)
}
if mariaDB != nil {
serverProjectRepo := repository.NewProjectRepository(mariaDB)
if removed, err := serverProjectRepo.PurgeEmptyNamelessProjects(); err == nil && removed > 0 {
slog.Info("purged empty nameless server projects", "removed", removed)
}
if err := serverProjectRepo.EnsureSystemProjectsAndBackfillConfigurations(); err != nil {
slog.Warn("failed to backfill server configuration projects", "error", err)
}
}
type pullState struct {
mu syncpkg.Mutex
running bool
@@ -820,8 +743,10 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
// Handlers
componentHandler := handlers.NewComponentHandler(componentService, local)
quoteHandler := handlers.NewQuoteHandler(quoteService)
exportHandler := handlers.NewExportHandler(exportService, configService, projectService)
exportHandler := handlers.NewExportHandler(exportService, configService, projectService, dbUsername)
pricelistHandler := handlers.NewPricelistHandler(local)
vendorSpecHandler := handlers.NewVendorSpecHandler(local)
partnumberBooksHandler := handlers.NewPartnumberBooksHandler(local)
syncHandler, err := handlers.NewSyncHandler(local, syncService, connMgr, templatesPath, backgroundSyncInterval)
if err != nil {
return nil, nil, fmt.Errorf("creating sync handler: %w", err)
@@ -834,7 +759,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
}
// Web handler (templates)
webHandler, err := handlers.NewWebHandler(templatesPath, componentService)
webHandler, err := handlers.NewWebHandler(templatesPath, local)
if err != nil {
return nil, nil, err
}
@@ -890,20 +815,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
}
}
// Optional diagnostics mode with server table counts.
if includeCounts && status.IsConnected {
if db, err := connMgr.GetDB(); err == nil && db != nil {
_ = db.Table("lot").Count(&lotCount)
_ = db.Table("lot_log").Count(&lotLogCount)
_ = db.Table("qt_lot_metadata").Count(&metadataCount)
} else if err != nil {
dbOK = false
dbError = err.Error()
} else {
dbOK = false
dbError = "Database not connected (offline mode)"
}
} else {
// Runtime diagnostics stay local-only. Server table counts are intentionally unavailable here.
if !includeCounts || !status.IsConnected {
lotCount = 0
lotLogCount = 0
metadataCount = 0
@@ -919,11 +832,10 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
})
})
// Current user info (DB user, not app user)
// Current user info (local DB username)
router.GET("/api/current-user", func(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{
"username": local.GetDBUser(),
"role": "db_user",
})
})
@@ -942,6 +854,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
router.GET("/configs/:uuid/revisions", webHandler.ConfigRevisions)
router.GET("/pricelists", webHandler.Pricelists)
router.GET("/pricelists/:id", webHandler.PricelistDetail)
router.GET("/partnumber-books", webHandler.PartnumberBooks)
// htmx partials
partials := router.Group("/partials")
@@ -991,6 +904,13 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
pricelists.GET("/:id/lots", pricelistHandler.GetLotNames)
}
// Partnumber books (read-only)
pnBooks := api.Group("/partnumber-books")
{
pnBooks.GET("", partnumberBooksHandler.List)
pnBooks.GET("/:id", partnumberBooksHandler.GetItems)
}
// Configurations (public - RBAC disabled)
configs := api.Group("/configs")
{
@@ -1311,6 +1231,12 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
configs.GET("/:uuid/export", exportHandler.ExportConfigCSV)
// Vendor spec (BOM) endpoints
configs.GET("/:uuid/vendor-spec", vendorSpecHandler.GetVendorSpec)
configs.PUT("/:uuid/vendor-spec", vendorSpecHandler.PutVendorSpec)
configs.POST("/:uuid/vendor-spec/resolve", vendorSpecHandler.ResolveVendorSpec)
configs.POST("/:uuid/vendor-spec/apply", vendorSpecHandler.ApplyVendorSpec)
configs.PATCH("/:uuid/server-count", func(c *gin.Context) {
uuid := c.Param("uuid")
var req struct {
@@ -1662,6 +1588,43 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
c.JSON(http.StatusOK, result)
})
projects.PATCH("/:uuid/configs/reorder", func(c *gin.Context) {
var req struct {
OrderedUUIDs []string `json:"ordered_uuids"`
}
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
if len(req.OrderedUUIDs) == 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "ordered_uuids is required"})
return
}
configs, err := configService.ReorderProjectConfigurationsNoAuth(c.Param("uuid"), req.OrderedUUIDs)
if err != nil {
switch {
case errors.Is(err, services.ErrProjectNotFound):
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
default:
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
}
return
}
total := 0.0
for i := range configs {
if configs[i].TotalPrice != nil {
total += *configs[i].TotalPrice
}
}
c.JSON(http.StatusOK, gin.H{
"project_uuid": c.Param("uuid"),
"configurations": configs,
"total": total,
})
})
projects.POST("/:uuid/configs", func(c *gin.Context) {
var req services.CreateConfigRequest
if err := c.ShouldBindJSON(&req); err != nil {
@@ -1679,7 +1642,46 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
c.JSON(http.StatusCreated, config)
})
projects.POST("/:uuid/vendor-import", func(c *gin.Context) {
fileHeader, err := c.FormFile("file")
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "file is required"})
return
}
file, err := fileHeader.Open()
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "failed to open uploaded file"})
return
}
defer file.Close()
data, err := io.ReadAll(file)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read uploaded file"})
return
}
if !services.IsCFXMLWorkspace(data) {
c.JSON(http.StatusBadRequest, gin.H{"error": "unsupported vendor export format"})
return
}
result, err := configService.ImportVendorWorkspaceToProject(c.Param("uuid"), fileHeader.Filename, data, dbUsername)
if err != nil {
switch {
case errors.Is(err, services.ErrProjectNotFound):
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
default:
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
}
return
}
c.JSON(http.StatusCreated, result)
})
projects.GET("/:uuid/export", exportHandler.ExportProjectCSV)
projects.POST("/:uuid/export", exportHandler.ExportProjectPricingCSV)
projects.POST("/:uuid/configs/:config_uuid/clone", func(c *gin.Context) {
var req struct {
@@ -1709,6 +1711,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
syncAPI.GET("/users-status", syncHandler.GetUsersStatus)
syncAPI.POST("/components", syncHandler.SyncComponents)
syncAPI.POST("/pricelists", syncHandler.SyncPricelists)
syncAPI.POST("/partnumber-books", syncHandler.SyncPartnumberBooks)
syncAPI.POST("/partnumber-seen", syncHandler.ReportPartnumberSeen)
syncAPI.POST("/all", syncHandler.SyncAll)
syncAPI.POST("/push", syncHandler.PushPendingChanges)
syncAPI.GET("/pending/count", syncHandler.GetPendingCount)


@@ -37,7 +37,7 @@ func TestConfigurationVersioningAPI(t *testing.T) {
cfg := &config.Config{}
setConfigDefaults(cfg)
router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
if err != nil {
t.Fatalf("setup router: %v", err)
}
@@ -77,7 +77,7 @@ func TestConfigurationVersioningAPI(t *testing.T) {
if err := json.Unmarshal(rbRec.Body.Bytes(), &rbResp); err != nil {
t.Fatalf("unmarshal rollback response: %v", err)
}
if rbResp.Message == "" || rbResp.CurrentVersion.VersionNo != 3 {
if rbResp.Message == "" || rbResp.CurrentVersion.VersionNo != 2 {
t.Fatalf("unexpected rollback response: %+v", rbResp)
}
@@ -144,7 +144,7 @@ func TestProjectArchiveHidesConfigsAndCloneIntoProject(t *testing.T) {
cfg := &config.Config{}
setConfigDefaults(cfg)
router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
if err != nil {
t.Fatalf("setup router: %v", err)
}
@@ -238,7 +238,7 @@ func TestConfigMoveToProjectEndpoint(t *testing.T) {
local, connMgr, _ := newAPITestStack(t)
cfg := &config.Config{}
setConfigDefaults(cfg)
router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
router, _, err := setupRouter(cfg, local, connMgr, "tester", nil)
if err != nil {
t.Fatalf("setup router: %v", err)
}


@@ -18,11 +18,6 @@ database:
max_idle_conns: 5
conn_max_lifetime: "5m"
auth:
jwt_secret: "CHANGE_ME_MIN_32_CHARACTERS_LONG"
token_expiry: "24h"
refresh_expiry: "168h" # 7 days
pricing:
default_method: "weighted_median" # median | average | weighted_median
default_period_days: 90

go.mod

@@ -5,9 +5,8 @@ go 1.24.0
require (
github.com/gin-gonic/gin v1.9.1
github.com/glebarez/sqlite v1.11.0
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/go-sql-driver/mysql v1.7.1
github.com/google/uuid v1.6.0
golang.org/x/crypto v0.43.0
gopkg.in/yaml.v3 v3.0.1
gorm.io/driver/mysql v1.5.2
gorm.io/gorm v1.25.7
@@ -23,7 +22,6 @@ require (
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.14.0 // indirect
github.com/go-sql-driver/mysql v1.7.1 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
@@ -39,6 +37,7 @@ require (
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.2.11 // indirect
golang.org/x/arch v0.3.0 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/net v0.46.0 // indirect
golang.org/x/sys v0.37.0 // indirect
golang.org/x/text v0.30.0 // indirect

go.sum

@@ -32,8 +32,6 @@ github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrt
github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=


@@ -88,6 +88,9 @@ func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
}
root := resolveBackupRoot(dbPath)
if err := validateBackupRoot(root); err != nil {
return nil, err
}
now := backupNow()
created := make([]string, 0)
@@ -111,6 +114,40 @@ func resolveBackupRoot(dbPath string) string {
return filepath.Join(filepath.Dir(dbPath), "backups")
}
func validateBackupRoot(root string) error {
absRoot, err := filepath.Abs(root)
if err != nil {
return fmt.Errorf("resolve backup root: %w", err)
}
if gitRoot, ok := findGitWorktreeRoot(absRoot); ok {
return fmt.Errorf("backup root must stay outside git worktree: %s is inside %s", absRoot, gitRoot)
}
return nil
}
func findGitWorktreeRoot(path string) (string, bool) {
current := filepath.Clean(path)
info, err := os.Stat(current)
if err == nil && !info.IsDir() {
current = filepath.Dir(current)
}
for {
gitPath := filepath.Join(current, ".git")
if _, err := os.Stat(gitPath); err == nil {
return current, true
}
parent := filepath.Dir(current)
if parent == current {
return "", false
}
current = parent
}
}
func isBackupDisabled() bool {
val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
return val == "1" || val == "true" || val == "yes"


@@ -3,6 +3,7 @@ package appstate
import (
"os"
"path/filepath"
"strings"
"testing"
"time"
)
@@ -69,7 +70,7 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
t.Fatalf("backup with env: %v", err)
}
if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
t.Fatalf("expected backup in custom dir: %v", err)
}
@@ -77,7 +78,35 @@ func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
t.Fatalf("backup disabled: %v", err)
}
if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
t.Fatalf("backup should remain from previous run: %v", err)
}
}
func TestEnsureRotatingLocalBackupRejectsGitWorktree(t *testing.T) {
temp := t.TempDir()
repoRoot := filepath.Join(temp, "repo")
if err := os.MkdirAll(filepath.Join(repoRoot, ".git"), 0755); err != nil {
t.Fatalf("mkdir git dir: %v", err)
}
dbPath := filepath.Join(repoRoot, "data", "qfs.db")
cfgPath := filepath.Join(repoRoot, "data", "config.yaml")
if err := os.MkdirAll(filepath.Dir(dbPath), 0755); err != nil {
t.Fatalf("mkdir data dir: %v", err)
}
if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
t.Fatalf("write db: %v", err)
}
if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
t.Fatalf("write cfg: %v", err)
}
_, err := EnsureRotatingLocalBackup(dbPath, cfgPath)
if err == nil {
t.Fatal("expected git worktree backup root to be rejected")
}
if !strings.Contains(err.Error(), "outside git worktree") {
t.Fatalf("unexpected error: %v", err)
}
}


@@ -195,6 +195,9 @@ func buildGPUSegment(items []models.ConfigItem, cats map[string]string) string {
if !ok || group != GroupGPU {
continue
}
if strings.HasPrefix(strings.ToUpper(it.LotName), "MB_") {
continue
}
model := parseGPUModel(it.LotName)
if model == "" {
model = "UNK"
@@ -332,7 +335,7 @@ func parseGPUModel(lotName string) string {
continue
}
switch p {
case "NV", "NVIDIA", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
case "NV", "NVIDIA", "INTEL", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
continue
default:
if strings.Contains(p, "GB") {


@@ -14,7 +14,6 @@ import (
type Config struct {
Server ServerConfig `yaml:"server"`
Database DatabaseConfig `yaml:"database"`
Auth AuthConfig `yaml:"auth"`
Pricing PricingConfig `yaml:"pricing"`
Export ExportConfig `yaml:"export"`
Alerts AlertsConfig `yaml:"alerts"`
@@ -57,12 +56,6 @@ func (d *DatabaseConfig) DSN() string {
return cfg.FormatDSN()
}
type AuthConfig struct {
JWTSecret string `yaml:"jwt_secret"`
TokenExpiry time.Duration `yaml:"token_expiry"`
RefreshExpiry time.Duration `yaml:"refresh_expiry"`
}
type PricingConfig struct {
DefaultMethod string `yaml:"default_method"`
DefaultPeriodDays int `yaml:"default_period_days"`
@@ -152,13 +145,6 @@ func (c *Config) setDefaults() {
c.Database.ConnMaxLifetime = 5 * time.Minute
}
if c.Auth.TokenExpiry == 0 {
c.Auth.TokenExpiry = 24 * time.Hour
}
if c.Auth.RefreshExpiry == 0 {
c.Auth.RefreshExpiry = 7 * 24 * time.Hour
}
if c.Pricing.DefaultMethod == "" {
c.Pricing.DefaultMethod = "weighted_median"
}


@@ -1,113 +0,0 @@
package handlers
import (
"net/http"
"github.com/gin-gonic/gin"
"git.mchus.pro/mchus/quoteforge/internal/middleware"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"git.mchus.pro/mchus/quoteforge/internal/services"
)
type AuthHandler struct {
authService *services.AuthService
userRepo *repository.UserRepository
}
func NewAuthHandler(authService *services.AuthService, userRepo *repository.UserRepository) *AuthHandler {
return &AuthHandler{
authService: authService,
userRepo: userRepo,
}
}
type LoginRequest struct {
Username string `json:"username" binding:"required"`
Password string `json:"password" binding:"required"`
}
type LoginResponse struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresAt int64 `json:"expires_at"`
User UserResponse `json:"user"`
}
type UserResponse struct {
ID uint `json:"id"`
Username string `json:"username"`
Email string `json:"email"`
Role string `json:"role"`
}
func (h *AuthHandler) Login(c *gin.Context) {
var req LoginRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
tokens, user, err := h.authService.Login(req.Username, req.Password)
if err != nil {
c.JSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, LoginResponse{
AccessToken: tokens.AccessToken,
RefreshToken: tokens.RefreshToken,
ExpiresAt: tokens.ExpiresAt,
User: UserResponse{
ID: user.ID,
Username: user.Username,
Email: user.Email,
Role: string(user.Role),
},
})
}
type RefreshRequest struct {
RefreshToken string `json:"refresh_token" binding:"required"`
}
func (h *AuthHandler) Refresh(c *gin.Context) {
var req RefreshRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
tokens, err := h.authService.RefreshTokens(req.RefreshToken)
if err != nil {
c.JSON(http.StatusUnauthorized, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, tokens)
}
func (h *AuthHandler) Me(c *gin.Context) {
claims := middleware.GetClaims(c)
if claims == nil {
c.JSON(http.StatusUnauthorized, gin.H{"error": "not authenticated"})
return
}
user, err := h.userRepo.GetByID(claims.UserID)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "user not found"})
return
}
c.JSON(http.StatusOK, UserResponse{
ID: user.ID,
Username: user.Username,
Email: user.Email,
Role: string(user.Role),
})
}
func (h *AuthHandler) Logout(c *gin.Context) {
// JWT is stateless, logout is handled on client by discarding tokens
c.JSON(http.StatusOK, gin.H{"message": "logged out"})
}


@@ -1,239 +0,0 @@
package handlers
import (
"net/http"
"strconv"
"git.mchus.pro/mchus/quoteforge/internal/middleware"
"git.mchus.pro/mchus/quoteforge/internal/services"
"github.com/gin-gonic/gin"
)
type ConfigurationHandler struct {
configService *services.ConfigurationService
exportService *services.ExportService
}
func NewConfigurationHandler(
configService *services.ConfigurationService,
exportService *services.ExportService,
) *ConfigurationHandler {
return &ConfigurationHandler{
configService: configService,
exportService: exportService,
}
}
func (h *ConfigurationHandler) List(c *gin.Context) {
username := middleware.GetUsername(c)
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
configs, total, err := h.configService.ListByUser(username, page, perPage)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"configurations": configs,
"total": total,
"page": page,
"per_page": perPage,
})
}
func (h *ConfigurationHandler) Create(c *gin.Context) {
username := middleware.GetUsername(c)
var req services.CreateConfigRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
config, err := h.configService.Create(username, &req)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, config)
}
func (h *ConfigurationHandler) Get(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
config, err := h.configService.GetByUUID(uuid, username)
if err != nil {
status := http.StatusNotFound
if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, config)
}
func (h *ConfigurationHandler) Update(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
var req services.CreateConfigRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
config, err := h.configService.Update(uuid, username, &req)
if err != nil {
status := http.StatusInternalServerError
if err == services.ErrConfigNotFound {
status = http.StatusNotFound
} else if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, config)
}
func (h *ConfigurationHandler) Delete(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
err := h.configService.Delete(uuid, username)
if err != nil {
status := http.StatusInternalServerError
if err == services.ErrConfigNotFound {
status = http.StatusNotFound
} else if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"message": "deleted"})
}
type RenameConfigRequest struct {
Name string `json:"name" binding:"required"`
}
func (h *ConfigurationHandler) Rename(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
var req RenameConfigRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
config, err := h.configService.Rename(uuid, username, req.Name)
if err != nil {
status := http.StatusInternalServerError
if err == services.ErrConfigNotFound {
status = http.StatusNotFound
} else if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, config)
}
type CloneConfigRequest struct {
Name string `json:"name" binding:"required"`
}
func (h *ConfigurationHandler) Clone(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
var req CloneConfigRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
config, err := h.configService.Clone(uuid, username, req.Name)
if err != nil {
status := http.StatusInternalServerError
if err == services.ErrConfigNotFound {
status = http.StatusNotFound
} else if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusCreated, config)
}
func (h *ConfigurationHandler) RefreshPrices(c *gin.Context) {
username := middleware.GetUsername(c)
uuid := c.Param("uuid")
config, err := h.configService.RefreshPrices(uuid, username)
if err != nil {
status := http.StatusInternalServerError
if err == services.ErrConfigNotFound {
status = http.StatusNotFound
} else if err == services.ErrConfigForbidden {
status = http.StatusForbidden
}
c.JSON(status, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, config)
}
// func (h *ConfigurationHandler) ExportJSON(c *gin.Context) {
// userID := middleware.GetUserID(c)
// uuid := c.Param("uuid")
//
// config, err := h.configService.GetByUUID(uuid, userID)
// if err != nil {
// c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
// return
// }
//
// data, err := h.configService.ExportJSON(uuid, userID)
// if err != nil {
// c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
// return
// }
//
// filename := fmt.Sprintf("%s %s SPEC.json", config.CreatedAt.Format("2006-01-02"), config.Name)
// c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
// c.Data(http.StatusOK, "application/json", data)
// }
// func (h *ConfigurationHandler) ImportJSON(c *gin.Context) {
// userID := middleware.GetUserID(c)
//
// data, err := io.ReadAll(c.Request.Body)
// if err != nil {
// c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read body"})
// return
// }
//
// config, err := h.configService.ImportJSON(userID, data)
// if err != nil {
// c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
// return
// }
//
// c.JSON(http.StatusCreated, config)
// }

View File

@@ -6,7 +6,6 @@ import (
"strings"
"time"
"git.mchus.pro/mchus/quoteforge/internal/middleware"
"git.mchus.pro/mchus/quoteforge/internal/models"
"git.mchus.pro/mchus/quoteforge/internal/services"
"github.com/gin-gonic/gin"
@@ -16,17 +15,20 @@ type ExportHandler struct {
exportService *services.ExportService
configService services.ConfigurationGetter
projectService *services.ProjectService
dbUsername string
}
func NewExportHandler(
exportService *services.ExportService,
configService services.ConfigurationGetter,
projectService *services.ProjectService,
dbUsername string,
) *ExportHandler {
return &ExportHandler{
exportService: exportService,
configService: configService,
projectService: projectService,
dbUsername: dbUsername,
}
}
@@ -45,6 +47,14 @@ type ExportRequest struct {
Notes string `json:"notes"`
}
type ProjectExportOptionsRequest struct {
IncludeLOT bool `json:"include_lot"`
IncludeBOM bool `json:"include_bom"`
IncludeEstimate bool `json:"include_estimate"`
IncludeStock bool `json:"include_stock"`
IncludeCompetitor bool `json:"include_competitor"`
}
func (h *ExportHandler) ExportCSV(c *gin.Context) {
var req ExportRequest
if err := c.ShouldBindJSON(&req); err != nil {
@@ -63,8 +73,7 @@ func (h *ExportHandler) ExportCSV(c *gin.Context) {
// Get project code for filename
projectCode := req.ProjectName // legacy field: may contain code from frontend
if projectCode == "" && req.ProjectUUID != "" {
if project, err := h.projectService.GetByUUID(req.ProjectUUID, h.dbUsername); err == nil && project != nil {
projectCode = project.Code
}
}
@@ -136,11 +145,10 @@ func sanitizeFilenameSegment(value string) string {
}
func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
uuid := c.Param("uuid")
// Get config before streaming (can return JSON error)
config, err := h.configService.GetByUUID(uuid, h.dbUsername)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
return
@@ -157,7 +165,7 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
// Get project code for filename
projectCode := config.Name // fallback: use config name if no project
if config.ProjectUUID != nil && *config.ProjectUUID != "" {
if project, err := h.projectService.GetByUUID(*config.ProjectUUID, h.dbUsername); err == nil && project != nil {
projectCode = project.Code
}
}
@@ -181,16 +189,15 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
// ExportProjectCSV exports all active configurations of a project as a single CSV file.
func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {
projectUUID := c.Param("uuid")
project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
return
}
result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
@@ -213,3 +220,52 @@ func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {
return
}
}
func (h *ExportHandler) ExportProjectPricingCSV(c *gin.Context) {
projectUUID := c.Param("uuid")
var req ProjectExportOptionsRequest
if err := c.ShouldBindJSON(&req); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
project, err := h.projectService.GetByUUID(projectUUID, h.dbUsername)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
return
}
result, err := h.projectService.ListConfigurations(projectUUID, h.dbUsername, "active")
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
if len(result.Configs) == 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "no configurations to export"})
return
}
opts := services.ProjectPricingExportOptions{
IncludeLOT: req.IncludeLOT,
IncludeBOM: req.IncludeBOM,
IncludeEstimate: req.IncludeEstimate,
IncludeStock: req.IncludeStock,
IncludeCompetitor: req.IncludeCompetitor,
}
data, err := h.exportService.ProjectToPricingExportData(result.Configs, opts)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
filename := fmt.Sprintf("%s (%s) pricing.csv", time.Now().Format("2006-01-02"), project.Code)
c.Header("Content-Type", "text/csv; charset=utf-8")
c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
if err := h.exportService.ToPricingCSV(c.Writer, data, opts); err != nil {
c.Error(err)
return
}
}

View File

@@ -26,7 +26,6 @@ func (m *mockConfigService) GetByUUID(uuid string, ownerUsername string) (*model
return m.config, m.err
}
func TestExportCSV_Success(t *testing.T) {
gin.SetMode(gin.TestMode)
@@ -36,6 +35,7 @@ func TestExportCSV_Success(t *testing.T) {
exportSvc,
&mockConfigService{},
nil,
"testuser",
)
// Create JSON request body
@@ -110,6 +110,7 @@ func TestExportCSV_InvalidRequest(t *testing.T) {
exportSvc,
&mockConfigService{},
nil,
"testuser",
)
// Create invalid request (missing required field)
@@ -143,6 +144,7 @@ func TestExportCSV_EmptyItems(t *testing.T) {
exportSvc,
&mockConfigService{},
nil,
"testuser",
)
// Create request with empty items array - should fail binding validation
@@ -184,6 +186,7 @@ func TestExportConfigCSV_Success(t *testing.T) {
exportSvc,
&mockConfigService{config: mockConfig},
nil,
"testuser",
)
// Create HTTP request
@@ -196,9 +199,6 @@ func TestExportConfigCSV_Success(t *testing.T) {
{Key: "uuid", Value: "test-uuid"},
}
handler.ExportConfigCSV(c)
// Check status code
@@ -233,6 +233,7 @@ func TestExportConfigCSV_NotFound(t *testing.T) {
exportSvc,
&mockConfigService{err: errors.New("config not found")},
nil,
"testuser",
)
req, _ := http.NewRequest("GET", "/api/configs/nonexistent-uuid/export", nil)
@@ -243,8 +244,6 @@ func TestExportConfigCSV_NotFound(t *testing.T) {
c.Params = gin.Params{
{Key: "uuid", Value: "nonexistent-uuid"},
}
handler.ExportConfigCSV(c)
// Should return 404 Not Found
@@ -277,6 +276,7 @@ func TestExportConfigCSV_EmptyItems(t *testing.T) {
exportSvc,
&mockConfigService{config: mockConfig},
nil,
"testuser",
)
req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
@@ -287,8 +287,6 @@ func TestExportConfigCSV_EmptyItems(t *testing.T) {
c.Params = gin.Params{
{Key: "uuid", Value: "test-uuid"},
}
handler.ExportConfigCSV(c)
// Should return 400 Bad Request

View File

@@ -0,0 +1,106 @@
package handlers
import (
"net/http"
"strconv"
"strings"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"github.com/gin-gonic/gin"
)
// PartnumberBooksHandler provides read-only access to local partnumber book snapshots.
type PartnumberBooksHandler struct {
localDB *localdb.LocalDB
}
func NewPartnumberBooksHandler(localDB *localdb.LocalDB) *PartnumberBooksHandler {
return &PartnumberBooksHandler{localDB: localDB}
}
// List returns all local partnumber book snapshots.
// GET /api/partnumber-books
func (h *PartnumberBooksHandler) List(c *gin.Context) {
bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
books, err := bookRepo.ListBooks()
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
type bookSummary struct {
ID uint `json:"id"`
ServerID int `json:"server_id"`
Version string `json:"version"`
CreatedAt string `json:"created_at"`
IsActive bool `json:"is_active"`
ItemCount int64 `json:"item_count"`
}
summaries := make([]bookSummary, 0, len(books))
for _, b := range books {
summaries = append(summaries, bookSummary{
ID: b.ID,
ServerID: b.ServerID,
Version: b.Version,
CreatedAt: b.CreatedAt.Format("2006-01-02"),
IsActive: b.IsActive,
ItemCount: bookRepo.CountBookItems(b.ID),
})
}
c.JSON(http.StatusOK, gin.H{
"books": summaries,
"total": len(summaries),
})
}
// GetItems returns items for a partnumber book by server ID.
// GET /api/partnumber-books/:id
func (h *PartnumberBooksHandler) GetItems(c *gin.Context) {
idStr := c.Param("id")
id, err := strconv.ParseUint(idStr, 10, 64)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid book ID"})
return
}
bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "100"))
search := strings.TrimSpace(c.Query("search"))
if page < 1 {
page = 1
}
if perPage < 1 || perPage > 500 {
perPage = 100
}
// Find local book by server_id
var book localdb.LocalPartnumberBook
if err := h.localDB.DB().Where("server_id = ?", id).First(&book).Error; err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "partnumber book not found"})
return
}
items, total, err := bookRepo.GetBookItemsPage(book.ID, search, page, perPage)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"book_id": book.ServerID,
"version": book.Version,
"is_active": book.IsActive,
"partnumbers": book.PartnumbersJSON,
"items": items,
"total": total,
"page": page,
"per_page": perPage,
"search": search,
"book_total": bookRepo.CountBookItems(book.ID),
"lot_count": bookRepo.CountDistinctLots(book.ID),
})
}
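GetItems clamps its query parameters before paging: `page` defaults to 1 and `per_page` falls back to 100 outside the 1..500 window. A standalone sketch of that guard logic:

```go
package main

import "fmt"

// clampPaging mirrors the guards in GetItems: non-positive pages become 1,
// and per-page values outside 1..500 fall back to the default of 100.
func clampPaging(page, perPage int) (int, int) {
	if page < 1 {
		page = 1
	}
	if perPage < 1 || perPage > 500 {
		perPage = 100
	}
	return page, perPage
}

func main() {
	p, pp := clampPaging(0, 1000)
	fmt.Println(p, pp) // 1 100
	p, pp = clampPaging(3, 250)
	fmt.Println(p, pp) // 3 250
}
```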

View File

@@ -181,15 +181,39 @@ func (h *PricelistHandler) GetItems(c *gin.Context) {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
lotNames := make([]string, len(items))
for i, item := range items {
lotNames[i] = item.LotName
}
type compRow struct {
LotName string
LotDescription string
}
var comps []compRow
if len(lotNames) > 0 {
h.localDB.DB().Table("local_components").
Select("lot_name, lot_description").
Where("lot_name IN ?", lotNames).
Scan(&comps)
}
descMap := make(map[string]string, len(comps))
for _, comp := range comps { // renamed from c to avoid shadowing the gin.Context parameter
descMap[comp.LotName] = comp.LotDescription
}
resultItems := make([]gin.H, 0, len(items))
for _, item := range items {
resultItems = append(resultItems, gin.H{
"id": item.ID,
"lot_name": item.LotName,
"lot_description": descMap[item.LotName],
"price": item.Price,
"category": item.LotCategory,
"available_qty": item.AvailableQty,
"partnumbers": []string(item.Partnumbers),
"partnumber_qtys": map[string]interface{}{},
"competitor_names": []string{},
"price_spread_pct": nil,
})
}

View File

@@ -234,6 +234,33 @@ func (h *SyncHandler) SyncPricelists(c *gin.Context) {
h.syncService.RecordSyncHeartbeat()
}
// SyncPartnumberBooks syncs partnumber book snapshots from MariaDB to local SQLite.
// POST /api/sync/partnumber-books
func (h *SyncHandler) SyncPartnumberBooks(c *gin.Context) {
if !h.ensureSyncReadiness(c) {
return
}
startTime := time.Now()
pulled, err := h.syncService.PullPartnumberBooks()
if err != nil {
slog.Error("partnumber books pull failed", "error", err)
c.JSON(http.StatusInternalServerError, gin.H{
"success": false,
"error": err.Error(),
})
return
}
c.JSON(http.StatusOK, SyncResultResponse{
Success: true,
Message: "Partnumber books synced successfully",
Synced: pulled,
Duration: time.Since(startTime).String(),
})
h.syncService.RecordSyncHeartbeat()
}
// SyncAllResponse represents the result of a full sync.
type SyncAllResponse struct {
Success bool `json:"success"`
@@ -489,18 +516,11 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
// Get sync times
lastPricelistSync := h.localDB.GetLastSyncTime()
// Get local counts
configCount := h.localDB.CountConfigurations()
projectCount := h.localDB.CountProjects()
componentCount := h.localDB.CountLocalComponents()
pricelistCount := h.localDB.CountLocalPricelists()
// Get error count (only changes with LastError != "")
errorCount := int(h.localDB.CountErroredChanges())
@@ -535,8 +555,8 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
DBName: dbName,
IsOnline: isOnline,
LastSyncAt: lastPricelistSync,
LotCount: componentCount,
LotLogCount: pricelistCount,
ConfigCount: configCount,
ProjectCount: projectCount,
PendingChanges: changes,
@@ -643,3 +663,37 @@ func (h *SyncHandler) getReadinessCached(maxAge time.Duration) *sync.SyncReadine
h.readinessMu.Unlock()
return readiness
}
// ReportPartnumberSeen pushes unresolved vendor partnumbers to qt_vendor_partnumber_seen on MariaDB.
// POST /api/sync/partnumber-seen
func (h *SyncHandler) ReportPartnumberSeen(c *gin.Context) {
var body struct {
Items []struct {
Partnumber string `json:"partnumber"`
Description string `json:"description"`
Ignored bool `json:"ignored"`
} `json:"items"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
items := make([]sync.SeenPartnumber, 0, len(body.Items))
for _, it := range body.Items {
if it.Partnumber != "" {
items = append(items, sync.SeenPartnumber{
Partnumber: it.Partnumber,
Description: it.Description,
Ignored: it.Ignored,
})
}
}
if err := h.syncService.PushPartnumberSeen(items); err != nil {
c.JSON(http.StatusServiceUnavailable, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"reported": len(items)})
}

View File

@@ -0,0 +1,209 @@
package handlers
import (
"encoding/json"
"errors"
"net/http"
"strings"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"git.mchus.pro/mchus/quoteforge/internal/services"
"github.com/gin-gonic/gin"
)
// VendorSpecHandler handles vendor BOM spec operations for a configuration.
type VendorSpecHandler struct {
localDB *localdb.LocalDB
}
func NewVendorSpecHandler(localDB *localdb.LocalDB) *VendorSpecHandler {
return &VendorSpecHandler{localDB: localDB}
}
// lookupConfig finds an active configuration by UUID using the standard localDB method.
func (h *VendorSpecHandler) lookupConfig(uuid string) (*localdb.LocalConfiguration, error) {
cfg, err := h.localDB.GetConfigurationByUUID(uuid)
if err != nil {
return nil, err
}
if !cfg.IsActive {
return nil, errors.New("not active")
}
return cfg, nil
}
// GetVendorSpec returns the vendor spec (BOM) for a configuration.
// GET /api/configs/:uuid/vendor-spec
func (h *VendorSpecHandler) GetVendorSpec(c *gin.Context) {
cfg, err := h.lookupConfig(c.Param("uuid"))
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
return
}
spec := cfg.VendorSpec
if spec == nil {
spec = localdb.VendorSpec{}
}
c.JSON(http.StatusOK, gin.H{"vendor_spec": spec})
}
// PutVendorSpec saves (replaces) the vendor spec for a configuration.
// PUT /api/configs/:uuid/vendor-spec
func (h *VendorSpecHandler) PutVendorSpec(c *gin.Context) {
cfg, err := h.lookupConfig(c.Param("uuid"))
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
return
}
var body struct {
VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
for i := range body.VendorSpec {
if body.VendorSpec[i].SortOrder == 0 {
body.VendorSpec[i].SortOrder = (i + 1) * 10
}
// Persist canonical LOT mapping only.
body.VendorSpec[i].LotMappings = normalizeLotMappings(body.VendorSpec[i].LotMappings)
body.VendorSpec[i].ResolvedLotName = ""
body.VendorSpec[i].ResolutionSource = ""
body.VendorSpec[i].ManualLotSuggestion = ""
body.VendorSpec[i].LotQtyPerPN = 0
body.VendorSpec[i].LotAllocations = nil
}
spec := localdb.VendorSpec(body.VendorSpec)
specJSON, err := json.Marshal(spec)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
if err := h.localDB.DB().Model(cfg).Update("vendor_spec", string(specJSON)).Error; err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"vendor_spec": spec})
}
func normalizeLotMappings(in []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
if len(in) == 0 {
return nil
}
merged := make(map[string]int, len(in))
order := make([]string, 0, len(in))
for _, m := range in {
lot := strings.TrimSpace(m.LotName)
if lot == "" {
continue
}
qty := m.QuantityPerPN
if qty < 1 {
qty = 1
}
if _, exists := merged[lot]; !exists {
order = append(order, lot)
}
merged[lot] += qty
}
out := make([]localdb.VendorSpecLotMapping, 0, len(order))
for _, lot := range order {
out = append(out, localdb.VendorSpecLotMapping{
LotName: lot,
QuantityPerPN: merged[lot],
})
}
if len(out) == 0 {
return nil
}
return out
}
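The merge semantics of normalizeLotMappings can be sketched in isolation: names are trimmed, blanks dropped, quantities clamped to at least 1, and duplicate LOTs summed while preserving first-seen order. This standalone version uses a local `lotMapping` struct in place of `localdb.VendorSpecLotMapping`:

```go
package main

import (
	"fmt"
	"strings"
)

// lotMapping mirrors localdb.VendorSpecLotMapping for this standalone sketch.
type lotMapping struct {
	LotName       string
	QuantityPerPN int
}

// mergeLotMappings reproduces normalizeLotMappings: trim names, drop blanks,
// clamp quantities to >= 1, and sum duplicates keeping first-seen order.
func mergeLotMappings(in []lotMapping) []lotMapping {
	merged := make(map[string]int, len(in))
	order := make([]string, 0, len(in))
	for _, m := range in {
		lot := strings.TrimSpace(m.LotName)
		if lot == "" {
			continue
		}
		qty := m.QuantityPerPN
		if qty < 1 {
			qty = 1
		}
		if _, ok := merged[lot]; !ok {
			order = append(order, lot)
		}
		merged[lot] += qty
	}
	out := make([]lotMapping, 0, len(order))
	for _, lot := range order {
		out = append(out, lotMapping{LotName: lot, QuantityPerPN: merged[lot]})
	}
	return out
}

func main() {
	got := mergeLotMappings([]lotMapping{
		{LotName: " LOT_A ", QuantityPerPN: 2},
		{LotName: "LOT_B", QuantityPerPN: 0}, // clamped to 1
		{LotName: "LOT_A", QuantityPerPN: 3}, // summed into LOT_A
		{LotName: "  "},                      // blank name, dropped
	})
	fmt.Println(got) // [{LOT_A 5} {LOT_B 1}]
}
```

Keeping a separate `order` slice is what makes the output deterministic despite the map-based accumulation.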
// ResolveVendorSpec resolves vendor PN → LOT without modifying the cart.
// POST /api/configs/:uuid/vendor-spec/resolve
func (h *VendorSpecHandler) ResolveVendorSpec(c *gin.Context) {
if _, err := h.lookupConfig(c.Param("uuid")); err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
return
}
var body struct {
VendorSpec []localdb.VendorSpecItem `json:"vendor_spec"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
bookRepo := repository.NewPartnumberBookRepository(h.localDB.DB())
resolver := services.NewVendorSpecResolver(bookRepo)
resolved, err := resolver.Resolve(body.VendorSpec)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
book, _ := bookRepo.GetActiveBook()
aggregated, err := services.AggregateLOTs(resolved, book, bookRepo)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"resolved": resolved,
"aggregated": aggregated,
})
}
// ApplyVendorSpec applies the resolved BOM to the cart (Estimate items).
// POST /api/configs/:uuid/vendor-spec/apply
func (h *VendorSpecHandler) ApplyVendorSpec(c *gin.Context) {
cfg, err := h.lookupConfig(c.Param("uuid"))
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "configuration not found"})
return
}
var body struct {
Items []struct {
LotName string `json:"lot_name"`
Quantity int `json:"quantity"`
UnitPrice float64 `json:"unit_price"`
} `json:"items"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
return
}
newItems := make(localdb.LocalConfigItems, 0, len(body.Items))
for _, it := range body.Items {
newItems = append(newItems, localdb.LocalConfigItem{
LotName: it.LotName,
Quantity: it.Quantity,
UnitPrice: it.UnitPrice,
})
}
itemsJSON, err := json.Marshal(newItems)
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
if err := h.localDB.DB().Model(cfg).Update("items", string(itemsJSON)).Error; err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{"items": newItems})
}

View File

@@ -3,19 +3,20 @@ package handlers
import (
"html/template"
"strconv"
"strings"
qfassets "git.mchus.pro/mchus/quoteforge"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"git.mchus.pro/mchus/quoteforge/internal/services"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/models"
"github.com/gin-gonic/gin"
)
type WebHandler struct {
templates map[string]*template.Template
localDB *localdb.LocalDB
}
func NewWebHandler(_ string, localDB *localdb.LocalDB) (*WebHandler, error) {
funcMap := template.FuncMap{
"sub": func(a, b int) int { return a - b },
"add": func(a, b int) int { return a + b },
@@ -59,7 +60,7 @@ func NewWebHandler(_ string, componentService *services.ComponentService) (*WebH
templates := make(map[string]*template.Template)
// Load each page template with base
simplePages := []string{"configs.html", "projects.html", "project_detail.html", "pricelists.html", "pricelist_detail.html", "config_revisions.html", "partnumber_books.html"}
for _, page := range simplePages {
var tmpl *template.Template
var err error
@@ -104,8 +105,8 @@ func NewWebHandler(_ string, componentService *services.ComponentService) (*WebH
}
return &WebHandler{
templates: templates,
localDB: localDB,
}, nil
}
@@ -128,36 +129,28 @@ func (h *WebHandler) Index(c *gin.Context) {
}
func (h *WebHandler) Configurator(c *gin.Context) {
uuid := c.Query("uuid")
categories, _ := h.localCategories()
components, total, err := h.localDB.ListComponents(localdb.ComponentFilter{}, 0, 20)
data := gin.H{
"ActivePage": "configurator",
"Categories": categories,
"Components": []localComponentView{},
"Total": int64(0),
"Page": 1,
"PerPage": 20,
"ConfigUUID": uuid,
}
if err == nil {
data["Components"] = toLocalComponentViews(components)
data["Total"] = total
}
h.render(c, "index.html", data)
}
func (h *WebHandler) Configs(c *gin.Context) {
h.render(c, "configs.html", gin.H{"ActivePage": "configs"})
}
@@ -188,29 +181,38 @@ func (h *WebHandler) PricelistDetail(c *gin.Context) {
h.render(c, "pricelist_detail.html", gin.H{"ActivePage": "pricelists"})
}
func (h *WebHandler) PartnumberBooks(c *gin.Context) {
h.render(c, "partnumber_books.html", gin.H{"ActivePage": "partnumber-books"})
}
// Partials for htmx
func (h *WebHandler) ComponentsPartial(c *gin.Context) {
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
if page < 1 {
page = 1
}
filter := localdb.ComponentFilter{
Category: c.Query("category"),
Search: c.Query("search"),
}
if c.Query("has_price") == "true" {
filter.HasPrice = true
}
offset := (page - 1) * 20
data := gin.H{
"Components": []localComponentView{},
"Total": int64(0),
"Page": page,
"PerPage": 20,
}
components, total, err := h.localDB.ListComponents(filter, offset, 20)
if err == nil {
data["Components"] = toLocalComponentViews(components)
data["Total"] = total
}
c.Header("Content-Type", "text/html; charset=utf-8")
@@ -218,3 +220,46 @@ func (h *WebHandler) ComponentsPartial(c *gin.Context) {
tmpl.ExecuteTemplate(c.Writer, "components_list.html", data)
}
}
type localComponentView struct {
LotName string
Description string
Category string
CategoryName string
Model string
CurrentPrice *float64
}
func toLocalComponentViews(items []localdb.LocalComponent) []localComponentView {
result := make([]localComponentView, 0, len(items))
for _, item := range items {
result = append(result, localComponentView{
LotName: item.LotName,
Description: item.LotDescription,
Category: item.Category,
CategoryName: item.Category,
Model: item.Model,
})
}
return result
}
func (h *WebHandler) localCategories() ([]models.Category, error) {
codes, err := h.localDB.GetLocalComponentCategories()
if err != nil || len(codes) == 0 {
return []models.Category{}, err
}
categories := make([]models.Category, 0, len(codes))
for _, code := range codes {
trimmed := strings.TrimSpace(code)
if trimmed == "" {
continue
}
categories = append(categories, models.Category{
Code: trimmed,
Name: trimmed,
})
}
return categories, nil
}

View File

@@ -0,0 +1,97 @@
package localdb
import (
"testing"
"git.mchus.pro/mchus/quoteforge/internal/models"
)
func TestConfigurationConvertersPreserveBusinessFields(t *testing.T) {
estimateID := uint(11)
warehouseID := uint(22)
competitorID := uint(33)
cfg := &models.Configuration{
UUID: "cfg-1",
OwnerUsername: "tester",
Name: "Config",
PricelistID: &estimateID,
WarehousePricelistID: &warehouseID,
CompetitorPricelistID: &competitorID,
DisablePriceRefresh: true,
OnlyInStock: true,
}
local := ConfigurationToLocal(cfg)
if local.WarehousePricelistID == nil || *local.WarehousePricelistID != warehouseID {
t.Fatalf("warehouse pricelist lost in ConfigurationToLocal: %+v", local.WarehousePricelistID)
}
if local.CompetitorPricelistID == nil || *local.CompetitorPricelistID != competitorID {
t.Fatalf("competitor pricelist lost in ConfigurationToLocal: %+v", local.CompetitorPricelistID)
}
if !local.DisablePriceRefresh {
t.Fatalf("disable_price_refresh lost in ConfigurationToLocal")
}
back := LocalToConfiguration(local)
if back.WarehousePricelistID == nil || *back.WarehousePricelistID != warehouseID {
t.Fatalf("warehouse pricelist lost in LocalToConfiguration: %+v", back.WarehousePricelistID)
}
if back.CompetitorPricelistID == nil || *back.CompetitorPricelistID != competitorID {
t.Fatalf("competitor pricelist lost in LocalToConfiguration: %+v", back.CompetitorPricelistID)
}
if !back.DisablePriceRefresh {
t.Fatalf("disable_price_refresh lost in LocalToConfiguration")
}
}
func TestConfigurationSnapshotPreservesBusinessFields(t *testing.T) {
estimateID := uint(11)
warehouseID := uint(22)
competitorID := uint(33)
cfg := &LocalConfiguration{
UUID: "cfg-1",
Name: "Config",
PricelistID: &estimateID,
WarehousePricelistID: &warehouseID,
CompetitorPricelistID: &competitorID,
DisablePriceRefresh: true,
OnlyInStock: true,
VendorSpec: VendorSpec{
{
SortOrder: 10,
VendorPartnumber: "PN-1",
Quantity: 1,
LotMappings: []VendorSpecLotMapping{
{LotName: "LOT_A", QuantityPerPN: 2},
},
},
},
}
raw, err := BuildConfigurationSnapshot(cfg)
if err != nil {
t.Fatalf("BuildConfigurationSnapshot: %v", err)
}
decoded, err := DecodeConfigurationSnapshot(raw)
if err != nil {
t.Fatalf("DecodeConfigurationSnapshot: %v", err)
}
if decoded.WarehousePricelistID == nil || *decoded.WarehousePricelistID != warehouseID {
t.Fatalf("warehouse pricelist lost in snapshot: %+v", decoded.WarehousePricelistID)
}
if decoded.CompetitorPricelistID == nil || *decoded.CompetitorPricelistID != competitorID {
t.Fatalf("competitor pricelist lost in snapshot: %+v", decoded.CompetitorPricelistID)
}
if !decoded.DisablePriceRefresh {
t.Fatalf("disable_price_refresh lost in snapshot")
}
if len(decoded.VendorSpec) != 1 || decoded.VendorSpec[0].VendorPartnumber != "PN-1" {
t.Fatalf("vendor_spec lost in snapshot: %+v", decoded.VendorSpec)
}
if len(decoded.VendorSpec[0].LotMappings) != 1 || decoded.VendorSpec[0].LotMappings[0].LotName != "LOT_A" {
t.Fatalf("lot mappings lost in snapshot: %+v", decoded.VendorSpec)
}
}

View File

@@ -18,31 +18,32 @@ func ConfigurationToLocal(cfg *models.Configuration) *LocalConfiguration {
}
local := &LocalConfiguration{
UUID: cfg.UUID,
ProjectUUID: cfg.ProjectUUID,
IsActive: true,
Name: cfg.Name,
Items: items,
TotalPrice: cfg.TotalPrice,
CustomPrice: cfg.CustomPrice,
Notes: cfg.Notes,
IsTemplate: cfg.IsTemplate,
ServerCount: cfg.ServerCount,
ServerModel: cfg.ServerModel,
SupportCode: cfg.SupportCode,
Article: cfg.Article,
PricelistID: cfg.PricelistID,
WarehousePricelistID: cfg.WarehousePricelistID,
CompetitorPricelistID: cfg.CompetitorPricelistID,
VendorSpec: modelVendorSpecToLocal(cfg.VendorSpec),
DisablePriceRefresh: cfg.DisablePriceRefresh,
OnlyInStock: cfg.OnlyInStock,
Line: cfg.Line,
PriceUpdatedAt: cfg.PriceUpdatedAt,
CreatedAt: cfg.CreatedAt,
UpdatedAt: time.Now(),
SyncStatus: "pending",
OriginalUserID: derefUint(cfg.UserID),
OriginalUsername: cfg.OwnerUsername,
}
if cfg.ID > 0 {
@@ -65,23 +66,28 @@ func LocalToConfiguration(local *LocalConfiguration) *models.Configuration {
}
cfg := &models.Configuration{
UUID: local.UUID,
OwnerUsername: local.OriginalUsername,
ProjectUUID: local.ProjectUUID,
Name: local.Name,
Items: items,
TotalPrice: local.TotalPrice,
CustomPrice: local.CustomPrice,
Notes: local.Notes,
IsTemplate: local.IsTemplate,
ServerCount: local.ServerCount,
ServerModel: local.ServerModel,
SupportCode: local.SupportCode,
Article: local.Article,
PricelistID: local.PricelistID,
WarehousePricelistID: local.WarehousePricelistID,
CompetitorPricelistID: local.CompetitorPricelistID,
VendorSpec: localVendorSpecToModel(local.VendorSpec),
DisablePriceRefresh: local.DisablePriceRefresh,
OnlyInStock: local.OnlyInStock,
Line: local.Line,
PriceUpdatedAt: local.PriceUpdatedAt,
CreatedAt: local.CreatedAt,
}
if local.ServerID != nil {
@@ -105,6 +111,88 @@ func derefUint(v *uint) uint {
return *v
}
func modelVendorSpecToLocal(spec models.VendorSpec) VendorSpec {
if len(spec) == 0 {
return nil
}
out := make(VendorSpec, 0, len(spec))
for _, item := range spec {
row := VendorSpecItem{
SortOrder: item.SortOrder,
VendorPartnumber: item.VendorPartnumber,
Quantity: item.Quantity,
Description: item.Description,
UnitPrice: item.UnitPrice,
TotalPrice: item.TotalPrice,
ResolvedLotName: item.ResolvedLotName,
ResolutionSource: item.ResolutionSource,
ManualLotSuggestion: item.ManualLotSuggestion,
LotQtyPerPN: item.LotQtyPerPN,
}
if len(item.LotAllocations) > 0 {
row.LotAllocations = make([]VendorSpecLotAllocation, 0, len(item.LotAllocations))
for _, alloc := range item.LotAllocations {
row.LotAllocations = append(row.LotAllocations, VendorSpecLotAllocation{
LotName: alloc.LotName,
Quantity: alloc.Quantity,
})
}
}
if len(item.LotMappings) > 0 {
row.LotMappings = make([]VendorSpecLotMapping, 0, len(item.LotMappings))
for _, mapping := range item.LotMappings {
row.LotMappings = append(row.LotMappings, VendorSpecLotMapping{
LotName: mapping.LotName,
QuantityPerPN: mapping.QuantityPerPN,
})
}
}
out = append(out, row)
}
return out
}
func localVendorSpecToModel(spec VendorSpec) models.VendorSpec {
if len(spec) == 0 {
return nil
}
out := make(models.VendorSpec, 0, len(spec))
for _, item := range spec {
row := models.VendorSpecItem{
SortOrder: item.SortOrder,
VendorPartnumber: item.VendorPartnumber,
Quantity: item.Quantity,
Description: item.Description,
UnitPrice: item.UnitPrice,
TotalPrice: item.TotalPrice,
ResolvedLotName: item.ResolvedLotName,
ResolutionSource: item.ResolutionSource,
ManualLotSuggestion: item.ManualLotSuggestion,
LotQtyPerPN: item.LotQtyPerPN,
}
if len(item.LotAllocations) > 0 {
row.LotAllocations = make([]models.VendorSpecLotAllocation, 0, len(item.LotAllocations))
for _, alloc := range item.LotAllocations {
row.LotAllocations = append(row.LotAllocations, models.VendorSpecLotAllocation{
LotName: alloc.LotName,
Quantity: alloc.Quantity,
})
}
}
if len(item.LotMappings) > 0 {
row.LotMappings = make([]models.VendorSpecLotMapping, 0, len(item.LotMappings))
for _, mapping := range item.LotMappings {
row.LotMappings = append(row.LotMappings, models.VendorSpecLotMapping{
LotName: mapping.LotName,
QuantityPerPN: mapping.QuantityPerPN,
})
}
}
out = append(out, row)
}
return out
}
func ProjectToLocal(project *models.Project) *LocalProject {
local := &LocalProject{
UUID: project.UUID,


@@ -7,19 +7,104 @@ import (
"crypto/sha256"
"encoding/base64"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"git.mchus.pro/mchus/quoteforge/internal/appstate"
)
// getEncryptionKey derives a 32-byte key from environment variable or machine ID
func getEncryptionKey() []byte {
const encryptionKeyFileName = "local_encryption.key"
// getEncryptionKey resolves the active encryption key.
// Preference order:
// 1. QUOTEFORGE_ENCRYPTION_KEY env var
// 2. application-managed random key file in the user state directory
func getEncryptionKey() ([]byte, error) {
key := os.Getenv("QUOTEFORGE_ENCRYPTION_KEY")
if key == "" {
// Fallback to a machine-based key (hostname + fixed salt)
hostname, _ := os.Hostname()
key = hostname + "quoteforge-salt-2024"
if key != "" {
hash := sha256.Sum256([]byte(key))
return hash[:], nil
}
// Hash to get exactly 32 bytes for AES-256
stateDir, err := resolveEncryptionStateDir()
if err != nil {
return nil, fmt.Errorf("resolve encryption state dir: %w", err)
}
return loadOrCreateEncryptionKey(filepath.Join(stateDir, encryptionKeyFileName))
}
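The env-var branch above hashes an arbitrary-length passphrase down to exactly 32 bytes so it can serve as an AES-256 key. A minimal standalone sketch of that derivation (function name is illustrative, not from the codebase):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// deriveKey mirrors the QUOTEFORGE_ENCRYPTION_KEY path: SHA-256 maps any
// input to exactly 32 bytes, the key size AES-256 requires.
func deriveKey(passphrase string) []byte {
	sum := sha256.Sum256([]byte(passphrase))
	return sum[:]
}

func main() {
	key := deriveKey("some-operator-supplied-secret")
	fmt.Println(len(key)) // always 32, regardless of input length
}
```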
func resolveEncryptionStateDir() (string, error) {
configPath, err := appstate.ResolveConfigPath("")
if err != nil {
return "", err
}
return filepath.Dir(configPath), nil
}
func loadOrCreateEncryptionKey(path string) ([]byte, error) {
if data, err := os.ReadFile(path); err == nil {
return parseEncryptionKeyFile(data)
} else if !errors.Is(err, os.ErrNotExist) {
return nil, fmt.Errorf("read encryption key: %w", err)
}
if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil {
return nil, fmt.Errorf("create encryption key dir: %w", err)
}
raw := make([]byte, 32)
if _, err := io.ReadFull(rand.Reader, raw); err != nil {
return nil, fmt.Errorf("generate encryption key: %w", err)
}
encoded := base64.StdEncoding.EncodeToString(raw)
if err := writeKeyFile(path, []byte(encoded+"\n")); err != nil {
if errors.Is(err, os.ErrExist) {
data, readErr := os.ReadFile(path)
if readErr != nil {
return nil, fmt.Errorf("read concurrent encryption key: %w", readErr)
}
return parseEncryptionKeyFile(data)
}
return nil, err
}
return raw, nil
}
func writeKeyFile(path string, data []byte) error {
file, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
}
defer file.Close()
if _, err := file.Write(data); err != nil {
return err
}
return file.Sync()
}
func parseEncryptionKeyFile(data []byte) ([]byte, error) {
trimmed := strings.TrimSpace(string(data))
decoded, err := base64.StdEncoding.DecodeString(trimmed)
if err != nil {
return nil, fmt.Errorf("decode encryption key file: %w", err)
}
if len(decoded) != 32 {
return nil, fmt.Errorf("invalid encryption key length: %d", len(decoded))
}
return decoded, nil
}
func getLegacyEncryptionKey() []byte {
hostname, _ := os.Hostname()
key := hostname + "quoteforge-salt-2024"
hash := sha256.Sum256([]byte(key))
return hash[:]
}
@@ -30,7 +115,10 @@ func Encrypt(plaintext string) (string, error) {
return "", nil
}
key := getEncryptionKey()
key, err := getEncryptionKey()
if err != nil {
return "", err
}
block, err := aes.NewCipher(key)
if err != nil {
return "", err
@@ -56,12 +144,50 @@ func Decrypt(ciphertext string) (string, error) {
return "", nil
}
key := getEncryptionKey()
data, err := base64.StdEncoding.DecodeString(ciphertext)
key, err := getEncryptionKey()
if err != nil {
return "", err
}
plaintext, legacy, err := decryptWithKeys(ciphertext, key, getLegacyEncryptionKey())
if err != nil {
return "", err
}
_ = legacy
return plaintext, nil
}
func DecryptWithMetadata(ciphertext string) (string, bool, error) {
if ciphertext == "" {
return "", false, nil
}
key, err := getEncryptionKey()
if err != nil {
return "", false, err
}
return decryptWithKeys(ciphertext, key, getLegacyEncryptionKey())
}
func decryptWithKeys(ciphertext string, primaryKey, legacyKey []byte) (string, bool, error) {
data, err := base64.StdEncoding.DecodeString(ciphertext)
if err != nil {
return "", false, err
}
plaintext, err := decryptWithKey(data, primaryKey)
if err == nil {
return plaintext, false, nil
}
legacyPlaintext, legacyErr := decryptWithKey(data, legacyKey)
if legacyErr == nil {
return legacyPlaintext, true, nil
}
return "", false, err
}
func decryptWithKey(data, key []byte) (string, error) {
block, err := aes.NewCipher(key)
if err != nil {
return "", err


@@ -0,0 +1,97 @@
package localdb
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"io"
"os"
"path/filepath"
"testing"
)
func TestEncryptCreatesPersistentKeyFile(t *testing.T) {
stateDir := t.TempDir()
t.Setenv("QFS_STATE_DIR", stateDir)
t.Setenv("QUOTEFORGE_ENCRYPTION_KEY", "")
ciphertext, err := Encrypt("secret-password")
if err != nil {
t.Fatalf("encrypt: %v", err)
}
if ciphertext == "" {
t.Fatal("expected ciphertext")
}
keyPath := filepath.Join(stateDir, encryptionKeyFileName)
info, err := os.Stat(keyPath)
if err != nil {
t.Fatalf("stat key file: %v", err)
}
if info.Mode().Perm() != 0600 {
t.Fatalf("expected 0600 key file, got %v", info.Mode().Perm())
}
}
func TestDecryptMigratesLegacyCiphertext(t *testing.T) {
stateDir := t.TempDir()
t.Setenv("QFS_STATE_DIR", stateDir)
t.Setenv("QUOTEFORGE_ENCRYPTION_KEY", "")
legacyCiphertext := encryptWithKeyForTest(t, getLegacyEncryptionKey(), "legacy-password")
plaintext, migrated, err := DecryptWithMetadata(legacyCiphertext)
if err != nil {
t.Fatalf("decrypt legacy: %v", err)
}
if plaintext != "legacy-password" {
t.Fatalf("unexpected plaintext: %q", plaintext)
}
if !migrated {
t.Fatal("expected legacy ciphertext to require migration")
}
currentCiphertext, err := Encrypt("legacy-password")
if err != nil {
t.Fatalf("encrypt current: %v", err)
}
plaintext, migrated, err = DecryptWithMetadata(currentCiphertext)
if err != nil {
t.Fatalf("decrypt current: %v", err)
}
if migrated {
t.Fatal("did not expect current ciphertext to require migration")
}
}
func encryptWithKeyForTest(t *testing.T, key []byte, plaintext string) string {
t.Helper()
block, err := aes.NewCipher(key)
if err != nil {
t.Fatalf("new cipher: %v", err)
}
gcm, err := cipher.NewGCM(block)
if err != nil {
t.Fatalf("new gcm: %v", err)
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
t.Fatalf("read nonce: %v", err)
}
ciphertext := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
return base64.StdEncoding.EncodeToString(ciphertext)
}
func TestLegacyEncryptionKeyRemainsDeterministic(t *testing.T) {
hostname, _ := os.Hostname()
expected := sha256.Sum256([]byte(hostname + "quoteforge-salt-2024"))
actual := getLegacyEncryptionKey()
if string(actual) != string(expected[:]) {
t.Fatal("legacy key derivation changed")
}
}


@@ -5,7 +5,10 @@ import (
"testing"
"time"
"github.com/glebarez/sqlite"
"github.com/google/uuid"
"gorm.io/gorm"
"gorm.io/gorm/logger"
)
func TestRunLocalMigrationsBackfillsExistingConfigurations(t *testing.T) {
@@ -253,3 +256,340 @@ func TestRunLocalMigrationsDeduplicatesConfigurationVersionsBySpecAndPrice(t *te
t.Fatalf("expected current_version_id to point to kept latest version v3")
}
}
func TestRunLocalMigrationsBackfillsConfigurationLineNo(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "line_no_backfill.db")
local, err := New(dbPath)
if err != nil {
t.Fatalf("open localdb: %v", err)
}
t.Cleanup(func() { _ = local.Close() })
projectUUID := "project-line"
cfg1 := &LocalConfiguration{
UUID: "line-cfg-1",
ProjectUUID: &projectUUID,
Name: "Cfg 1",
Items: LocalConfigItems{},
SyncStatus: "pending",
OriginalUsername: "tester",
IsActive: true,
CreatedAt: time.Now().Add(-2 * time.Hour),
}
cfg2 := &LocalConfiguration{
UUID: "line-cfg-2",
ProjectUUID: &projectUUID,
Name: "Cfg 2",
Items: LocalConfigItems{},
SyncStatus: "pending",
OriginalUsername: "tester",
IsActive: true,
CreatedAt: time.Now().Add(-1 * time.Hour),
}
if err := local.SaveConfiguration(cfg1); err != nil {
t.Fatalf("save cfg1: %v", err)
}
if err := local.SaveConfiguration(cfg2); err != nil {
t.Fatalf("save cfg2: %v", err)
}
if err := local.DB().Model(&LocalConfiguration{}).Where("uuid IN ?", []string{cfg1.UUID, cfg2.UUID}).Update("line_no", 0).Error; err != nil {
t.Fatalf("reset line_no: %v", err)
}
if err := local.DB().Where("id = ?", "2026_02_19_local_config_line_no").Delete(&LocalSchemaMigration{}).Error; err != nil {
t.Fatalf("delete migration record: %v", err)
}
if err := runLocalMigrations(local.DB()); err != nil {
t.Fatalf("rerun local migrations: %v", err)
}
var rows []LocalConfiguration
if err := local.DB().Where("uuid IN ?", []string{cfg1.UUID, cfg2.UUID}).Order("created_at ASC").Find(&rows).Error; err != nil {
t.Fatalf("load configurations: %v", err)
}
if len(rows) != 2 {
t.Fatalf("expected 2 configurations, got %d", len(rows))
}
if rows[0].Line != 10 || rows[1].Line != 20 {
t.Fatalf("expected line_no [10,20], got [%d,%d]", rows[0].Line, rows[1].Line)
}
}
func TestRunLocalMigrationsDeduplicatesCanonicalPartnumberCatalog(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "partnumber_catalog_dedup.db")
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
firstLots := LocalPartnumberBookLots{
{LotName: "LOT-A", Qty: 1},
}
secondLots := LocalPartnumberBookLots{
{LotName: "LOT-B", Qty: 2},
}
if err := db.Exec(`
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL,
lots_json TEXT NOT NULL,
description TEXT
)
`).Error; err != nil {
t.Fatalf("create dirty local_partnumber_book_items: %v", err)
}
if err := db.Create(&LocalPartnumberBookItem{
Partnumber: "PN-001",
LotsJSON: firstLots,
Description: "",
}).Error; err != nil {
t.Fatalf("insert first duplicate row: %v", err)
}
if err := db.Create(&LocalPartnumberBookItem{
Partnumber: "PN-001",
LotsJSON: secondLots,
Description: "Canonical description",
}).Error; err != nil {
t.Fatalf("insert second duplicate row: %v", err)
}
if err := migrateLocalPartnumberBookCatalog(db); err != nil {
t.Fatalf("migrate local partnumber catalog: %v", err)
}
var items []LocalPartnumberBookItem
if err := db.Order("partnumber ASC").Find(&items).Error; err != nil {
t.Fatalf("load migrated partnumber items: %v", err)
}
if len(items) != 1 {
t.Fatalf("expected 1 deduplicated item, got %d", len(items))
}
if items[0].Partnumber != "PN-001" {
t.Fatalf("unexpected partnumber: %s", items[0].Partnumber)
}
if items[0].Description != "Canonical description" {
t.Fatalf("expected merged description, got %q", items[0].Description)
}
if len(items[0].LotsJSON) != 2 {
t.Fatalf("expected merged lots from duplicates, got %d", len(items[0].LotsJSON))
}
var duplicateCount int64
if err := db.Model(&LocalPartnumberBookItem{}).
Where("partnumber = ?", "PN-001").
Count(&duplicateCount).Error; err != nil {
t.Fatalf("count deduplicated partnumber: %v", err)
}
if duplicateCount != 1 {
t.Fatalf("expected unique partnumber row after migration, got %d", duplicateCount)
}
}
func TestSanitizeLocalPartnumberBookCatalogRemovesRowsWithoutPartnumber(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "sanitize_partnumber_catalog.db")
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
if err := db.Exec(`
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NULL,
lots_json TEXT NOT NULL,
description TEXT
)
`).Error; err != nil {
t.Fatalf("create local_partnumber_book_items: %v", err)
}
if err := db.Exec(`
INSERT INTO local_partnumber_book_items (partnumber, lots_json, description) VALUES
(NULL, '[]', 'null pn'),
('', '[]', 'empty pn'),
('PN-OK', '[]', 'valid pn')
`).Error; err != nil {
t.Fatalf("seed local_partnumber_book_items: %v", err)
}
if err := sanitizeLocalPartnumberBookCatalog(db); err != nil {
t.Fatalf("sanitize local partnumber catalog: %v", err)
}
var items []LocalPartnumberBookItem
if err := db.Order("id ASC").Find(&items).Error; err != nil {
t.Fatalf("load sanitized items: %v", err)
}
if len(items) != 1 {
t.Fatalf("expected 1 valid item after sanitize, got %d", len(items))
}
if items[0].Partnumber != "PN-OK" {
t.Fatalf("expected remaining partnumber PN-OK, got %q", items[0].Partnumber)
}
}
func TestNewMigratesLegacyPartnumberBookCatalogBeforeAutoMigrate(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "legacy_partnumber_catalog.db")
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
if err := db.Exec(`
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL UNIQUE,
lots_json TEXT NOT NULL,
is_primary_pn INTEGER NOT NULL DEFAULT 0,
description TEXT
)
`).Error; err != nil {
t.Fatalf("create legacy local_partnumber_book_items: %v", err)
}
if err := db.Exec(`
INSERT INTO local_partnumber_book_items (partnumber, lots_json, is_primary_pn, description)
VALUES ('PN-001', '[{"lot_name":"CPU_A","qty":1}]', 0, 'Legacy row')
`).Error; err != nil {
t.Fatalf("seed legacy local_partnumber_book_items: %v", err)
}
local, err := New(dbPath)
if err != nil {
t.Fatalf("open localdb with legacy catalog: %v", err)
}
t.Cleanup(func() { _ = local.Close() })
var columns []struct {
Name string `gorm:"column:name"`
}
if err := local.DB().Raw(`SELECT name FROM pragma_table_info('local_partnumber_book_items')`).Scan(&columns).Error; err != nil {
t.Fatalf("load local_partnumber_book_items columns: %v", err)
}
for _, column := range columns {
if column.Name == "is_primary_pn" {
t.Fatalf("expected legacy is_primary_pn column to be removed before automigrate")
}
}
var items []LocalPartnumberBookItem
if err := local.DB().Find(&items).Error; err != nil {
t.Fatalf("load migrated local_partnumber_book_items: %v", err)
}
if len(items) != 1 || items[0].Partnumber != "PN-001" {
t.Fatalf("unexpected migrated rows: %#v", items)
}
}
func TestNewRecoversBrokenPartnumberBookCatalogCache(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "broken_partnumber_catalog.db")
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
if err := db.Exec(`
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL UNIQUE,
lots_json TEXT NOT NULL,
description TEXT
)
`).Error; err != nil {
t.Fatalf("create broken local_partnumber_book_items: %v", err)
}
if err := db.Exec(`
INSERT INTO local_partnumber_book_items (partnumber, lots_json, description)
VALUES ('PN-001', '{not-json}', 'Broken cache row')
`).Error; err != nil {
t.Fatalf("seed broken local_partnumber_book_items: %v", err)
}
local, err := New(dbPath)
if err != nil {
t.Fatalf("open localdb with broken catalog cache: %v", err)
}
t.Cleanup(func() { _ = local.Close() })
var count int64
if err := local.DB().Model(&LocalPartnumberBookItem{}).Count(&count).Error; err != nil {
t.Fatalf("count recovered local_partnumber_book_items: %v", err)
}
if count != 0 {
t.Fatalf("expected empty recovered local_partnumber_book_items, got %d rows", count)
}
var quarantineTables []struct {
Name string `gorm:"column:name"`
}
if err := local.DB().Raw(`
SELECT name
FROM sqlite_master
WHERE type = 'table' AND name LIKE 'local_partnumber_book_items_broken_%'
`).Scan(&quarantineTables).Error; err != nil {
t.Fatalf("load quarantine tables: %v", err)
}
if len(quarantineTables) != 1 {
t.Fatalf("expected one quarantined broken catalog table, got %d", len(quarantineTables))
}
}
func TestCleanupStaleReadOnlyCacheTempTablesDropsShadowTempWhenBaseExists(t *testing.T) {
dbPath := filepath.Join(t.TempDir(), "stale_cache_temp.db")
db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
if err := db.Exec(`
CREATE TABLE local_pricelist_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
pricelist_id INTEGER NOT NULL,
partnumber TEXT,
brand TEXT NOT NULL DEFAULT '',
lot_name TEXT NOT NULL,
description TEXT,
price REAL NOT NULL DEFAULT 0,
quantity INTEGER NOT NULL DEFAULT 0,
reserve INTEGER NOT NULL DEFAULT 0,
available_qty REAL,
partnumbers TEXT,
lot_category TEXT,
created_at DATETIME,
updated_at DATETIME
)
`).Error; err != nil {
t.Fatalf("create local_pricelist_items: %v", err)
}
if err := db.Exec(`
CREATE TABLE local_pricelist_items__temp (
id INTEGER PRIMARY KEY AUTOINCREMENT,
legacy TEXT
)
`).Error; err != nil {
t.Fatalf("create local_pricelist_items__temp: %v", err)
}
if err := cleanupStaleReadOnlyCacheTempTables(db); err != nil {
t.Fatalf("cleanup stale read-only cache temp tables: %v", err)
}
if db.Migrator().HasTable("local_pricelist_items__temp") {
t.Fatalf("expected stale temp table to be dropped")
}
if !db.Migrator().HasTable("local_pricelist_items") {
t.Fatalf("expected base local_pricelist_items table to remain")
}
}


@@ -1,6 +1,7 @@
package localdb
import (
"encoding/json"
"errors"
"fmt"
"log/slog"
@@ -42,6 +43,14 @@ type LocalDB struct {
path string
}
var localReadOnlyCacheTables = []string{
"local_pricelist_items",
"local_pricelists",
"local_components",
"local_partnumber_book_items",
"local_partnumber_books",
}
// ResetData clears local data tables while keeping connection settings.
// It does not drop schema or connection_settings.
func ResetData(dbPath string) error {
@@ -70,7 +79,6 @@ func ResetData(dbPath string) error {
"local_pricelists",
"local_pricelist_items",
"local_components",
"local_remote_migrations_applied",
"local_sync_guard_state",
"pending_changes",
"app_settings",
@@ -111,6 +119,12 @@ func New(dbPath string) (*LocalDB, error) {
if err := ensureLocalProjectsTable(db); err != nil {
return nil, fmt.Errorf("ensure local_projects table: %w", err)
}
if err := prepareLocalPartnumberBookCatalog(db); err != nil {
return nil, fmt.Errorf("prepare local partnumber book catalog: %w", err)
}
if err := cleanupStaleReadOnlyCacheTempTables(db); err != nil {
return nil, fmt.Errorf("cleanup stale read-only cache temp tables: %w", err)
}
// Preflight: ensure local_projects has non-null UUIDs before AutoMigrate rebuilds tables.
if db.Migrator().HasTable(&LocalProject{}) {
@@ -131,22 +145,28 @@ func New(dbPath string) (*LocalDB, error) {
}
// Auto-migrate all local tables
if err := db.AutoMigrate(
&ConnectionSettings{},
&LocalConfiguration{},
&LocalConfigurationVersion{},
&LocalPricelist{},
&LocalPricelistItem{},
&LocalComponent{},
&AppSetting{},
&LocalRemoteMigrationApplied{},
&LocalSyncGuardState{},
&PendingChange{},
); err != nil {
return nil, fmt.Errorf("migrating sqlite database: %w", err)
if err := autoMigrateLocalSchema(db); err != nil {
if recovered, recoveryErr := recoverFromReadOnlyCacheInitFailure(db, err); recoveryErr != nil {
return nil, fmt.Errorf("migrating sqlite database: %w (recovery failed: %v)", err, recoveryErr)
} else if !recovered {
return nil, fmt.Errorf("migrating sqlite database: %w", err)
}
if err := autoMigrateLocalSchema(db); err != nil {
return nil, fmt.Errorf("migrating sqlite database after cache recovery: %w", err)
}
}
if err := ensureLocalPartnumberBookItemTable(db); err != nil {
return nil, fmt.Errorf("ensure local partnumber book item table: %w", err)
}
if err := runLocalMigrations(db); err != nil {
return nil, fmt.Errorf("running local sqlite migrations: %w", err)
if recovered, recoveryErr := recoverFromReadOnlyCacheInitFailure(db, err); recoveryErr != nil {
return nil, fmt.Errorf("running local sqlite migrations: %w (recovery failed: %v)", err, recoveryErr)
} else if !recovered {
return nil, fmt.Errorf("running local sqlite migrations: %w", err)
}
if err := runLocalMigrations(db); err != nil {
return nil, fmt.Errorf("running local sqlite migrations after cache recovery: %w", err)
}
}
slog.Info("local SQLite database initialized", "path", dbPath)
@@ -189,6 +209,282 @@ CREATE TABLE local_projects (
return nil
}
func autoMigrateLocalSchema(db *gorm.DB) error {
return db.AutoMigrate(
&ConnectionSettings{},
&LocalConfiguration{},
&LocalConfigurationVersion{},
&LocalPricelist{},
&LocalPricelistItem{},
&LocalComponent{},
&AppSetting{},
&LocalSyncGuardState{},
&PendingChange{},
&LocalPartnumberBook{},
)
}
func sanitizeLocalPartnumberBookCatalog(db *gorm.DB) error {
if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
return nil
}
// Old local databases may contain partially migrated catalog rows with NULL/empty
// partnumber values. SQLite table rebuild during AutoMigrate fails on such rows once
// the schema enforces NOT NULL, so remove them before AutoMigrate touches the table.
if err := db.Exec(`
DELETE FROM local_partnumber_book_items
WHERE partnumber IS NULL OR TRIM(partnumber) = ''
`).Error; err != nil {
return err
}
return nil
}
func prepareLocalPartnumberBookCatalog(db *gorm.DB) error {
if err := cleanupStaleLocalPartnumberBookCatalogTempTable(db); err != nil {
if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("cleanup stale temp table: %w", err)); recoveryErr != nil {
return recoveryErr
}
return nil
}
if err := sanitizeLocalPartnumberBookCatalog(db); err != nil {
if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("sanitize catalog: %w", err)); recoveryErr != nil {
return recoveryErr
}
return nil
}
if err := migrateLegacyPartnumberBookCatalogBeforeAutoMigrate(db); err != nil {
if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("migrate legacy catalog: %w", err)); recoveryErr != nil {
return recoveryErr
}
return nil
}
if err := ensureLocalPartnumberBookItemTable(db); err != nil {
if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("ensure canonical catalog table: %w", err)); recoveryErr != nil {
return recoveryErr
}
return nil
}
if err := validateLocalPartnumberBookCatalog(db); err != nil {
if recoveryErr := recoverLocalPartnumberBookCatalog(db, fmt.Errorf("validate canonical catalog: %w", err)); recoveryErr != nil {
return recoveryErr
}
}
return nil
}
func cleanupStaleReadOnlyCacheTempTables(db *gorm.DB) error {
for _, tableName := range localReadOnlyCacheTables {
tempName := tableName + "__temp"
if !db.Migrator().HasTable(tempName) {
continue
}
if db.Migrator().HasTable(tableName) {
if err := db.Exec(`DROP TABLE ` + tempName).Error; err != nil {
return err
}
continue
}
if err := quarantineSQLiteTable(db, tempName, localReadOnlyCacheQuarantineTableName(tableName, "temp")); err != nil {
return err
}
}
return nil
}
func cleanupStaleLocalPartnumberBookCatalogTempTable(db *gorm.DB) error {
if !db.Migrator().HasTable("local_partnumber_book_items__temp") {
return nil
}
if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
return db.Exec(`DROP TABLE local_partnumber_book_items__temp`).Error
}
return quarantineSQLiteTable(db, "local_partnumber_book_items__temp", localPartnumberBookCatalogQuarantineTableName("temp"))
}
func migrateLegacyPartnumberBookCatalogBeforeAutoMigrate(db *gorm.DB) error {
if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
return nil
}
// Legacy databases may still have the pre-catalog shape (`book_id`/`lot_name`) or the
// intermediate canonical shape with obsolete columns like `is_primary_pn`. Let the
// explicit rebuild logic normalize this table before GORM AutoMigrate attempts a
// table-copy migration on its own.
return migrateLocalPartnumberBookCatalog(db)
}
func ensureLocalPartnumberBookItemTable(db *gorm.DB) error {
if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
return nil
}
if err := db.Exec(`
CREATE TABLE local_partnumber_book_items (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL UNIQUE,
lots_json TEXT NOT NULL,
description TEXT
)
`).Error; err != nil {
return err
}
return db.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_partnumber_book_items_partnumber ON local_partnumber_book_items(partnumber)`).Error
}
func validateLocalPartnumberBookCatalog(db *gorm.DB) error {
if !db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
return nil
}
type rawCatalogRow struct {
Partnumber string `gorm:"column:partnumber"`
LotsJSON string `gorm:"column:lots_json"`
Description string `gorm:"column:description"`
}
var rows []rawCatalogRow
if err := db.Raw(`
SELECT partnumber, lots_json, COALESCE(description, '') AS description
FROM local_partnumber_book_items
ORDER BY id ASC
`).Scan(&rows).Error; err != nil {
return fmt.Errorf("load canonical catalog rows: %w", err)
}
seen := make(map[string]struct{}, len(rows))
for _, row := range rows {
partnumber := strings.TrimSpace(row.Partnumber)
if partnumber == "" {
return errors.New("catalog contains empty partnumber")
}
if _, exists := seen[partnumber]; exists {
return fmt.Errorf("catalog contains duplicate partnumber %q", partnumber)
}
seen[partnumber] = struct{}{}
if strings.TrimSpace(row.LotsJSON) == "" {
return fmt.Errorf("catalog row %q has empty lots_json", partnumber)
}
var lots LocalPartnumberBookLots
if err := json.Unmarshal([]byte(row.LotsJSON), &lots); err != nil {
return fmt.Errorf("catalog row %q has invalid lots_json: %w", partnumber, err)
}
}
return nil
}
func recoverLocalPartnumberBookCatalog(db *gorm.DB, cause error) error {
slog.Warn("recovering broken local partnumber book catalog", "error", cause.Error())
if err := ensureLocalPartnumberBooksCatalogColumn(db); err != nil {
return fmt.Errorf("ensure local_partnumber_books.partnumbers_json during recovery: %w", err)
}
if db.Migrator().HasTable("local_partnumber_book_items__temp") {
if err := quarantineSQLiteTable(db, "local_partnumber_book_items__temp", localPartnumberBookCatalogQuarantineTableName("temp")); err != nil {
return fmt.Errorf("quarantine local_partnumber_book_items__temp: %w", err)
}
}
if db.Migrator().HasTable(&LocalPartnumberBookItem{}) {
if err := quarantineSQLiteTable(db, "local_partnumber_book_items", localPartnumberBookCatalogQuarantineTableName("broken")); err != nil {
return fmt.Errorf("quarantine local_partnumber_book_items: %w", err)
}
}
if err := ensureLocalPartnumberBookItemTable(db); err != nil {
return fmt.Errorf("recreate local_partnumber_book_items after recovery: %w", err)
}
slog.Warn("local partnumber book catalog reset to empty cache; next sync will rebuild it")
return nil
}
func recoverFromReadOnlyCacheInitFailure(db *gorm.DB, cause error) (bool, error) {
lowerCause := strings.ToLower(cause.Error())
recoveredAny := false
for _, tableName := range affectedReadOnlyCacheTables(lowerCause) {
if err := resetReadOnlyCacheTable(db, tableName); err != nil {
return recoveredAny, err
}
recoveredAny = true
}
if strings.Contains(lowerCause, "local_partnumber_book_items") || strings.Contains(lowerCause, "local_partnumber_books") {
if err := recoverLocalPartnumberBookCatalog(db, cause); err != nil {
return recoveredAny, err
}
recoveredAny = true
}
if recoveredAny {
slog.Warn("recovered read-only local cache tables after startup failure", "error", cause.Error())
}
return recoveredAny, nil
}
func affectedReadOnlyCacheTables(lowerCause string) []string {
affected := make([]string, 0, len(localReadOnlyCacheTables))
for _, tableName := range localReadOnlyCacheTables {
if tableName == "local_partnumber_book_items" || tableName == "local_partnumber_books" {
continue
}
if strings.Contains(lowerCause, tableName) {
affected = append(affected, tableName)
}
}
return affected
}
func resetReadOnlyCacheTable(db *gorm.DB, tableName string) error {
slog.Warn("resetting read-only local cache table", "table", tableName)
tempName := tableName + "__temp"
if db.Migrator().HasTable(tempName) {
if err := quarantineSQLiteTable(db, tempName, localReadOnlyCacheQuarantineTableName(tableName, "temp")); err != nil {
return fmt.Errorf("quarantine temp table %s: %w", tempName, err)
}
}
if db.Migrator().HasTable(tableName) {
if err := quarantineSQLiteTable(db, tableName, localReadOnlyCacheQuarantineTableName(tableName, "broken")); err != nil {
return fmt.Errorf("quarantine table %s: %w", tableName, err)
}
}
return nil
}
func ensureLocalPartnumberBooksCatalogColumn(db *gorm.DB) error {
if !db.Migrator().HasTable(&LocalPartnumberBook{}) {
return nil
}
if db.Migrator().HasColumn(&LocalPartnumberBook{}, "partnumbers_json") {
return nil
}
return db.Exec(`ALTER TABLE local_partnumber_books ADD COLUMN partnumbers_json TEXT NOT NULL DEFAULT '[]'`).Error
}
func quarantineSQLiteTable(db *gorm.DB, tableName string, quarantineName string) error {
if !db.Migrator().HasTable(tableName) {
return nil
}
if tableName == quarantineName {
return nil
}
if db.Migrator().HasTable(quarantineName) {
if err := db.Exec(`DROP TABLE ` + quarantineName).Error; err != nil {
return err
}
}
return db.Exec(`ALTER TABLE ` + tableName + ` RENAME TO ` + quarantineName).Error
}
func localPartnumberBookCatalogQuarantineTableName(kind string) string {
return "local_partnumber_book_items_" + kind + "_" + time.Now().UTC().Format("20060102150405")
}
func localReadOnlyCacheQuarantineTableName(tableName string, kind string) string {
return tableName + "_" + kind + "_" + time.Now().UTC().Format("20060102150405")
}
// HasSettings returns true if connection settings exist
func (l *LocalDB) HasSettings() bool {
var count int64
@@ -204,10 +500,23 @@ func (l *LocalDB) GetSettings() (*ConnectionSettings, error) {
}
// Decrypt password
password, err := Decrypt(settings.PasswordEncrypted)
password, migrated, err := DecryptWithMetadata(settings.PasswordEncrypted)
if err != nil {
return nil, fmt.Errorf("decrypting password: %w", err)
}
if migrated {
encrypted, encryptErr := Encrypt(password)
if encryptErr != nil {
return nil, fmt.Errorf("re-encrypting legacy password: %w", encryptErr)
}
if err := l.db.Model(&ConnectionSettings{}).
Where("id = ?", settings.ID).
Update("password_encrypted", encrypted).Error; err != nil {
return nil, fmt.Errorf("upgrading legacy password encryption: %w", err)
}
}
settings.PasswordEncrypted = password // Return decrypted password in this field
return &settings, nil
@@ -341,7 +650,7 @@ func (l *LocalDB) GetProjectByName(ownerUsername, name string) (*LocalProject, e
func (l *LocalDB) GetProjectConfigurations(projectUUID string) ([]LocalConfiguration, error) {
var configs []LocalConfiguration
err := l.db.Where("project_uuid = ? AND is_active = ?", projectUUID, true).
Order(configurationLineOrderClause()).
Find(&configs).Error
return configs, err
}
@@ -514,9 +823,54 @@ func (l *LocalDB) BackfillConfigurationProjects(defaultOwner string) error {
// SaveConfiguration saves a configuration to local SQLite
func (l *LocalDB) SaveConfiguration(config *LocalConfiguration) error {
if config != nil && config.IsActive && config.Line <= 0 {
line, err := l.NextConfigurationLine(config.ProjectUUID, config.UUID)
if err != nil {
return err
}
config.Line = line
}
return l.db.Save(config).Error
}
func (l *LocalDB) NextConfigurationLine(projectUUID *string, excludeUUID string) (int, error) {
return NextConfigurationLineTx(l.db, projectUUID, excludeUUID)
}
func NextConfigurationLineTx(tx *gorm.DB, projectUUID *string, excludeUUID string) (int, error) {
query := tx.Model(&LocalConfiguration{}).
Where("is_active = ?", true)
trimmedExclude := strings.TrimSpace(excludeUUID)
if trimmedExclude != "" {
query = query.Where("uuid <> ?", trimmedExclude)
}
if projectUUID != nil && strings.TrimSpace(*projectUUID) != "" {
query = query.Where("project_uuid = ?", strings.TrimSpace(*projectUUID))
} else {
query = query.Where("project_uuid IS NULL OR TRIM(project_uuid) = ''")
}
var maxLine int
if err := query.Select("COALESCE(MAX(line_no), 0)").Scan(&maxLine).Error; err != nil {
return 0, fmt.Errorf("read max line_no: %w", err)
}
if maxLine < 0 {
maxLine = 0
}
next := ((maxLine / 10) + 1) * 10
if next < 10 {
next = 10
}
return next, nil
}
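The allocation rule in `NextConfigurationLineTx` rounds past the current maximum `line_no` to the next multiple of 10, leaving gaps for manual reordering. The same arithmetic, as a standalone sketch (hypothetical helper mirroring the function above, not part of this repo):

```go
package main

import "fmt"

// nextLine mirrors the rounding in NextConfigurationLineTx: given the current
// maximum line_no, return the next free multiple of 10 (never below 10).
func nextLine(maxLine int) int {
	if maxLine < 0 {
		maxLine = 0
	}
	next := ((maxLine / 10) + 1) * 10
	if next < 10 {
		next = 10
	}
	return next
}

func main() {
	for _, m := range []int{0, 7, 10, 34, 90} {
		fmt.Printf("max=%d -> next=%d\n", m, nextLine(m))
	}
}
```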
func configurationLineOrderClause() string {
return "CASE WHEN COALESCE(local_configurations.line_no, 0) <= 0 THEN 2147483647 ELSE local_configurations.line_no END ASC, local_configurations.created_at DESC, local_configurations.id DESC"
}
// GetConfigurations returns all local configurations
func (l *LocalDB) GetConfigurations() ([]LocalConfiguration, error) {
var configs []LocalConfiguration
@@ -793,20 +1147,20 @@ func (l *LocalDB) CountLocalPricelistItemsWithEmptyCategory(pricelistID uint) (i
return count, nil
}
// SaveLocalPricelistItems saves pricelist items to local SQLite.
// Duplicate (pricelist_id, lot_name) rows are silently ignored.
func (l *LocalDB) SaveLocalPricelistItems(items []LocalPricelistItem) error {
if len(items) == 0 {
return nil
}
// Batch insert
batchSize := 500
for i := 0; i < len(items); i += batchSize {
end := i + batchSize
if end > len(items) {
end = len(items)
}
if err := l.db.Clauses(clause.OnConflict{DoNothing: true}).CreateInBatches(items[i:end], batchSize).Error; err != nil {
return err
}
}
@@ -1188,42 +1542,6 @@ func (l *LocalDB) repairConfigurationChange(change *PendingChange) error {
return nil
}
// GetRemoteMigrationApplied returns a locally applied remote migration by ID.
func (l *LocalDB) GetRemoteMigrationApplied(id string) (*LocalRemoteMigrationApplied, error) {
var migration LocalRemoteMigrationApplied
if err := l.db.Where("id = ?", id).First(&migration).Error; err != nil {
return nil, err
}
return &migration, nil
}
// UpsertRemoteMigrationApplied writes applied migration metadata.
func (l *LocalDB) UpsertRemoteMigrationApplied(id, checksum, appVersion string, appliedAt time.Time) error {
record := &LocalRemoteMigrationApplied{
ID: id,
Checksum: checksum,
AppVersion: appVersion,
AppliedAt: appliedAt,
}
return l.db.Clauses(clause.OnConflict{
Columns: []clause.Column{{Name: "id"}},
DoUpdates: clause.Assignments(map[string]interface{}{
"checksum": checksum,
"app_version": appVersion,
"applied_at": appliedAt,
}),
}).Create(record).Error
}
// GetLatestAppliedRemoteMigrationID returns last applied remote migration id.
func (l *LocalDB) GetLatestAppliedRemoteMigrationID() (string, error) {
var record LocalRemoteMigrationApplied
if err := l.db.Order("applied_at DESC").First(&record).Error; err != nil {
return "", err
}
return record.ID, nil
}
// GetSyncGuardState returns the latest readiness guard state.
func (l *LocalDB) GetSyncGuardState() (*LocalSyncGuardState, error) {
var state LocalSyncGuardState


@@ -4,6 +4,7 @@ import (
"errors"
"fmt"
"log/slog"
"sort"
"strings"
"time"
@@ -108,6 +109,29 @@ var localMigrations = []localMigration{
name: "Deduplicate configuration revisions by spec+price",
run: deduplicateConfigurationVersionsBySpecAndPrice,
},
{
id: "2026_02_19_local_config_line_no",
name: "Add line_no to local_configurations and backfill ordering",
run: addLocalConfigurationLineNo,
},
{
id: "2026_03_07_local_partnumber_book_catalog",
name: "Convert local partnumber book cache to book membership + deduplicated PN catalog",
run: migrateLocalPartnumberBookCatalog,
},
{
id: "2026_03_13_pricelist_items_dedup_unique",
name: "Deduplicate local_pricelist_items and add unique index on (pricelist_id, lot_name)",
run: deduplicatePricelistItemsAndAddUniqueIndex,
},
}
type localPartnumberCatalogRow struct {
Partnumber string
LotsJSON LocalPartnumberBookLots
Description string
CreatedAt time.Time
ServerID int
}
func runLocalMigrations(db *gorm.DB) error {
@@ -806,3 +830,293 @@ func addLocalConfigurationSupportCode(tx *gorm.DB) error {
}
return nil
}
func addLocalConfigurationLineNo(tx *gorm.DB) error {
type columnInfo struct {
Name string `gorm:"column:name"`
}
var columns []columnInfo
if err := tx.Raw(`
SELECT name FROM pragma_table_info('local_configurations')
WHERE name IN ('line_no')
`).Scan(&columns).Error; err != nil {
return fmt.Errorf("check local_configurations(line_no) existence: %w", err)
}
if len(columns) == 0 {
if err := tx.Exec(`
ALTER TABLE local_configurations
ADD COLUMN line_no INTEGER
`).Error; err != nil {
return fmt.Errorf("add local_configurations.line_no: %w", err)
}
slog.Info("added line_no to local_configurations")
}
if err := tx.Exec(`
WITH ranked AS (
SELECT
id,
ROW_NUMBER() OVER (
PARTITION BY COALESCE(NULLIF(TRIM(project_uuid), ''), '__NO_PROJECT__')
ORDER BY created_at ASC, id ASC
) AS rn
FROM local_configurations
WHERE line_no IS NULL OR line_no <= 0
)
UPDATE local_configurations
SET line_no = (
SELECT rn * 10
FROM ranked
WHERE ranked.id = local_configurations.id
)
WHERE id IN (SELECT id FROM ranked)
`).Error; err != nil {
return fmt.Errorf("backfill local_configurations.line_no: %w", err)
}
if err := tx.Exec(`
CREATE INDEX IF NOT EXISTS idx_local_configurations_project_line_no
ON local_configurations(project_uuid, line_no)
`).Error; err != nil {
return fmt.Errorf("ensure idx_local_configurations_project_line_no: %w", err)
}
return nil
}
func migrateLocalPartnumberBookCatalog(tx *gorm.DB) error {
type columnInfo struct {
Name string `gorm:"column:name"`
}
hasBooksTable := tx.Migrator().HasTable(&LocalPartnumberBook{})
hasItemsTable := tx.Migrator().HasTable(&LocalPartnumberBookItem{})
if !hasItemsTable {
return nil
}
if hasBooksTable {
var bookCols []columnInfo
if err := tx.Raw(`SELECT name FROM pragma_table_info('local_partnumber_books')`).Scan(&bookCols).Error; err != nil {
return fmt.Errorf("load local_partnumber_books columns: %w", err)
}
hasPartnumbersJSON := false
for _, c := range bookCols {
if c.Name == "partnumbers_json" {
hasPartnumbersJSON = true
break
}
}
if !hasPartnumbersJSON {
if err := tx.Exec(`ALTER TABLE local_partnumber_books ADD COLUMN partnumbers_json TEXT NOT NULL DEFAULT '[]'`).Error; err != nil {
return fmt.Errorf("add local_partnumber_books.partnumbers_json: %w", err)
}
}
}
var itemCols []columnInfo
if err := tx.Raw(`SELECT name FROM pragma_table_info('local_partnumber_book_items')`).Scan(&itemCols).Error; err != nil {
return fmt.Errorf("load local_partnumber_book_items columns: %w", err)
}
hasBookID := false
hasLotName := false
hasLotsJSON := false
for _, c := range itemCols {
if c.Name == "book_id" {
hasBookID = true
}
if c.Name == "lot_name" {
hasLotName = true
}
if c.Name == "lots_json" {
hasLotsJSON = true
}
}
if !hasBookID && !hasLotName && !hasLotsJSON {
return nil
}
type legacyRow struct {
BookID uint
Partnumber string
LotName string
Description string
CreatedAt time.Time
ServerID int
}
bookPNs := make(map[uint]map[string]struct{})
catalog := make(map[string]*localPartnumberCatalogRow)
if hasBookID || hasLotName {
var rows []legacyRow
if err := tx.Raw(`
SELECT
i.book_id,
i.partnumber,
i.lot_name,
COALESCE(i.description, '') AS description,
b.created_at,
b.server_id
FROM local_partnumber_book_items i
INNER JOIN local_partnumber_books b ON b.id = i.book_id
ORDER BY b.created_at DESC, b.id DESC, i.partnumber ASC, i.id ASC
`).Scan(&rows).Error; err != nil {
return fmt.Errorf("load legacy local partnumber book items: %w", err)
}
for _, row := range rows {
if _, ok := bookPNs[row.BookID]; !ok {
bookPNs[row.BookID] = make(map[string]struct{})
}
bookPNs[row.BookID][row.Partnumber] = struct{}{}
entry, ok := catalog[row.Partnumber]
if !ok {
entry = &localPartnumberCatalogRow{
Partnumber: row.Partnumber,
Description: row.Description,
CreatedAt: row.CreatedAt,
ServerID: row.ServerID,
}
catalog[row.Partnumber] = entry
}
if row.CreatedAt.After(entry.CreatedAt) || (row.CreatedAt.Equal(entry.CreatedAt) && row.ServerID >= entry.ServerID) {
entry.Description = row.Description
entry.CreatedAt = row.CreatedAt
entry.ServerID = row.ServerID
}
found := false
for i := range entry.LotsJSON {
if entry.LotsJSON[i].LotName == row.LotName {
entry.LotsJSON[i].Qty += 1
found = true
break
}
}
if !found && row.LotName != "" {
entry.LotsJSON = append(entry.LotsJSON, LocalPartnumberBookLot{LotName: row.LotName, Qty: 1})
}
}
var books []LocalPartnumberBook
if err := tx.Find(&books).Error; err != nil {
return fmt.Errorf("load local partnumber books: %w", err)
}
for _, book := range books {
pnSet := bookPNs[book.ID]
partnumbers := make([]string, 0, len(pnSet))
for pn := range pnSet {
partnumbers = append(partnumbers, pn)
}
sort.Strings(partnumbers)
if err := tx.Model(&LocalPartnumberBook{}).
Where("id = ?", book.ID).
Update("partnumbers_json", LocalStringList(partnumbers)).Error; err != nil {
return fmt.Errorf("update partnumbers_json for local book %d: %w", book.ID, err)
}
}
} else {
var items []LocalPartnumberBookItem
if err := tx.Order("id DESC").Find(&items).Error; err != nil {
return fmt.Errorf("load canonical local partnumber book items: %w", err)
}
for _, item := range items {
entry, ok := catalog[item.Partnumber]
if !ok {
copiedLots := append(LocalPartnumberBookLots(nil), item.LotsJSON...)
catalog[item.Partnumber] = &localPartnumberCatalogRow{
Partnumber: item.Partnumber,
LotsJSON: copiedLots,
Description: item.Description,
}
continue
}
if entry.Description == "" && item.Description != "" {
entry.Description = item.Description
}
for _, lot := range item.LotsJSON {
merged := false
for i := range entry.LotsJSON {
if entry.LotsJSON[i].LotName == lot.LotName {
if lot.Qty > entry.LotsJSON[i].Qty {
entry.LotsJSON[i].Qty = lot.Qty
}
merged = true
break
}
}
if !merged {
entry.LotsJSON = append(entry.LotsJSON, lot)
}
}
}
}
return rebuildLocalPartnumberBookCatalog(tx, catalog)
}
func rebuildLocalPartnumberBookCatalog(tx *gorm.DB, catalog map[string]*localPartnumberCatalogRow) error {
if err := tx.Exec(`
CREATE TABLE local_partnumber_book_items_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
partnumber TEXT NOT NULL UNIQUE,
lots_json TEXT NOT NULL,
description TEXT
)
`).Error; err != nil {
return fmt.Errorf("create new local_partnumber_book_items table: %w", err)
}
orderedPartnumbers := make([]string, 0, len(catalog))
for pn := range catalog {
orderedPartnumbers = append(orderedPartnumbers, pn)
}
sort.Strings(orderedPartnumbers)
for _, pn := range orderedPartnumbers {
row := catalog[pn]
sort.Slice(row.LotsJSON, func(i, j int) bool {
return row.LotsJSON[i].LotName < row.LotsJSON[j].LotName
})
if err := tx.Table("local_partnumber_book_items_new").Create(&LocalPartnumberBookItem{
Partnumber: row.Partnumber,
LotsJSON: row.LotsJSON,
Description: row.Description,
}).Error; err != nil {
return fmt.Errorf("insert new local_partnumber_book_items row for %s: %w", pn, err)
}
}
if err := tx.Exec(`DROP TABLE local_partnumber_book_items`).Error; err != nil {
return fmt.Errorf("drop legacy local_partnumber_book_items: %w", err)
}
if err := tx.Exec(`ALTER TABLE local_partnumber_book_items_new RENAME TO local_partnumber_book_items`).Error; err != nil {
return fmt.Errorf("rename new local_partnumber_book_items table: %w", err)
}
if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_partnumber_book_items_partnumber ON local_partnumber_book_items(partnumber)`).Error; err != nil {
return fmt.Errorf("create local_partnumber_book_items partnumber index: %w", err)
}
return nil
}
func deduplicatePricelistItemsAndAddUniqueIndex(tx *gorm.DB) error {
// Remove duplicate (pricelist_id, lot_name) rows keeping only the row with the lowest id.
if err := tx.Exec(`
DELETE FROM local_pricelist_items
WHERE id NOT IN (
SELECT MIN(id) FROM local_pricelist_items
GROUP BY pricelist_id, lot_name
)
`).Error; err != nil {
return fmt.Errorf("deduplicate local_pricelist_items: %w", err)
}
// Add unique index to prevent future duplicates.
if err := tx.Exec(`
CREATE UNIQUE INDEX IF NOT EXISTS idx_local_pricelist_items_pricelist_lot_unique
ON local_pricelist_items(pricelist_id, lot_name)
`).Error; err != nil {
return fmt.Errorf("create unique index on local_pricelist_items: %w", err)
}
slog.Info("deduplicated local_pricelist_items and added unique index")
return nil
}
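The migration above keeps, for each `(pricelist_id, lot_name)` pair, only the row with the lowest `id`. The same keep-min-id rule can be sketched in plain Go, independent of SQLite (simplified row type, not the repo's model):

```go
package main

import (
	"fmt"
	"sort"
)

type row struct {
	ID          uint
	PricelistID uint
	LotName     string
}

type key struct {
	PricelistID uint
	LotName     string
}

// dedup keeps, per (pricelist_id, lot_name), only the row with the lowest id,
// mirroring DELETE ... WHERE id NOT IN (SELECT MIN(id) ... GROUP BY ...).
func dedup(rows []row) []row {
	best := make(map[key]row)
	for _, r := range rows {
		k := key{r.PricelistID, r.LotName}
		if cur, ok := best[k]; !ok || r.ID < cur.ID {
			best[k] = r
		}
	}
	out := make([]row, 0, len(best))
	for _, r := range best {
		out = append(out, r)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].ID < out[j].ID })
	return out
}

func main() {
	rows := []row{
		{1, 10, "CPU_X"}, {2, 10, "CPU_X"}, {3, 10, "RAM_Y"}, {4, 11, "CPU_X"},
	}
	fmt.Println(dedup(rows)) // rows 1, 3 and 4 survive
}
```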


@@ -83,35 +83,38 @@ func (s *LocalStringList) Scan(value interface{}) error {
// LocalConfiguration stores configurations in local SQLite
type LocalConfiguration struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
UUID string `gorm:"uniqueIndex;not null" json:"uuid"`
ServerID *uint `json:"server_id"` // ID on MariaDB server, NULL if local only
ProjectUUID *string `gorm:"index" json:"project_uuid,omitempty"`
CurrentVersionID *string `gorm:"index" json:"current_version_id,omitempty"`
IsActive bool `gorm:"default:true;index" json:"is_active"`
Name string `gorm:"not null" json:"name"`
Items LocalConfigItems `gorm:"type:text" json:"items"` // JSON stored as text in SQLite
TotalPrice *float64 `json:"total_price"`
CustomPrice *float64 `json:"custom_price"`
Notes string `json:"notes"`
IsTemplate bool `gorm:"default:false" json:"is_template"`
ServerCount int `gorm:"default:1" json:"server_count"`
ServerModel string `gorm:"size:100" json:"server_model,omitempty"`
SupportCode string `gorm:"size:20" json:"support_code,omitempty"`
Article string `gorm:"size:80" json:"article,omitempty"`
PricelistID *uint `gorm:"index" json:"pricelist_id,omitempty"`
WarehousePricelistID *uint `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
CompetitorPricelistID *uint `gorm:"index" json:"competitor_pricelist_id,omitempty"`
DisablePriceRefresh bool `gorm:"default:false" json:"disable_price_refresh"`
OnlyInStock bool `gorm:"default:false" json:"only_in_stock"`
VendorSpec VendorSpec `gorm:"type:text" json:"vendor_spec,omitempty"`
Line int `gorm:"column:line_no;index" json:"line"`
PriceUpdatedAt *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
SyncedAt *time.Time `json:"synced_at"`
SyncStatus string `gorm:"default:'local'" json:"sync_status"` // 'local', 'synced', 'modified'
OriginalUserID uint `json:"original_user_id"` // UserID from MariaDB for reference
OriginalUsername string `gorm:"not null;default:'';index" json:"original_username"`
CurrentVersion *LocalConfigurationVersion `gorm:"foreignKey:CurrentVersionID;references:ID" json:"current_version,omitempty"`
Versions []LocalConfigurationVersion `gorm:"foreignKey:ConfigurationUUID;references:UUID" json:"versions,omitempty"`
}
func (LocalConfiguration) TableName() string {
@@ -200,18 +203,6 @@ func (LocalComponent) TableName() string {
return "local_components"
}
// LocalRemoteMigrationApplied tracks remote SQLite migrations received from server and applied locally.
type LocalRemoteMigrationApplied struct {
ID string `gorm:"primaryKey;size:128" json:"id"`
Checksum string `gorm:"size:128;not null" json:"checksum"`
AppVersion string `gorm:"size:64" json:"app_version,omitempty"`
AppliedAt time.Time `gorm:"not null" json:"applied_at"`
}
func (LocalRemoteMigrationApplied) TableName() string {
return "local_remote_migrations_applied"
}
// LocalSyncGuardState stores latest sync readiness decision for UI and preflight checks.
type LocalSyncGuardState struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
@@ -242,3 +233,112 @@ type PendingChange struct {
func (PendingChange) TableName() string {
return "pending_changes"
}
// LocalPartnumberBook stores a version snapshot of the PN→LOT mapping book (pull-only from PriceForge)
type LocalPartnumberBook struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
ServerID int `gorm:"uniqueIndex;not null" json:"server_id"`
Version string `gorm:"not null" json:"version"`
CreatedAt time.Time `gorm:"not null" json:"created_at"`
IsActive bool `gorm:"not null;default:true" json:"is_active"`
PartnumbersJSON LocalStringList `gorm:"column:partnumbers_json;type:text" json:"partnumbers_json"`
}
func (LocalPartnumberBook) TableName() string {
return "local_partnumber_books"
}
type LocalPartnumberBookLot struct {
LotName string `json:"lot_name"`
Qty float64 `json:"qty"`
}
type LocalPartnumberBookLots []LocalPartnumberBookLot
func (l LocalPartnumberBookLots) Value() (driver.Value, error) {
return json.Marshal(l)
}
func (l *LocalPartnumberBookLots) Scan(value interface{}) error {
if value == nil {
*l = make(LocalPartnumberBookLots, 0)
return nil
}
var bytes []byte
switch v := value.(type) {
case []byte:
bytes = v
case string:
bytes = []byte(v)
default:
return errors.New("type assertion failed for LocalPartnumberBookLots")
}
return json.Unmarshal(bytes, l)
}
// LocalPartnumberBookItem stores the canonical PN composition pulled from PriceForge.
type LocalPartnumberBookItem struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
Partnumber string `gorm:"not null" json:"partnumber"`
LotsJSON LocalPartnumberBookLots `gorm:"column:lots_json;type:text" json:"lots_json"`
Description string `json:"description,omitempty"`
}
func (LocalPartnumberBookItem) TableName() string {
return "local_partnumber_book_items"
}
// VendorSpecItem represents a single row in a vendor BOM specification
type VendorSpecItem struct {
SortOrder int `json:"sort_order"`
VendorPartnumber string `json:"vendor_partnumber"`
Quantity int `json:"quantity"`
Description string `json:"description,omitempty"`
UnitPrice *float64 `json:"unit_price,omitempty"`
TotalPrice *float64 `json:"total_price,omitempty"`
ResolvedLotName string `json:"resolved_lot_name,omitempty"`
ResolutionSource string `json:"resolution_source,omitempty"` // "book", "manual", "unresolved"
ManualLotSuggestion string `json:"manual_lot_suggestion,omitempty"`
LotQtyPerPN int `json:"lot_qty_per_pn,omitempty"`
LotAllocations []VendorSpecLotAllocation `json:"lot_allocations,omitempty"`
LotMappings []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}
type VendorSpecLotAllocation struct {
LotName string `json:"lot_name"`
Quantity int `json:"quantity"` // quantity of LOT per 1 vendor PN
}
// VendorSpecLotMapping is the canonical persisted LOT mapping for a vendor PN row.
// It stores all mapped LOTs (base + bundle) uniformly.
type VendorSpecLotMapping struct {
LotName string `json:"lot_name"`
QuantityPerPN int `json:"quantity_per_pn"`
}
// VendorSpec is a JSON-encodable slice of VendorSpecItem
type VendorSpec []VendorSpecItem
func (v VendorSpec) Value() (driver.Value, error) {
if v == nil {
return nil, nil
}
return json.Marshal(v)
}
func (v *VendorSpec) Scan(value interface{}) error {
if value == nil {
*v = nil
return nil
}
var bytes []byte
switch val := value.(type) {
case []byte:
bytes = val
case string:
bytes = []byte(val)
default:
return errors.New("type assertion failed for VendorSpec")
}
return json.Unmarshal(bytes, v)
}
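The `Value`/`Scan` pair above is the standard `database/sql` pattern for persisting a JSON slice as TEXT: `driver.Valuer` marshals on write, `sql.Scanner` unmarshals on read, tolerating both `[]byte` and `string` column representations. A minimal self-contained round-trip with simplified types (not the repo's own):

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"fmt"
)

type Lot struct {
	LotName string  `json:"lot_name"`
	Qty     float64 `json:"qty"`
}

type Lots []Lot

// Value implements driver.Valuer: the slice is stored as a JSON TEXT column.
func (l Lots) Value() (driver.Value, error) { return json.Marshal(l) }

// Scan implements sql.Scanner: accept NULL, []byte, or string column values.
func (l *Lots) Scan(value interface{}) error {
	switch v := value.(type) {
	case nil:
		*l = Lots{}
		return nil
	case []byte:
		return json.Unmarshal(v, l)
	case string:
		return json.Unmarshal([]byte(v), l)
	default:
		return errors.New("unsupported type for Lots")
	}
}

func main() {
	in := Lots{{LotName: "CPU_X", Qty: 2}}
	raw, _ := in.Value() // what the driver would write
	var out Lots
	_ = out.Scan(raw) // what the driver would read back
	fmt.Println(out)
}
```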


@@ -10,31 +10,36 @@ import (
// BuildConfigurationSnapshot serializes the full local configuration state.
func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
snapshot := map[string]interface{}{
"id": localCfg.ID,
"uuid": localCfg.UUID,
"server_id": localCfg.ServerID,
"project_uuid": localCfg.ProjectUUID,
"current_version_id": localCfg.CurrentVersionID,
"is_active": localCfg.IsActive,
"name": localCfg.Name,
"items": localCfg.Items,
"total_price": localCfg.TotalPrice,
"custom_price": localCfg.CustomPrice,
"notes": localCfg.Notes,
"is_template": localCfg.IsTemplate,
"server_count": localCfg.ServerCount,
"server_model": localCfg.ServerModel,
"support_code": localCfg.SupportCode,
"article": localCfg.Article,
"pricelist_id": localCfg.PricelistID,
"warehouse_pricelist_id": localCfg.WarehousePricelistID,
"competitor_pricelist_id": localCfg.CompetitorPricelistID,
"disable_price_refresh": localCfg.DisablePriceRefresh,
"only_in_stock": localCfg.OnlyInStock,
"vendor_spec": localCfg.VendorSpec,
"line": localCfg.Line,
"price_updated_at": localCfg.PriceUpdatedAt,
"created_at": localCfg.CreatedAt,
"updated_at": localCfg.UpdatedAt,
"synced_at": localCfg.SyncedAt,
"sync_status": localCfg.SyncStatus,
"original_user_id": localCfg.OriginalUserID,
"original_username": localCfg.OriginalUsername,
}
data, err := json.Marshal(snapshot)
@@ -47,23 +52,28 @@ func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
// DecodeConfigurationSnapshot returns editable fields from one saved snapshot.
func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
var snapshot struct {
ProjectUUID *string `json:"project_uuid"`
IsActive *bool `json:"is_active"`
Name string `json:"name"`
Items LocalConfigItems `json:"items"`
TotalPrice *float64 `json:"total_price"`
CustomPrice *float64 `json:"custom_price"`
Notes string `json:"notes"`
IsTemplate bool `json:"is_template"`
ServerCount int `json:"server_count"`
ServerModel string `json:"server_model"`
SupportCode string `json:"support_code"`
Article string `json:"article"`
PricelistID *uint `json:"pricelist_id"`
WarehousePricelistID *uint `json:"warehouse_pricelist_id"`
CompetitorPricelistID *uint `json:"competitor_pricelist_id"`
DisablePriceRefresh bool `json:"disable_price_refresh"`
OnlyInStock bool `json:"only_in_stock"`
VendorSpec VendorSpec `json:"vendor_spec"`
Line int `json:"line"`
PriceUpdatedAt *time.Time `json:"price_updated_at"`
OriginalUserID uint `json:"original_user_id"`
OriginalUsername string `json:"original_username"`
}
if err := json.Unmarshal([]byte(data), &snapshot); err != nil {
@@ -76,23 +86,28 @@ func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
}
return &LocalConfiguration{
IsActive: isActive,
ProjectUUID: snapshot.ProjectUUID,
Name: snapshot.Name,
Items: snapshot.Items,
TotalPrice: snapshot.TotalPrice,
CustomPrice: snapshot.CustomPrice,
Notes: snapshot.Notes,
IsTemplate: snapshot.IsTemplate,
ServerCount: snapshot.ServerCount,
ServerModel: snapshot.ServerModel,
SupportCode: snapshot.SupportCode,
Article: snapshot.Article,
PricelistID: snapshot.PricelistID,
WarehousePricelistID: snapshot.WarehousePricelistID,
CompetitorPricelistID: snapshot.CompetitorPricelistID,
DisablePriceRefresh: snapshot.DisablePriceRefresh,
OnlyInStock: snapshot.OnlyInStock,
VendorSpec: snapshot.VendorSpec,
Line: snapshot.Line,
PriceUpdatedAt: snapshot.PriceUpdatedAt,
OriginalUserID: snapshot.OriginalUserID,
OriginalUsername: snapshot.OriginalUsername,
}, nil
}


@@ -1,238 +0,0 @@
package lotmatch
import (
"errors"
"regexp"
"sort"
"strings"
"git.mchus.pro/mchus/quoteforge/internal/models"
"gorm.io/gorm"
)
var (
ErrResolveConflict = errors.New("multiple lot matches")
ErrResolveNotFound = errors.New("lot not found")
)
type LotResolver struct {
partnumberToLots map[string][]string
exactLots map[string]string
allLots []string
}
type MappingMatcher struct {
exact map[string][]string
exactLot map[string]string
wildcard []wildcardMapping
}
type wildcardMapping struct {
lotName string
re *regexp.Regexp
}
func NewLotResolverFromDB(db *gorm.DB) (*LotResolver, error) {
mappings, lots, err := loadMappingsAndLots(db)
if err != nil {
return nil, err
}
return NewLotResolver(mappings, lots), nil
}
func NewMappingMatcherFromDB(db *gorm.DB) (*MappingMatcher, error) {
mappings, lots, err := loadMappingsAndLots(db)
if err != nil {
return nil, err
}
return NewMappingMatcher(mappings, lots), nil
}
func NewLotResolver(mappings []models.LotPartnumber, lots []models.Lot) *LotResolver {
partnumberToLots := make(map[string][]string, len(mappings))
for _, m := range mappings {
pn := NormalizeKey(m.Partnumber)
lot := strings.TrimSpace(m.LotName)
if pn == "" || lot == "" {
continue
}
partnumberToLots[pn] = append(partnumberToLots[pn], lot)
}
for key := range partnumberToLots {
partnumberToLots[key] = uniqueCaseInsensitive(partnumberToLots[key])
}
exactLots := make(map[string]string, len(lots))
allLots := make([]string, 0, len(lots))
for _, l := range lots {
name := strings.TrimSpace(l.LotName)
if name == "" {
continue
}
exactLots[NormalizeKey(name)] = name
allLots = append(allLots, name)
}
sort.Slice(allLots, func(i, j int) bool {
li := len([]rune(allLots[i]))
lj := len([]rune(allLots[j]))
if li == lj {
return allLots[i] < allLots[j]
}
return li > lj
})
return &LotResolver{
partnumberToLots: partnumberToLots,
exactLots: exactLots,
allLots: allLots,
}
}
func NewMappingMatcher(mappings []models.LotPartnumber, lots []models.Lot) *MappingMatcher {
exact := make(map[string][]string, len(mappings))
wildcards := make([]wildcardMapping, 0, len(mappings))
for _, m := range mappings {
pn := NormalizeKey(m.Partnumber)
lot := strings.TrimSpace(m.LotName)
if pn == "" || lot == "" {
continue
}
if strings.Contains(pn, "*") {
pattern := "^" + regexp.QuoteMeta(pn) + "$"
pattern = strings.ReplaceAll(pattern, "\\*", ".*")
re, err := regexp.Compile(pattern)
if err != nil {
continue
}
wildcards = append(wildcards, wildcardMapping{lotName: lot, re: re})
continue
}
exact[pn] = append(exact[pn], lot)
}
for key := range exact {
exact[key] = uniqueCaseInsensitive(exact[key])
}
exactLot := make(map[string]string, len(lots))
for _, l := range lots {
name := strings.TrimSpace(l.LotName)
if name == "" {
continue
}
exactLot[NormalizeKey(name)] = name
}
return &MappingMatcher{
exact: exact,
exactLot: exactLot,
wildcard: wildcards,
}
}
func (r *LotResolver) Resolve(partnumber string) (string, string, error) {
key := NormalizeKey(partnumber)
if key == "" {
return "", "", ErrResolveNotFound
}
if mapped := r.partnumberToLots[key]; len(mapped) > 0 {
if len(mapped) == 1 {
return mapped[0], "mapping_table", nil
}
return "", "", ErrResolveConflict
}
if exact, ok := r.exactLots[key]; ok {
return exact, "article_exact", nil
}
best := ""
bestLen := -1
tie := false
for _, lot := range r.allLots {
lotKey := NormalizeKey(lot)
if lotKey == "" {
continue
}
if strings.HasPrefix(key, lotKey) {
l := len([]rune(lotKey))
if l > bestLen {
best = lot
bestLen = l
tie = false
} else if l == bestLen && !strings.EqualFold(best, lot) {
tie = true
}
}
}
if best == "" {
return "", "", ErrResolveNotFound
}
if tie {
return "", "", ErrResolveConflict
}
return best, "prefix", nil
}
func (m *MappingMatcher) MatchLots(partnumber string) []string {
if m == nil {
return nil
}
key := NormalizeKey(partnumber)
if key == "" {
return nil
}
lots := make([]string, 0, 2)
if exact := m.exact[key]; len(exact) > 0 {
lots = append(lots, exact...)
}
for _, wc := range m.wildcard {
if wc.re == nil || !wc.re.MatchString(key) {
continue
}
lots = append(lots, wc.lotName)
}
if lot, ok := m.exactLot[key]; ok && strings.TrimSpace(lot) != "" {
lots = append(lots, lot)
}
return uniqueCaseInsensitive(lots)
}
func NormalizeKey(v string) string {
s := strings.ToLower(strings.TrimSpace(v))
replacer := strings.NewReplacer(" ", "", "-", "", "_", "", ".", "", "/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "")
return replacer.Replace(s)
}
func loadMappingsAndLots(db *gorm.DB) ([]models.LotPartnumber, []models.Lot, error) {
var mappings []models.LotPartnumber
if err := db.Find(&mappings).Error; err != nil {
return nil, nil, err
}
var lots []models.Lot
if err := db.Select("lot_name").Find(&lots).Error; err != nil {
return nil, nil, err
}
return mappings, lots, nil
}
func uniqueCaseInsensitive(values []string) []string {
seen := make(map[string]struct{}, len(values))
out := make([]string, 0, len(values))
for _, v := range values {
trimmed := strings.TrimSpace(v)
if trimmed == "" {
continue
}
key := strings.ToLower(trimmed)
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out = append(out, trimmed)
}
sort.Slice(out, func(i, j int) bool {
return strings.ToLower(out[i]) < strings.ToLower(out[j])
})
return out
}


@@ -1,62 +0,0 @@
package lotmatch
import (
"testing"
"git.mchus.pro/mchus/quoteforge/internal/models"
)
func TestLotResolverPrecedence(t *testing.T) {
resolver := NewLotResolver(
[]models.LotPartnumber{
{Partnumber: "PN-1", LotName: "LOT_A"},
},
[]models.Lot{
{LotName: "CPU_X_LONG"},
{LotName: "CPU_X"},
},
)
lot, by, err := resolver.Resolve("PN-1")
if err != nil || lot != "LOT_A" || by != "mapping_table" {
t.Fatalf("expected mapping_table LOT_A, got lot=%s by=%s err=%v", lot, by, err)
}
lot, by, err = resolver.Resolve("CPU_X")
if err != nil || lot != "CPU_X" || by != "article_exact" {
t.Fatalf("expected article_exact CPU_X, got lot=%s by=%s err=%v", lot, by, err)
}
lot, by, err = resolver.Resolve("CPU_X_LONG_001")
if err != nil || lot != "CPU_X_LONG" || by != "prefix" {
t.Fatalf("expected prefix CPU_X_LONG, got lot=%s by=%s err=%v", lot, by, err)
}
}
func TestMappingMatcherWildcardAndExactLot(t *testing.T) {
matcher := NewMappingMatcher(
[]models.LotPartnumber{
{Partnumber: "R750*", LotName: "SERVER_R750"},
{Partnumber: "HDD-01", LotName: "HDD_01"},
},
[]models.Lot{
{LotName: "MEM_DDR5_16G_4800"},
},
)
check := func(partnumber string, want string) {
t.Helper()
got := matcher.MatchLots(partnumber)
if len(got) != 1 || got[0] != want {
t.Fatalf("partnumber %s: expected [%s], got %#v", partnumber, want, got)
}
}
check("R750XD", "SERVER_R750")
check("HDD-01", "HDD_01")
check("MEM_DDR5_16G_4800", "MEM_DDR5_16G_4800")
if got := matcher.MatchLots("UNKNOWN"); len(got) != 0 {
t.Fatalf("expected no matches for UNKNOWN, got %#v", got)
}
}


@@ -1,110 +0,0 @@
package middleware
import (
"net/http"
"strings"
"git.mchus.pro/mchus/quoteforge/internal/models"
"git.mchus.pro/mchus/quoteforge/internal/services"
"github.com/gin-gonic/gin"
)
const (
AuthUserKey = "auth_user"
AuthClaimsKey = "auth_claims"
)
func Auth(authService *services.AuthService) gin.HandlerFunc {
return func(c *gin.Context) {
authHeader := c.GetHeader("Authorization")
if authHeader == "" {
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
"error": "authorization header required",
})
return
}
parts := strings.SplitN(authHeader, " ", 2)
if len(parts) != 2 || parts[0] != "Bearer" {
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
"error": "invalid authorization header format",
})
return
}
claims, err := authService.ValidateToken(parts[1])
if err != nil {
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
"error": err.Error(),
})
return
}
c.Set(AuthClaimsKey, claims)
c.Next()
}
}
func RequireRole(roles ...models.UserRole) gin.HandlerFunc {
return func(c *gin.Context) {
claims, exists := c.Get(AuthClaimsKey)
if !exists {
c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{
"error": "authentication required",
})
return
}
authClaims := claims.(*services.Claims)
for _, role := range roles {
if authClaims.Role == role {
c.Next()
return
}
}
c.AbortWithStatusJSON(http.StatusForbidden, gin.H{
"error": "insufficient permissions",
})
}
}
func RequireEditor() gin.HandlerFunc {
return RequireRole(models.RoleEditor, models.RolePricingAdmin, models.RoleAdmin)
}
func RequirePricingAdmin() gin.HandlerFunc {
return RequireRole(models.RolePricingAdmin, models.RoleAdmin)
}
func RequireAdmin() gin.HandlerFunc {
return RequireRole(models.RoleAdmin)
}
// GetClaims extracts auth claims from context
func GetClaims(c *gin.Context) *services.Claims {
claims, exists := c.Get(AuthClaimsKey)
if !exists {
return nil
}
return claims.(*services.Claims)
}
// GetUserID extracts user ID from context
func GetUserID(c *gin.Context) uint {
claims := GetClaims(c)
if claims == nil {
return 0
}
return claims.UserID
}
// GetUsername extracts username from context
func GetUsername(c *gin.Context) string {
claims := GetClaims(c)
if claims == nil {
return ""
}
return claims.Username
}


@@ -39,6 +39,57 @@ func (c ConfigItems) Total() float64 {
return total
}
type VendorSpecLotAllocation struct {
LotName string `json:"lot_name"`
Quantity int `json:"quantity"`
}
type VendorSpecLotMapping struct {
LotName string `json:"lot_name"`
QuantityPerPN int `json:"quantity_per_pn"`
}
type VendorSpecItem struct {
SortOrder int `json:"sort_order"`
VendorPartnumber string `json:"vendor_partnumber"`
Quantity int `json:"quantity"`
Description string `json:"description,omitempty"`
UnitPrice *float64 `json:"unit_price,omitempty"`
TotalPrice *float64 `json:"total_price,omitempty"`
ResolvedLotName string `json:"resolved_lot_name,omitempty"`
ResolutionSource string `json:"resolution_source,omitempty"`
ManualLotSuggestion string `json:"manual_lot_suggestion,omitempty"`
LotQtyPerPN int `json:"lot_qty_per_pn,omitempty"`
LotAllocations []VendorSpecLotAllocation `json:"lot_allocations,omitempty"`
LotMappings []VendorSpecLotMapping `json:"lot_mappings,omitempty"`
}
type VendorSpec []VendorSpecItem
func (v VendorSpec) Value() (driver.Value, error) {
if v == nil {
return nil, nil
}
return json.Marshal(v)
}
func (v *VendorSpec) Scan(value interface{}) error {
if value == nil {
*v = nil
return nil
}
var bytes []byte
switch val := value.(type) {
case []byte:
bytes = val
case string:
bytes = []byte(val)
default:
return errors.New("type assertion failed for VendorSpec")
}
return json.Unmarshal(bytes, v)
}
type Configuration struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
UUID string `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
@@ -59,13 +110,13 @@ type Configuration struct {
PricelistID *uint `gorm:"index" json:"pricelist_id,omitempty"`
WarehousePricelistID *uint `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
CompetitorPricelistID *uint `gorm:"index" json:"competitor_pricelist_id,omitempty"`
VendorSpec VendorSpec `gorm:"type:json" json:"vendor_spec,omitempty"`
DisablePriceRefresh bool `gorm:"default:false" json:"disable_price_refresh"`
OnlyInStock bool `gorm:"default:false" json:"only_in_stock"`
Line int `gorm:"column:line_no;index" json:"line"`
PriceUpdatedAt *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
CreatedAt time.Time `gorm:"autoCreateTime" json:"created_at"`
CurrentVersionNo int `gorm:"-" json:"current_version_no,omitempty"`
User *User `gorm:"foreignKey:UserID" json:"user,omitempty"`
}
func (Configuration) TableName() string {
@@ -80,8 +131,6 @@ type PriceOverride struct {
ValidUntil *time.Time `gorm:"type:date" json:"valid_until"`
Reason string `gorm:"type:text" json:"reason"`
CreatedBy uint `gorm:"not null" json:"created_by"`
Creator *User `gorm:"foreignKey:CreatedBy" json:"creator,omitempty"`
}
func (PriceOverride) TableName() string {


@@ -55,17 +55,6 @@ func (StockLog) TableName() string {
return "stock_log"
}
// LotPartnumber maps external part numbers to internal lots.
type LotPartnumber struct {
Partnumber string `gorm:"column:partnumber;size:255;primaryKey" json:"partnumber"`
LotName string `gorm:"column:lot_name;size:255;primaryKey" json:"lot_name"`
Description *string `gorm:"column:description;size:10000" json:"description,omitempty"`
}
func (LotPartnumber) TableName() string {
return "lot_partnumbers"
}
// StockIgnoreRule contains import ignore pattern rules.
type StockIgnoreRule struct {
ID uint `gorm:"column:id;primaryKey;autoIncrement" json:"id"`


@@ -10,7 +10,6 @@ import (
// AllModels returns all models for auto-migration
func AllModels() []interface{} {
return []interface{}{
&User{},
&Category{},
&LotMetadata{},
&Project{},
@@ -52,54 +51,3 @@ func SeedCategories(db *gorm.DB) error {
}
return nil
}
// SeedAdminUser creates default admin user if not exists
// Default credentials: admin / admin123
func SeedAdminUser(db *gorm.DB, passwordHash string) error {
var count int64
db.Model(&User{}).Where("username = ?", "admin").Count(&count)
if count > 0 {
return nil
}
admin := &User{
Username: "admin",
Email: "admin@example.com",
PasswordHash: passwordHash,
Role: RoleAdmin,
IsActive: true,
}
return db.Create(admin).Error
}
// EnsureDBUser creates or returns the user corresponding to the database connection username.
// This is used when RBAC is disabled - configurations are owned by the DB user.
// Returns the user ID that should be used for all operations.
func EnsureDBUser(db *gorm.DB, dbUsername string) (uint, error) {
if dbUsername == "" {
return 0, nil
}
var user User
err := db.Where("username = ?", dbUsername).First(&user).Error
if err == nil {
return user.ID, nil
}
// User doesn't exist, create it
user = User{
Username: dbUsername,
Email: dbUsername + "@db.local",
PasswordHash: "-", // No password - this is a DB user, not an app user
Role: RoleAdmin,
IsActive: true,
}
if err := db.Create(&user).Error; err != nil {
slog.Error("failed to create DB user", "username", dbUsername, "error", err)
return 0, err
}
slog.Info("created DB user for configurations", "username", dbUsername, "user_id", user.ID)
return user.ID, nil
}


@@ -1,39 +0,0 @@
package models
import "time"
type UserRole string
const (
RoleViewer UserRole = "viewer"
RoleEditor UserRole = "editor"
RolePricingAdmin UserRole = "pricing_admin"
RoleAdmin UserRole = "admin"
)
type User struct {
ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
Username string `gorm:"size:100;uniqueIndex;not null" json:"username"`
Email string `gorm:"size:255;uniqueIndex;not null" json:"email"`
PasswordHash string `gorm:"size:255;not null" json:"-"`
Role UserRole `gorm:"type:enum('viewer','editor','pricing_admin','admin');default:'viewer'" json:"role"`
IsActive bool `gorm:"default:true" json:"is_active"`
CreatedAt time.Time `gorm:"autoCreateTime" json:"created_at"`
UpdatedAt time.Time `gorm:"autoUpdateTime" json:"updated_at"`
}
func (User) TableName() string {
return "qt_users"
}
func (u *User) CanEdit() bool {
return u.Role == RoleEditor || u.Role == RolePricingAdmin || u.Role == RoleAdmin
}
func (u *User) CanManagePricing() bool {
return u.Role == RolePricingAdmin || u.Role == RoleAdmin
}
func (u *User) CanManageUsers() bool {
return u.Role == RoleAdmin
}


@@ -1,6 +1,8 @@
package repository
import (
"strings"
"git.mchus.pro/mchus/quoteforge/internal/models"
"gorm.io/gorm"
)
@@ -14,7 +16,13 @@ func NewConfigurationRepository(db *gorm.DB) *ConfigurationRepository {
}
func (r *ConfigurationRepository) Create(config *models.Configuration) error {
if err := r.db.Create(config).Error; err != nil {
if isUnknownLineNoColumnError(err) {
return r.db.Omit("line_no").Create(config).Error
}
return err
}
return nil
}
func (r *ConfigurationRepository) GetByID(id uint) (*models.Configuration, error) {
@@ -36,7 +44,21 @@ func (r *ConfigurationRepository) GetByUUID(uuid string) (*models.Configuration,
}
func (r *ConfigurationRepository) Update(config *models.Configuration) error {
if err := r.db.Save(config).Error; err != nil {
if isUnknownLineNoColumnError(err) {
return r.db.Omit("line_no").Save(config).Error
}
return err
}
return nil
}
func isUnknownLineNoColumnError(err error) bool {
if err == nil {
return false
}
msg := strings.ToLower(err.Error())
return strings.Contains(msg, "unknown column 'line_no'") || strings.Contains(msg, "no column named line_no")
}
func (r *ConfigurationRepository) Delete(id uint) error {


@@ -0,0 +1,174 @@
package repository
import (
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// PartnumberBookRepository provides access to local partnumber book snapshots (reads plus snapshot upserts).
type PartnumberBookRepository struct {
db *gorm.DB
}
func NewPartnumberBookRepository(db *gorm.DB) *PartnumberBookRepository {
return &PartnumberBookRepository{db: db}
}
// GetActiveBook returns the most recently active local partnumber book.
func (r *PartnumberBookRepository) GetActiveBook() (*localdb.LocalPartnumberBook, error) {
var book localdb.LocalPartnumberBook
err := r.db.Where("is_active = 1").Order("created_at DESC, id DESC").First(&book).Error
if err != nil {
return nil, err
}
return &book, nil
}
// GetBookItems returns all items for the given local book ID.
func (r *PartnumberBookRepository) GetBookItems(bookID uint) ([]localdb.LocalPartnumberBookItem, error) {
book, err := r.getBook(bookID)
if err != nil {
return nil, err
}
items, _, err := r.listCatalogItems(book.PartnumbersJSON, "", 0, 0)
return items, err
}
// GetBookItemsPage returns items for the given local book ID with optional search and pagination.
func (r *PartnumberBookRepository) GetBookItemsPage(bookID uint, search string, page, perPage int) ([]localdb.LocalPartnumberBookItem, int64, error) {
if page < 1 {
page = 1
}
if perPage < 1 {
perPage = 100
}
book, err := r.getBook(bookID)
if err != nil {
return nil, 0, err
}
return r.listCatalogItems(book.PartnumbersJSON, search, page, perPage)
}
// FindLotByPartnumber looks up a partnumber in the given book and returns the matching catalog items.
func (r *PartnumberBookRepository) FindLotByPartnumber(bookID uint, partnumber string) ([]localdb.LocalPartnumberBookItem, error) {
book, err := r.getBook(bookID)
if err != nil {
return nil, err
}
found := false
for _, pn := range book.PartnumbersJSON {
if pn == partnumber {
found = true
break
}
}
if !found {
return nil, nil
}
var items []localdb.LocalPartnumberBookItem
err = r.db.Where("partnumber = ?", partnumber).Find(&items).Error
return items, err
}
// ListBooks returns all local partnumber books ordered newest first.
func (r *PartnumberBookRepository) ListBooks() ([]localdb.LocalPartnumberBook, error) {
var books []localdb.LocalPartnumberBook
err := r.db.Order("created_at DESC, id DESC").Find(&books).Error
return books, err
}
// SaveBook saves a new partnumber book snapshot.
func (r *PartnumberBookRepository) SaveBook(book *localdb.LocalPartnumberBook) error {
return r.db.Save(book).Error
}
// SaveBookItems upserts canonical PN catalog rows.
func (r *PartnumberBookRepository) SaveBookItems(items []localdb.LocalPartnumberBookItem) error {
if len(items) == 0 {
return nil
}
return r.db.Clauses(clause.OnConflict{
Columns: []clause.Column{{Name: "partnumber"}},
DoUpdates: clause.AssignmentColumns([]string{
"lots_json",
"description",
}),
}).CreateInBatches(items, 500).Error
}
// CountBookItems returns the number of items for a given local book ID.
func (r *PartnumberBookRepository) CountBookItems(bookID uint) int64 {
book, err := r.getBook(bookID)
if err != nil {
return 0
}
return int64(len(book.PartnumbersJSON))
}
func (r *PartnumberBookRepository) CountDistinctLots(bookID uint) int64 {
items, err := r.GetBookItems(bookID)
if err != nil {
return 0
}
seen := make(map[string]struct{})
for _, item := range items {
for _, lot := range item.LotsJSON {
if lot.LotName == "" {
continue
}
seen[lot.LotName] = struct{}{}
}
}
return int64(len(seen))
}
func (r *PartnumberBookRepository) HasAllBookItems(bookID uint) bool {
book, err := r.getBook(bookID)
if err != nil {
return false
}
if len(book.PartnumbersJSON) == 0 {
return true
}
var count int64
if err := r.db.Model(&localdb.LocalPartnumberBookItem{}).
Where("partnumber IN ?", []string(book.PartnumbersJSON)).
Count(&count).Error; err != nil {
return false
}
return count == int64(len(book.PartnumbersJSON))
}
func (r *PartnumberBookRepository) getBook(bookID uint) (*localdb.LocalPartnumberBook, error) {
var book localdb.LocalPartnumberBook
if err := r.db.First(&book, bookID).Error; err != nil {
return nil, err
}
return &book, nil
}
func (r *PartnumberBookRepository) listCatalogItems(partnumbers localdb.LocalStringList, search string, page, perPage int) ([]localdb.LocalPartnumberBookItem, int64, error) {
if len(partnumbers) == 0 {
return []localdb.LocalPartnumberBookItem{}, 0, nil
}
query := r.db.Model(&localdb.LocalPartnumberBookItem{}).Where("partnumber IN ?", []string(partnumbers))
if search != "" {
trimmedSearch := "%" + search + "%"
query = query.Where("partnumber LIKE ? OR lots_json LIKE ? OR description LIKE ?", trimmedSearch, trimmedSearch, trimmedSearch)
}
var total int64
if err := query.Count(&total).Error; err != nil {
return nil, 0, err
}
var items []localdb.LocalPartnumberBookItem
if page > 0 && perPage > 0 {
query = query.Offset((page - 1) * perPage).Limit(perPage)
}
err := query.Order("partnumber ASC, id ASC").Find(&items).Error
return items, total, err
}


@@ -3,13 +3,10 @@ package repository
import (
"errors"
"fmt"
"log/slog"
"sort"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/quoteforge/internal/lotmatch"
"git.mchus.pro/mchus/quoteforge/internal/models"
"gorm.io/gorm"
)
@@ -246,94 +243,9 @@ func (r *PricelistRepository) GetItems(pricelistID uint, offset, limit int, sear
items[i].Category = strings.TrimSpace(items[i].LotCategory)
}
// Stock/partnumber enrichment is optional for pricelist item listing.
// Return base pricelist rows (lot_name/price/category) even when DB user lacks
// access to stock mapping tables (e.g. lot_partnumbers, stock_log).
if err := r.enrichItemsWithStock(items); err != nil {
slog.Warn("pricelist items stock enrichment skipped", "pricelist_id", pricelistID, "error", err)
}
return items, total, nil
}
func (r *PricelistRepository) enrichItemsWithStock(items []models.PricelistItem) error {
if len(items) == 0 {
return nil
}
resolver, err := lotmatch.NewLotResolverFromDB(r.db)
if err != nil {
return err
}
type stockRow struct {
Partnumber string `gorm:"column:partnumber"`
Qty *float64 `gorm:"column:qty"`
}
rows := make([]stockRow, 0)
if err := r.db.Raw(`
SELECT s.partnumber, s.qty
FROM stock_log s
INNER JOIN (
SELECT partnumber, MAX(date) AS max_date
FROM stock_log
GROUP BY partnumber
) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
WHERE s.qty IS NOT NULL
`).Scan(&rows).Error; err != nil {
return err
}
lotTotals := make(map[string]float64, len(items))
lotPartnumbers := make(map[string][]string, len(items))
seenPartnumbers := make(map[string]map[string]struct{}, len(items))
for i := range rows {
row := rows[i]
if strings.TrimSpace(row.Partnumber) == "" {
continue
}
lotName, _, resolveErr := resolver.Resolve(row.Partnumber)
if resolveErr != nil || strings.TrimSpace(lotName) == "" {
continue
}
if row.Qty != nil {
lotTotals[lotName] += *row.Qty
}
pn := strings.TrimSpace(row.Partnumber)
if pn == "" {
continue
}
if _, ok := seenPartnumbers[lotName]; !ok {
seenPartnumbers[lotName] = make(map[string]struct{}, 4)
}
key := strings.ToLower(pn)
if _, exists := seenPartnumbers[lotName][key]; exists {
continue
}
seenPartnumbers[lotName][key] = struct{}{}
lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
}
for i := range items {
lotName := items[i].LotName
if qty, ok := lotTotals[lotName]; ok {
qtyCopy := qty
items[i].AvailableQty = &qtyCopy
}
if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
sort.Slice(partnumbers, func(a, b int) bool {
return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
})
items[i].Partnumbers = partnumbers
}
}
return nil
}
// GetLotNames returns distinct lot names from pricelist items.
func (r *PricelistRepository) GetLotNames(pricelistID uint) ([]string, error) {
var lotNames []string


@@ -75,57 +75,6 @@ func TestGenerateVersion_IsolatedBySource(t *testing.T) {
}
}
func TestGetItems_WarehouseAvailableQtyUsesPrefixResolver(t *testing.T) {
repo := newTestPricelistRepository(t)
db := repo.db
warehouse := models.Pricelist{
Source: string(models.PricelistSourceWarehouse),
Version: "S-2026-02-07-001",
CreatedBy: "test",
IsActive: true,
}
if err := db.Create(&warehouse).Error; err != nil {
t.Fatalf("create pricelist: %v", err)
}
if err := db.Create(&models.PricelistItem{
PricelistID: warehouse.ID,
LotName: "SSD_NVME_03.2T",
Price: 100,
}).Error; err != nil {
t.Fatalf("create pricelist item: %v", err)
}
if err := db.Create(&models.Lot{LotName: "SSD_NVME_03.2T"}).Error; err != nil {
t.Fatalf("create lot: %v", err)
}
qty := 5.0
if err := db.Create(&models.StockLog{
Partnumber: "SSD_NVME_03.2T_GEN3_P4610",
Date: time.Now(),
Price: 200,
Qty: &qty,
}).Error; err != nil {
t.Fatalf("create stock log: %v", err)
}
items, total, err := repo.GetItems(warehouse.ID, 0, 20, "")
if err != nil {
t.Fatalf("GetItems: %v", err)
}
if total != 1 {
t.Fatalf("expected total=1, got %d", total)
}
if len(items) != 1 {
t.Fatalf("expected 1 item, got %d", len(items))
}
if items[0].AvailableQty == nil {
t.Fatalf("expected available qty to be set")
}
if *items[0].AvailableQty != 5 {
t.Fatalf("expected available qty=5, got %v", *items[0].AvailableQty)
}
}
func TestGetLatestActiveBySource_SkipsPricelistsWithoutItems(t *testing.T) {
repo := newTestPricelistRepository(t)
db := repo.db
@@ -228,7 +177,7 @@ func newTestPricelistRepository(t *testing.T) *PricelistRepository {
if err != nil {
t.Fatalf("open sqlite: %v", err)
}
if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.LotPartnumber{}, &models.StockLog{}); err != nil {
if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.StockLog{}); err != nil {
t.Fatalf("migrate: %v", err)
}
return NewPricelistRepository(db)


@@ -1,62 +0,0 @@
package repository
import (
"git.mchus.pro/mchus/quoteforge/internal/models"
"gorm.io/gorm"
)
type UserRepository struct {
db *gorm.DB
}
func NewUserRepository(db *gorm.DB) *UserRepository {
return &UserRepository{db: db}
}
func (r *UserRepository) Create(user *models.User) error {
return r.db.Create(user).Error
}
func (r *UserRepository) GetByID(id uint) (*models.User, error) {
var user models.User
err := r.db.First(&user, id).Error
if err != nil {
return nil, err
}
return &user, nil
}
func (r *UserRepository) GetByUsername(username string) (*models.User, error) {
var user models.User
err := r.db.Where("username = ?", username).First(&user).Error
if err != nil {
return nil, err
}
return &user, nil
}
func (r *UserRepository) GetByEmail(email string) (*models.User, error) {
var user models.User
err := r.db.Where("email = ?", email).First(&user).Error
if err != nil {
return nil, err
}
return &user, nil
}
func (r *UserRepository) Update(user *models.User) error {
return r.db.Save(user).Error
}
func (r *UserRepository) Delete(id uint) error {
return r.db.Delete(&models.User{}, id).Error
}
func (r *UserRepository) List(offset, limit int) ([]models.User, int64, error) {
var users []models.User
var total int64
r.db.Model(&models.User{}).Count(&total)
err := r.db.Offset(offset).Limit(limit).Find(&users).Error
return users, total, err
}


@@ -1,180 +0,0 @@
package services
import (
"errors"
"time"
"github.com/golang-jwt/jwt/v5"
"git.mchus.pro/mchus/quoteforge/internal/config"
"git.mchus.pro/mchus/quoteforge/internal/models"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"golang.org/x/crypto/bcrypt"
)
var (
ErrInvalidCredentials = errors.New("invalid username or password")
ErrUserNotFound = errors.New("user not found")
ErrUserInactive = errors.New("user account is inactive")
ErrInvalidToken = errors.New("invalid token")
ErrTokenExpired = errors.New("token expired")
)
type AuthService struct {
userRepo *repository.UserRepository
config config.AuthConfig
}
func NewAuthService(userRepo *repository.UserRepository, cfg config.AuthConfig) *AuthService {
return &AuthService{
userRepo: userRepo,
config: cfg,
}
}
type TokenPair struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresAt int64 `json:"expires_at"`
}
type Claims struct {
UserID uint `json:"user_id"`
Username string `json:"username"`
Role models.UserRole `json:"role"`
jwt.RegisteredClaims
}
func (s *AuthService) Login(username, password string) (*TokenPair, *models.User, error) {
user, err := s.userRepo.GetByUsername(username)
if err != nil {
return nil, nil, ErrInvalidCredentials
}
if !user.IsActive {
return nil, nil, ErrUserInactive
}
if err := bcrypt.CompareHashAndPassword([]byte(user.PasswordHash), []byte(password)); err != nil {
return nil, nil, ErrInvalidCredentials
}
tokens, err := s.generateTokenPair(user)
if err != nil {
return nil, nil, err
}
return tokens, user, nil
}
func (s *AuthService) RefreshTokens(refreshToken string) (*TokenPair, error) {
claims, err := s.ValidateToken(refreshToken)
if err != nil {
return nil, err
}
user, err := s.userRepo.GetByID(claims.UserID)
if err != nil {
return nil, ErrUserNotFound
}
if !user.IsActive {
return nil, ErrUserInactive
}
return s.generateTokenPair(user)
}
func (s *AuthService) ValidateToken(tokenString string) (*Claims, error) {
token, err := jwt.ParseWithClaims(tokenString, &Claims{}, func(token *jwt.Token) (interface{}, error) {
// Reject tokens signed with an unexpected algorithm before returning the HMAC secret.
if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
return nil, ErrInvalidToken
}
return []byte(s.config.JWTSecret), nil
})
if err != nil {
if errors.Is(err, jwt.ErrTokenExpired) {
return nil, ErrTokenExpired
}
return nil, ErrInvalidToken
}
claims, ok := token.Claims.(*Claims)
if !ok || !token.Valid {
return nil, ErrInvalidToken
}
return claims, nil
}
func (s *AuthService) generateTokenPair(user *models.User) (*TokenPair, error) {
now := time.Now()
accessExpiry := now.Add(s.config.TokenExpiry)
refreshExpiry := now.Add(s.config.RefreshExpiry)
accessClaims := &Claims{
UserID: user.ID,
Username: user.Username,
Role: user.Role,
RegisteredClaims: jwt.RegisteredClaims{
ExpiresAt: jwt.NewNumericDate(accessExpiry),
IssuedAt: jwt.NewNumericDate(now),
Subject: user.Username,
},
}
accessToken := jwt.NewWithClaims(jwt.SigningMethodHS256, accessClaims)
accessTokenString, err := accessToken.SignedString([]byte(s.config.JWTSecret))
if err != nil {
return nil, err
}
refreshClaims := &Claims{
UserID: user.ID,
Username: user.Username,
Role: user.Role,
RegisteredClaims: jwt.RegisteredClaims{
ExpiresAt: jwt.NewNumericDate(refreshExpiry),
IssuedAt: jwt.NewNumericDate(now),
Subject: user.Username,
},
}
refreshToken := jwt.NewWithClaims(jwt.SigningMethodHS256, refreshClaims)
refreshTokenString, err := refreshToken.SignedString([]byte(s.config.JWTSecret))
if err != nil {
return nil, err
}
return &TokenPair{
AccessToken: accessTokenString,
RefreshToken: refreshTokenString,
ExpiresAt: accessExpiry.Unix(),
}, nil
}
func (s *AuthService) HashPassword(password string) (string, error) {
hash, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.DefaultCost)
if err != nil {
return "", err
}
return string(hash), nil
}
func (s *AuthService) CreateUser(username, email, password string, role models.UserRole) (*models.User, error) {
hash, err := s.HashPassword(password)
if err != nil {
return nil, err
}
user := &models.User{
Username: username,
Email: email,
PasswordHash: hash,
Role: role,
IsActive: true,
}
if err := s.userRepo.Create(user); err != nil {
return nil, err
}
return user, nil
}


@@ -45,18 +45,21 @@ func NewConfigurationService(
}
type CreateConfigRequest struct {
Name string `json:"name"`
Items models.ConfigItems `json:"items"`
ProjectUUID *string `json:"project_uuid,omitempty"`
CustomPrice *float64 `json:"custom_price"`
Notes string `json:"notes"`
IsTemplate bool `json:"is_template"`
ServerCount int `json:"server_count"`
ServerModel string `json:"server_model,omitempty"`
SupportCode string `json:"support_code,omitempty"`
Article string `json:"article,omitempty"`
PricelistID *uint `json:"pricelist_id,omitempty"`
WarehousePricelistID *uint `json:"warehouse_pricelist_id,omitempty"`
CompetitorPricelistID *uint `json:"competitor_pricelist_id,omitempty"`
DisablePriceRefresh bool `json:"disable_price_refresh"`
OnlyInStock bool `json:"only_in_stock"`
}
type ArticlePreviewRequest struct {
@@ -84,21 +87,24 @@ func (s *ConfigurationService) Create(ownerUsername string, req *CreateConfigReq
}
config := &models.Configuration{
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: projectUUID,
Name: req.Name,
Items: req.Items,
TotalPrice: &total,
CustomPrice: req.CustomPrice,
Notes: req.Notes,
IsTemplate: req.IsTemplate,
ServerCount: req.ServerCount,
ServerModel: req.ServerModel,
SupportCode: req.SupportCode,
Article: req.Article,
PricelistID: pricelistID,
WarehousePricelistID: req.WarehousePricelistID,
CompetitorPricelistID: req.CompetitorPricelistID,
DisablePriceRefresh: req.DisablePriceRefresh,
OnlyInStock: req.OnlyInStock,
}
if err := s.configRepo.Create(config); err != nil {
@@ -163,6 +169,9 @@ func (s *ConfigurationService) Update(uuid string, ownerUsername string, req *Cr
config.SupportCode = req.SupportCode
config.Article = req.Article
config.PricelistID = pricelistID
config.WarehousePricelistID = req.WarehousePricelistID
config.CompetitorPricelistID = req.CompetitorPricelistID
config.DisablePriceRefresh = req.DisablePriceRefresh
config.OnlyInStock = req.OnlyInStock
if err := s.configRepo.Update(config); err != nil {
@@ -230,18 +239,24 @@ func (s *ConfigurationService) CloneToProject(configUUID string, ownerUsername s
}
clone := &models.Configuration{
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: resolvedProjectUUID,
Name: newName,
Items: original.Items,
TotalPrice: &total,
CustomPrice: original.CustomPrice,
Notes: original.Notes,
IsTemplate: false, // Clone is never a template
ServerCount: original.ServerCount,
ServerModel: original.ServerModel,
SupportCode: original.SupportCode,
Article: original.Article,
PricelistID: original.PricelistID,
WarehousePricelistID: original.WarehousePricelistID,
CompetitorPricelistID: original.CompetitorPricelistID,
DisablePriceRefresh: original.DisablePriceRefresh,
OnlyInStock: original.OnlyInStock,
}
if err := s.configRepo.Create(clone); err != nil {
@@ -314,7 +329,13 @@ func (s *ConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigReques
config.Notes = req.Notes
config.IsTemplate = req.IsTemplate
config.ServerCount = req.ServerCount
config.ServerModel = req.ServerModel
config.SupportCode = req.SupportCode
config.Article = req.Article
config.PricelistID = pricelistID
config.WarehousePricelistID = req.WarehousePricelistID
config.CompetitorPricelistID = req.CompetitorPricelistID
config.DisablePriceRefresh = req.DisablePriceRefresh
config.OnlyInStock = req.OnlyInStock
if err := s.configRepo.Update(config); err != nil {
@@ -587,13 +608,7 @@ func (s *ConfigurationService) isOwner(config *models.Configuration, ownerUserna
if config == nil || ownerUsername == "" {
return false
}
if config.OwnerUsername != "" {
return config.OwnerUsername == ownerUsername
}
if config.User != nil {
return config.User.Username == ownerUsername
}
return false
return config.OwnerUsername == ownerUsername
}
// Export configuration as JSON

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"math"
"sort"
"strings"
"time"
@@ -42,6 +43,7 @@ type ExportItem struct {
// ConfigExportBlock represents one configuration (server) in the export.
type ConfigExportBlock struct {
Article string
Line int
ServerCount int
UnitPrice float64 // sum of component prices for one server
Items []ExportItem
@@ -53,6 +55,38 @@ type ProjectExportData struct {
CreatedAt time.Time
}
type ProjectPricingExportOptions struct {
IncludeLOT bool `json:"include_lot"`
IncludeBOM bool `json:"include_bom"`
IncludeEstimate bool `json:"include_estimate"`
IncludeStock bool `json:"include_stock"`
IncludeCompetitor bool `json:"include_competitor"`
}
type ProjectPricingExportData struct {
Configs []ProjectPricingExportConfig
CreatedAt time.Time
}
type ProjectPricingExportConfig struct {
Name string
Article string
Line int
ServerCount int
Rows []ProjectPricingExportRow
}
type ProjectPricingExportRow struct {
LotDisplay string
VendorPN string
Description string
Quantity int
BOMTotal *float64
Estimate *float64
Stock *float64
Competitor *float64
}
// ToCSV writes project export data in the new structured CSV format.
//
// Format:
@@ -92,7 +126,10 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
}
for i, block := range data.Configs {
lineNo := (i + 1) * 10
lineNo := block.Line
if lineNo <= 0 {
lineNo = (i + 1) * 10
}
serverCount := block.ServerCount
if serverCount < 1 {
@@ -104,13 +141,13 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
// Server summary row
serverRow := []string{
fmt.Sprintf("%d", lineNo), // Line
"", // Type
block.Article, // p/n
"", // Description
"", // Qty (1 pcs.)
fmt.Sprintf("%d", serverCount), // Qty (total)
formatPriceInt(block.UnitPrice), // Price (1 pcs.)
formatPriceWithSpace(totalPrice), // Price (total)
"", // Type
block.Article, // p/n
"", // Description
"", // Qty (1 pcs.)
fmt.Sprintf("%d", serverCount), // Qty (total)
formatPriceInt(block.UnitPrice), // Price (1 pcs.)
formatPriceWithSpace(totalPrice), // Price (total)
}
if err := csvWriter.Write(serverRow); err != nil {
return fmt.Errorf("failed to write server row: %w", err)
@@ -124,14 +161,14 @@ func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
// Component rows
for _, item := range sortedItems {
componentRow := []string{
"", // Line
item.Category, // Type
item.LotName, // p/n
"", // Description
fmt.Sprintf("%d", item.Quantity), // Qty (1 pcs.)
"", // Qty (total)
formatPriceComma(item.UnitPrice), // Price (1 pcs.)
"", // Price (total)
"", // Line
item.Category, // Type
item.LotName, // p/n
"", // Description
fmt.Sprintf("%d", item.Quantity), // Qty (1 pcs.)
"", // Qty (total)
formatPriceComma(item.UnitPrice), // Price (1 pcs.)
"", // Price (total)
}
if err := csvWriter.Write(componentRow); err != nil {
return fmt.Errorf("failed to write component row: %w", err)
@@ -163,6 +200,80 @@ func (s *ExportService) ToCSVBytes(data *ProjectExportData) ([]byte, error) {
return buf.Bytes(), nil
}
func (s *ExportService) ProjectToPricingExportData(configs []models.Configuration, opts ProjectPricingExportOptions) (*ProjectPricingExportData, error) {
sortedConfigs := make([]models.Configuration, len(configs))
copy(sortedConfigs, configs)
sort.Slice(sortedConfigs, func(i, j int) bool {
leftLine := sortedConfigs[i].Line
rightLine := sortedConfigs[j].Line
if leftLine <= 0 {
leftLine = int(^uint(0) >> 1)
}
if rightLine <= 0 {
rightLine = int(^uint(0) >> 1)
}
if leftLine != rightLine {
return leftLine < rightLine
}
if !sortedConfigs[i].CreatedAt.Equal(sortedConfigs[j].CreatedAt) {
return sortedConfigs[i].CreatedAt.After(sortedConfigs[j].CreatedAt)
}
return sortedConfigs[i].UUID > sortedConfigs[j].UUID
})
blocks := make([]ProjectPricingExportConfig, 0, len(sortedConfigs))
for i := range sortedConfigs {
block, err := s.buildPricingExportBlock(&sortedConfigs[i], opts)
if err != nil {
return nil, err
}
blocks = append(blocks, block)
}
return &ProjectPricingExportData{
Configs: blocks,
CreatedAt: time.Now(),
}, nil
}
func (s *ExportService) ToPricingCSV(w io.Writer, data *ProjectPricingExportData, opts ProjectPricingExportOptions) error {
if _, err := w.Write([]byte{0xEF, 0xBB, 0xBF}); err != nil {
return fmt.Errorf("failed to write BOM: %w", err)
}
csvWriter := csv.NewWriter(w)
csvWriter.Comma = ';'
defer csvWriter.Flush()
headers := pricingCSVHeaders(opts)
if err := csvWriter.Write(headers); err != nil {
return fmt.Errorf("failed to write pricing header: %w", err)
}
for idx, cfg := range data.Configs {
if err := csvWriter.Write(pricingConfigSummaryRow(cfg, opts)); err != nil {
return fmt.Errorf("failed to write config summary row: %w", err)
}
for _, row := range cfg.Rows {
if err := csvWriter.Write(pricingCSVRow(row, opts)); err != nil {
return fmt.Errorf("failed to write pricing row: %w", err)
}
}
if idx < len(data.Configs)-1 {
if err := csvWriter.Write([]string{}); err != nil {
return fmt.Errorf("failed to write separator row: %w", err)
}
}
}
csvWriter.Flush()
if err := csvWriter.Error(); err != nil {
return fmt.Errorf("csv writer error: %w", err)
}
return nil
}
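`ToPricingCSV` prepends the UTF-8 byte-order mark (`0xEF 0xBB 0xBF`) and uses `;` as the delimiter so spreadsheet applications (notably Russian-locale Excel) detect the encoding and split columns correctly. A minimal standalone sketch of that output shape:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// writeExcelCSV renders rows as semicolon-delimited CSV prefixed with a
// UTF-8 BOM; the BOM must come before any CSV content.
func writeExcelCSV(rows [][]string) ([]byte, error) {
	var buf bytes.Buffer
	if _, err := buf.Write([]byte{0xEF, 0xBB, 0xBF}); err != nil {
		return nil, err
	}
	w := csv.NewWriter(&buf)
	w.Comma = ';'
	for _, row := range rows {
		if err := w.Write(row); err != nil {
			return nil, err
		}
	}
	w.Flush()
	if err := w.Error(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := writeExcelCSV([][]string{{"PN", "Цена"}, {"X-1", "1 000,50"}})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out[3:])) // body after the 3-byte BOM
}
```

Note that `csv.Writer` buffers internally, so the final `Flush`/`Error` pair is what surfaces deferred write failures — the same reason the real `ToPricingCSV` checks `csvWriter.Error()` after flushing.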
// ConfigToExportData converts a single configuration into ProjectExportData.
func (s *ExportService) ConfigToExportData(cfg *models.Configuration) *ProjectExportData {
block := s.buildExportBlock(cfg)
@@ -174,9 +285,30 @@ func (s *ExportService) ConfigToExportData(cfg *models.Configuration) *ProjectEx
// ProjectToExportData converts multiple configurations into ProjectExportData.
func (s *ExportService) ProjectToExportData(configs []models.Configuration) *ProjectExportData {
sortedConfigs := make([]models.Configuration, len(configs))
copy(sortedConfigs, configs)
sort.Slice(sortedConfigs, func(i, j int) bool {
leftLine := sortedConfigs[i].Line
rightLine := sortedConfigs[j].Line
if leftLine <= 0 {
leftLine = int(^uint(0) >> 1)
}
if rightLine <= 0 {
rightLine = int(^uint(0) >> 1)
}
if leftLine != rightLine {
return leftLine < rightLine
}
if !sortedConfigs[i].CreatedAt.Equal(sortedConfigs[j].CreatedAt) {
return sortedConfigs[i].CreatedAt.After(sortedConfigs[j].CreatedAt)
}
return sortedConfigs[i].UUID > sortedConfigs[j].UUID
})
blocks := make([]ConfigExportBlock, 0, len(configs))
for i := range configs {
blocks = append(blocks, s.buildExportBlock(&configs[i]))
for i := range sortedConfigs {
blocks = append(blocks, s.buildExportBlock(&sortedConfigs[i]))
}
return &ProjectExportData{
Configs: blocks,
@@ -214,12 +346,106 @@ func (s *ExportService) buildExportBlock(cfg *models.Configuration) ConfigExport
return ConfigExportBlock{
Article: cfg.Article,
Line: cfg.Line,
ServerCount: serverCount,
UnitPrice: unitTotal,
Items: items,
}
}
func (s *ExportService) buildPricingExportBlock(cfg *models.Configuration, opts ProjectPricingExportOptions) (ProjectPricingExportConfig, error) {
block := ProjectPricingExportConfig{
Name: cfg.Name,
Article: cfg.Article,
Line: cfg.Line,
ServerCount: exportPositiveInt(cfg.ServerCount, 1),
Rows: make([]ProjectPricingExportRow, 0),
}
if s.localDB == nil {
for _, item := range cfg.Items {
block.Rows = append(block.Rows, ProjectPricingExportRow{
LotDisplay: item.LotName,
VendorPN: "—",
Quantity: item.Quantity,
Estimate: floatPtr(item.UnitPrice * float64(item.Quantity)),
})
}
return block, nil
}
localCfg, err := s.localDB.GetConfigurationByUUID(cfg.UUID)
if err != nil {
localCfg = nil // not in the local cache: fall back to cart-derived rows below
}
priceMap := s.resolvePricingTotals(cfg, localCfg, opts)
componentDescriptions := s.resolveLotDescriptions(cfg, localCfg)
if opts.IncludeBOM && localCfg != nil && len(localCfg.VendorSpec) > 0 {
coveredLots := make(map[string]struct{})
for _, row := range localCfg.VendorSpec {
rowMappings := normalizeLotMappings(row.LotMappings)
for _, mapping := range rowMappings {
coveredLots[mapping.LotName] = struct{}{}
}
description := strings.TrimSpace(row.Description)
if description == "" && len(rowMappings) > 0 {
description = componentDescriptions[rowMappings[0].LotName]
}
pricingRow := ProjectPricingExportRow{
LotDisplay: formatLotDisplay(rowMappings),
VendorPN: row.VendorPartnumber,
Description: description,
Quantity: exportPositiveInt(row.Quantity, 1),
BOMTotal: vendorRowTotal(row),
Estimate: computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Estimate }),
Stock: computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Stock }),
Competitor: computeMappingTotal(priceMap, rowMappings, row.Quantity, func(p pricingLevels) *float64 { return p.Competitor }),
}
block.Rows = append(block.Rows, pricingRow)
}
for _, item := range cfg.Items {
if item.LotName == "" {
continue
}
if _, ok := coveredLots[item.LotName]; ok {
continue
}
estimate := estimateOnlyTotal(priceMap[item.LotName].Estimate, item.UnitPrice, item.Quantity)
block.Rows = append(block.Rows, ProjectPricingExportRow{
LotDisplay: item.LotName,
VendorPN: "—",
Description: componentDescriptions[item.LotName],
Quantity: exportPositiveInt(item.Quantity, 1),
Estimate: estimate,
Stock: totalForUnitPrice(priceMap[item.LotName].Stock, item.Quantity),
Competitor: totalForUnitPrice(priceMap[item.LotName].Competitor, item.Quantity),
})
}
return block, nil
}
for _, item := range cfg.Items {
if item.LotName == "" {
continue
}
estimate := estimateOnlyTotal(priceMap[item.LotName].Estimate, item.UnitPrice, item.Quantity)
block.Rows = append(block.Rows, ProjectPricingExportRow{
LotDisplay: item.LotName,
VendorPN: "—",
Description: componentDescriptions[item.LotName],
Quantity: exportPositiveInt(item.Quantity, 1),
Estimate: estimate,
Stock: totalForUnitPrice(priceMap[item.LotName].Stock, item.Quantity),
Competitor: totalForUnitPrice(priceMap[item.LotName].Competitor, item.Quantity),
})
}
return block, nil
}
// resolveCategories returns lot_name → category map.
// Primary source: pricelist items (lot_category). Fallback: local_components table.
func (s *ExportService) resolveCategories(pricelistID *uint, lotNames []string) map[string]string {
@@ -276,6 +502,324 @@ func sortItemsByCategory(items []ExportItem, categoryOrder map[string]int) {
}
}
type pricingLevels struct {
Estimate *float64
Stock *float64
Competitor *float64
}
func (s *ExportService) resolvePricingTotals(cfg *models.Configuration, localCfg *localdb.LocalConfiguration, opts ProjectPricingExportOptions) map[string]pricingLevels {
result := map[string]pricingLevels{}
lots := collectPricingLots(cfg, localCfg, opts.IncludeBOM)
if len(lots) == 0 || s.localDB == nil {
return result
}
estimateID := cfg.PricelistID
if estimateID == nil || *estimateID == 0 {
if latest, err := s.localDB.GetLatestLocalPricelistBySource("estimate"); err == nil && latest != nil {
estimateID = &latest.ServerID
}
}
var warehouseID *uint
var competitorID *uint
if localCfg != nil {
warehouseID = localCfg.WarehousePricelistID
competitorID = localCfg.CompetitorPricelistID
}
if warehouseID == nil || *warehouseID == 0 {
if latest, err := s.localDB.GetLatestLocalPricelistBySource("warehouse"); err == nil && latest != nil {
warehouseID = &latest.ServerID
}
}
if competitorID == nil || *competitorID == 0 {
if latest, err := s.localDB.GetLatestLocalPricelistBySource("competitor"); err == nil && latest != nil {
competitorID = &latest.ServerID
}
}
for _, lot := range lots {
level := pricingLevels{}
level.Estimate = s.lookupPricePointer(estimateID, lot)
level.Stock = s.lookupPricePointer(warehouseID, lot)
level.Competitor = s.lookupPricePointer(competitorID, lot)
result[lot] = level
}
return result
}
func (s *ExportService) lookupPricePointer(serverPricelistID *uint, lotName string) *float64 {
if s.localDB == nil || serverPricelistID == nil || *serverPricelistID == 0 || strings.TrimSpace(lotName) == "" {
return nil
}
localPL, err := s.localDB.GetLocalPricelistByServerID(*serverPricelistID)
if err != nil {
return nil
}
price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
if err != nil || price <= 0 {
return nil
}
return floatPtr(price)
}
func (s *ExportService) resolveLotDescriptions(cfg *models.Configuration, localCfg *localdb.LocalConfiguration) map[string]string {
lots := collectPricingLots(cfg, localCfg, true)
result := make(map[string]string, len(lots))
if s.localDB == nil {
return result
}
for _, lot := range lots {
component, err := s.localDB.GetLocalComponent(lot)
if err != nil {
continue
}
result[lot] = component.LotDescription
}
return result
}
func collectPricingLots(cfg *models.Configuration, localCfg *localdb.LocalConfiguration, includeBOM bool) []string {
seen := map[string]struct{}{}
out := make([]string, 0)
if includeBOM && localCfg != nil {
for _, row := range localCfg.VendorSpec {
for _, mapping := range normalizeLotMappings(row.LotMappings) {
if _, ok := seen[mapping.LotName]; ok {
continue
}
seen[mapping.LotName] = struct{}{}
out = append(out, mapping.LotName)
}
}
}
for _, item := range cfg.Items {
lot := strings.TrimSpace(item.LotName)
if lot == "" {
continue
}
if _, ok := seen[lot]; ok {
continue
}
seen[lot] = struct{}{}
out = append(out, lot)
}
return out
}
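`collectPricingLots` merges BOM-mapped lots and cart lots while preserving first-seen order, using a `map[string]struct{}` purely as a membership set. The pattern in isolation:

```go
package main

import (
	"fmt"
	"strings"
)

// dedupeOrdered trims each value, drops empties, and removes duplicates
// while preserving the order of first appearance — the same seen-set
// pattern collectPricingLots uses to merge lot names from two sources.
func dedupeOrdered(values []string) []string {
	seen := make(map[string]struct{}, len(values))
	out := make([]string, 0, len(values))
	for _, v := range values {
		v = strings.TrimSpace(v)
		if v == "" {
			continue
		}
		if _, ok := seen[v]; ok {
			continue
		}
		seen[v] = struct{}{}
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(dedupeOrdered([]string{"LOT_A", " LOT_B ", "LOT_A", "", "LOT_C"}))
	// prints [LOT_A LOT_B LOT_C]
}
```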
func normalizeLotMappings(mappings []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
if len(mappings) == 0 {
return nil
}
out := make([]localdb.VendorSpecLotMapping, 0, len(mappings))
for _, mapping := range mappings {
lot := strings.TrimSpace(mapping.LotName)
if lot == "" {
continue
}
qty := mapping.QuantityPerPN
if qty < 1 {
qty = 1
}
out = append(out, localdb.VendorSpecLotMapping{
LotName: lot,
QuantityPerPN: qty,
})
}
return out
}
func vendorRowTotal(row localdb.VendorSpecItem) *float64 {
if row.TotalPrice != nil {
return floatPtr(*row.TotalPrice)
}
if row.UnitPrice == nil {
return nil
}
return floatPtr(*row.UnitPrice * float64(exportPositiveInt(row.Quantity, 1)))
}
func computeMappingTotal(priceMap map[string]pricingLevels, mappings []localdb.VendorSpecLotMapping, pnQty int, selector func(pricingLevels) *float64) *float64 {
if len(mappings) == 0 {
return nil
}
total := 0.0
hasValue := false
qty := exportPositiveInt(pnQty, 1)
for _, mapping := range mappings {
price := selector(priceMap[mapping.LotName])
if price == nil || *price <= 0 {
continue
}
total += *price * float64(qty*mapping.QuantityPerPN)
hasValue = true
}
if !hasValue {
return nil
}
return floatPtr(total)
}
func totalForUnitPrice(unitPrice *float64, quantity int) *float64 {
if unitPrice == nil || *unitPrice <= 0 {
return nil
}
total := *unitPrice * float64(exportPositiveInt(quantity, 1))
return &total
}
func estimateOnlyTotal(estimatePrice *float64, fallbackUnitPrice float64, quantity int) *float64 {
if estimatePrice != nil && *estimatePrice > 0 {
return totalForUnitPrice(estimatePrice, quantity)
}
if fallbackUnitPrice <= 0 {
return nil
}
total := fallbackUnitPrice * float64(maxInt(quantity, 1))
return &total
}
func pricingCSVHeaders(opts ProjectPricingExportOptions) []string {
headers := make([]string, 0, 8)
headers = append(headers, "Line Item")
if opts.IncludeLOT {
headers = append(headers, "LOT")
}
headers = append(headers, "PN вендора", "Описание", "Кол-во")
if opts.IncludeBOM {
headers = append(headers, "BOM")
}
if opts.IncludeEstimate {
headers = append(headers, "Estimate")
}
if opts.IncludeStock {
headers = append(headers, "Stock")
}
if opts.IncludeCompetitor {
headers = append(headers, "Конкуренты")
}
return headers
}
func pricingCSVRow(row ProjectPricingExportRow, opts ProjectPricingExportOptions) []string {
record := make([]string, 0, 8)
record = append(record, "")
if opts.IncludeLOT {
record = append(record, emptyDash(row.LotDisplay))
}
record = append(record,
emptyDash(row.VendorPN),
emptyDash(row.Description),
fmt.Sprintf("%d", exportPositiveInt(row.Quantity, 1)),
)
if opts.IncludeBOM {
record = append(record, formatMoneyValue(row.BOMTotal))
}
if opts.IncludeEstimate {
record = append(record, formatMoneyValue(row.Estimate))
}
if opts.IncludeStock {
record = append(record, formatMoneyValue(row.Stock))
}
if opts.IncludeCompetitor {
record = append(record, formatMoneyValue(row.Competitor))
}
return record
}
func pricingConfigSummaryRow(cfg ProjectPricingExportConfig, opts ProjectPricingExportOptions) []string {
record := make([]string, 0, 8)
record = append(record, fmt.Sprintf("%d", cfg.Line))
if opts.IncludeLOT {
record = append(record, "")
}
record = append(record,
"",
emptyDash(cfg.Name),
fmt.Sprintf("%d", exportPositiveInt(cfg.ServerCount, 1)),
)
if opts.IncludeBOM {
record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.BOMTotal })))
}
if opts.IncludeEstimate {
record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Estimate })))
}
if opts.IncludeStock {
record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Stock })))
}
if opts.IncludeCompetitor {
record = append(record, formatMoneyValue(sumPricingColumn(cfg.Rows, func(row ProjectPricingExportRow) *float64 { return row.Competitor })))
}
return record
}
func formatLotDisplay(mappings []localdb.VendorSpecLotMapping) string {
switch len(mappings) {
case 0:
return "н/д"
case 1:
return mappings[0].LotName
default:
return fmt.Sprintf("%s +%d", mappings[0].LotName, len(mappings)-1)
}
}
func formatMoneyValue(value *float64) string {
if value == nil {
return "—"
}
n := math.Round(*value*100) / 100
sign := ""
if n < 0 {
sign = "-"
n = -n
}
whole := int64(n)
fraction := int(math.Round((n - float64(whole)) * 100))
if fraction == 100 {
whole++
fraction = 0
}
return fmt.Sprintf("%s%s,%02d", sign, formatIntWithSpace(whole), fraction)
}
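`formatMoneyValue` rounds to two decimals, carries the rounding overflow (`frac == 100`) into the whole part, and renders "1 234,50"-style values with a dash for nil. A self-contained sketch; `groupWithSpaces` here is a hypothetical stand-in for the `formatIntWithSpace` helper the real code calls:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// groupWithSpaces inserts a space every three digits from the right.
func groupWithSpaces(n int64) string {
	s := strconv.FormatInt(n, 10)
	for i := len(s) - 3; i > 0; i -= 3 {
		s = s[:i] + " " + s[i:]
	}
	return s
}

// formatMoney mirrors formatMoneyValue: nil renders a dash, otherwise the
// value is rounded to 2 decimals and printed with a comma separator.
func formatMoney(value *float64) string {
	if value == nil {
		return "—"
	}
	n := math.Round(*value*100) / 100
	sign := ""
	if n < 0 {
		sign, n = "-", -n
	}
	whole := int64(n)
	frac := int(math.Round((n - float64(whole)) * 100))
	if frac == 100 { // rounding pushed the fraction over: carry into whole
		whole++
		frac = 0
	}
	return fmt.Sprintf("%s%s,%02d", sign, groupWithSpaces(whole), frac)
}

func main() {
	fmt.Println(formatMoney(nil)) // —
	v := 1234.5
	fmt.Println(formatMoney(&v)) // 1 234,50
	neg := -42.5
	fmt.Println(formatMoney(&neg)) // -42,50
}
```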
func emptyDash(value string) string {
if strings.TrimSpace(value) == "" {
return "—"
}
return value
}
func sumPricingColumn(rows []ProjectPricingExportRow, selector func(ProjectPricingExportRow) *float64) *float64 {
total := 0.0
hasValue := false
for _, row := range rows {
value := selector(row)
if value == nil {
continue
}
total += *value
hasValue = true
}
if !hasValue {
return nil
}
return floatPtr(total)
}
func floatPtr(value float64) *float64 {
v := value
return &v
}
func exportPositiveInt(value, fallback int) int {
if value < 1 {
return fallback
}
return value
}
// formatPriceComma formats a price with comma as decimal separator (e.g., "2074,5").
// Trailing zeros after the comma are trimmed, and if the value is an integer, no comma is shown.
func formatPriceComma(value float64) string {

View File

@@ -8,6 +8,7 @@ import (
"time"
"git.mchus.pro/mchus/quoteforge/internal/config"
"git.mchus.pro/mchus/quoteforge/internal/models"
)
func newTestProjectData(items []ExportItem, article string, serverCount int) *ProjectExportData {
@@ -357,6 +358,51 @@ func TestToCSV_MultipleBlocks(t *testing.T) {
}
}
func TestProjectToExportData_SortsByLine(t *testing.T) {
svc := NewExportService(config.ExportConfig{}, nil, nil)
configs := []models.Configuration{
{
UUID: "cfg-1",
Line: 30,
Article: "ART-30",
ServerCount: 1,
Items: models.ConfigItems{{LotName: "LOT-30", Quantity: 1, UnitPrice: 300}},
CreatedAt: time.Now().Add(-1 * time.Hour),
},
{
UUID: "cfg-2",
Line: 10,
Article: "ART-10",
ServerCount: 1,
Items: models.ConfigItems{{LotName: "LOT-10", Quantity: 1, UnitPrice: 100}},
CreatedAt: time.Now().Add(-2 * time.Hour),
},
{
UUID: "cfg-3",
Line: 20,
Article: "ART-20",
ServerCount: 1,
Items: models.ConfigItems{{LotName: "LOT-20", Quantity: 1, UnitPrice: 200}},
CreatedAt: time.Now().Add(-3 * time.Hour),
},
}
data := svc.ProjectToExportData(configs)
if len(data.Configs) != 3 {
t.Fatalf("expected 3 blocks, got %d", len(data.Configs))
}
if data.Configs[0].Article != "ART-10" || data.Configs[0].Line != 10 {
t.Fatalf("first block must be line 10, got article=%s line=%d", data.Configs[0].Article, data.Configs[0].Line)
}
if data.Configs[1].Article != "ART-20" || data.Configs[1].Line != 20 {
t.Fatalf("second block must be line 20, got article=%s line=%d", data.Configs[1].Article, data.Configs[1].Line)
}
if data.Configs[2].Article != "ART-30" || data.Configs[2].Line != 30 {
t.Fatalf("third block must be line 30, got article=%s line=%d", data.Configs[2].Article, data.Configs[2].Line)
}
}
func TestFormatPriceWithSpace(t *testing.T) {
tests := []struct {
input float64
@@ -398,6 +444,117 @@ func TestFormatPriceComma(t *testing.T) {
}
}
func TestToPricingCSV_UsesSelectedColumns(t *testing.T) {
svc := NewExportService(config.ExportConfig{}, nil, nil)
data := &ProjectPricingExportData{
Configs: []ProjectPricingExportConfig{
{
Name: "Config A",
Article: "ART-1",
Line: 10,
ServerCount: 2,
Rows: []ProjectPricingExportRow{
{
LotDisplay: "LOT_A +1",
VendorPN: "PN-001",
Description: "Bundle row",
Quantity: 2,
BOMTotal: floatPtr(2400.5),
Estimate: floatPtr(2000),
Stock: floatPtr(1800.25),
},
},
},
},
CreatedAt: time.Now(),
}
opts := ProjectPricingExportOptions{
IncludeLOT: true,
IncludeBOM: true,
IncludeEstimate: true,
IncludeStock: true,
}
var buf bytes.Buffer
if err := svc.ToPricingCSV(&buf, data, opts); err != nil {
t.Fatalf("ToPricingCSV failed: %v", err)
}
reader := csv.NewReader(bytes.NewReader(buf.Bytes()[3:]))
reader.Comma = ';'
reader.FieldsPerRecord = -1
header, err := reader.Read()
if err != nil {
t.Fatalf("read header row: %v", err)
}
expectedHeader := []string{"Line Item", "LOT", "PN вендора", "Описание", "Кол-во", "BOM", "Estimate", "Stock"}
for i, want := range expectedHeader {
if header[i] != want {
t.Fatalf("header[%d]: expected %q, got %q", i, want, header[i])
}
}
summary, err := reader.Read()
if err != nil {
t.Fatalf("read summary row: %v", err)
}
expectedSummary := []string{"10", "", "", "Config A", "2", "2 400,50", "2 000,00", "1 800,25"}
for i, want := range expectedSummary {
if summary[i] != want {
t.Fatalf("summary[%d]: expected %q, got %q", i, want, summary[i])
}
}
row, err := reader.Read()
if err != nil {
t.Fatalf("read data row: %v", err)
}
expectedRow := []string{"", "LOT_A +1", "PN-001", "Bundle row", "2", "2 400,50", "2 000,00", "1 800,25"}
for i, want := range expectedRow {
if row[i] != want {
t.Fatalf("row[%d]: expected %q, got %q", i, want, row[i])
}
}
}
func TestProjectToPricingExportData_UsesCartRowsWithoutBOM(t *testing.T) {
svc := NewExportService(config.ExportConfig{}, nil, nil)
configs := []models.Configuration{
{
UUID: "cfg-1",
Name: "Config A",
Article: "ART-1",
ServerCount: 1,
Items: models.ConfigItems{
{LotName: "LOT_A", Quantity: 2, UnitPrice: 300},
},
CreatedAt: time.Now(),
},
}
data, err := svc.ProjectToPricingExportData(configs, ProjectPricingExportOptions{
IncludeLOT: true,
IncludeEstimate: true,
})
if err != nil {
t.Fatalf("ProjectToPricingExportData failed: %v", err)
}
if len(data.Configs) != 1 || len(data.Configs[0].Rows) != 1 {
t.Fatalf("unexpected rows count: %+v", data.Configs)
}
row := data.Configs[0].Rows[0]
if row.LotDisplay != "LOT_A" {
t.Fatalf("expected LOT_A, got %q", row.LotDisplay)
}
if row.VendorPN != "—" {
t.Fatalf("expected vendor dash, got %q", row.VendorPN)
}
if row.Estimate == nil || *row.Estimate != 600 {
t.Fatalf("expected estimate total 600, got %+v", row.Estimate)
}
}
// failingWriter always returns an error
type failingWriter struct{}

View File

@@ -83,22 +83,25 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
}
cfg := &models.Configuration{
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: projectUUID,
Name: req.Name,
Items: req.Items,
TotalPrice: &total,
CustomPrice: req.CustomPrice,
Notes: req.Notes,
IsTemplate: req.IsTemplate,
ServerCount: req.ServerCount,
ServerModel: req.ServerModel,
SupportCode: req.SupportCode,
Article: req.Article,
PricelistID: pricelistID,
OnlyInStock: req.OnlyInStock,
CreatedAt: time.Now(),
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: projectUUID,
Name: req.Name,
Items: req.Items,
TotalPrice: &total,
CustomPrice: req.CustomPrice,
Notes: req.Notes,
IsTemplate: req.IsTemplate,
ServerCount: req.ServerCount,
ServerModel: req.ServerModel,
SupportCode: req.SupportCode,
Article: req.Article,
PricelistID: pricelistID,
WarehousePricelistID: req.WarehousePricelistID,
CompetitorPricelistID: req.CompetitorPricelistID,
DisablePriceRefresh: req.DisablePriceRefresh,
OnlyInStock: req.OnlyInStock,
CreatedAt: time.Now(),
}
// Convert to local model
@@ -107,6 +110,7 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
return nil, fmt.Errorf("create configuration with version: %w", err)
}
cfg.Line = localCfg.Line
// Record usage stats
_ = s.quoteService.RecordUsage(req.Items)
@@ -195,6 +199,9 @@ func (s *LocalConfigurationService) Update(uuid string, ownerUsername string, re
localCfg.SupportCode = req.SupportCode
localCfg.Article = req.Article
localCfg.PricelistID = pricelistID
localCfg.WarehousePricelistID = req.WarehousePricelistID
localCfg.CompetitorPricelistID = req.CompetitorPricelistID
localCfg.DisablePriceRefresh = req.DisablePriceRefresh
localCfg.OnlyInStock = req.OnlyInStock
localCfg.UpdatedAt = time.Now()
localCfg.SyncStatus = "pending"
@@ -303,28 +310,32 @@ func (s *LocalConfigurationService) CloneToProject(configUUID string, ownerUsern
}
clone := &models.Configuration{
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: resolvedProjectUUID,
Name: newName,
Items: original.Items,
TotalPrice: &total,
CustomPrice: original.CustomPrice,
Notes: original.Notes,
IsTemplate: false,
ServerCount: original.ServerCount,
ServerModel: original.ServerModel,
SupportCode: original.SupportCode,
Article: original.Article,
PricelistID: original.PricelistID,
OnlyInStock: original.OnlyInStock,
CreatedAt: time.Now(),
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: resolvedProjectUUID,
Name: newName,
Items: original.Items,
TotalPrice: &total,
CustomPrice: original.CustomPrice,
Notes: original.Notes,
IsTemplate: false,
ServerCount: original.ServerCount,
ServerModel: original.ServerModel,
SupportCode: original.SupportCode,
Article: original.Article,
PricelistID: original.PricelistID,
WarehousePricelistID: original.WarehousePricelistID,
CompetitorPricelistID: original.CompetitorPricelistID,
DisablePriceRefresh: original.DisablePriceRefresh,
OnlyInStock: original.OnlyInStock,
CreatedAt: time.Now(),
}
localCfg := localdb.ConfigurationToLocal(clone)
if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
return nil, fmt.Errorf("clone configuration with version: %w", err)
}
clone.Line = localCfg.Line
return clone, nil
}
@@ -519,6 +530,9 @@ func (s *LocalConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigR
localCfg.SupportCode = req.SupportCode
localCfg.Article = req.Article
localCfg.PricelistID = pricelistID
localCfg.WarehousePricelistID = req.WarehousePricelistID
localCfg.CompetitorPricelistID = req.CompetitorPricelistID
localCfg.DisablePriceRefresh = req.DisablePriceRefresh
localCfg.OnlyInStock = req.OnlyInStock
localCfg.UpdatedAt = time.Now()
localCfg.SyncStatus = "pending"
@@ -621,25 +635,32 @@ func (s *LocalConfigurationService) CloneNoAuthToProjectFromVersion(configUUID s
}
clone := &models.Configuration{
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: resolvedProjectUUID,
Name: newName,
Items: original.Items,
TotalPrice: &total,
CustomPrice: original.CustomPrice,
Notes: original.Notes,
IsTemplate: false,
ServerCount: original.ServerCount,
PricelistID: original.PricelistID,
OnlyInStock: original.OnlyInStock,
CreatedAt: time.Now(),
UUID: uuid.New().String(),
OwnerUsername: ownerUsername,
ProjectUUID: resolvedProjectUUID,
Name: newName,
Items: original.Items,
TotalPrice: &total,
CustomPrice: original.CustomPrice,
Notes: original.Notes,
IsTemplate: false,
ServerCount: original.ServerCount,
ServerModel: original.ServerModel,
SupportCode: original.SupportCode,
Article: original.Article,
PricelistID: original.PricelistID,
WarehousePricelistID: original.WarehousePricelistID,
CompetitorPricelistID: original.CompetitorPricelistID,
DisablePriceRefresh: original.DisablePriceRefresh,
OnlyInStock: original.OnlyInStock,
CreatedAt: time.Now(),
}
localCfg := localdb.ConfigurationToLocal(clone)
if err := s.createWithVersion(localCfg, ownerUsername); err != nil {
return nil, fmt.Errorf("clone configuration without auth with version: %w", err)
}
clone.Line = localCfg.Line
return clone, nil
}
@@ -826,21 +847,13 @@ func (s *LocalConfigurationService) UpdateServerCount(configUUID string, serverC
return fmt.Errorf("save local configuration: %w", err)
}
// Use existing current version for the pending change
var version localdb.LocalConfigurationVersion
if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
if err := tx.Where("id = ?", *localCfg.CurrentVersionID).First(&version).Error; err != nil {
return fmt.Errorf("load current version: %w", err)
}
} else {
if err := tx.Where("configuration_uuid = ?", localCfg.UUID).
Order("version_no DESC").First(&version).Error; err != nil {
return fmt.Errorf("load latest version: %w", err)
}
version, err := s.loadVersionForPendingTx(tx, localCfg)
if err != nil {
return err
}
cfg = localdb.LocalToConfiguration(localCfg)
if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "update", &version, ""); err != nil {
if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "update", version, ""); err != nil {
return fmt.Errorf("enqueue server-count pending change: %w", err)
}
return nil
@@ -852,6 +865,99 @@ func (s *LocalConfigurationService) UpdateServerCount(configUUID string, serverC
return cfg, nil
}
func (s *LocalConfigurationService) ReorderProjectConfigurationsNoAuth(projectUUID string, orderedUUIDs []string) ([]models.Configuration, error) {
projectUUID = strings.TrimSpace(projectUUID)
if projectUUID == "" {
return nil, ErrProjectNotFound
}
if _, err := s.localDB.GetProjectByUUID(projectUUID); err != nil {
return nil, ErrProjectNotFound
}
if len(orderedUUIDs) == 0 {
return []models.Configuration{}, nil
}
seen := make(map[string]struct{}, len(orderedUUIDs))
normalized := make([]string, 0, len(orderedUUIDs))
for _, raw := range orderedUUIDs {
u := strings.TrimSpace(raw)
if u == "" {
return nil, fmt.Errorf("ordered_uuids contains empty uuid")
}
if _, exists := seen[u]; exists {
return nil, fmt.Errorf("ordered_uuids contains duplicate uuid: %s", u)
}
seen[u] = struct{}{}
normalized = append(normalized, u)
}
err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
var active []localdb.LocalConfiguration
if err := tx.Where("project_uuid = ? AND is_active = ?", projectUUID, true).
Find(&active).Error; err != nil {
return fmt.Errorf("load project active configurations: %w", err)
}
if len(active) != len(normalized) {
return fmt.Errorf("ordered_uuids count mismatch: expected %d got %d", len(active), len(normalized))
}
byUUID := make(map[string]*localdb.LocalConfiguration, len(active))
for i := range active {
cfg := active[i]
byUUID[cfg.UUID] = &cfg
}
for _, id := range normalized {
if _, ok := byUUID[id]; !ok {
return fmt.Errorf("configuration %s not found in project %s", id, projectUUID)
}
}
now := time.Now()
for idx, id := range normalized {
cfg := byUUID[id]
newLine := (idx + 1) * 10
if cfg.Line == newLine {
continue
}
cfg.Line = newLine
cfg.UpdatedAt = now
cfg.SyncStatus = "pending"
if err := tx.Save(cfg).Error; err != nil {
return fmt.Errorf("save reordered configuration %s: %w", cfg.UUID, err)
}
version, err := s.loadVersionForPendingTx(tx, cfg)
if err != nil {
return err
}
if err := s.enqueueConfigurationPendingChangeTx(tx, cfg, "update", version, ""); err != nil {
return fmt.Errorf("enqueue reorder pending change for %s: %w", cfg.UUID, err)
}
}
return nil
})
if err != nil {
return nil, err
}
var localConfigs []localdb.LocalConfiguration
if err := s.localDB.DB().
Preload("CurrentVersion").
Where("project_uuid = ? AND is_active = ?", projectUUID, true).
Order("CASE WHEN COALESCE(line_no, 0) <= 0 THEN 2147483647 ELSE line_no END ASC, created_at DESC, id DESC").
Find(&localConfigs).Error; err != nil {
return nil, fmt.Errorf("load reordered configurations: %w", err)
}
result := make([]models.Configuration, 0, len(localConfigs))
for i := range localConfigs {
result = append(result, *localdb.LocalToConfiguration(&localConfigs[i]))
}
return result, nil
}
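The renumbering step above can be sketched in isolation: each configuration in the requested order receives line = (idx + 1) * 10, leaving gaps of ten so a later insert can slot between neighbors without another full renumber. The function name below is illustrative, not part of the service:

```go
package main

import "fmt"

// renumber mirrors the reorder transaction's line assignment: gapped
// line numbers 10, 20, 30, … in the caller-supplied order.
func renumber(orderedUUIDs []string) map[string]int {
	lines := make(map[string]int, len(orderedUUIDs))
	for idx, id := range orderedUUIDs {
		lines[id] = (idx + 1) * 10
	}
	return lines
}

func main() {
	fmt.Println(renumber([]string{"cfg-b", "cfg-a"})) // map[cfg-a:20 cfg-b:10]
}
```

Because the transaction skips rows whose line already matches, repeating the same order is a no-op and creates no pending changes.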
// ImportFromServer imports configurations from MariaDB to local SQLite cache.
func (s *LocalConfigurationService) ImportFromServer() (*sync.ConfigImportResult, error) {
return s.syncService.ImportConfigurationsToLocal()
@@ -965,34 +1071,43 @@ func (s *LocalConfigurationService) isOwner(cfg *localdb.LocalConfiguration, own
func (s *LocalConfigurationService) createWithVersion(localCfg *localdb.LocalConfiguration, createdBy string) error {
return s.localDB.DB().Transaction(func(tx *gorm.DB) error {
return s.createWithVersionTx(tx, localCfg, createdBy)
})
}
func (s *LocalConfigurationService) createWithVersionTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration, createdBy string) error {
if localCfg.IsActive {
if err := s.ensureConfigurationLineTx(tx, localCfg); err != nil {
return err
}
}
if err := tx.Create(localCfg).Error; err != nil {
return fmt.Errorf("create local configuration: %w", err)
}
version, err := s.appendVersionTx(tx, localCfg, "create", createdBy)
if err != nil {
return fmt.Errorf("append create version: %w", err)
}
if err := tx.Model(&localdb.LocalConfiguration{}).
Where("uuid = ?", localCfg.UUID).
Update("current_version_id", version.ID).Error; err != nil {
return fmt.Errorf("set current version id: %w", err)
}
localCfg.CurrentVersionID = &version.ID
localCfg.CurrentVersion = version
if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, "create", version, createdBy); err != nil {
return fmt.Errorf("enqueue create pending change: %w", err)
}
if err := s.recalculateLocalPricelistUsageTx(tx); err != nil {
return fmt.Errorf("recalculate local pricelist usage: %w", err)
}
return nil
}
func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.LocalConfiguration, operation string, createdBy string) (*models.Configuration, error) {
var cfg *models.Configuration
@@ -1021,12 +1136,31 @@ func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.
return fmt.Errorf("compare revision content: %w", err)
}
if sameRevisionContent {
if !hasNonRevisionConfigurationChanges(&locked, localCfg) {
cfg = localdb.LocalToConfiguration(&locked)
return nil
}
if err := tx.Save(localCfg).Error; err != nil {
return fmt.Errorf("save local configuration (no new revision): %w", err)
}
cfg = localdb.LocalToConfiguration(localCfg)
if err := s.enqueueConfigurationPendingChangeTx(tx, localCfg, operation, currentVersion, createdBy); err != nil {
return fmt.Errorf("enqueue %s pending change without revision: %w", operation, err)
}
if err := s.recalculateLocalPricelistUsageTx(tx); err != nil {
return fmt.Errorf("recalculate local pricelist usage: %w", err)
}
return nil
}
}
}
if localCfg.IsActive {
if err := s.ensureConfigurationLineTx(tx, localCfg); err != nil {
return err
}
}
if err := tx.Save(localCfg).Error; err != nil {
return fmt.Errorf("save local configuration: %w", err)
}
@@ -1061,6 +1195,51 @@ func (s *LocalConfigurationService) saveWithVersionAndPending(localCfg *localdb.
return cfg, nil
}
func hasNonRevisionConfigurationChanges(current *localdb.LocalConfiguration, next *localdb.LocalConfiguration) bool {
if current == nil || next == nil {
return true
}
if current.Name != next.Name ||
current.Notes != next.Notes ||
current.IsTemplate != next.IsTemplate ||
current.ServerModel != next.ServerModel ||
current.SupportCode != next.SupportCode ||
current.Article != next.Article ||
current.DisablePriceRefresh != next.DisablePriceRefresh ||
current.OnlyInStock != next.OnlyInStock ||
current.IsActive != next.IsActive ||
current.Line != next.Line {
return true
}
if !equalUintPtr(current.PricelistID, next.PricelistID) ||
!equalUintPtr(current.WarehousePricelistID, next.WarehousePricelistID) ||
!equalUintPtr(current.CompetitorPricelistID, next.CompetitorPricelistID) ||
!equalStringPtr(current.ProjectUUID, next.ProjectUUID) {
return true
}
return false
}
func equalStringPtr(a, b *string) bool {
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
return strings.TrimSpace(*a) == strings.TrimSpace(*b)
}
func equalUintPtr(a, b *uint) bool {
if a == nil && b == nil {
return true
}
if a == nil || b == nil {
return false
}
return *a == *b
}
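To make the helper semantics concrete: equalStringPtr treats two nil pointers as equal and trims whitespace before comparing, so a project UUID that differs only by surrounding spaces is not reported as a change. A standalone copy of the helper:

```go
package main

import (
	"fmt"
	"strings"
)

// equalStringPtr: nil == nil, nil != non-nil, otherwise compare trimmed values.
func equalStringPtr(a, b *string) bool {
	if a == nil && b == nil {
		return true
	}
	if a == nil || b == nil {
		return false
	}
	return strings.TrimSpace(*a) == strings.TrimSpace(*b)
}

func main() {
	x, y := "proj-1 ", " proj-1"
	fmt.Println(equalStringPtr(&x, &y))   // true: values are trimmed first
	fmt.Println(equalStringPtr(&x, nil))  // false
	fmt.Println(equalStringPtr(nil, nil)) // true
}
```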
func (s *LocalConfigurationService) loadCurrentVersionTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) (*localdb.LocalConfigurationVersion, error) {
var version localdb.LocalConfigurationVersion
if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
@@ -1082,6 +1261,76 @@ func (s *LocalConfigurationService) loadCurrentVersionTx(tx *gorm.DB, localCfg *
return &version, nil
}
func (s *LocalConfigurationService) loadVersionForPendingTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) (*localdb.LocalConfigurationVersion, error) {
if localCfg.CurrentVersionID != nil && *localCfg.CurrentVersionID != "" {
var current localdb.LocalConfigurationVersion
if err := tx.Where("id = ?", *localCfg.CurrentVersionID).First(&current).Error; err == nil {
return &current, nil
}
}
var latest localdb.LocalConfigurationVersion
if err := tx.Where("configuration_uuid = ?", localCfg.UUID).
Order("version_no DESC").
First(&latest).Error; err != nil {
if !errors.Is(err, gorm.ErrRecordNotFound) {
return nil, fmt.Errorf("load version for pending change: %w", err)
}
// Legacy/imported rows may exist without local version history.
// Bootstrap the first version so pending sync payloads can reference a version.
version, createErr := s.appendVersionTx(tx, localCfg, "bootstrap", "")
if createErr != nil {
return nil, fmt.Errorf("bootstrap version for pending change: %w", createErr)
}
if err := tx.Model(&localdb.LocalConfiguration{}).
Where("uuid = ?", localCfg.UUID).
Update("current_version_id", version.ID).Error; err != nil {
return nil, fmt.Errorf("set current version id for bootstrapped pending change: %w", err)
}
localCfg.CurrentVersionID = &version.ID
return version, nil
}
return &latest, nil
}
func (s *LocalConfigurationService) ensureConfigurationLineTx(tx *gorm.DB, localCfg *localdb.LocalConfiguration) error {
if localCfg == nil || !localCfg.IsActive {
return nil
}
needsAssign := localCfg.Line <= 0
if !needsAssign {
query := tx.Model(&localdb.LocalConfiguration{}).
Where("is_active = ? AND line_no = ?", true, localCfg.Line)
if strings.TrimSpace(localCfg.UUID) != "" {
query = query.Where("uuid <> ?", strings.TrimSpace(localCfg.UUID))
}
if localCfg.ProjectUUID != nil && strings.TrimSpace(*localCfg.ProjectUUID) != "" {
query = query.Where("project_uuid = ?", strings.TrimSpace(*localCfg.ProjectUUID))
} else {
query = query.Where("(project_uuid IS NULL OR TRIM(project_uuid) = '')")
}
var conflicts int64
if err := query.Count(&conflicts).Error; err != nil {
return fmt.Errorf("check line_no conflict for configuration %s: %w", localCfg.UUID, err)
}
needsAssign = conflicts > 0
}
if needsAssign {
line, err := localdb.NextConfigurationLineTx(tx, localCfg.ProjectUUID, localCfg.UUID)
if err != nil {
return fmt.Errorf("assign line_no for configuration %s: %w", localCfg.UUID, err)
}
localCfg.Line = line
}
return nil
}
func (s *LocalConfigurationService) hasSameRevisionContent(localCfg *localdb.LocalConfiguration, currentVersion *localdb.LocalConfigurationVersion) (bool, error) {
currentSnapshotCfg, err := s.decodeConfigurationSnapshot(currentVersion.Data)
if err != nil {
@@ -1193,8 +1442,18 @@ func (s *LocalConfigurationService) rollbackToVersion(configurationUUID string,
current.Notes = rollbackData.Notes
current.IsTemplate = rollbackData.IsTemplate
current.ServerCount = rollbackData.ServerCount
current.ServerModel = rollbackData.ServerModel
current.SupportCode = rollbackData.SupportCode
current.Article = rollbackData.Article
current.PricelistID = rollbackData.PricelistID
current.WarehousePricelistID = rollbackData.WarehousePricelistID
current.CompetitorPricelistID = rollbackData.CompetitorPricelistID
current.DisablePriceRefresh = rollbackData.DisablePriceRefresh
current.OnlyInStock = rollbackData.OnlyInStock
current.VendorSpec = rollbackData.VendorSpec
if rollbackData.Line > 0 {
current.Line = rollbackData.Line
}
current.PriceUpdatedAt = rollbackData.PriceUpdatedAt
current.UpdatedAt = time.Now()
current.SyncStatus = "pending"

View File

@@ -137,6 +137,78 @@ func TestUpdateNoAuthSkipsRevisionWhenSpecAndPriceUnchanged(t *testing.T) {
}
}
func TestReorderProjectConfigurationsDoesNotCreateNewVersions(t *testing.T) {
service, local := newLocalConfigServiceForTest(t)
project := &localdb.LocalProject{
UUID: "project-reorder",
OwnerUsername: "tester",
Code: "PRJ-ORDER",
Variant: "",
Name: ptrString("Project Reorder"),
IsActive: true,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
SyncStatus: "pending",
}
if err := local.SaveProject(project); err != nil {
t.Fatalf("save project: %v", err)
}
first, err := service.Create("tester", &CreateConfigRequest{
Name: "Cfg A",
Items: models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
ServerCount: 1,
ProjectUUID: &project.UUID,
})
if err != nil {
t.Fatalf("create first config: %v", err)
}
second, err := service.Create("tester", &CreateConfigRequest{
Name: "Cfg B",
Items: models.ConfigItems{{LotName: "CPU_B", Quantity: 1, UnitPrice: 200}},
ServerCount: 1,
ProjectUUID: &project.UUID,
})
if err != nil {
t.Fatalf("create second config: %v", err)
}
beforeFirst := loadVersions(t, local, first.UUID)
beforeSecond := loadVersions(t, local, second.UUID)
reordered, err := service.ReorderProjectConfigurationsNoAuth(project.UUID, []string{second.UUID, first.UUID})
if err != nil {
t.Fatalf("reorder configurations: %v", err)
}
if len(reordered) != 2 {
t.Fatalf("expected 2 reordered configs, got %d", len(reordered))
}
if reordered[0].UUID != second.UUID || reordered[0].Line != 10 {
t.Fatalf("expected second config first with line 10, got uuid=%s line=%d", reordered[0].UUID, reordered[0].Line)
}
if reordered[1].UUID != first.UUID || reordered[1].Line != 20 {
t.Fatalf("expected first config second with line 20, got uuid=%s line=%d", reordered[1].UUID, reordered[1].Line)
}
afterFirst := loadVersions(t, local, first.UUID)
afterSecond := loadVersions(t, local, second.UUID)
if len(afterFirst) != len(beforeFirst) || len(afterSecond) != len(beforeSecond) {
t.Fatalf("reorder must not create new versions")
}
var pendingCount int64
if err := local.DB().
Table("pending_changes").
Where("entity_type = ? AND operation = ? AND entity_uuid IN ?", "configuration", "update", []string{first.UUID, second.UUID}).
Count(&pendingCount).Error; err != nil {
t.Fatalf("count reorder pending changes: %v", err)
}
if pendingCount < 2 {
t.Fatalf("expected at least 2 pending update changes for reorder, got %d", pendingCount)
}
}
func TestAppendOnlyInvariantOldRowsUnchanged(t *testing.T) {
service, local := newLocalConfigServiceForTest(t)

View File

@@ -16,10 +16,10 @@ import (
)
var (
ErrProjectNotFound = errors.New("project not found")
ErrProjectForbidden = errors.New("access to project forbidden")
ErrProjectCodeExists = errors.New("project code and variant already exist")
ErrCannotDeleteMainVariant = errors.New("cannot delete main variant")
)
type ProjectService struct {
@@ -31,10 +31,10 @@ func NewProjectService(localDB *localdb.LocalDB) *ProjectService {
}
type CreateProjectRequest struct {
Code string `json:"code"`
Variant string `json:"variant,omitempty"`
Name *string `json:"name,omitempty"`
TrackerURL string `json:"tracker_url"`
}
type UpdateProjectRequest struct {
@@ -275,8 +275,23 @@ func (s *ProjectService) ListConfigurations(projectUUID, ownerUsername, status s
}, nil
}
query := s.localDB.DB().
Preload("CurrentVersion").
Where("project_uuid = ?", projectUUID).
Order("CASE WHEN COALESCE(line_no, 0) <= 0 THEN 2147483647 ELSE line_no END ASC, created_at DESC, id DESC")
switch status {
case "active", "":
query = query.Where("is_active = ?", true)
case "archived":
query = query.Where("is_active = ?", false)
case "all":
default:
query = query.Where("is_active = ?", true)
}
var localConfigs []localdb.LocalConfiguration
if err := query.Find(&localConfigs).Error; err != nil {
return nil, err
}
@@ -284,25 +299,6 @@ func (s *ProjectService) ListConfigurations(projectUUID, ownerUsername, status s
total := 0.0
for i := range localConfigs {
localCfg := localConfigs[i]
if localCfg.ProjectUUID == nil || *localCfg.ProjectUUID != projectUUID {
continue
}
switch status {
case "active", "":
if !localCfg.IsActive {
continue
}
case "archived":
if localCfg.IsActive {
continue
}
case "all":
default:
if !localCfg.IsActive {
continue
}
}
cfg := localdb.LocalToConfiguration(&localCfg)
if cfg.TotalPrice != nil {
total += *cfg.TotalPrice

View File

@@ -289,6 +289,13 @@ func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLeve
result.ResolvedPricelistIDs[sourceKey] = latest.ID
}
}
if st.id == 0 && s.localDB != nil {
localPL, err := s.localDB.GetLatestLocalPricelistBySource(sourceKey)
if err == nil && localPL != nil {
st.id = localPL.ServerID
result.ResolvedPricelistIDs[sourceKey] = localPL.ServerID
}
}
if st.id == 0 {
continue
}

View File

@@ -0,0 +1,153 @@
package sync
import (
"encoding/json"
"fmt"
"log/slog"
"time"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"gorm.io/gorm"
)
// PullPartnumberBooks synchronizes partnumber book snapshots from MariaDB to local SQLite.
// Append-only for headers; re-pulls items if a book header exists but has 0 items.
func (s *Service) PullPartnumberBooks() (int, error) {
slog.Info("starting partnumber book pull")
mariaDB, err := s.getDB()
if err != nil {
return 0, fmt.Errorf("database not available: %w", err)
}
localBookRepo := repository.NewPartnumberBookRepository(s.localDB.DB())
type serverBook struct {
ID int `gorm:"column:id"`
Version string `gorm:"column:version"`
CreatedAt time.Time `gorm:"column:created_at"`
IsActive bool `gorm:"column:is_active"`
PartnumbersJSON string `gorm:"column:partnumbers_json"`
}
var serverBooks []serverBook
if err := mariaDB.Raw("SELECT id, version, created_at, is_active, partnumbers_json FROM qt_partnumber_books ORDER BY created_at DESC, id DESC").Scan(&serverBooks).Error; err != nil {
return 0, fmt.Errorf("querying server partnumber books: %w", err)
}
slog.Info("partnumber books found on server", "count", len(serverBooks))
pulled := 0
for _, sb := range serverBooks {
var existing localdb.LocalPartnumberBook
err := s.localDB.DB().Where("server_id = ?", sb.ID).First(&existing).Error
partnumbers, errPartnumbers := decodeServerPartnumbers(sb.PartnumbersJSON)
if errPartnumbers != nil {
slog.Error("failed to decode server partnumbers_json", "server_id", sb.ID, "error", errPartnumbers)
continue
}
if err == nil {
existing.Version = sb.Version
existing.CreatedAt = sb.CreatedAt
existing.IsActive = sb.IsActive
existing.PartnumbersJSON = partnumbers
if err := localBookRepo.SaveBook(&existing); err != nil {
slog.Error("failed to update local partnumber book header", "server_id", sb.ID, "error", err)
continue
}
localItemCount := localBookRepo.CountBookItems(existing.ID)
if localItemCount > 0 && localBookRepo.HasAllBookItems(existing.ID) {
slog.Debug("partnumber book already synced, skipping", "server_id", sb.ID, "version", sb.Version, "items", localItemCount)
continue
}
slog.Info("partnumber book header exists but catalog items are missing, re-pulling items", "server_id", sb.ID, "version", sb.Version)
n, err := pullBookItems(mariaDB, localBookRepo, existing.PartnumbersJSON)
if err != nil {
slog.Error("failed to re-pull items for existing book", "server_id", sb.ID, "error", err)
} else {
slog.Info("re-pulled items for existing book", "server_id", sb.ID, "version", sb.Version, "items_saved", n)
pulled++
}
continue
}
slog.Info("pulling new partnumber book", "server_id", sb.ID, "version", sb.Version, "is_active", sb.IsActive)
localBook := &localdb.LocalPartnumberBook{
ServerID: sb.ID,
Version: sb.Version,
CreatedAt: sb.CreatedAt,
IsActive: sb.IsActive,
PartnumbersJSON: partnumbers,
}
if err := localBookRepo.SaveBook(localBook); err != nil {
slog.Error("failed to save local partnumber book", "server_id", sb.ID, "error", err)
continue
}
n, err := pullBookItems(mariaDB, localBookRepo, localBook.PartnumbersJSON)
if err != nil {
slog.Error("failed to pull items for new book", "server_id", sb.ID, "error", err)
continue
}
slog.Info("partnumber book saved locally", "server_id", sb.ID, "version", sb.Version, "items_saved", n)
pulled++
}
slog.Info("partnumber book pull completed", "new_books_pulled", pulled, "total_on_server", len(serverBooks))
return pulled, nil
}
// pullBookItems fetches catalog items for a partnumber list from MariaDB and saves them to SQLite.
// Returns the number of items saved.
func pullBookItems(mariaDB *gorm.DB, repo *repository.PartnumberBookRepository, partnumbers localdb.LocalStringList) (int, error) {
if len(partnumbers) == 0 {
return 0, nil
}
type serverItem struct {
Partnumber string `gorm:"column:partnumber"`
LotsJSON string `gorm:"column:lots_json"`
Description string `gorm:"column:description"`
}
var serverItems []serverItem
err := mariaDB.Raw("SELECT partnumber, lots_json, description FROM qt_partnumber_book_items WHERE partnumber IN ?", []string(partnumbers)).Scan(&serverItems).Error
if err != nil {
return 0, fmt.Errorf("querying items from server: %w", err)
}
slog.Info("partnumber book items fetched from server", "count", len(serverItems), "requested_partnumbers", len(partnumbers))
if len(serverItems) == 0 {
slog.Warn("server returned 0 partnumber book items")
return 0, nil
}
localItems := make([]localdb.LocalPartnumberBookItem, 0, len(serverItems))
for _, si := range serverItems {
var lots localdb.LocalPartnumberBookLots
if err := json.Unmarshal([]byte(si.LotsJSON), &lots); err != nil {
return 0, fmt.Errorf("decode lots_json for %s: %w", si.Partnumber, err)
}
localItems = append(localItems, localdb.LocalPartnumberBookItem{
Partnumber: si.Partnumber,
LotsJSON: lots,
Description: si.Description,
})
}
if err := repo.SaveBookItems(localItems); err != nil {
return 0, fmt.Errorf("saving items to local db: %w", err)
}
return len(localItems), nil
}
func decodeServerPartnumbers(raw string) (localdb.LocalStringList, error) {
if raw == "" {
return localdb.LocalStringList{}, nil
}
var items []string
if err := json.Unmarshal([]byte(raw), &items); err != nil {
return nil, err
}
return localdb.LocalStringList(items), nil
}

View File

@@ -0,0 +1,49 @@
package sync
import (
"fmt"
"log/slog"
"time"
)
// SeenPartnumber represents an unresolved vendor partnumber to report.
type SeenPartnumber struct {
Partnumber string
Description string
Ignored bool
}
// PushPartnumberSeen inserts unresolved vendor partnumbers into qt_vendor_partnumber_seen on MariaDB.
// Existing rows are left untouched: no updates to last_seen_at, is_ignored, or description.
func (s *Service) PushPartnumberSeen(items []SeenPartnumber) error {
if len(items) == 0 {
return nil
}
mariaDB, err := s.getDB()
if err != nil {
return fmt.Errorf("database not available: %w", err)
}
now := time.Now().UTC()
for _, item := range items {
if item.Partnumber == "" {
continue
}
err := mariaDB.Exec(`
INSERT INTO qt_vendor_partnumber_seen
(source_type, vendor, partnumber, description, is_ignored, last_seen_at)
VALUES
('manual', '', ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
partnumber = partnumber
`, item.Partnumber, item.Description, item.Ignored, now).Error
if err != nil {
slog.Error("failed to insert partnumber_seen", "partnumber", item.Partnumber, "error", err)
// Continue with remaining items
}
}
slog.Info("partnumber_seen pushed to server", "count", len(items))
return nil
}
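`ON DUPLICATE KEY UPDATE partnumber = partnumber` is MySQL's idiomatic no-op upsert: a new row is inserted, while an existing row is left byte-for-byte unchanged, preserving its original description, is_ignored, and last_seen_at. A minimal in-memory sketch of the same insert-if-absent semantics (types and names here are illustrative, not part of the sync service):

```go
package main

import "fmt"

type seenRow struct {
	Description string
	Ignored     bool
}

// insertIfAbsent mimics INSERT … ON DUPLICATE KEY UPDATE partnumber = partnumber:
// a write for an existing key is silently dropped, keeping first-seen metadata.
func insertIfAbsent(table map[string]seenRow, partnumber string, row seenRow) {
	if _, exists := table[partnumber]; exists {
		return
	}
	table[partnumber] = row
}

func main() {
	table := map[string]seenRow{}
	insertIfAbsent(table, "PN-100", seenRow{Description: "first sync"})
	insertIfAbsent(table, "PN-100", seenRow{Description: "later sync", Ignored: true})
	fmt.Println(table["PN-100"].Description) // first sync
}
```

This matches the doc comment above: concurrent or repeated syncs cannot overwrite an operator's manual edits to a seen partnumber.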

View File

@@ -1,13 +1,10 @@
package sync
import (
"bufio"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"log/slog"
"strconv"
"os"
"strings"
"time"
@@ -79,48 +76,6 @@ func (s *Service) GetReadiness() (*SyncReadiness, error) {
)
}
migrations, err := listActiveClientMigrations(mariaDB)
if err != nil {
return s.blockedReadiness(
now,
"REMOTE_MIGRATION_REGISTRY_UNAVAILABLE",
"Синхронизация заблокирована: не удалось проверить централизованные миграции локальной БД.",
nil,
)
}
for i := range migrations {
m := migrations[i]
if strings.TrimSpace(m.MinAppVersion) != "" {
if compareVersions(appmeta.Version(), m.MinAppVersion) < 0 {
min := m.MinAppVersion
return s.blockedReadiness(
now,
"MIN_APP_VERSION_REQUIRED",
fmt.Sprintf("Требуется обновление приложения до версии %s для безопасной синхронизации.", m.MinAppVersion),
&min,
)
}
}
}
if err := s.applyMissingRemoteMigrations(migrations); err != nil {
if strings.Contains(strings.ToLower(err.Error()), "checksum") {
return s.blockedReadiness(
now,
"REMOTE_MIGRATION_CHECKSUM_MISMATCH",
"Синхронизация заблокирована: контрольная сумма миграции не совпадает.",
nil,
)
}
return s.blockedReadiness(
now,
"LOCAL_MIGRATION_APPLY_FAILED",
"Синхронизация заблокирована: не удалось применить миграции локальной БД.",
nil,
)
}
if err := s.reportClientSchemaState(mariaDB, now); err != nil {
slog.Warn("failed to report client schema state", "error", err)
}
@@ -157,73 +112,68 @@ func (s *Service) isOnline() bool {
return s.connMgr.IsOnline()
}
type clientLocalMigration struct {
ID string `gorm:"column:id"`
Name string `gorm:"column:name"`
SQLText string `gorm:"column:sql_text"`
Checksum string `gorm:"column:checksum"`
MinAppVersion string `gorm:"column:min_app_version"`
OrderNo int `gorm:"column:order_no"`
CreatedAt time.Time `gorm:"column:created_at"`
}
func listActiveClientMigrations(db *gorm.DB) ([]clientLocalMigration, error) {
if strings.EqualFold(db.Dialector.Name(), "sqlite") {
return []clientLocalMigration{}, nil
}
if err := ensureClientMigrationRegistryTable(db); err != nil {
return nil, err
}
rows := make([]clientLocalMigration, 0)
if err := db.Raw(`
SELECT id, name, sql_text, checksum, COALESCE(min_app_version, '') AS min_app_version, order_no, created_at
FROM qt_client_local_migrations
WHERE is_active = 1
ORDER BY order_no ASC, created_at ASC, id ASC
`).Scan(&rows).Error; err != nil {
return nil, fmt.Errorf("load client local migrations: %w", err)
}
return rows, nil
}
func ensureClientMigrationRegistryTable(db *gorm.DB) error {
// Check if table exists instead of trying to create (avoids permission issues)
if !tableExists(db, "qt_client_local_migrations") {
if err := db.Exec(`
CREATE TABLE IF NOT EXISTS qt_client_local_migrations (
id VARCHAR(128) NOT NULL,
name VARCHAR(255) NOT NULL,
sql_text LONGTEXT NOT NULL,
checksum VARCHAR(128) NOT NULL,
min_app_version VARCHAR(64) NULL,
order_no INT NOT NULL DEFAULT 0,
is_active TINYINT(1) NOT NULL DEFAULT 1,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (id),
INDEX idx_qt_client_local_migrations_active_order (is_active, order_no, created_at)
)
`).Error; err != nil {
return fmt.Errorf("create qt_client_local_migrations table: %w", err)
}
}
return nil
}
func ensureClientSchemaStateTable(db *gorm.DB) error {
if !tableExists(db, "qt_client_schema_state") {
if err := db.Exec(`
CREATE TABLE IF NOT EXISTS qt_client_schema_state (
username VARCHAR(100) NOT NULL,
last_applied_migration_id VARCHAR(128) NULL,
hostname VARCHAR(255) NOT NULL DEFAULT '',
app_version VARCHAR(64) NULL,
last_sync_at DATETIME NULL,
last_sync_status VARCHAR(32) NULL,
pending_changes_count INT NOT NULL DEFAULT 0,
pending_errors_count INT NOT NULL DEFAULT 0,
configurations_count INT NOT NULL DEFAULT 0,
projects_count INT NOT NULL DEFAULT 0,
estimate_pricelist_version VARCHAR(128) NULL,
warehouse_pricelist_version VARCHAR(128) NULL,
competitor_pricelist_version VARCHAR(128) NULL,
last_sync_error_code VARCHAR(128) NULL,
last_sync_error_text TEXT NULL,
last_checked_at DATETIME NOT NULL,
updated_at DATETIME NOT NULL,
PRIMARY KEY (username, hostname),
INDEX idx_qt_client_schema_state_checked (last_checked_at)
)
`).Error; err != nil {
return fmt.Errorf("create qt_client_schema_state table: %w", err)
}
}
if tableExists(db, "qt_client_schema_state") {
if err := db.Exec(`
ALTER TABLE qt_client_schema_state
ADD COLUMN IF NOT EXISTS hostname VARCHAR(255) NOT NULL DEFAULT '' AFTER username
`).Error; err != nil {
return fmt.Errorf("add qt_client_schema_state.hostname: %w", err)
}
if err := db.Exec(`
ALTER TABLE qt_client_schema_state
DROP PRIMARY KEY,
ADD PRIMARY KEY (username, hostname)
`).Error; err != nil && !isDuplicatePrimaryKeyDefinition(err) {
return fmt.Errorf("set qt_client_schema_state primary key: %w", err)
}
for _, stmt := range []string{
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_at DATETIME NULL AFTER app_version",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_status VARCHAR(32) NULL AFTER last_sync_at",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pending_changes_count INT NOT NULL DEFAULT 0 AFTER last_sync_status",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS pending_errors_count INT NOT NULL DEFAULT 0 AFTER pending_changes_count",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS configurations_count INT NOT NULL DEFAULT 0 AFTER pending_errors_count",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS projects_count INT NOT NULL DEFAULT 0 AFTER configurations_count",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS estimate_pricelist_version VARCHAR(128) NULL AFTER projects_count",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS warehouse_pricelist_version VARCHAR(128) NULL AFTER estimate_pricelist_version",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS competitor_pricelist_version VARCHAR(128) NULL AFTER warehouse_pricelist_version",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_code VARCHAR(128) NULL AFTER competitor_pricelist_version",
"ALTER TABLE qt_client_schema_state ADD COLUMN IF NOT EXISTS last_sync_error_text TEXT NULL AFTER last_sync_error_code",
} {
if err := db.Exec(stmt).Error; err != nil {
return fmt.Errorf("expand qt_client_schema_state: %w", err)
}
}
}
return nil
}
@@ -239,159 +189,114 @@ func tableExists(db *gorm.DB, tableName string) bool {
return count > 0
}
func (s *Service) applyMissingRemoteMigrations(migrations []clientLocalMigration) error {
for i := range migrations {
m := migrations[i]
computedChecksum := digestSQL(m.SQLText)
checksum := strings.TrimSpace(m.Checksum)
if checksum == "" {
checksum = computedChecksum
} else if !strings.EqualFold(checksum, computedChecksum) {
return fmt.Errorf("checksum mismatch for migration %s", m.ID)
}
applied, err := s.localDB.GetRemoteMigrationApplied(m.ID)
if err == nil {
if strings.TrimSpace(applied.Checksum) != checksum {
return fmt.Errorf("checksum mismatch for migration %s", m.ID)
}
continue
}
if !errors.Is(err, gorm.ErrRecordNotFound) {
return fmt.Errorf("check local applied migration %s: %w", m.ID, err)
}
if strings.TrimSpace(m.SQLText) == "" {
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
return fmt.Errorf("mark empty migration %s as applied: %w", m.ID, err)
}
continue
}
statements := splitSQLStatementsLite(m.SQLText)
if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
for _, stmt := range statements {
if err := tx.Exec(stmt).Error; err != nil {
return fmt.Errorf("apply migration %s statement %q: %w", m.ID, stmt, err)
}
}
return nil
}); err != nil {
return err
}
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
return fmt.Errorf("record applied migration %s: %w", m.ID, err)
}
}
return nil
}
func splitSQLStatementsLite(script string) []string {
scanner := bufio.NewScanner(strings.NewReader(script))
scanner.Buffer(make([]byte, 1024), 1024*1024)
lines := make([]string, 0, 64)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
if line == "" || strings.HasPrefix(line, "--") {
continue
}
lines = append(lines, scanner.Text())
}
combined := strings.Join(lines, "\n")
raw := strings.Split(combined, ";")
stmts := make([]string, 0, len(raw))
for _, stmt := range raw {
trimmed := strings.TrimSpace(stmt)
if trimmed == "" {
continue
}
stmts = append(stmts, trimmed)
}
return stmts
}
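The splitter above intentionally stays lightweight: it drops blank and `--` comment lines, then splits on every semicolon. That is sufficient for the simple DDL shipped through the migration registry, but note it would break on semicolons inside string literals. A self-contained copy showing the behavior:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// splitSQLStatementsLite drops blank/comment lines, then splits the rest
// on bare semicolons, trimming empty fragments.
func splitSQLStatementsLite(script string) []string {
	scanner := bufio.NewScanner(strings.NewReader(script))
	lines := make([]string, 0, 64)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "--") {
			continue
		}
		lines = append(lines, scanner.Text())
	}
	raw := strings.Split(strings.Join(lines, "\n"), ";")
	stmts := make([]string, 0, len(raw))
	for _, stmt := range raw {
		if trimmed := strings.TrimSpace(stmt); trimmed != "" {
			stmts = append(stmts, trimmed)
		}
	}
	return stmts
}

func main() {
	script := "-- add column\nALTER TABLE t ADD COLUMN x INT;\n\nCREATE INDEX ix ON t (x);"
	fmt.Println(len(splitSQLStatementsLite(script))) // 2
}
```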
func digestSQL(sqlText string) string {
hash := sha256.Sum256([]byte(sqlText))
return hex.EncodeToString(hash[:])
}
func compareVersions(left, right string) int {
leftParts := normalizeVersionParts(left)
rightParts := normalizeVersionParts(right)
maxLen := len(leftParts)
if len(rightParts) > maxLen {
maxLen = len(rightParts)
}
for i := 0; i < maxLen; i++ {
lv := 0
rv := 0
if i < len(leftParts) {
lv = leftParts[i]
}
if i < len(rightParts) {
rv = rightParts[i]
}
if lv < rv {
return -1
}
if lv > rv {
return 1
}
}
return 0
}
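compareVersions compares versions segment by segment as integers, padding missing segments with zero and ignoring a leading "v" plus any non-numeric suffix within a segment, so "1.10.0" sorts after "1.9.0" where a plain string comparison would not. A self-contained sketch with the same semantics (the normalization step mirrors normalizeVersionParts, shown further below):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionParts: strip "v", keep each chunk's leading digits, empty chunks are 0.
func versionParts(v string) []int {
	chunks := strings.Split(strings.TrimPrefix(strings.TrimSpace(v), "v"), ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i]
				break
			}
		}
		n, _ := strconv.Atoi(clean) // "" yields 0
		parts = append(parts, n)
	}
	return parts
}

// compareVersions returns -1, 0, or 1; shorter versions are zero-padded.
func compareVersions(left, right string) int {
	l, r := versionParts(left), versionParts(right)
	for i := 0; i < len(l) || i < len(r); i++ {
		lv, rv := 0, 0
		if i < len(l) {
			lv = l[i]
		}
		if i < len(r) {
			rv = r[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.10.0", "1.9.0")) // 1
	fmt.Println(compareVersions("1.2", "1.2.0"))     // 0
}
```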
func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time) error {
if strings.EqualFold(mariaDB.Dialector.Name(), "sqlite") {
return nil
}
if err := ensureClientSchemaStateTable(mariaDB); err != nil {
return err
}
username := strings.TrimSpace(s.localDB.GetDBUser())
if username == "" {
return nil
}
hostname, err := os.Hostname()
if err != nil {
hostname = ""
}
hostname = strings.TrimSpace(hostname)
lastSyncAt := s.localDB.GetLastSyncTime()
lastSyncStatus := ReadinessReady
pendingChangesCount := s.localDB.CountPendingChanges()
pendingErrorsCount := s.localDB.CountErroredChanges()
configurationsCount := s.localDB.CountConfigurations()
projectsCount := s.localDB.CountProjects()
estimateVersion := latestPricelistVersion(s.localDB, "estimate")
warehouseVersion := latestPricelistVersion(s.localDB, "warehouse")
competitorVersion := latestPricelistVersion(s.localDB, "competitor")
lastSyncErrorCode, lastSyncErrorText := latestSyncErrorState(s.localDB)
return mariaDB.Exec(`
INSERT INTO qt_client_schema_state (
username, hostname, app_version,
last_sync_at, last_sync_status, pending_changes_count, pending_errors_count,
configurations_count, projects_count,
estimate_pricelist_version, warehouse_pricelist_version, competitor_pricelist_version,
last_sync_error_code, last_sync_error_text,
last_checked_at, updated_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
app_version = VALUES(app_version),
last_sync_at = VALUES(last_sync_at),
last_sync_status = VALUES(last_sync_status),
pending_changes_count = VALUES(pending_changes_count),
pending_errors_count = VALUES(pending_errors_count),
configurations_count = VALUES(configurations_count),
projects_count = VALUES(projects_count),
estimate_pricelist_version = VALUES(estimate_pricelist_version),
warehouse_pricelist_version = VALUES(warehouse_pricelist_version),
competitor_pricelist_version = VALUES(competitor_pricelist_version),
last_sync_error_code = VALUES(last_sync_error_code),
last_sync_error_text = VALUES(last_sync_error_text),
last_checked_at = VALUES(last_checked_at),
updated_at = VALUES(updated_at)
`, username, hostname, appmeta.Version(),
lastSyncAt, lastSyncStatus, pendingChangesCount, pendingErrorsCount,
configurationsCount, projectsCount,
estimateVersion, warehouseVersion, competitorVersion,
lastSyncErrorCode, lastSyncErrorText,
checkedAt, checkedAt).Error
}
func normalizeVersionParts(v string) []int {
trimmed := strings.TrimSpace(v)
trimmed = strings.TrimPrefix(trimmed, "v")
chunks := strings.Split(trimmed, ".")
parts := make([]int, 0, len(chunks))
for _, chunk := range chunks {
clean := strings.TrimSpace(chunk)
if clean == "" {
parts = append(parts, 0)
continue
}
n := 0
for i := 0; i < len(clean); i++ {
if clean[i] < '0' || clean[i] > '9' {
clean = clean[:i]
break
}
}
if clean != "" {
if parsed, err := strconv.Atoi(clean); err == nil {
n = parsed
}
}
parts = append(parts, n)
}
return parts
}
func isDuplicatePrimaryKeyDefinition(err error) bool {
if err == nil {
return false
}
msg := strings.ToLower(err.Error())
return strings.Contains(msg, "multiple primary key defined") ||
strings.Contains(msg, "duplicate key name 'primary'") ||
strings.Contains(msg, "duplicate entry")
}
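The numeric comparison above can be sketched as a self-contained pair of helpers. The names below mirror `normalizeVersionParts` and the comparison loop earlier in this file; `compareVersions` is an illustrative name, not necessarily the actual function header (which is cut off in this view).

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeVersionParts mirrors the parsing above: trim a leading "v",
// split on ".", and keep only the leading digits of each chunk.
func normalizeVersionParts(v string) []int {
	chunks := strings.Split(strings.TrimPrefix(strings.TrimSpace(v), "v"), ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i]
				break
			}
		}
		n, _ := strconv.Atoi(clean) // empty or non-numeric chunk counts as 0
		parts = append(parts, n)
	}
	return parts
}

// compareVersions pads the shorter side with zeros, so "1.2" equals "1.2.0"
// and "1.10" sorts after "1.2" (numeric, not lexicographic).
func compareVersions(left, right string) int {
	lp, rp := normalizeVersionParts(left), normalizeVersionParts(right)
	maxLen := len(lp)
	if len(rp) > maxLen {
		maxLen = len(rp)
	}
	for i := 0; i < maxLen; i++ {
		lv, rv := 0, 0
		if i < len(lp) {
			lv = lp[i]
		}
		if i < len(rp) {
			rv = rp[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.2.3", "1.10")) // -1
	fmt.Println(compareVersions("1.2", "1.2.0"))   // 0
}
```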
func latestPricelistVersion(local *localdb.LocalDB, source string) *string {
if local == nil {
return nil
}
pl, err := local.GetLatestLocalPricelistBySource(source)
if err != nil || pl == nil {
return nil
}
version := strings.TrimSpace(pl.Version)
if version == "" {
return nil
}
return &version
}
func latestSyncErrorState(local *localdb.LocalDB) (*string, *string) {
if local == nil {
return nil, nil
}
if guard, err := local.GetSyncGuardState(); err == nil && guard != nil && strings.EqualFold(guard.Status, ReadinessBlocked) {
return optionalString(strings.TrimSpace(guard.ReasonCode)), optionalString(strings.TrimSpace(guard.ReasonText))
}
var pending localdb.PendingChange
if err := local.DB().
Where("TRIM(COALESCE(last_error, '')) <> ''").
Order("id DESC").
First(&pending).Error; err == nil {
return optionalString("PENDING_CHANGE_ERROR"), optionalString(strings.TrimSpace(pending.LastError))
}
return nil, nil
}
func optionalString(value string) *string {
if strings.TrimSpace(value) == "" {
return nil
}
v := strings.TrimSpace(value)
return &v
}
func toReadinessFromState(state *localdb.LocalSyncGuardState) *SyncReadiness {

View File

@@ -145,6 +145,9 @@ func (s *Service) ImportConfigurationsToLocal() (*ConfigImportResult, error) {
if existing != nil && err == nil {
localCfg.ID = existing.ID
if localCfg.Line <= 0 && existing.Line > 0 {
localCfg.Line = existing.Line
}
result.Updated++
} else {
result.Imported++
@@ -687,6 +690,9 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
for i, item := range serverItems {
localItems[i] = *localdb.PricelistItemToLocal(&item, localPricelistID)
}
if err := s.enrichLocalPricelistItemsWithStock(mariaDB, localItems); err != nil {
slog.Warn("pricelist stock enrichment skipped", "pricelist_id", localPricelistID, "error", err)
}
if err := s.localDB.SaveLocalPricelistItems(localItems); err != nil {
return 0, fmt.Errorf("saving local pricelist items: %w", err)
@@ -705,6 +711,111 @@ func (s *Service) SyncPricelistItemsByServerID(serverPricelistID uint) (int, err
return s.SyncPricelistItems(localPL.ID)
}
func (s *Service) enrichLocalPricelistItemsWithStock(mariaDB *gorm.DB, items []localdb.LocalPricelistItem) error {
if len(items) == 0 {
return nil
}
bookRepo := repository.NewPartnumberBookRepository(s.localDB.DB())
book, err := bookRepo.GetActiveBook()
if err != nil || book == nil {
return nil
}
bookItems, err := bookRepo.GetBookItems(book.ID)
if err != nil {
return err
}
if len(bookItems) == 0 {
return nil
}
partnumberToLots := make(map[string][]string, len(bookItems))
for _, item := range bookItems {
pn := strings.TrimSpace(item.Partnumber)
if pn == "" {
continue
}
seenLots := make(map[string]struct{}, len(item.LotsJSON))
for _, lot := range item.LotsJSON {
lotName := strings.TrimSpace(lot.LotName)
if lotName == "" {
continue
}
key := strings.ToLower(lotName)
if _, exists := seenLots[key]; exists {
continue
}
seenLots[key] = struct{}{}
partnumberToLots[pn] = append(partnumberToLots[pn], lotName)
}
}
if len(partnumberToLots) == 0 {
return nil
}
type stockRow struct {
Partnumber string `gorm:"column:partnumber"`
Qty *float64 `gorm:"column:qty"`
}
rows := make([]stockRow, 0)
if err := mariaDB.Raw(`
SELECT s.partnumber, s.qty
FROM stock_log s
INNER JOIN (
SELECT partnumber, MAX(date) AS max_date
FROM stock_log
GROUP BY partnumber
) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
WHERE s.qty IS NOT NULL
`).Scan(&rows).Error; err != nil {
return err
}
lotTotals := make(map[string]float64, len(items))
lotPartnumbers := make(map[string][]string, len(items))
seenPartnumbers := make(map[string]map[string]struct{}, len(items))
for _, row := range rows {
pn := strings.TrimSpace(row.Partnumber)
if pn == "" || row.Qty == nil {
continue
}
lots := partnumberToLots[pn]
if len(lots) == 0 {
continue
}
for _, lotName := range lots {
lotTotals[lotName] += *row.Qty
if _, ok := seenPartnumbers[lotName]; !ok {
seenPartnumbers[lotName] = make(map[string]struct{}, 4)
}
key := strings.ToLower(pn)
if _, exists := seenPartnumbers[lotName][key]; exists {
continue
}
seenPartnumbers[lotName][key] = struct{}{}
lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
}
}
for i := range items {
lotName := strings.TrimSpace(items[i].LotName)
if qty, ok := lotTotals[lotName]; ok {
qtyCopy := qty
items[i].AvailableQty = &qtyCopy
}
if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
sort.Slice(partnumbers, func(a, b int) bool {
return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
})
items[i].Partnumbers = append(localdb.LocalStringList{}, partnumbers...)
}
}
return nil
}
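The enrichment above is a greatest-n-per-group query (latest `stock_log` row per partnumber) followed by a fan-out of each partnumber's quantity to every lot that references it. A minimal sketch of that fan-out step, with illustrative names (`stockRow`, `sumLotTotals` are not the file's own identifiers):

```go
package main

import "fmt"

// stockRow is the shape the latest-per-partnumber SQL above yields:
// only the most recent stock_log row for each partnumber survives the join.
type stockRow struct {
	Partnumber string
	Qty        float64
}

// sumLotTotals fans each partnumber's latest qty out to every lot that
// references it. partnumberToLots is assumed pre-deduplicated per partnumber,
// as the book-building loop above guarantees.
func sumLotTotals(rows []stockRow, partnumberToLots map[string][]string) map[string]float64 {
	totals := make(map[string]float64)
	for _, row := range rows {
		for _, lot := range partnumberToLots[row.Partnumber] {
			totals[lot] += row.Qty
		}
	}
	return totals
}

func main() {
	rows := []stockRow{{"CPU-PN-1", 7}, {"CPU-PN-2", 3}}
	lots := map[string][]string{
		"CPU-PN-1": {"CPU_A"},
		"CPU-PN-2": {"CPU_A", "CPU_B"},
	}
	// CPU_A totals 10 (7 + 3); CPU_B totals 3.
	fmt.Println(sumLotTotals(rows, lots))
}
```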
// GetLocalPriceForLot returns the price for a lot from a local pricelist
func (s *Service) GetLocalPriceForLot(localPricelistID uint, lotName string) (float64, error) {
return s.localDB.GetLocalPriceForLot(localPricelistID, lotName)

View File

@@ -17,7 +17,6 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
&models.Pricelist{},
&models.PricelistItem{},
&models.Lot{},
&models.LotPartnumber{},
&models.StockLog{},
); err != nil {
t.Fatalf("migrate server tables: %v", err)
@@ -105,3 +104,102 @@ func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T)
}
}
func TestSyncPricelistItems_EnrichesStockFromLocalPartnumberBook(t *testing.T) {
local := newLocalDBForSyncTest(t)
serverDB := newServerDBForSyncTest(t)
if err := serverDB.AutoMigrate(
&models.Pricelist{},
&models.PricelistItem{},
&models.Lot{},
&models.StockLog{},
); err != nil {
t.Fatalf("migrate server tables: %v", err)
}
serverPL := models.Pricelist{
Source: "warehouse",
Version: "2026-03-07-001",
Notification: "server",
CreatedBy: "tester",
IsActive: true,
CreatedAt: time.Now().Add(-1 * time.Hour),
}
if err := serverDB.Create(&serverPL).Error; err != nil {
t.Fatalf("create server pricelist: %v", err)
}
if err := serverDB.Create(&models.PricelistItem{
PricelistID: serverPL.ID,
LotName: "CPU_A",
LotCategory: "CPU",
Price: 10,
}).Error; err != nil {
t.Fatalf("create server pricelist item: %v", err)
}
qty := 7.0
if err := serverDB.Create(&models.StockLog{
Partnumber: "CPU-PN-1",
Date: time.Now(),
Price: 100,
Qty: &qty,
}).Error; err != nil {
t.Fatalf("create stock log: %v", err)
}
if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
ServerID: serverPL.ID,
Source: serverPL.Source,
Version: serverPL.Version,
Name: serverPL.Notification,
CreatedAt: serverPL.CreatedAt,
SyncedAt: time.Now(),
IsUsed: false,
}); err != nil {
t.Fatalf("seed local pricelist: %v", err)
}
localPL, err := local.GetLocalPricelistByServerID(serverPL.ID)
if err != nil {
t.Fatalf("get local pricelist: %v", err)
}
if err := local.DB().Create(&localdb.LocalPartnumberBook{
ServerID: 1,
Version: "2026-03-07-001",
CreatedAt: time.Now(),
IsActive: true,
PartnumbersJSON: localdb.LocalStringList{"CPU-PN-1"},
}).Error; err != nil {
t.Fatalf("create local partnumber book: %v", err)
}
if err := local.DB().Create(&localdb.LocalPartnumberBookItem{
Partnumber: "CPU-PN-1",
LotsJSON: localdb.LocalPartnumberBookLots{
{LotName: "CPU_A", Qty: 1},
},
Description: "CPU PN",
}).Error; err != nil {
t.Fatalf("create local partnumber book item: %v", err)
}
svc := syncsvc.NewServiceWithDB(serverDB, local)
if _, err := svc.SyncPricelistItems(localPL.ID); err != nil {
t.Fatalf("sync pricelist items: %v", err)
}
items, err := local.GetLocalPricelistItems(localPL.ID)
if err != nil {
t.Fatalf("load local items: %v", err)
}
if len(items) != 1 {
t.Fatalf("expected 1 local item, got %d", len(items))
}
if items[0].AvailableQty == nil {
t.Fatalf("expected available_qty to be set")
}
if *items[0].AvailableQty != 7 {
t.Fatalf("expected available_qty=7, got %v", *items[0].AvailableQty)
}
if len(items[0].Partnumbers) != 1 || items[0].Partnumbers[0] != "CPU-PN-1" {
t.Fatalf("expected partnumbers [CPU-PN-1], got %v", items[0].Partnumbers)
}
}

View File

@@ -250,6 +250,121 @@ func TestPushPendingChangesCreateIsIdempotent(t *testing.T) {
}
}
func TestPushPendingChangesConfigurationPushesLine(t *testing.T) {
local := newLocalDBForSyncTest(t)
serverDB := newServerDBForSyncTest(t)
localSync := syncsvc.NewService(nil, local)
configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
pushService := syncsvc.NewServiceWithDB(serverDB, local)
created, err := configService.Create("tester", &services.CreateConfigRequest{
Name: "Cfg Line Push",
Items: models.ConfigItems{{LotName: "CPU_LINE", Quantity: 1, UnitPrice: 1000}},
ServerCount: 1,
})
if err != nil {
t.Fatalf("create config: %v", err)
}
if created.Line != 10 {
t.Fatalf("expected local create line=10, got %d", created.Line)
}
if _, err := pushService.PushPendingChanges(); err != nil {
t.Fatalf("push pending changes: %v", err)
}
var serverCfg models.Configuration
if err := serverDB.Where("uuid = ?", created.UUID).First(&serverCfg).Error; err != nil {
t.Fatalf("load server config: %v", err)
}
if serverCfg.Line != 10 {
t.Fatalf("expected server line=10 after push, got %d", serverCfg.Line)
}
}
func TestImportConfigurationsToLocalPullsLine(t *testing.T) {
local := newLocalDBForSyncTest(t)
serverDB := newServerDBForSyncTest(t)
cfg := models.Configuration{
UUID: "server-line-config",
OwnerUsername: "tester",
Name: "Cfg Line Pull",
Items: models.ConfigItems{{LotName: "CPU_PULL", Quantity: 1, UnitPrice: 900}},
ServerCount: 1,
Line: 40,
}
total := cfg.Items.Total()
cfg.TotalPrice = &total
if err := serverDB.Create(&cfg).Error; err != nil {
t.Fatalf("seed server config: %v", err)
}
svc := syncsvc.NewServiceWithDB(serverDB, local)
if _, err := svc.ImportConfigurationsToLocal(); err != nil {
t.Fatalf("import configurations to local: %v", err)
}
localCfg, err := local.GetConfigurationByUUID(cfg.UUID)
if err != nil {
t.Fatalf("load local config: %v", err)
}
if localCfg.Line != 40 {
t.Fatalf("expected imported line=40, got %d", localCfg.Line)
}
}
func TestImportConfigurationsToLocalLoadsVendorSpecFromServer(t *testing.T) {
local := newLocalDBForSyncTest(t)
serverDB := newServerDBForSyncTest(t)
serverSpec := models.VendorSpec{
{
SortOrder: 10,
VendorPartnumber: "GPU-NVHGX-H200-8141",
Quantity: 1,
Description: "NVIDIA HGX Delta-Next GPU Baseboard",
LotMappings: []models.VendorSpecLotMapping{
{LotName: "GPU_NV_H200_141GB_SXM_(HGX)", QuantityPerPN: 8},
},
},
}
cfg := models.Configuration{
UUID: "server-vendorspec-config",
OwnerUsername: "tester",
Name: "Cfg VendorSpec Pull",
Items: models.ConfigItems{{LotName: "CPU_PULL", Quantity: 1, UnitPrice: 900}},
ServerCount: 1,
Line: 50,
VendorSpec: serverSpec,
}
total := cfg.Items.Total()
cfg.TotalPrice = &total
if err := serverDB.Create(&cfg).Error; err != nil {
t.Fatalf("seed server config: %v", err)
}
svc := syncsvc.NewServiceWithDB(serverDB, local)
if _, err := svc.ImportConfigurationsToLocal(); err != nil {
t.Fatalf("import configurations to local: %v", err)
}
localCfg, err := local.GetConfigurationByUUID(cfg.UUID)
if err != nil {
t.Fatalf("load local config: %v", err)
}
if len(localCfg.VendorSpec) != 1 {
t.Fatalf("expected server vendor_spec imported, got %d rows", len(localCfg.VendorSpec))
}
if localCfg.VendorSpec[0].VendorPartnumber != "GPU-NVHGX-H200-8141" {
t.Fatalf("unexpected vendor_partnumber after import: %q", localCfg.VendorSpec[0].VendorPartnumber)
}
if len(localCfg.VendorSpec[0].LotMappings) != 1 || localCfg.VendorSpec[0].LotMappings[0].LotName != "GPU_NV_H200_141GB_SXM_(HGX)" {
t.Fatalf("unexpected lot mappings after import: %+v", localCfg.VendorSpec[0].LotMappings)
}
}
func TestPushPendingChangesCreateThenUpdateBeforeFirstPush(t *testing.T) {
local := newLocalDBForSyncTest(t)
serverDB := newServerDBForSyncTest(t)
@@ -361,7 +476,9 @@ CREATE TABLE qt_configurations (
competitor_pricelist_id INTEGER NULL,
disable_price_refresh INTEGER NOT NULL DEFAULT 0,
only_in_stock INTEGER NOT NULL DEFAULT 0,
line_no INTEGER NULL,
price_updated_at DATETIME NULL,
vendor_spec TEXT NULL,
created_at DATETIME
);`).Error; err != nil {
t.Fatalf("create qt_configurations: %v", err)

View File

@@ -0,0 +1,141 @@
package services
import (
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"math"
)
// ResolvedBOMRow is the result of resolving a single vendor BOM row.
type ResolvedBOMRow struct {
localdb.VendorSpecItem
// ResolutionSource already on VendorSpecItem: "book", "manual_suggestion", "unresolved"
}
// AggregatedLOT represents a LOT with its aggregated quantity from the BOM.
type AggregatedLOT struct {
LotName string
Quantity int
}
// VendorSpecResolver resolves vendor BOM rows to LOT names using the active partnumber book.
type VendorSpecResolver struct {
bookRepo *repository.PartnumberBookRepository
}
func NewVendorSpecResolver(bookRepo *repository.PartnumberBookRepository) *VendorSpecResolver {
return &VendorSpecResolver{bookRepo: bookRepo}
}
// Resolve resolves each vendor spec item's lot name using the 3-step algorithm.
// It returns the resolved items. Manual lot suggestions from the input are preserved as pre-fill.
func (r *VendorSpecResolver) Resolve(items []localdb.VendorSpecItem) ([]localdb.VendorSpecItem, error) {
// Preliminary: load the active book (step 1 below looks partnumbers up in it)
book, err := r.bookRepo.GetActiveBook()
if err != nil {
// No book available — mark all as unresolved
for i := range items {
if items[i].ResolvedLotName == "" {
items[i].ResolutionSource = "unresolved"
}
}
return items, nil
}
for i, item := range items {
pn := item.VendorPartnumber
// Step 1: Look up in active book
matches, err := r.bookRepo.FindLotByPartnumber(book.ID, pn)
if err == nil && len(matches) > 0 {
items[i].LotMappings = make([]localdb.VendorSpecLotMapping, 0, len(matches[0].LotsJSON))
for _, lot := range matches[0].LotsJSON {
if lot.LotName == "" {
continue
}
items[i].LotMappings = append(items[i].LotMappings, localdb.VendorSpecLotMapping{
LotName: lot.LotName,
QuantityPerPN: lotQtyToInt(lot.Qty),
})
}
if len(items[i].LotMappings) > 0 {
items[i].ResolvedLotName = items[i].LotMappings[0].LotName
}
items[i].ResolutionSource = "book"
continue
}
// Step 2: Pre-fill from manual_lot_suggestion if provided
if item.ManualLotSuggestion != "" {
items[i].ResolvedLotName = item.ManualLotSuggestion
items[i].ResolutionSource = "manual_suggestion"
continue
}
// Step 3: Unresolved
items[i].ResolvedLotName = ""
items[i].ResolutionSource = "unresolved"
}
return items, nil
}
// AggregateLOTs applies qty from the resolved PN composition stored in lots_json.
func AggregateLOTs(items []localdb.VendorSpecItem, book *localdb.LocalPartnumberBook, bookRepo *repository.PartnumberBookRepository) ([]AggregatedLOT, error) {
lotTotals := make(map[string]int)
if book != nil {
for _, item := range items {
if item.ResolvedLotName == "" {
continue
}
lot := item.ResolvedLotName
pn := item.VendorPartnumber
matches, err := bookRepo.FindLotByPartnumber(book.ID, pn)
if err != nil || len(matches) == 0 {
lotTotals[lot] += item.Quantity
continue
}
for _, m := range matches {
for _, mappedLot := range m.LotsJSON {
if mappedLot.LotName != lot {
continue
}
lotTotals[lot] += item.Quantity * lotQtyToInt(mappedLot.Qty)
}
}
}
} else {
// No book: all resolved rows contribute qty=1 per lot
for _, item := range items {
if item.ResolvedLotName != "" {
lotTotals[item.ResolvedLotName] += item.Quantity
}
}
}
// Build aggregated list
seen := make(map[string]bool)
var result []AggregatedLOT
for _, item := range items {
lot := item.ResolvedLotName
if lot == "" || seen[lot] {
continue
}
seen[lot] = true
qty := lotTotals[lot]
if qty < 1 {
qty = 1
}
result = append(result, AggregatedLOT{LotName: lot, Quantity: qty})
}
return result, nil
}
func lotQtyToInt(qty float64) int {
if qty < 1 {
return 1
}
return int(math.Round(qty))
}
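The `lotQtyToInt` clamp-and-round behavior is worth pinning down, since it decides how fractional book quantities contribute to aggregated LOT totals. A standalone sketch of the same helper:

```go
package main

import (
	"fmt"
	"math"
)

// lotQtyToInt mirrors the helper above: quantities below 1 (including zero
// and fractions like 0.5) are clamped to 1, everything else rounds to the
// nearest integer with halves away from zero (math.Round semantics).
func lotQtyToInt(qty float64) int {
	if qty < 1 {
		return 1
	}
	return int(math.Round(qty))
}

func main() {
	fmt.Println(lotQtyToInt(0))   // 1 (clamped)
	fmt.Println(lotQtyToInt(2.4)) // 2
	fmt.Println(lotQtyToInt(2.5)) // 3
}
```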

View File

@@ -0,0 +1,560 @@
package services
import (
"bytes"
"encoding/xml"
"fmt"
"path/filepath"
"sort"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
"git.mchus.pro/mchus/quoteforge/internal/models"
"git.mchus.pro/mchus/quoteforge/internal/repository"
"github.com/google/uuid"
"gorm.io/gorm"
)
type VendorWorkspaceImportResult struct {
Imported int `json:"imported"`
Project *models.Project `json:"project,omitempty"`
Configs []VendorWorkspaceImportedConfig `json:"configs"`
}
type VendorWorkspaceImportedConfig struct {
UUID string `json:"uuid"`
Name string `json:"name"`
ServerCount int `json:"server_count"`
ServerModel string `json:"server_model,omitempty"`
Rows int `json:"rows"`
}
type importedWorkspace struct {
SourceFormat string
SourceDocID string
SourceFileName string
CurrencyCode string
Configurations []importedConfiguration
}
type importedConfiguration struct {
GroupID string
Name string
Line int
ServerCount int
ServerModel string
Article string
CurrencyCode string
Rows []localdb.VendorSpecItem
TotalPrice *float64
}
type groupedItem struct {
order int
row cfxmlProductLineItem
}
type cfxmlDocument struct {
XMLName xml.Name `xml:"CFXML"`
ThisDocumentIdentifier cfxmlDocumentIdentifier `xml:"thisDocumentIdentifier"`
CFData cfxmlData `xml:"CFData"`
}
type cfxmlDocumentIdentifier struct {
ProprietaryDocumentIdentifier string `xml:"ProprietaryDocumentIdentifier"`
}
type cfxmlData struct {
ProprietaryInformation []cfxmlProprietaryInformation `xml:"ProprietaryInformation"`
ProductLineItems []cfxmlProductLineItem `xml:"ProductLineItem"`
}
type cfxmlProprietaryInformation struct {
Name string `xml:"Name"`
Value string `xml:"Value"`
}
type cfxmlProductLineItem struct {
ProductLineNumber string `xml:"ProductLineNumber"`
ItemNo string `xml:"ItemNo"`
TransactionType string `xml:"TransactionType"`
ProprietaryGroupIdentifier string `xml:"ProprietaryGroupIdentifier"`
ConfigurationGroupLineNumberReference string `xml:"ConfigurationGroupLineNumberReference"`
Quantity string `xml:"Quantity"`
ProductIdentification cfxmlProductIdentification `xml:"ProductIdentification"`
UnitListPrice cfxmlUnitListPrice `xml:"UnitListPrice"`
ProductSubLineItems []cfxmlProductSubLineItem `xml:"ProductSubLineItem"`
}
type cfxmlProductSubLineItem struct {
LineNumber string `xml:"LineNumber"`
TransactionType string `xml:"TransactionType"`
Quantity string `xml:"Quantity"`
ProductIdentification cfxmlProductIdentification `xml:"ProductIdentification"`
UnitListPrice cfxmlUnitListPrice `xml:"UnitListPrice"`
}
type cfxmlProductIdentification struct {
PartnerProductIdentification cfxmlPartnerProductIdentification `xml:"PartnerProductIdentification"`
}
type cfxmlPartnerProductIdentification struct {
ProprietaryProductIdentifier string `xml:"ProprietaryProductIdentifier"`
ProprietaryProductChar string `xml:"ProprietaryProductChar"`
ProductCharacter string `xml:"ProductCharacter"`
ProductDescription string `xml:"ProductDescription"`
ProductName string `xml:"ProductName"`
ProductTypeCode string `xml:"ProductTypeCode"`
}
type cfxmlUnitListPrice struct {
FinancialAmount cfxmlFinancialAmount `xml:"FinancialAmount"`
}
type cfxmlFinancialAmount struct {
GlobalCurrencyCode string `xml:"GlobalCurrencyCode"`
MonetaryAmount string `xml:"MonetaryAmount"`
}
func (s *LocalConfigurationService) ImportVendorWorkspaceToProject(projectUUID string, sourceFileName string, data []byte, ownerUsername string) (*VendorWorkspaceImportResult, error) {
project, err := s.localDB.GetProjectByUUID(projectUUID)
if err != nil {
return nil, ErrProjectNotFound
}
workspace, err := parseCFXMLWorkspace(data, filepath.Base(sourceFileName))
if err != nil {
return nil, err
}
result := &VendorWorkspaceImportResult{
Imported: 0,
Project: localdb.LocalToProject(project),
Configs: make([]VendorWorkspaceImportedConfig, 0, len(workspace.Configurations)),
}
err = s.localDB.DB().Transaction(func(tx *gorm.DB) error {
bookRepo := repository.NewPartnumberBookRepository(tx)
for _, imported := range workspace.Configurations {
now := time.Now()
cfgUUID := uuid.NewString()
groupRows, items, totalPrice, estimatePricelistID, err := s.prepareImportedConfiguration(imported.Rows, imported.ServerCount, bookRepo)
if err != nil {
return fmt.Errorf("prepare imported configuration group %s: %w", imported.GroupID, err)
}
localCfg := &localdb.LocalConfiguration{
UUID: cfgUUID,
ProjectUUID: &projectUUID,
IsActive: true,
Name: imported.Name,
Items: items,
TotalPrice: totalPrice,
ServerCount: imported.ServerCount,
ServerModel: imported.ServerModel,
Article: imported.Article,
PricelistID: estimatePricelistID,
VendorSpec: groupRows,
CreatedAt: now,
UpdatedAt: now,
SyncStatus: "pending",
OriginalUsername: ownerUsername,
}
if err := s.createWithVersionTx(tx, localCfg, ownerUsername); err != nil {
return fmt.Errorf("import configuration group %s: %w", imported.GroupID, err)
}
result.Imported++
result.Configs = append(result.Configs, VendorWorkspaceImportedConfig{
UUID: localCfg.UUID,
Name: localCfg.Name,
ServerCount: localCfg.ServerCount,
ServerModel: localCfg.ServerModel,
Rows: len(localCfg.VendorSpec),
})
}
return nil
})
if err != nil {
return nil, err
}
return result, nil
}
func (s *LocalConfigurationService) prepareImportedConfiguration(rows []localdb.VendorSpecItem, serverCount int, bookRepo *repository.PartnumberBookRepository) (localdb.VendorSpec, localdb.LocalConfigItems, *float64, *uint, error) {
resolver := NewVendorSpecResolver(bookRepo)
resolved, err := resolver.Resolve(append([]localdb.VendorSpecItem(nil), rows...))
if err != nil {
return nil, nil, nil, nil, err
}
canonical := make(localdb.VendorSpec, 0, len(resolved))
for _, row := range resolved {
if len(row.LotMappings) == 0 && strings.TrimSpace(row.ResolvedLotName) != "" {
row.LotMappings = []localdb.VendorSpecLotMapping{
{LotName: strings.TrimSpace(row.ResolvedLotName), QuantityPerPN: 1},
}
}
row.LotMappings = normalizeImportedLotMappings(row.LotMappings)
row.ResolvedLotName = ""
row.ResolutionSource = ""
row.ManualLotSuggestion = ""
row.LotQtyPerPN = 0
row.LotAllocations = nil
canonical = append(canonical, row)
}
estimatePricelist, _ := s.localDB.GetLatestLocalPricelistBySource("estimate")
var serverPricelistID *uint
if estimatePricelist != nil {
serverPricelistID = &estimatePricelist.ServerID
}
items := aggregateVendorSpecToItems(canonical, estimatePricelist, s.localDB)
totalValue := items.Total()
if serverCount > 1 {
totalValue *= float64(serverCount)
}
totalPrice := &totalValue
return canonical, items, totalPrice, serverPricelistID, nil
}
func aggregateVendorSpecToItems(spec localdb.VendorSpec, estimatePricelist *localdb.LocalPricelist, local *localdb.LocalDB) localdb.LocalConfigItems {
if len(spec) == 0 {
return localdb.LocalConfigItems{}
}
lotMap := make(map[string]int)
order := make([]string, 0)
for _, row := range spec {
for _, mapping := range normalizeImportedLotMappings(row.LotMappings) {
if _, exists := lotMap[mapping.LotName]; !exists {
order = append(order, mapping.LotName)
}
lotMap[mapping.LotName] += row.Quantity * mapping.QuantityPerPN
}
}
sort.Strings(order)
items := make(localdb.LocalConfigItems, 0, len(order))
for _, lotName := range order {
unitPrice := 0.0
if estimatePricelist != nil && local != nil {
if price, err := local.GetLocalPriceForLot(estimatePricelist.ID, lotName); err == nil && price > 0 {
unitPrice = price
}
}
items = append(items, localdb.LocalConfigItem{
LotName: lotName,
Quantity: lotMap[lotName],
UnitPrice: unitPrice,
})
}
return items
}
func normalizeImportedLotMappings(in []localdb.VendorSpecLotMapping) []localdb.VendorSpecLotMapping {
if len(in) == 0 {
return nil
}
merged := make(map[string]int, len(in))
order := make([]string, 0, len(in))
for _, mapping := range in {
lot := strings.TrimSpace(mapping.LotName)
if lot == "" {
continue
}
qty := mapping.QuantityPerPN
if qty < 1 {
qty = 1
}
if _, exists := merged[lot]; !exists {
order = append(order, lot)
}
merged[lot] += qty
}
out := make([]localdb.VendorSpecLotMapping, 0, len(order))
for _, lot := range order {
out = append(out, localdb.VendorSpecLotMapping{
LotName: lot,
QuantityPerPN: merged[lot],
})
}
if len(out) == 0 {
return nil
}
return out
}
func parseCFXMLWorkspace(data []byte, sourceFileName string) (*importedWorkspace, error) {
var doc cfxmlDocument
if err := xml.Unmarshal(data, &doc); err != nil {
return nil, fmt.Errorf("parse CFXML workspace: %w", err)
}
if doc.XMLName.Local != "CFXML" {
return nil, fmt.Errorf("unsupported workspace root: %s", doc.XMLName.Local)
}
if len(doc.CFData.ProductLineItems) == 0 {
return nil, fmt.Errorf("CFXML workspace has no ProductLineItem rows")
}
workspace := &importedWorkspace{
SourceFormat: "CFXML",
SourceDocID: strings.TrimSpace(doc.ThisDocumentIdentifier.ProprietaryDocumentIdentifier),
SourceFileName: sourceFileName,
CurrencyCode: detectWorkspaceCurrency(doc.CFData.ProprietaryInformation, doc.CFData.ProductLineItems),
}
type groupBucket struct {
order int
items []groupedItem
}
groupOrder := make([]string, 0)
groups := make(map[string]*groupBucket)
for idx, item := range doc.CFData.ProductLineItems {
groupID := strings.TrimSpace(item.ProprietaryGroupIdentifier)
if groupID == "" {
groupID = firstNonEmpty(strings.TrimSpace(item.ProductLineNumber), strings.TrimSpace(item.ItemNo), fmt.Sprintf("group-%d", idx+1))
}
bucket := groups[groupID]
if bucket == nil {
bucket = &groupBucket{order: idx}
groups[groupID] = bucket
groupOrder = append(groupOrder, groupID)
}
bucket.items = append(bucket.items, groupedItem{order: idx, row: item})
}
for lineIdx, groupID := range groupOrder {
bucket := groups[groupID]
if bucket == nil || len(bucket.items) == 0 {
continue
}
primary := pickPrimaryTopLevelRow(bucket.items)
serverCount := maxInt(parseInt(primary.row.Quantity), 1)
rows := make([]localdb.VendorSpecItem, 0, len(bucket.items)*4)
sortOrder := 10
for _, item := range bucket.items {
topRow := vendorSpecItemFromTopLevel(item.row, serverCount, sortOrder)
if topRow != nil {
rows = append(rows, *topRow)
sortOrder += 10
}
for _, sub := range item.row.ProductSubLineItems {
subRow := vendorSpecItemFromSubLine(sub, sortOrder)
if subRow == nil {
continue
}
rows = append(rows, *subRow)
sortOrder += 10
}
}
total := sumVendorSpecRows(rows, serverCount)
name := strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductName)
if name == "" {
name = strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductDescription)
}
if name == "" {
name = fmt.Sprintf("Imported config %d", lineIdx+1)
}
workspace.Configurations = append(workspace.Configurations, importedConfiguration{
GroupID: groupID,
Name: name,
Line: (lineIdx + 1) * 10,
ServerCount: serverCount,
ServerModel: strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProductDescription),
Article: strings.TrimSpace(primary.row.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier),
CurrencyCode: workspace.CurrencyCode,
Rows: rows,
TotalPrice: total,
})
}
if len(workspace.Configurations) == 0 {
return nil, fmt.Errorf("CFXML workspace has no importable configuration groups")
}
return workspace, nil
}
func detectWorkspaceCurrency(meta []cfxmlProprietaryInformation, rows []cfxmlProductLineItem) string {
for _, item := range meta {
if strings.EqualFold(strings.TrimSpace(item.Name), "Currencies") {
value := strings.TrimSpace(item.Value)
if value != "" {
return value
}
}
}
for _, row := range rows {
code := strings.TrimSpace(row.UnitListPrice.FinancialAmount.GlobalCurrencyCode)
if code != "" {
return code
}
}
return ""
}
func pickPrimaryTopLevelRow(items []groupedItem) groupedItem {
best := items[0]
bestScore := primaryScore(best.row)
for _, item := range items[1:] {
score := primaryScore(item.row)
if score > bestScore {
best = item
bestScore = score
continue
}
if score == bestScore && compareLineNumbers(item.row.ProductLineNumber, best.row.ProductLineNumber) < 0 {
best = item
}
}
return best
}
func primaryScore(row cfxmlProductLineItem) int {
score := len(row.ProductSubLineItems)
if strings.EqualFold(strings.TrimSpace(row.ProductIdentification.PartnerProductIdentification.ProductTypeCode), "Hardware") {
score += 100000
}
return score
}
func compareLineNumbers(left, right string) int {
li := parseInt(left)
ri := parseInt(right)
switch {
case li < ri:
return -1
case li > ri:
return 1
default:
return strings.Compare(left, right)
}
}
func vendorSpecItemFromTopLevel(item cfxmlProductLineItem, serverCount int, sortOrder int) *localdb.VendorSpecItem {
code := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier)
desc := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProductDescription)
if code == "" && desc == "" {
return nil
}
qty := normalizeTopLevelQuantity(item.Quantity, serverCount)
unitPrice := parseOptionalFloat(item.UnitListPrice.FinancialAmount.MonetaryAmount)
return &localdb.VendorSpecItem{
SortOrder: sortOrder,
VendorPartnumber: code,
Quantity: qty,
Description: desc,
UnitPrice: unitPrice,
TotalPrice: totalPrice(unitPrice, qty),
}
}
func vendorSpecItemFromSubLine(item cfxmlProductSubLineItem, sortOrder int) *localdb.VendorSpecItem {
code := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProprietaryProductIdentifier)
desc := strings.TrimSpace(item.ProductIdentification.PartnerProductIdentification.ProductDescription)
if code == "" && desc == "" {
return nil
}
qty := maxInt(parseInt(item.Quantity), 1)
unitPrice := parseOptionalFloat(item.UnitListPrice.FinancialAmount.MonetaryAmount)
return &localdb.VendorSpecItem{
SortOrder: sortOrder,
VendorPartnumber: code,
Quantity: qty,
Description: desc,
UnitPrice: unitPrice,
TotalPrice: totalPrice(unitPrice, qty),
}
}
func sumVendorSpecRows(rows []localdb.VendorSpecItem, serverCount int) *float64 {
total := 0.0
hasTotal := false
for _, row := range rows {
if row.TotalPrice == nil {
continue
}
total += *row.TotalPrice
hasTotal = true
}
if !hasTotal {
return nil
}
if serverCount > 1 {
total *= float64(serverCount)
}
return &total
}
func totalPrice(unitPrice *float64, qty int) *float64 {
if unitPrice == nil {
return nil
}
total := *unitPrice * float64(qty)
return &total
}
func parseOptionalFloat(raw string) *float64 {
trimmed := strings.TrimSpace(raw)
if trimmed == "" {
return nil
}
value, err := strconv.ParseFloat(trimmed, 64)
if err != nil {
return nil
}
return &value
}
func parseInt(raw string) int {
trimmed := strings.TrimSpace(raw)
if trimmed == "" {
return 0
}
value, err := strconv.Atoi(trimmed)
if err != nil {
return 0
}
return value
}
func firstNonEmpty(values ...string) string {
	for _, value := range values {
		if trimmed := strings.TrimSpace(value); trimmed != "" {
			return trimmed
		}
	}
	return ""
}
func maxInt(value, floor int) int {
if value < floor {
return floor
}
return value
}
func normalizeTopLevelQuantity(raw string, serverCount int) int {
qty := maxInt(parseInt(raw), 1)
if serverCount <= 1 {
return qty
}
if qty%serverCount == 0 {
return maxInt(qty/serverCount, 1)
}
return qty
}
func IsCFXMLWorkspace(data []byte) bool {
return bytes.Contains(data, []byte("<CFXML>")) || bytes.Contains(data, []byte("<CFXML "))
}


@@ -0,0 +1,360 @@
package services
import (
"path/filepath"
"testing"
"time"
"git.mchus.pro/mchus/quoteforge/internal/localdb"
syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)
func TestParseCFXMLWorkspaceGroupsSoftwareIntoConfiguration(t *testing.T) {
const sample = `<?xml version="1.0" encoding="UTF-8"?>
<CFXML>
<thisDocumentIdentifier>
<ProprietaryDocumentIdentifier>CFXML.workspace-test</ProprietaryDocumentIdentifier>
</thisDocumentIdentifier>
<CFData>
<ProprietaryInformation>
<Name>Currencies</Name>
<Value>USD</Value>
</ProprietaryInformation>
<ProductLineItem>
<ProductLineNumber>1000</ProductLineNumber>
<ItemNo>1000</ItemNo>
<TransactionType>NEW</TransactionType>
<ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
<ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
<Quantity>6</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
<ProductDescription>ThinkSystem SR630 V4</ProductDescription>
<ProductName>#1</ProductName>
<ProductTypeCode>Hardware</ProductTypeCode>
</PartnerProductIdentification>
</ProductIdentification>
<UnitListPrice>
<FinancialAmount>
<GlobalCurrencyCode>USD</GlobalCurrencyCode>
<MonetaryAmount>100.00</MonetaryAmount>
</FinancialAmount>
</UnitListPrice>
<ProductSubLineItem>
<LineNumber>1001</LineNumber>
<TransactionType>ADD</TransactionType>
<Quantity>2</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>CPU-1</ProprietaryProductIdentifier>
<ProductDescription>CPU</ProductDescription>
<ProductCharacter>PROCESSOR</ProductCharacter>
</PartnerProductIdentification>
</ProductIdentification>
<UnitListPrice>
<FinancialAmount>
<GlobalCurrencyCode>USD</GlobalCurrencyCode>
<MonetaryAmount>0</MonetaryAmount>
</FinancialAmount>
</UnitListPrice>
</ProductSubLineItem>
</ProductLineItem>
<ProductLineItem>
<ProductLineNumber>2000</ProductLineNumber>
<ItemNo>2000</ItemNo>
<TransactionType>NEW</TransactionType>
<ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
<ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
<Quantity>6</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>7S0X-CTO8WW</ProprietaryProductIdentifier>
<ProductDescription>XClarity Controller Prem-FOD</ProductDescription>
<ProductName>software1</ProductName>
<ProductTypeCode>Software</ProductTypeCode>
</PartnerProductIdentification>
</ProductIdentification>
<UnitListPrice>
<FinancialAmount>
<GlobalCurrencyCode>USD</GlobalCurrencyCode>
<MonetaryAmount>25.00</MonetaryAmount>
</FinancialAmount>
</UnitListPrice>
<ProductSubLineItem>
<LineNumber>2001</LineNumber>
<TransactionType>ADD</TransactionType>
<Quantity>1</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>LIC-1</ProprietaryProductIdentifier>
<ProductDescription>License</ProductDescription>
<ProductCharacter>SOFTWARE</ProductCharacter>
</PartnerProductIdentification>
</ProductIdentification>
<UnitListPrice>
<FinancialAmount>
<GlobalCurrencyCode>USD</GlobalCurrencyCode>
<MonetaryAmount>0</MonetaryAmount>
</FinancialAmount>
</UnitListPrice>
</ProductSubLineItem>
</ProductLineItem>
<ProductLineItem>
<ProductLineNumber>3000</ProductLineNumber>
<ItemNo>3000</ItemNo>
<TransactionType>NEW</TransactionType>
<ProprietaryGroupIdentifier>3000</ProprietaryGroupIdentifier>
<ConfigurationGroupLineNumberReference>100</ConfigurationGroupLineNumberReference>
<Quantity>2</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
<ProductDescription>ThinkSystem SR630 V4</ProductDescription>
<ProductName>#2</ProductName>
<ProductTypeCode>Hardware</ProductTypeCode>
</PartnerProductIdentification>
</ProductIdentification>
<UnitListPrice>
<FinancialAmount>
<GlobalCurrencyCode>USD</GlobalCurrencyCode>
<MonetaryAmount>90.00</MonetaryAmount>
</FinancialAmount>
</UnitListPrice>
</ProductLineItem>
</CFData>
</CFXML>`
workspace, err := parseCFXMLWorkspace([]byte(sample), "sample.xml")
if err != nil {
t.Fatalf("parseCFXMLWorkspace: %v", err)
}
if workspace.SourceFormat != "CFXML" {
t.Fatalf("unexpected source format: %q", workspace.SourceFormat)
}
if len(workspace.Configurations) != 2 {
t.Fatalf("expected 2 configurations, got %d", len(workspace.Configurations))
}
first := workspace.Configurations[0]
if first.GroupID != "1000" {
t.Fatalf("expected first group 1000, got %q", first.GroupID)
}
if first.Name != "#1" {
t.Fatalf("expected first config name #1, got %q", first.Name)
}
if first.ServerCount != 6 {
t.Fatalf("expected first server count 6, got %d", first.ServerCount)
}
if len(first.Rows) != 4 {
t.Fatalf("expected 4 vendor rows in first config, got %d", len(first.Rows))
}
foundSoftwareTopLevel := false
foundSoftwareSubRow := false
foundPrimaryTopLevelQty := 0
for _, row := range first.Rows {
if row.VendorPartnumber == "7DG9-CTO1WW" {
foundPrimaryTopLevelQty = row.Quantity
}
if row.VendorPartnumber == "7S0X-CTO8WW" {
foundSoftwareTopLevel = true
}
if row.VendorPartnumber == "LIC-1" {
foundSoftwareSubRow = true
}
}
if !foundSoftwareTopLevel {
t.Fatalf("expected software top-level row to stay inside configuration")
}
if !foundSoftwareSubRow {
t.Fatalf("expected software sub-row to stay inside configuration")
}
if foundPrimaryTopLevelQty != 1 {
t.Fatalf("expected primary top-level qty normalized to 1, got %d", foundPrimaryTopLevelQty)
}
if first.TotalPrice == nil || *first.TotalPrice != 750 {
t.Fatalf("expected first total price 750, got %+v", first.TotalPrice)
}
second := workspace.Configurations[1]
if second.Name != "#2" {
t.Fatalf("expected second config name #2, got %q", second.Name)
}
if len(second.Rows) != 1 {
t.Fatalf("expected second config to contain single top-level row, got %d", len(second.Rows))
}
}
func TestImportVendorWorkspaceToProject_AutoResolvesAndAppliesEstimate(t *testing.T) {
local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
if err != nil {
t.Fatalf("init local db: %v", err)
}
t.Cleanup(func() { _ = local.Close() })
projectName := "OPS-2079"
if err := local.SaveProject(&localdb.LocalProject{
UUID: "project-1",
OwnerUsername: "tester",
Code: "OPS-2079",
Variant: "",
Name: &projectName,
IsActive: true,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}); err != nil {
t.Fatalf("save project: %v", err)
}
if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
ServerID: 101,
Source: "estimate",
Version: "E-1",
Name: "Estimate",
CreatedAt: time.Now(),
SyncedAt: time.Now(),
}); err != nil {
t.Fatalf("save estimate pricelist: %v", err)
}
estimatePL, err := local.GetLocalPricelistByServerID(101)
if err != nil {
t.Fatalf("get estimate pricelist: %v", err)
}
if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
{PricelistID: estimatePL.ID, LotName: "CPU_INTEL_6747P", Price: 1000},
{PricelistID: estimatePL.ID, LotName: "LICENSE_XCC", Price: 50},
}); err != nil {
t.Fatalf("save estimate items: %v", err)
}
bookRepo := local.DB()
if err := bookRepo.Create(&localdb.LocalPartnumberBook{
ServerID: 501,
Version: "B-1",
CreatedAt: time.Now(),
IsActive: true,
PartnumbersJSON: localdb.LocalStringList{"CPU-1", "LIC-1"},
}).Error; err != nil {
t.Fatalf("save active book: %v", err)
}
if err := bookRepo.Create([]localdb.LocalPartnumberBookItem{
{Partnumber: "CPU-1", LotsJSON: localdb.LocalPartnumberBookLots{{LotName: "CPU_INTEL_6747P", Qty: 1}}},
{Partnumber: "LIC-1", LotsJSON: localdb.LocalPartnumberBookLots{{LotName: "LICENSE_XCC", Qty: 1}}},
}).Error; err != nil {
t.Fatalf("save book items: %v", err)
}
service := NewLocalConfigurationService(local, syncsvc.NewService(nil, local), &QuoteService{}, func() bool { return false })
const sample = `<?xml version="1.0" encoding="UTF-8"?>
<CFXML>
<thisDocumentIdentifier>
<ProprietaryDocumentIdentifier>CFXML.workspace-test</ProprietaryDocumentIdentifier>
</thisDocumentIdentifier>
<CFData>
<ProductLineItem>
<ProductLineNumber>1000</ProductLineNumber>
<ItemNo>1000</ItemNo>
<TransactionType>NEW</TransactionType>
<ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
<Quantity>2</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>7DG9-CTO1WW</ProprietaryProductIdentifier>
<ProductDescription>ThinkSystem SR630 V4</ProductDescription>
<ProductName>#1</ProductName>
<ProductTypeCode>Hardware</ProductTypeCode>
</PartnerProductIdentification>
</ProductIdentification>
<ProductSubLineItem>
<LineNumber>1001</LineNumber>
<Quantity>2</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>CPU-1</ProprietaryProductIdentifier>
<ProductDescription>CPU</ProductDescription>
</PartnerProductIdentification>
</ProductIdentification>
</ProductSubLineItem>
</ProductLineItem>
<ProductLineItem>
<ProductLineNumber>2000</ProductLineNumber>
<ItemNo>2000</ItemNo>
<TransactionType>NEW</TransactionType>
<ProprietaryGroupIdentifier>1000</ProprietaryGroupIdentifier>
<Quantity>2</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>7S0X-CTO8WW</ProprietaryProductIdentifier>
<ProductDescription>XClarity Controller</ProductDescription>
<ProductName>software1</ProductName>
<ProductTypeCode>Software</ProductTypeCode>
</PartnerProductIdentification>
</ProductIdentification>
<ProductSubLineItem>
<LineNumber>2001</LineNumber>
<Quantity>1</Quantity>
<ProductIdentification>
<PartnerProductIdentification>
<ProprietaryProductIdentifier>LIC-1</ProprietaryProductIdentifier>
<ProductDescription>License</ProductDescription>
</PartnerProductIdentification>
</ProductIdentification>
</ProductSubLineItem>
</ProductLineItem>
</CFData>
</CFXML>`
result, err := service.ImportVendorWorkspaceToProject("project-1", "sample.xml", []byte(sample), "tester")
if err != nil {
t.Fatalf("ImportVendorWorkspaceToProject: %v", err)
}
if result.Imported != 1 || len(result.Configs) != 1 {
t.Fatalf("unexpected import result: %+v", result)
}
cfg, err := local.GetConfigurationByUUID(result.Configs[0].UUID)
if err != nil {
t.Fatalf("load imported config: %v", err)
}
if cfg.PricelistID == nil || *cfg.PricelistID != 101 {
t.Fatalf("expected estimate pricelist id 101, got %+v", cfg.PricelistID)
}
if len(cfg.VendorSpec) != 4 {
t.Fatalf("expected 4 vendor spec rows, got %d", len(cfg.VendorSpec))
}
if len(cfg.Items) != 2 {
t.Fatalf("expected 2 cart items, got %d", len(cfg.Items))
}
if cfg.Items[0].LotName != "CPU_INTEL_6747P" || cfg.Items[0].Quantity != 2 || cfg.Items[0].UnitPrice != 1000 {
t.Fatalf("unexpected first item: %+v", cfg.Items[0])
}
if cfg.Items[1].LotName != "LICENSE_XCC" || cfg.Items[1].Quantity != 1 || cfg.Items[1].UnitPrice != 50 {
t.Fatalf("unexpected second item: %+v", cfg.Items[1])
}
if cfg.TotalPrice == nil || *cfg.TotalPrice != 4100 {
t.Fatalf("expected total price 4100 for 2 servers, got %+v", cfg.TotalPrice)
}
foundCPU := false
foundLIC := false
for _, row := range cfg.VendorSpec {
if row.VendorPartnumber == "CPU-1" {
foundCPU = true
if len(row.LotMappings) != 1 || row.LotMappings[0].LotName != "CPU_INTEL_6747P" || row.LotMappings[0].QuantityPerPN != 1 {
t.Fatalf("unexpected CPU mappings: %+v", row.LotMappings)
}
}
if row.VendorPartnumber == "LIC-1" {
foundLIC = true
if len(row.LotMappings) != 1 || row.LotMappings[0].LotName != "LICENSE_XCC" || row.LotMappings[0].QuantityPerPN != 1 {
t.Fatalf("unexpected LIC mappings: %+v", row.LotMappings)
}
}
}
if !foundCPU || !foundLIC {
t.Fatalf("expected resolved rows for CPU and LIC in vendor spec")
}
}


@@ -1,7 +0,0 @@
CREATE TABLE IF NOT EXISTS lot_partnumbers (
partnumber VARCHAR(255) NOT NULL,
lot_name VARCHAR(255) NOT NULL DEFAULT '',
description VARCHAR(10000) NULL,
PRIMARY KEY (partnumber, lot_name),
INDEX idx_lot_partnumbers_lot_name (lot_name)
);


@@ -1,25 +0,0 @@
-- Allow placeholder mappings (partnumber without bound lot) and store import description.
ALTER TABLE lot_partnumbers
ADD COLUMN IF NOT EXISTS description VARCHAR(10000) NULL AFTER lot_name;
ALTER TABLE lot_partnumbers
MODIFY COLUMN lot_name VARCHAR(255) NOT NULL DEFAULT '';
-- Drop FK on lot_name if it exists to allow unresolved placeholders.
SET @lp_fk_name := (
SELECT kcu.CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE kcu
WHERE kcu.TABLE_SCHEMA = DATABASE()
AND kcu.TABLE_NAME = 'lot_partnumbers'
AND kcu.COLUMN_NAME = 'lot_name'
AND kcu.REFERENCED_TABLE_NAME IS NOT NULL
LIMIT 1
);
SET @lp_drop_fk_sql := IF(
@lp_fk_name IS NULL,
'SELECT 1',
CONCAT('ALTER TABLE lot_partnumbers DROP FOREIGN KEY `', @lp_fk_name, '`')
);
PREPARE lp_stmt FROM @lp_drop_fk_sql;
EXECUTE lp_stmt;
DEALLOCATE PREPARE lp_stmt;


@@ -0,0 +1,18 @@
ALTER TABLE qt_configurations
ADD COLUMN IF NOT EXISTS line_no INT NULL AFTER only_in_stock;
UPDATE qt_configurations q
JOIN (
SELECT
id,
ROW_NUMBER() OVER (
PARTITION BY COALESCE(NULLIF(TRIM(project_uuid), ''), '__NO_PROJECT__')
ORDER BY created_at ASC, id ASC
) AS rn
FROM qt_configurations
WHERE line_no IS NULL OR line_no <= 0
) ranked ON ranked.id = q.id
SET q.line_no = ranked.rn * 10;
ALTER TABLE qt_configurations
ADD INDEX IF NOT EXISTS idx_qt_configurations_project_line_no (project_uuid, line_no);

package-lock.json generated Normal file

@@ -0,0 +1,6 @@
{
"name": "QuoteForge",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}

package.json Normal file

@@ -0,0 +1 @@
{}


@@ -21,6 +21,7 @@
<div class="hidden md:flex space-x-4">
<a href="/projects" class="text-gray-600 hover:text-gray-900 px-3 py-2 text-sm">Мои проекты</a>
<a href="/pricelists" class="text-gray-600 hover:text-gray-900 px-3 py-2 text-sm">Прайслисты</a>
<a href="/partnumber-books" class="text-gray-600 hover:text-gray-900 px-3 py-2 text-sm">Партномера</a>
<a href="/setup" class="text-gray-600 hover:text-gray-900 px-3 py-2 text-sm">Настройки</a>
</div>
</div>


@@ -20,7 +20,7 @@
</div>
<div class="max-w-md">
<input id="configs-search" type="text" placeholder="Поиск квоты по названию"
<input id="configs-search" type="text" placeholder="Поиск конфигурации по названию"
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-blue-500 focus:border-blue-500">
</div>
@@ -141,7 +141,7 @@
<div class="space-y-4">
<div class="text-sm text-gray-600">
Квота: <span id="move-project-config-name" class="font-medium text-gray-900"></span>
Конфигурация: <span id="move-project-config-name" class="font-medium text-gray-900"></span>
</div>
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Проект</label>
@@ -174,7 +174,7 @@
<div id="create-project-on-move-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-3">Проект не найден</h2>
<p class="text-sm text-gray-600 mb-4">Проект с кодом "<span id="create-project-on-move-code" class="font-medium text-gray-900"></span>" не найден. <span id="create-project-on-move-description">Создать и привязать квоту?</span></p>
<p class="text-sm text-gray-600 mb-4">Проект с кодом "<span id="create-project-on-move-code" class="font-medium text-gray-900"></span>" не найден. <span id="create-project-on-move-description">Создать и привязать конфигурацию?</span></p>
<div class="mb-4">
<label for="create-project-on-move-name" class="block text-sm font-medium text-gray-700 mb-1">Название проекта</label>
<input id="create-project-on-move-name" type="text" placeholder="Например: Инфраструктура для OPS-123"
@@ -601,7 +601,7 @@ function openCreateProjectOnMoveModal(projectName) {
document.getElementById('create-project-on-move-code').textContent = projectName;
document.getElementById('create-project-on-move-name').value = projectName;
document.getElementById('create-project-on-move-variant').value = '';
document.getElementById('create-project-on-move-description').textContent = 'Создать и привязать квоту?';
document.getElementById('create-project-on-move-description').textContent = 'Создать и привязать конфигурацию?';
document.getElementById('create-project-on-move-confirm-btn').textContent = 'Создать и привязать';
document.getElementById('create-project-on-move-modal').classList.remove('hidden');
document.getElementById('create-project-on-move-modal').classList.add('flex');
@@ -719,7 +719,7 @@ async function moveConfigToProject(uuid, projectUUID) {
});
if (!resp.ok) {
const err = await resp.json();
alert('Не удалось перенести квоту: ' + (err.error || 'ошибка'));
alert('Не удалось перенести конфигурацию: ' + (err.error || 'ошибка'));
return false;
}
closeMoveProjectModal();
@@ -727,7 +727,7 @@ async function moveConfigToProject(uuid, projectUUID) {
await loadConfigs();
return true;
} catch (e) {
alert('Ошибка переноса квоты');
alert('Ошибка переноса конфигурации');
return false;
}
}

File diff suppressed because it is too large


@@ -1,82 +0,0 @@
{{define "title"}}Вход - QuoteForge{{end}}
{{define "content"}}
<div class="max-w-sm mx-auto mt-16">
<div class="bg-white rounded-lg shadow p-6">
<h1 class="text-xl font-bold text-center mb-6">Вход в систему</h1>
<form id="login-form">
<div class="mb-4">
<label class="block text-sm font-medium text-gray-700 mb-1">Логин</label>
<input type="text" name="username" id="username" required
class="w-full px-3 py-2 border rounded focus:outline-none focus:ring-2 focus:ring-blue-500"
value="admin">
</div>
<div class="mb-4">
<label class="block text-sm font-medium text-gray-700 mb-1">Пароль</label>
<input type="password" name="password" id="password" required
class="w-full px-3 py-2 border rounded focus:outline-none focus:ring-2 focus:ring-blue-500"
value="admin123">
</div>
<div id="error" class="text-red-600 text-sm mb-4 hidden"></div>
<button type="submit" id="submit-btn"
class="w-full bg-blue-600 text-white py-2 rounded hover:bg-blue-700">
Войти
</button>
</form>
<p class="text-center text-sm text-gray-500 mt-4">
<a href="/" class="text-blue-600">На главную</a>
</p>
</div>
</div>
<script>
document.addEventListener('DOMContentLoaded', function() {
const form = document.getElementById('login-form');
if (!form) return;
form.addEventListener('submit', async function(e) {
e.preventDefault();
const username = document.getElementById('username').value;
const password = document.getElementById('password').value;
const errorEl = document.getElementById('error');
const btn = document.getElementById('submit-btn');
errorEl.classList.add('hidden');
btn.disabled = true;
btn.textContent = 'Вход...';
try {
const resp = await fetch('/api/auth/login', {
method: 'POST',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({username, password})
});
const data = await resp.json();
if (resp.ok && data.access_token) {
localStorage.setItem('token', data.access_token);
localStorage.setItem('refresh_token', data.refresh_token);
localStorage.setItem('user', JSON.stringify(data.user));
window.location.href = '/configs';
} else {
errorEl.textContent = data.error || 'Неверный логин или пароль';
errorEl.classList.remove('hidden');
btn.disabled = false;
btn.textContent = 'Войти';
}
} catch(err) {
errorEl.textContent = 'Ошибка соединения с сервером';
errorEl.classList.remove('hidden');
btn.disabled = false;
btn.textContent = 'Войти';
}
});
});
</script>
{{end}}
{{template "base" .}}


@@ -0,0 +1,303 @@
{{define "title"}}QuoteForge - Партномера{{end}}
{{define "content"}}
<div class="space-y-4">
<h1 class="text-2xl font-bold text-gray-900">Партномера</h1>
<!-- Summary cards -->
<div id="summary-cards" class="grid grid-cols-2 md:grid-cols-3 gap-4 hidden">
<div class="bg-white rounded-lg shadow p-4">
<div class="text-xs text-gray-500 mb-1">Активный лист</div>
<div id="card-version" class="font-mono font-semibold text-gray-800 text-sm truncate"></div>
<div id="card-date" class="text-xs text-gray-400 mt-0.5"></div>
</div>
<div class="bg-white rounded-lg shadow p-4">
<div class="text-xs text-gray-500 mb-1">Уникальных LOT</div>
<div id="card-lots" class="text-2xl font-bold text-blue-600"></div>
</div>
<div class="bg-white rounded-lg shadow p-4">
<div class="text-xs text-gray-500 mb-1">Всего PN</div>
<div id="card-pn-total" class="text-2xl font-bold text-gray-800"></div>
</div>
</div>
<div id="summary-empty" class="hidden bg-yellow-50 border border-yellow-200 rounded-lg p-4 text-sm text-yellow-800">
Нет активного листа сопоставлений. Нажмите «Синхронизировать» для загрузки с сервера.
</div>
<!-- All books list (collapsed by default) -->
<div class="bg-white rounded-lg shadow overflow-hidden">
<!-- Header row — always visible -->
<div class="px-4 py-3 flex items-center justify-between">
<button onclick="toggleBooksSection()" class="flex items-center gap-2 text-sm font-semibold text-gray-800 hover:text-gray-600 select-none">
<svg id="books-chevron" class="w-4 h-4 text-gray-400 transition-transform duration-150" fill="none" stroke="currentColor" stroke-width="2" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" d="M9 5l7 7-7 7"/></svg>
Снимки сопоставлений (Partnumber Books)
</button>
<button onclick="syncPartnumberBooks()" class="px-4 py-2 bg-orange-500 text-white rounded hover:bg-orange-600 text-sm font-medium">
Синхронизировать
</button>
</div>
<!-- Collapsible body -->
<div id="books-section-body" class="hidden border-t">
<div id="books-list-loading" class="p-6 text-center text-gray-400 text-sm">Загрузка...</div>
<table id="books-table" class="w-full text-sm hidden">
<thead class="bg-gray-50 text-gray-600">
<tr>
<th class="px-4 py-2 text-left">Версия</th>
<th class="px-4 py-2 text-left">Дата</th>
<th class="px-4 py-2 text-right">Позиций</th>
<th class="px-4 py-2 text-center">Статус</th>
</tr>
</thead>
<tbody id="books-table-body"></tbody>
</table>
<div id="books-empty" class="hidden p-6 text-center text-gray-400 text-sm">
Нет загруженных снимков.
</div>
<!-- Pagination -->
<div id="books-pagination" class="hidden px-4 py-3 border-t flex items-center justify-between text-sm text-gray-600">
<span id="books-page-info"></span>
<div class="flex gap-2">
<button id="books-prev" onclick="changeBooksPage(-1)" class="px-3 py-1 rounded border hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed">← Назад</button>
<button id="books-next" onclick="changeBooksPage(1)" class="px-3 py-1 rounded border hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed">Вперёд →</button>
</div>
</div>
</div>
</div>
<!-- Active book items with search -->
<div id="active-book-section" class="hidden bg-white rounded-lg shadow overflow-hidden">
<div class="px-4 py-3 border-b flex items-center justify-between gap-3">
<span class="font-semibold text-gray-800 whitespace-nowrap">Сопоставления активного листа</span>
<input type="text" id="pn-search" placeholder="Поиск по PN или LOT..."
class="flex-1 max-w-xs px-3 py-1.5 border rounded text-sm focus:ring-2 focus:ring-blue-400 focus:outline-none"
oninput="onItemsSearchInput(this.value)">
<span id="pn-count" class="text-xs text-gray-400 whitespace-nowrap"></span>
</div>
<div class="overflow-x-auto">
<table class="w-full text-sm">
<thead class="bg-gray-50 text-gray-600 sticky top-0">
<tr>
<th class="px-4 py-2 text-left">Partnumber</th>
<th class="px-4 py-2 text-left">LOT</th>
<th class="px-4 py-2 text-left">Описание</th>
</tr>
</thead>
<tbody id="active-items-body"></tbody>
</table>
</div>
<div id="items-pagination" class="hidden px-4 py-3 border-t flex items-center justify-between text-sm text-gray-600">
<span id="items-page-info"></span>
<div class="flex gap-2">
<button id="items-prev" onclick="changeItemsPage(-1)" class="px-3 py-1 rounded border hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed">← Назад</button>
<button id="items-next" onclick="changeItemsPage(1)" class="px-3 py-1 rounded border hover:bg-gray-50 disabled:opacity-40 disabled:cursor-not-allowed">Вперёд →</button>
</div>
</div>
</div>
</div>
<script>
let allBooks = [];
let booksPage = 1;
const BOOKS_PER_PAGE = 10;
const ITEMS_PER_PAGE = 100;
let activeBookServerID = null;
let activeItems = [];
let itemsPage = 1;
let itemsTotal = 0;
let itemsSearch = '';
let _itemsSearchTimer = null;
function toggleBooksSection() {
const body = document.getElementById('books-section-body');
const chevron = document.getElementById('books-chevron');
const collapsed = body.classList.toggle('hidden');
chevron.style.transform = collapsed ? '' : 'rotate(90deg)';
}
async function loadBooks() {
let resp, data;
try {
resp = await fetch('/api/partnumber-books');
data = await resp.json();
} catch (e) {
document.getElementById('books-list-loading').classList.add('hidden');
document.getElementById('books-empty').classList.remove('hidden');
return;
}
allBooks = data.books || [];
document.getElementById('books-list-loading').classList.add('hidden');
if (!allBooks.length) {
document.getElementById('books-empty').classList.remove('hidden');
document.getElementById('summary-empty').classList.remove('hidden');
return;
}
booksPage = 1;
renderBooksPage();
const active = allBooks.find(b => b.is_active) || allBooks[0];
await loadActiveBookItems(active);
}
function renderBooksPage() {
const total = allBooks.length;
const totalPages = Math.ceil(total / BOOKS_PER_PAGE);
const start = (booksPage - 1) * BOOKS_PER_PAGE;
const pageBooks = allBooks.slice(start, start + BOOKS_PER_PAGE);
const tbody = document.getElementById('books-table-body');
tbody.innerHTML = '';
pageBooks.forEach(b => {
const tr = document.createElement('tr');
tr.className = 'border-b hover:bg-gray-50';
tr.innerHTML = `
<td class="px-4 py-2 font-mono text-xs">${b.version}</td>
<td class="px-4 py-2 text-gray-500 text-xs">${b.created_at}</td>
<td class="px-4 py-2 text-right text-xs">${b.item_count}</td>
<td class="px-4 py-2 text-center">
${b.is_active
? '<span class="px-2 py-0.5 bg-green-100 text-green-700 rounded text-xs">Активный</span>'
: '<span class="px-2 py-0.5 bg-gray-100 text-gray-500 rounded text-xs">Архив</span>'}
</td>
`;
tbody.appendChild(tr);
});
document.getElementById('books-table').classList.remove('hidden');
// Pagination controls
if (total > BOOKS_PER_PAGE) {
document.getElementById('books-pagination').classList.remove('hidden');
document.getElementById('books-page-info').textContent =
`Снимки ${start + 1}–${Math.min(start + BOOKS_PER_PAGE, total)} из ${total}`;
document.getElementById('books-prev').disabled = booksPage === 1;
document.getElementById('books-next').disabled = booksPage === totalPages;
} else {
document.getElementById('books-pagination').classList.add('hidden');
}
}
function changeBooksPage(delta) {
const totalPages = Math.ceil(allBooks.length / BOOKS_PER_PAGE);
booksPage = Math.max(1, Math.min(totalPages, booksPage + delta));
renderBooksPage();
}
async function loadActiveBookItems(book) {
activeBookServerID = book.server_id;
return loadActiveBookItemsPage(1, itemsSearch, book);
}
async function loadActiveBookItemsPage(page = 1, search = '', book = null) {
const targetBook = book || allBooks.find(b => b.server_id === activeBookServerID);
if (!targetBook) return;
let resp, data;
try {
const params = new URLSearchParams({
page: String(page),
per_page: String(ITEMS_PER_PAGE),
});
if (search.trim()) {
params.set('search', search.trim());
}
resp = await fetch(`/api/partnumber-books/${targetBook.server_id}?` + params.toString());
data = await resp.json();
} catch (e) {
return;
}
if (!resp.ok) return;
activeItems = data.items || [];
itemsPage = data.page || page;
itemsTotal = Number(data.total || 0);
itemsSearch = data.search || search || '';
document.getElementById('card-version').textContent = targetBook.version;
document.getElementById('card-date').textContent = targetBook.created_at;
document.getElementById('card-lots').textContent = Number(data.lot_count || 0);
document.getElementById('card-pn-total').textContent = Number(data.book_total || 0);
document.getElementById('summary-cards').classList.remove('hidden');
document.getElementById('active-book-section').classList.remove('hidden');
document.getElementById('pn-search').value = itemsSearch;
renderItems(activeItems);
renderItemsPagination();
}
function renderItems(items) {
const tbody = document.getElementById('active-items-body');
tbody.innerHTML = '';
items.forEach(item => {
const tr = document.createElement('tr');
tr.className = 'border-b hover:bg-gray-50';
const lots = Array.isArray(item.lots_json) ? item.lots_json : [];
const lotsText = lots.map(l => `${l.lot_name} x${l.qty}`).join(', ');
tr.innerHTML = `
<td class="px-4 py-1.5 font-mono text-xs">${item.partnumber}</td>
<td class="px-4 py-1.5 text-xs font-medium text-blue-700">${lotsText}</td>
<td class="px-4 py-1.5 text-xs text-gray-500">${item.description || ''}</td>
`;
tbody.appendChild(tr);
});
const totalLabel = itemsSearch ? `${items.length} из ${itemsTotal} по фильтру` : `${items.length} из ${itemsTotal}`;
document.getElementById('pn-count').textContent = totalLabel;
}
function renderItemsPagination() {
const totalPages = Math.max(1, Math.ceil(itemsTotal / ITEMS_PER_PAGE));
const start = itemsTotal === 0 ? 0 : ((itemsPage - 1) * ITEMS_PER_PAGE) + 1;
const end = Math.min(itemsPage * ITEMS_PER_PAGE, itemsTotal);
const box = document.getElementById('items-pagination');
if (itemsTotal > ITEMS_PER_PAGE) {
box.classList.remove('hidden');
document.getElementById('items-page-info').textContent = `Сопоставления ${start}–${end} из ${itemsTotal}`;
document.getElementById('items-prev').disabled = itemsPage === 1;
document.getElementById('items-next').disabled = itemsPage >= totalPages;
} else {
box.classList.add('hidden');
}
}
function changeItemsPage(delta) {
const totalPages = Math.max(1, Math.ceil(itemsTotal / ITEMS_PER_PAGE));
const nextPage = Math.max(1, Math.min(totalPages, itemsPage + delta));
if (nextPage === itemsPage) return;
loadActiveBookItemsPage(nextPage, itemsSearch);
}
function onItemsSearchInput(value) {
clearTimeout(_itemsSearchTimer);
_itemsSearchTimer = setTimeout(() => {
itemsSearch = value.trim();
loadActiveBookItemsPage(1, itemsSearch);
}, 250);
}
async function syncPartnumberBooks() {
let resp, data;
try {
resp = await fetch('/api/sync/partnumber-books', {method: 'POST'});
data = await resp.json();
} catch (e) {
showToast('Ошибка синхронизации', 'error');
return;
}
if (data.success) {
showToast(`Синхронизировано: ${data.synced} листов`, 'success');
loadBooks();
} else if (data.blocked) {
showToast(`Синк заблокирован: ${data.reason_text}`, 'error');
} else {
showToast('Ошибка: ' + (data.error || 'неизвестная ошибка'), 'error');
}
}
document.addEventListener('DOMContentLoaded', loadBooks);
</script>
{{end}}
{{template "base" .}}


@@ -60,8 +60,9 @@
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Описание</th>
<th id="th-qty" class="hidden px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase">Доступно</th>
<th id="th-partnumbers" class="hidden px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Partnumbers</th>
<th id="th-competitors" class="hidden px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Поставщик</th>
<th class="px-6 py-3 text-right text-xs font-medium text-gray-500 uppercase">Цена, $</th>
<th class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Настройки</th>
<th id="th-settings" class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase">Настройки</th>
</tr>
</thead>
<tbody id="items-body" class="bg-white divide-y divide-gray-200">
@@ -150,18 +151,25 @@
}
}
function isStockSource() {
const src = (currentSource || '').toLowerCase();
return src === 'warehouse' || src === 'competitor';
}
function isWarehouseSource() {
return (currentSource || '').toLowerCase() === 'warehouse';
}
function itemsColspan() {
return isWarehouseSource() ? 7 : 5;
return isStockSource() ? 6 : 5;
}
function toggleWarehouseColumns() {
const visible = isWarehouseSource();
document.getElementById('th-qty').classList.toggle('hidden', !visible);
document.getElementById('th-partnumbers').classList.toggle('hidden', !visible);
const stock = isStockSource();
document.getElementById('th-qty').classList.add('hidden');
document.getElementById('th-partnumbers').classList.toggle('hidden', !stock);
document.getElementById('th-competitors').classList.toggle('hidden', !stock);
document.getElementById('th-settings').classList.toggle('hidden', stock);
}
function formatQty(qty) {
@@ -234,27 +242,69 @@
return;
}
const showWarehouse = isWarehouseSource();
const stock = isStockSource();
const p = stock ? 'px-3 py-2' : 'px-6 py-3';
const descMax = stock ? 30 : 60;
const html = items.map(item => {
const price = (item.price || 0).toLocaleString('en-US', { minimumFractionDigits: 2, maximumFractionDigits: 2 });
const description = item.lot_description || '-';
const truncatedDesc = description.length > 60 ? description.substring(0, 60) + '...' : description;
const qty = formatQty(item.available_qty);
const partnumbers = Array.isArray(item.partnumbers) && item.partnumbers.length > 0 ? item.partnumbers.join(', ') : '—';
const truncatedDesc = description.length > descMax ? description.substring(0, descMax) + '...' : description;
// Partnumbers cell (stock sources only)
let pnHtml = '—';
if (stock) {
const qtys = item.partnumber_qtys || {};
const allPNs = Array.isArray(item.partnumbers) ? item.partnumbers : [];
const withQty = allPNs.filter(pn => qtys[pn] > 0);
const list = withQty.length > 0 ? withQty : allPNs;
if (list.length === 0) {
pnHtml = '—';
} else {
const shown = list.slice(0, 4);
const rest = list.length - shown.length;
const formatPN = pn => {
const q = qtys[pn];
const qStr = (q > 0) ? ` <span class="text-gray-400">(${formatQty(q)} шт.)</span>` : '';
return `<div><span class="font-mono text-xs">${escapeHtml(pn)}</span>${qStr}</div>`;
};
pnHtml = shown.map(formatPN).join('');
if (rest > 0) pnHtml += `<div class="text-gray-400 text-xs">+${rest} ещё</div>`;
}
}
// Supplier cell (stock sources only)
let supplierHtml = '';
if (stock) {
if (isWarehouseSource()) {
supplierHtml = `<span class="text-gray-500 text-sm">склад</span>`;
} else {
const names = Array.isArray(item.competitor_names) && item.competitor_names.length > 0
? item.competitor_names
: ['конкурент'];
supplierHtml = names.map(n => `<span class="px-1.5 py-0.5 text-xs bg-blue-50 text-blue-700 rounded">${escapeHtml(n)}</span>`).join(' ');
}
}
// Price cell — add spread badge for competitor
let priceHtml = price;
if (stock && !isWarehouseSource() && item.price_spread_pct > 0) {
priceHtml += ` <span class="text-xs text-amber-600 font-medium" title="Разброс цен конкурентов">±${item.price_spread_pct.toFixed(0)}%</span>`;
}
return `
<tr class="hover:bg-gray-50">
<td class="px-6 py-3 whitespace-nowrap">
<span class="font-mono text-sm">${item.lot_name}</span>
<td class="${p} max-w-[160px]">
<span class="font-mono text-sm break-all">${escapeHtml(item.lot_name)}</span>
</td>
<td class="px-6 py-3 whitespace-nowrap">
<span class="px-2 py-1 text-xs bg-gray-100 rounded">${item.category || '-'}</span>
<td class="${p} whitespace-nowrap">
<span class="px-2 py-1 text-xs bg-gray-100 rounded">${escapeHtml(item.category || '-')}</span>
</td>
<td class="px-6 py-3 text-sm text-gray-500" title="${description}">${truncatedDesc}</td>
${showWarehouse ? `<td class="px-6 py-3 whitespace-nowrap text-right font-mono">${qty}</td>` : ''}
${showWarehouse ? `<td class="px-6 py-3 text-sm text-gray-600" title="${escapeHtml(partnumbers)}">${escapeHtml(partnumbers)}</td>` : ''}
<td class="px-6 py-3 whitespace-nowrap text-right font-mono">${price}</td>
<td class="px-6 py-3 whitespace-nowrap text-sm"><span class="text-xs bg-gray-100 px-2 py-1 rounded">${formatPriceSettings(item)}</span></td>
<td class="${p} text-sm text-gray-500" title="${escapeHtml(description)}">${escapeHtml(truncatedDesc)}</td>
${stock ? `<td class="${p} text-sm text-gray-600">${pnHtml}</td>` : ''}
${stock ? `<td class="${p} text-sm">${supplierHtml}</td>` : ''}
<td class="${p} whitespace-nowrap text-right font-mono">${priceHtml}</td>
${!stock ? `<td class="${p} whitespace-nowrap text-sm"><span class="text-xs bg-gray-100 px-2 py-1 rounded">${formatPriceSettings(item)}</span></td>` : ''}
</tr>
`;
}).join('');
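The partnumber cell above prefers entries with a known positive quantity and truncates the list to four, collapsing the remainder into a "+N ещё" suffix. The same selection logic as a standalone sketch (the `summarizePartnumbers` helper and the sample data are illustrative):

```javascript
// Mirrors the pnHtml selection above: keep PNs with qty > 0 when any exist,
// fall back to the full list otherwise, show at most `limit`, and count the rest.
function summarizePartnumbers(allPNs, qtys, limit = 4) {
  const withQty = allPNs.filter(pn => qtys[pn] > 0);
  const list = withQty.length > 0 ? withQty : allPNs;
  const shown = list.slice(0, limit);
  return { shown, rest: list.length - shown.length };
}

const qtys = { A1: 2, B2: 0, C3: 5, D4: 1, E5: 3, F6: 7 };
const res = summarizePartnumbers(['A1', 'B2', 'C3', 'D4', 'E5', 'F6'], qtys);
console.log(res.shown); // ['A1', 'C3', 'D4', 'E5'] — B2 dropped (qty 0)
console.log(res.rest);  // 1 — F6 collapses into "+1 ещё"
```

Note the fallback: when no partnumber has a recorded quantity, the full list is shown rather than an empty cell, matching the `withQty.length > 0 ? withQty : allPNs` branch above.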


@@ -2,21 +2,21 @@
{{define "content"}}
<div class="space-y-4">
<div class="flex items-center justify-between gap-3">
<div class="flex items-center gap-3">
<a href="/projects" class="text-gray-500 hover:text-gray-700" title="Все проекты">
<div class="flex items-start justify-between gap-3">
<div class="flex flex-wrap items-center gap-2 sm:gap-3 min-w-0">
<a href="/projects" class="shrink-0 text-gray-500 hover:text-gray-700" title="Все проекты">
<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M3 9.75L12 3l9 6.75v9A2.25 2.25 0 0118.75 21h-13.5A2.25 2.25 0 013 18.75v-9z"></path>
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 21v-6h6v6"></path>
</svg>
</a>
<div class="text-2xl font-bold flex items-center gap-2">
<a id="project-code-link" href="/projects" class="text-blue-700 hover:underline">
<div class="flex flex-wrap items-center gap-2 text-xl sm:text-2xl font-bold min-w-0">
<a id="project-code-link" href="/projects" class="min-w-0 truncate text-blue-700 hover:underline">
<span id="project-code"></span>
</a>
<span class="text-gray-400">-</span>
<div class="relative">
<button id="project-variant-button" type="button" class="inline-flex items-center gap-2 text-base font-medium px-3 py-1.5 rounded-lg bg-gray-100 hover:bg-gray-200 border border-gray-200">
<span class="text-gray-400 shrink-0">-</span>
<div class="relative shrink-0">
<button id="project-variant-button" type="button" class="inline-flex items-center gap-2 text-sm sm:text-base font-medium px-3 py-1.5 rounded-lg bg-gray-100 hover:bg-gray-200 border border-gray-200">
<span id="project-variant-label">main</span>
<svg class="w-4 h-4 text-gray-500" fill="none" stroke="currentColor" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 9l-7 7-7-7"></path>
@@ -26,24 +26,24 @@
<div id="project-variant-list" class="py-1"></div>
</div>
</div>
<button onclick="openNewVariantModal()" class="inline-flex w-full sm:w-auto justify-center items-center px-3 py-1.5 text-sm font-medium bg-purple-600 text-white rounded-lg hover:bg-purple-700">
+ Вариант
</button>
</div>
</div>
</div>
<div id="action-buttons" class="mt-4 grid grid-cols-1 sm:grid-cols-6 gap-3">
<button onclick="openNewVariantModal()" class="py-2 bg-purple-600 text-white rounded-lg hover:bg-purple-700 font-medium">
+ Новый вариант
</button>
<button onclick="openCreateModal()" class="py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 font-medium">
+ Создать новую квоту
Новая конфигурация
</button>
<button onclick="openImportModal()" class="py-2 bg-indigo-600 text-white rounded-lg hover:bg-indigo-700 font-medium">
Импорт квоты
<button onclick="openVendorImportModal()" class="py-2 bg-amber-600 text-white rounded-lg hover:bg-amber-700 font-medium">
Импорт выгрузки вендора
</button>
<button onclick="openProjectSettingsModal()" class="py-2 bg-gray-700 text-white rounded-lg hover:bg-gray-800 font-medium">
Параметры
</button>
<button onclick="exportProject()" class="py-2 bg-green-600 text-white rounded-lg hover:bg-green-700 font-medium">
<button onclick="openExportModal()" class="py-2 bg-green-600 text-white rounded-lg hover:bg-green-700 font-medium">
Экспорт CSV
</button>
<button id="delete-variant-btn" onclick="deleteVariant()" class="py-2 bg-red-600 text-white rounded-lg hover:bg-red-700 font-medium hidden">
@@ -72,10 +72,10 @@
<div id="create-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Новая квота в проекте</h2>
<h2 class="text-xl font-semibold mb-4">Новая конфигурация в проекте</h2>
<div class="space-y-4">
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Название квоты</label>
<label class="block text-sm font-medium text-gray-700 mb-1">Название конфигурации</label>
<input type="text" id="create-name" placeholder="Например: OPP-2026-001"
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-blue-500 focus:border-blue-500">
</div>
@@ -87,6 +87,65 @@
</div>
</div>
<div id="vendor-import-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-lg mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Импорт выгрузки вендора</h2>
<div class="space-y-4">
<div class="text-sm text-gray-600">
Загружает CFXML-выгрузку в текущий проект и создаёт несколько конфигураций, если они есть в файле.
</div>
<div>
<label for="vendor-import-file" class="block text-sm font-medium text-gray-700 mb-1">Файл выгрузки</label>
<input id="vendor-import-file" type="file" accept=".xml,text/xml,application/xml"
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-amber-500 focus:border-amber-500">
</div>
<div id="vendor-import-status" class="hidden text-sm rounded border px-3 py-2"></div>
</div>
<div class="flex justify-end space-x-3 mt-6">
<button onclick="closeVendorImportModal()" class="px-4 py-2 text-gray-600 hover:text-gray-800">Отмена</button>
<button id="vendor-import-submit" onclick="importVendorWorkspace()" class="px-4 py-2 bg-amber-600 text-white rounded hover:bg-amber-700">Импортировать</button>
</div>
</div>
</div>
<div id="project-export-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Экспорт CSV</h2>
<div class="space-y-4">
<div class="text-sm text-gray-600">
Экспортирует проект в формате вкладки ценообразования. Если включён BOM, строки строятся по BOM; иначе по текущему Estimate.
</div>
<div class="space-y-3">
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-lot" type="checkbox" class="rounded border-gray-300" checked>
<span>LOT</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-bom" type="checkbox" class="rounded border-gray-300">
<span>BOM</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-estimate" type="checkbox" class="rounded border-gray-300" checked>
<span>Estimate</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-stock" type="checkbox" class="rounded border-gray-300">
<span>Stock</span>
</label>
<label class="flex items-center gap-3 text-sm text-gray-700">
<input id="export-col-competitor" type="checkbox" class="rounded border-gray-300">
<span>Конкуренты</span>
</label>
</div>
<div id="project-export-status" class="hidden text-sm rounded border px-3 py-2"></div>
</div>
<div class="flex justify-end space-x-3 mt-6">
<button onclick="closeExportModal()" class="px-4 py-2 text-gray-600 hover:text-gray-800">Отмена</button>
<button id="project-export-submit" onclick="exportProject()" class="px-4 py-2 bg-green-600 text-white rounded hover:bg-green-700">Скачать CSV</button>
</div>
</div>
</div>
<div id="new-variant-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-lg p-6">
<h2 class="text-xl font-semibold mb-4">Новый вариант</h2>
@@ -116,7 +175,7 @@
<div id="config-action-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Действия с квотой</h2>
<h2 class="text-xl font-semibold mb-4">Действия с конфигурацией</h2>
<div class="space-y-4">
<div>
<label class="block text-sm font-medium text-gray-700 mb-1">Название</label>
@@ -154,25 +213,6 @@
</div>
</div>
<div id="import-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Импорт квоты в проект</h2>
<div class="space-y-3">
<label class="block text-sm font-medium text-gray-700">Квота</label>
<input id="import-config-input"
list="import-config-options"
placeholder="Начните вводить название квоты"
class="w-full px-3 py-2 border rounded focus:ring-2 focus:ring-blue-500 focus:border-blue-500">
<datalist id="import-config-options"></datalist>
<div class="text-xs text-gray-500">Квота будет перемещена в текущий проект.</div>
</div>
<div class="flex justify-end gap-2 mt-6">
<button onclick="closeImportModal()" class="px-4 py-2 text-gray-600 hover:text-gray-800">Отмена</button>
<button onclick="importConfigToProject()" class="px-4 py-2 bg-indigo-600 text-white rounded hover:bg-indigo-700">Импортировать</button>
</div>
</div>
</div>
<div id="project-settings-modal" class="fixed inset-0 bg-black bg-opacity-50 hidden items-center justify-center z-50">
<div class="bg-white rounded-lg shadow-xl w-full max-w-md mx-4 p-6">
<h2 class="text-xl font-semibold mb-4">Параметры проекта</h2>
@@ -211,6 +251,8 @@ const projectUUID = '{{.ProjectUUID}}';
let configStatusMode = 'active';
let project = null;
let allConfigs = [];
let dragConfigUUID = '';
let isReorderingConfigs = false;
let projectVariants = [];
let projectsCatalog = [];
let variantMenuInitialized = false;
@@ -221,6 +263,11 @@ function escapeHtml(text) {
return div.innerHTML;
}
function formatMoneyNoDecimals(value) {
const safe = Number.isFinite(Number(value)) ? Number(value) : 0;
return '$' + Math.round(safe).toLocaleString('en-US');
}
function resolveProjectTrackerURL(projectData) {
if (!projectData) return '';
const explicitURL = (projectData.tracker_url || '').trim();
@@ -340,7 +387,7 @@ function applyStatusModeUI() {
}
function renderConfigs(configs) {
const emptyText = configStatusMode === 'archived' ? 'Архив пуст' : 'Нет квот в проекте';
const emptyText = configStatusMode === 'archived' ? 'Архив пуст' : 'Нет конфигураций в проекте';
if (configs.length === 0) {
document.getElementById('configs-list').innerHTML =
'<div class="bg-white rounded-lg shadow p-8 text-center text-gray-500">' + emptyText + '</div>';
@@ -350,42 +397,53 @@ function renderConfigs(configs) {
let totalSum = 0;
let html = '<div class="bg-white rounded-lg shadow overflow-hidden"><table class="w-full">';
html += '<thead class="bg-gray-50"><tr>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Дата</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Line</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Модель</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Название</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Автор</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Цена (за 1 шт)</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Цена за 1 шт.</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Кол-во</th>';
html += '<th class="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Сумма</th>';
html += '<th class="px-4 py-3 text-center text-xs font-medium text-gray-500 uppercase">Ревизия</th>';
html += '<th class="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Стоимость</th>';
html += '<th class="px-2 py-3 text-center text-xs font-medium text-gray-500 uppercase w-12"></th>';
html += '<th class="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Действия</th>';
html += '</tr></thead><tbody class="divide-y">';
html += '</tr></thead><tbody class="divide-y" id="project-configs-tbody">';
configs.forEach(c => {
const date = new Date(c.created_at).toLocaleDateString('ru-RU');
configs.forEach((c, idx) => {
const total = c.total_price || 0;
const serverCount = c.server_count || 1;
const author = c.owner_username || (c.user && c.user.username) || '—';
const unitPrice = serverCount > 0 ? (total / serverCount) : 0;
const lineValue = (idx + 1) * 10;
const serverModel = (c.server_model || '').trim() || '—';
totalSum += total;
html += '<tr class="hover:bg-gray-50">';
html += '<td class="px-4 py-3 text-sm text-gray-500">' + date + '</td>';
const draggable = configStatusMode === 'active' ? 'true' : 'false';
html += '<tr class="hover:bg-gray-50" draggable="' + draggable + '" data-config-uuid="' + c.uuid + '" ondragstart="onConfigDragStart(event)" ondragover="onConfigDragOver(event)" ondragleave="onConfigDragLeave(event)" ondrop="onConfigDrop(event)" ondragend="onConfigDragEnd(event)">';
if (configStatusMode === 'active') {
html += '<td class="px-4 py-3 text-sm text-gray-500">';
html += '<span class="inline-flex items-center gap-2"><span class="drag-handle text-gray-400 hover:text-gray-700 cursor-grab active:cursor-grabbing select-none" title="Перетащить для изменения порядка" aria-label="Перетащить">';
html += '<svg class="w-4 h-4 pointer-events-none" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M8 6h.01M8 12h.01M8 18h.01M16 6h.01M16 12h.01M16 18h.01"></path></svg>';
html += '</span><span>' + lineValue + '</span></span></td>';
} else {
html += '<td class="px-4 py-3 text-sm text-gray-500">' + lineValue + '</td>';
}
html += '<td class="px-4 py-3 text-sm text-gray-500">' + escapeHtml(serverModel) + '</td>';
if (configStatusMode === 'archived') {
html += '<td class="px-4 py-3 text-sm font-medium text-gray-700">' + escapeHtml(c.name) + '</td>';
} else {
html += '<td class="px-4 py-3 text-sm font-medium"><a href="/configurator?uuid=' + c.uuid + '" class="text-blue-600 hover:text-blue-800 hover:underline">' + escapeHtml(c.name) + '</a></td>';
}
html += '<td class="px-4 py-3 text-sm text-gray-500">' + escapeHtml(author) + '</td>';
html += '<td class="px-4 py-3 text-sm text-gray-500">$' + unitPrice.toLocaleString('en-US', {minimumFractionDigits: 2}) + '</td>';
html += '<td class="px-4 py-3 text-sm text-gray-500">' + formatMoneyNoDecimals(unitPrice) + '</td>';
if (configStatusMode === 'archived') {
html += '<td class="px-4 py-3 text-sm text-gray-500">' + serverCount + '</td>';
} else {
html += '<td class="px-4 py-3 text-sm text-gray-500"><input type="number" min="1" value="' + serverCount + '" class="w-16 px-1 py-0.5 border rounded text-center text-sm" data-uuid="' + c.uuid + '" data-prev="' + serverCount + '" onchange="updateConfigServerCount(this)"></td>';
}
html += '<td class="px-4 py-3 text-sm text-right" data-total-uuid="' + c.uuid + '">$' + total.toLocaleString('en-US', {minimumFractionDigits: 2}) + '</td>';
html += '<td class="px-4 py-3 text-sm text-right" data-total-uuid="' + c.uuid + '">' + formatMoneyNoDecimals(total) + '</td>';
const versionNo = c.current_version_no || 1;
html += '<td class="px-4 py-3 text-sm text-center text-gray-500">v' + versionNo + '</td>';
html += '<td class="px-4 py-3 text-sm text-right space-x-2">';
html += '<td class="px-2 py-3 text-sm text-center text-gray-500 w-12">v' + versionNo + '</td>';
html += '<td class="px-4 py-3 text-sm text-right whitespace-nowrap"><div class="inline-flex items-center justify-end gap-2">';
if (configStatusMode === 'archived') {
html += '<button onclick="reactivateConfig(\'' + c.uuid + '\')" class="text-emerald-600 hover:text-emerald-800" title="Восстановить">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M5 13l4 4L19 7"></path></svg></button>';
@@ -397,15 +455,16 @@ function renderConfigs(configs) {
html += '<button onclick="deleteConfig(\'' + c.uuid + '\')" class="text-red-600 hover:text-red-800" title="В архив">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16"></path></svg></button>';
}
html += '</td></tr>';
html += '</div></td></tr>';
});
html += '</tbody>';
html += '<tfoot class="bg-gray-50 border-t">';
html += '<tr>';
html += '<td class="px-4 py-3 text-sm font-medium text-gray-700" colspan="4">Итого по проекту</td>';
html += '<td class="px-4 py-3 text-sm font-medium text-gray-700" colspan="5">Итого по проекту</td>';
html += '<td class="px-4 py-3 text-sm text-gray-700">' + configs.length + '</td>';
html += '<td class="px-4 py-3 text-sm text-right font-semibold text-gray-900" data-footer-total="1">$' + totalSum.toLocaleString('en-US', {minimumFractionDigits: 2}) + '</td>';
html += '<td class="px-4 py-3 text-sm text-right font-semibold text-gray-900" data-footer-total="1">' + formatMoneyNoDecimals(totalSum) + '</td>';
html += '<td class="px-4 py-3"></td>';
html += '<td class="px-4 py-3"></td>';
html += '<td class="px-4 py-3"></td>';
html += '</tr>';
@@ -521,6 +580,86 @@ function closeCreateModal() {
document.getElementById('create-modal').classList.remove('flex');
}
function setVendorImportStatus(message, type) {
const box = document.getElementById('vendor-import-status');
if (!box) return;
if (!message) {
box.textContent = '';
box.className = 'hidden text-sm rounded border px-3 py-2';
return;
}
let classes = 'text-sm rounded border px-3 py-2 ';
if (type === 'error') {
classes += 'border-red-200 bg-red-50 text-red-700';
} else if (type === 'success') {
classes += 'border-emerald-200 bg-emerald-50 text-emerald-700';
} else {
classes += 'border-amber-200 bg-amber-50 text-amber-700';
}
box.className = classes;
box.textContent = message;
}
function openVendorImportModal() {
document.getElementById('vendor-import-file').value = '';
setVendorImportStatus('', '');
document.getElementById('vendor-import-submit').disabled = false;
document.getElementById('vendor-import-submit').textContent = 'Импортировать';
document.getElementById('vendor-import-modal').classList.remove('hidden');
document.getElementById('vendor-import-modal').classList.add('flex');
}
function closeVendorImportModal() {
document.getElementById('vendor-import-modal').classList.add('hidden');
document.getElementById('vendor-import-modal').classList.remove('flex');
}
async function importVendorWorkspace() {
const input = document.getElementById('vendor-import-file');
const submit = document.getElementById('vendor-import-submit');
if (!input || !input.files || !input.files[0]) {
setVendorImportStatus('Выберите XML-файл выгрузки', 'error');
return;
}
const formData = new FormData();
formData.append('file', input.files[0]);
submit.disabled = true;
submit.textContent = 'Импорт...';
setVendorImportStatus('Импортирую файл в проект...', 'info');
try {
const resp = await fetch('/api/projects/' + projectUUID + '/vendor-import', {
method: 'POST',
body: formData
});
const data = await resp.json().catch(() => ({}));
if (!resp.ok) {
throw new Error(data.error || 'Не удалось импортировать выгрузку');
}
const imported = Array.isArray(data.configs) ? data.configs : [];
const importedNames = imported.map(c => c.name).filter(Boolean);
const message = importedNames.length
? 'Импортировано ' + imported.length + ': ' + importedNames.join(', ')
: 'Импортировано конфигураций: ' + (data.imported || 0);
setVendorImportStatus(message, 'success');
if (typeof showToast === 'function') {
showToast('Импорт завершён', 'success');
}
await loadConfigs();
setTimeout(closeVendorImportModal, 900);
} catch (e) {
setVendorImportStatus(e.message || 'Не удалось импортировать выгрузку', 'error');
if (typeof showToast === 'function') {
showToast(e.message || 'Не удалось импортировать выгрузку', 'error');
}
} finally {
submit.disabled = false;
submit.textContent = 'Импортировать';
}
}
async function createConfig() {
const name = document.getElementById('create-name').value.trim();
if (!name) {
@@ -533,7 +672,7 @@ async function createConfig() {
body: JSON.stringify({name: name, items: [], notes: '', server_count: 1})
});
if (!resp.ok) {
alert('Не удалось создать квоту');
alert('Не удалось создать конфигурацию');
return;
}
closeCreateModal();
@@ -541,7 +680,7 @@ async function createConfig() {
}
async function deleteConfig(uuid) {
if (!confirm('Переместить квоту в архив?')) return;
if (!confirm('Переместить конфигурацию в архив?')) return;
await fetch('/api/configs/' + uuid, {method: 'DELETE'});
loadConfigs();
}
@@ -549,7 +688,7 @@ async function deleteConfig(uuid) {
async function reactivateConfig(uuid) {
const resp = await fetch('/api/configs/' + uuid + '/reactivate', {method: 'POST'});
if (!resp.ok) {
alert('Не удалось восстановить квоту');
alert('Не удалось восстановить конфигурацию');
return;
}
const moved = await fetch('/api/configs/' + uuid + '/project', {
@@ -558,7 +697,7 @@ async function reactivateConfig(uuid) {
body: JSON.stringify({project_uuid: projectUUID})
});
if (!moved.ok) {
alert('Квота восстановлена, но не удалось вернуть в проект');
alert('Конфигурация восстановлена, но не удалось вернуть в проект');
}
loadConfigs();
}
@@ -733,7 +872,7 @@ async function saveConfigAction() {
body: JSON.stringify({name: name})
});
if (!cloneResp.ok) {
notify('Не удалось скопировать квоту', 'error');
notify('Не удалось скопировать конфигурацию', 'error');
return;
}
closeConfigActionModal();
@@ -754,7 +893,7 @@ async function saveConfigAction() {
body: JSON.stringify({name: name})
});
if (!renameResp.ok) {
notify('Не удалось переименовать квоту', 'error');
notify('Не удалось переименовать конфигурацию', 'error');
return;
}
changed = true;
@@ -767,7 +906,7 @@ async function saveConfigAction() {
body: JSON.stringify({project_uuid: targetProjectUUID})
});
if (!moveResp.ok) {
notify('Не удалось перенести квоту', 'error');
notify('Не удалось перенести конфигурацию', 'error');
return;
}
changed = true;
@@ -786,21 +925,6 @@ async function saveConfigAction() {
}
}
function openImportModal() {
const activeOther = allConfigs.length ? null : null; // no-op placeholder
void activeOther;
document.getElementById('import-config-input').value = '';
document.getElementById('import-config-options').innerHTML = '';
loadImportOptions();
document.getElementById('import-modal').classList.remove('hidden');
document.getElementById('import-modal').classList.add('flex');
}
function closeImportModal() {
document.getElementById('import-modal').classList.add('hidden');
document.getElementById('import-modal').classList.remove('flex');
}
function openProjectSettingsModal() {
if (!project) return;
if (project.is_system) {
@@ -860,89 +984,10 @@ async function saveProjectSettings() {
closeProjectSettingsModal();
}
async function loadImportOptions() {
const resp = await fetch('/api/configs?page=1&per_page=500&status=active');
if (!resp.ok) return;
const data = await resp.json();
const options = document.getElementById('import-config-options');
options.innerHTML = '';
(data.configurations || [])
.filter(c => c.project_uuid !== projectUUID)
.forEach(c => {
const opt = document.createElement('option');
opt.value = c.name;
options.appendChild(opt);
});
}
async function importConfigToProject() {
const query = document.getElementById('import-config-input').value.trim();
if (!query) {
alert('Выберите квоту');
return;
}
const resp = await fetch('/api/configs?page=1&per_page=500&status=active');
if (!resp.ok) {
alert('Не удалось загрузить список квот');
return;
}
const data = await resp.json();
const sourceConfigs = (data.configurations || []).filter(c => c.project_uuid !== projectUUID);
let targets = [];
if (query.includes('*')) {
targets = sourceConfigs.filter(c => wildcardMatch(c.name || '', query));
} else {
const found = sourceConfigs.find(c => (c.name || '').toLowerCase() === query.toLowerCase());
if (found) {
targets = [found];
}
}
if (!targets.length) {
alert('Подходящие квоты не найдены');
return;
}
let moved = 0;
let failed = 0;
for (const cfg of targets) {
const move = await fetch('/api/configs/' + cfg.uuid + '/project', {
method: 'PATCH',
headers: {'Content-Type': 'application/json'},
body: JSON.stringify({project_uuid: projectUUID})
});
if (move.ok) {
moved++;
} else {
failed++;
}
}
if (!moved) {
alert('Не удалось импортировать квоты');
return;
}
closeImportModal();
await loadConfigs();
if (targets.length > 1 || failed > 0) {
alert('Импорт завершен: перенесено ' + moved + ', ошибок ' + failed);
}
}
function wildcardMatch(value, pattern) {
const escaped = pattern
.replace(/[.+^${}()|[\]\\]/g, '\\$&')
.replace(/\*/g, '.*');
const regex = new RegExp('^' + escaped + '$', 'i');
return regex.test(value);
}
async function deleteVariant() {
if (!project) return;
const variantLabel = normalizeVariantLabel(project.variant);
if (!confirm('Удалить вариант «' + variantLabel + '»? Все квоты будут архивированы.')) return;
if (!confirm('Удалить вариант «' + variantLabel + '»? Все конфигурации будут архивированы.')) return;
const resp = await fetch('/api/projects/' + projectUUID, {method: 'DELETE'});
if (!resp.ok) {
const data = await resp.json().catch(() => ({}));
@@ -969,9 +1014,9 @@ function updateDeleteVariantButton() {
}
document.getElementById('create-modal').addEventListener('click', function(e) { if (e.target === this) closeCreateModal(); });
document.getElementById('vendor-import-modal').addEventListener('click', function(e) { if (e.target === this) closeVendorImportModal(); });
document.getElementById('new-variant-modal').addEventListener('click', function(e) { if (e.target === this) closeNewVariantModal(); });
document.getElementById('config-action-modal').addEventListener('click', function(e) { if (e.target === this) closeConfigActionModal(); });
document.getElementById('import-modal').addEventListener('click', function(e) { if (e.target === this) closeImportModal(); });
document.getElementById('project-settings-modal').addEventListener('click', function(e) { if (e.target === this) closeProjectSettingsModal(); });
document.getElementById('config-action-project-input').addEventListener('input', function(e) {
const code = resolveProjectCodeFromInput(e.target.value);
@@ -988,12 +1033,111 @@ document.getElementById('config-action-copy').addEventListener('change', functio
document.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
closeCreateModal();
closeVendorImportModal();
closeConfigActionModal();
closeImportModal();
closeProjectSettingsModal();
}
});
function onConfigDragStart(event) {
  if (configStatusMode !== 'active' || isReorderingConfigs) {
    event.preventDefault();
    return;
  }
  const row = event.target.closest('tr[data-config-uuid]');
  if (!row) {
    event.preventDefault();
    return;
  }
  dragConfigUUID = row.dataset.configUuid || '';
  if (!dragConfigUUID) {
    event.preventDefault();
    return;
  }
  row.classList.add('opacity-50');
  event.dataTransfer.effectAllowed = 'move';
  event.dataTransfer.setData('text/plain', dragConfigUUID);
}
function onConfigDragOver(event) {
  if (!dragConfigUUID || configStatusMode !== 'active') return;
  event.preventDefault();
  const row = event.target.closest('tr[data-config-uuid]');
  if (!row || row.dataset.configUuid === dragConfigUUID) return;
  row.classList.add('ring-2', 'ring-blue-300');
}
function onConfigDragLeave(event) {
  const row = event.target.closest('tr[data-config-uuid]');
  if (!row) return;
  row.classList.remove('ring-2', 'ring-blue-300');
}
async function onConfigDrop(event) {
  if (!dragConfigUUID || configStatusMode !== 'active' || isReorderingConfigs) return;
  event.preventDefault();
  const targetRow = event.target.closest('tr[data-config-uuid]');
  if (!targetRow) return;
  targetRow.classList.remove('ring-2', 'ring-blue-300');
  const targetUUID = targetRow.dataset.configUuid || '';
  if (!targetUUID || targetUUID === dragConfigUUID) return;
  const previous = allConfigs.slice();
  const fromIndex = allConfigs.findIndex(c => c.uuid === dragConfigUUID);
  const toIndex = allConfigs.findIndex(c => c.uuid === targetUUID);
  if (fromIndex < 0 || toIndex < 0) return;
  const [moved] = allConfigs.splice(fromIndex, 1);
  allConfigs.splice(toIndex, 0, moved);
  renderConfigs(allConfigs);
  await saveConfigReorder(previous);
}
function onConfigDragEnd() {
  document.querySelectorAll('tr[data-config-uuid]').forEach(row => {
    row.classList.remove('ring-2', 'ring-blue-300', 'opacity-50');
  });
  dragConfigUUID = '';
}
async function saveConfigReorder(previousConfigs) {
  if (isReorderingConfigs) return;
  isReorderingConfigs = true;
  const orderedUUIDs = allConfigs.map(c => c.uuid);
  try {
    const resp = await fetch('/api/projects/' + projectUUID + '/configs/reorder', {
      method: 'PATCH',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({ordered_uuids: orderedUUIDs}),
    });
    if (!resp.ok) {
      const data = await resp.json().catch(() => ({}));
      throw new Error(data.error || 'Не удалось сохранить порядок');
    }
    const data = await resp.json();
    allConfigs = data.configurations || allConfigs;
    renderConfigs(allConfigs);
    if (typeof showToast === 'function') {
      showToast('Порядок сохранён', 'success');
    }
  } catch (e) {
    allConfigs = previousConfigs.slice();
    renderConfigs(allConfigs);
    if (typeof showToast === 'function') {
      showToast(e.message || 'Не удалось сохранить порядок', 'error');
    } else {
      alert(e.message || 'Не удалось сохранить порядок');
    }
  } finally {
    isReorderingConfigs = false;
    dragConfigUUID = '';
  }
}
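The drop handler and `saveConfigReorder` together implement an optimistic update: the local `allConfigs` array is mutated and re-rendered immediately, and the snapshot taken before the move is restored if the PATCH fails. The pattern can be sketched independently of the DOM (the names `moveItem`, `reorderOptimistically`, and `persist` are illustrative, not from the code above):

```javascript
// Minimal sketch of the optimistic-reorder pattern used above.
// `persist` stands in for the PATCH /configs/reorder call.
function moveItem(list, fromIndex, toIndex) {
  const next = list.slice();                 // work on a copy
  const [moved] = next.splice(fromIndex, 1); // take the dragged item out
  next.splice(toIndex, 0, moved);            // re-insert at the target slot
  return next;
}

async function reorderOptimistically(state, fromIndex, toIndex, persist) {
  const snapshot = state.items;              // keep the old order for rollback
  state.items = moveItem(state.items, fromIndex, toIndex); // apply optimistically
  try {
    await persist(state.items.map(i => i.uuid)); // try to save on the server
  } catch (e) {
    state.items = snapshot;                  // roll back on failure
    throw e;
  }
}
```

On success the server response can still overwrite the local order, as `saveConfigReorder` does with `data.configurations`, so the server stays authoritative.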
async function updateConfigServerCount(input) {
  const uuid = input.dataset.uuid;
  const prevValue = parseInt(input.dataset.prev) || 1;
@@ -1018,7 +1162,7 @@ async function updateConfigServerCount(input) {
// Update row total price
const totalCell = document.querySelector('[data-total-uuid="' + uuid + '"]');
if (totalCell && updated.total_price != null) {
-totalCell.textContent = '$' + updated.total_price.toLocaleString('en-US', {minimumFractionDigits: 2});
+totalCell.textContent = formatMoneyNoDecimals(updated.total_price);
}
// Update the config in allConfigs and recalculate footer total
for (let i = 0; i < allConfigs.length; i++) {
@@ -1040,12 +1184,86 @@ function updateFooterTotal() {
allConfigs.forEach(c => { totalSum += (c.total_price || 0); });
const footer = document.querySelector('tfoot td[data-footer-total]');
if (footer) {
-footer.textContent = '$' + totalSum.toLocaleString('en-US', {minimumFractionDigits: 2});
+footer.textContent = formatMoneyNoDecimals(totalSum);
}
}
-function exportProject() {
-window.location.href = '/api/projects/' + projectUUID + '/export';
-}
function openExportModal() {
  const modal = document.getElementById('project-export-modal');
  if (!modal) return;
  setProjectExportStatus('', '');
  modal.classList.remove('hidden');
  modal.classList.add('flex');
}
function closeExportModal() {
  const modal = document.getElementById('project-export-modal');
  if (!modal) return;
  modal.classList.add('hidden');
  modal.classList.remove('flex');
}
function setProjectExportStatus(message, tone) {
  const status = document.getElementById('project-export-status');
  if (!status) return;
  if (!message) {
    status.className = 'hidden text-sm rounded border px-3 py-2';
    status.textContent = '';
    return;
  }
  const palette = tone === 'error'
    ? 'text-red-700 bg-red-50 border-red-200'
    : 'text-gray-700 bg-gray-50 border-gray-200';
  status.className = 'text-sm rounded border px-3 py-2 ' + palette;
  status.textContent = message;
}
async function exportProject() {
  const submitBtn = document.getElementById('project-export-submit');
  const payload = {
    include_lot: !!document.getElementById('export-col-lot')?.checked,
    include_bom: !!document.getElementById('export-col-bom')?.checked,
    include_estimate: !!document.getElementById('export-col-estimate')?.checked,
    include_stock: !!document.getElementById('export-col-stock')?.checked,
    include_competitor: !!document.getElementById('export-col-competitor')?.checked
  };
  if (submitBtn) submitBtn.disabled = true;
  setProjectExportStatus('', '');
  try {
    const resp = await fetch('/api/projects/' + projectUUID + '/export', {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify(payload)
    });
    if (!resp.ok) {
      let message = 'Не удалось экспортировать CSV';
      try {
        const data = await resp.json();
        if (data && data.error) message = data.error;
      } catch (_) {}
      setProjectExportStatus(message, 'error');
      return;
    }
    const blob = await resp.blob();
    const link = document.createElement('a');
    const url = window.URL.createObjectURL(blob);
    const disposition = resp.headers.get('Content-Disposition') || '';
    const match = disposition.match(/filename="([^"]+)"/);
    link.href = url;
    link.download = match ? match[1] : 'project-export.csv';
    document.body.appendChild(link);
    link.click();
    document.body.removeChild(link);
    window.URL.revokeObjectURL(url);
    closeExportModal();
  } catch (e) {
    setProjectExportStatus('Ошибка сети при экспорте CSV', 'error');
  } finally {
    if (submitBtn) submitBtn.disabled = false;
  }
}
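The download branch above recovers the server-suggested filename from the `Content-Disposition` header with a regex. That step can be isolated into a small helper (a sketch; `filenameFromDisposition` is an illustrative name, the real code inlines the regex):

```javascript
// Extract the filename from a Content-Disposition header value,
// falling back to a default when the header is missing or unquoted.
function filenameFromDisposition(disposition, fallback) {
  const match = (disposition || '').match(/filename="([^"]+)"/);
  return match ? match[1] : fallback;
}
```

Note this handles only the quoted `filename="..."` form; RFC 6266 also allows the encoded `filename*=UTF-8''...` variant, which both this sketch and the original regex ignore.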
document.addEventListener('DOMContentLoaded', async function() {


@@ -97,6 +97,40 @@ function formatDateTime(value) {
});
}
function formatDateParts(value) {
  if (!value) return null;
  const date = new Date(value);
  if (Number.isNaN(date.getTime())) return null;
  return {
    date: date.toLocaleDateString('ru-RU', {
      year: 'numeric',
      month: '2-digit',
      day: '2-digit'
    }),
    time: date.toLocaleTimeString('ru-RU', {
      hour: '2-digit',
      minute: '2-digit'
    })
  };
}
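`formatDateParts` guards against unparsable input by checking `Number.isNaN(date.getTime())`: the `Date` constructor never throws, it yields an "Invalid Date" whose timestamp is `NaN`. That guard, shown standalone (the helper name `isParsableDate` is illustrative):

```javascript
// The invalid-date check used by formatDateParts, in isolation.
// An unparsable string produces an "Invalid Date" object whose
// getTime() returns NaN, so NaN is the reliable signal.
function isParsableDate(value) {
  if (!value) return false;
  return !Number.isNaN(new Date(value).getTime());
}
```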
function renderAuditCell(value, user) {
  const parts = formatDateParts(value);
  const safeUser = escapeHtml((user || '—').trim() || '—');
  if (!parts) {
    return '<div class="leading-tight">' +
      '<div class="text-gray-400">—</div>' +
      '<div class="text-gray-400">—</div>' +
      '<div class="text-gray-500">@ ' + safeUser + '</div>' +
      '</div>';
  }
  return '<div class="leading-tight whitespace-nowrap">' +
    '<div>' + escapeHtml(parts.date) + '</div>' +
    '<div class="text-gray-500">' + escapeHtml(parts.time) + '</div>' +
    '<div class="text-gray-600">@ ' + safeUser + '</div>' +
    '</div>';
}
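`renderAuditCell` relies on an `escapeHtml` helper defined elsewhere in the file; its implementation is not part of this diff. A typical version consistent with how it is called here (an assumption, not the project's actual code) replaces the five HTML-significant characters with entities:

```javascript
// Hypothetical escapeHtml, matching its usage in renderAuditCell.
// Escapes & first so already-produced entities are not double-escaped.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```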
function normalizeVariant(variant) {
  const trimmed = (variant || '').trim();
  return trimmed === '' ? 'main' : trimmed;
@@ -225,20 +259,20 @@ async function loadProjects() {
return;
}
-let html = '<div class="overflow-x-auto"><table class="w-full">';
+let html = '<div class="overflow-x-auto"><table class="w-full table-fixed min-w-[980px]">';
html += '<thead class="bg-gray-50">';
html += '<tr>';
-html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Код</th>';
+html += '<th class="w-28 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Код</th>';
html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">';
html += '<button type="button" onclick="toggleSort(\'name\')" class="inline-flex items-center gap-1 hover:text-gray-700">Название';
if (sortField === 'name') {
html += sortDir === 'asc' ? ' <span>↑</span>' : ' <span>↓</span>';
}
html += '</button></th>';
-html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Создан @ автор</th>';
-html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Изменен @ кто</th>';
-html += '<th class="px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Варианты</th>';
-html += '<th class="px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Действия</th>';
+html += '<th class="w-44 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Создан</th>';
+html += '<th class="w-44 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Изменен</th>';
+html += '<th class="w-36 px-4 py-3 text-left text-xs font-medium text-gray-500 uppercase">Варианты</th>';
+html += '<th class="w-36 px-4 py-3 text-right text-xs font-medium text-gray-500 uppercase">Действия</th>';
html += '</tr>';
html += '<tr>';
html += '<th class="px-4 py-2"></th>';
@@ -259,14 +293,12 @@ async function loadProjects() {
const displayName = p.name || '';
const createdBy = p.owner_username || '—';
const updatedBy = '—';
-const createdLabel = formatDateTime(p.created_at) + ' @ ' + createdBy;
-const updatedLabel = formatDateTime(p.updated_at) + ' @ ' + updatedBy;
const variantChips = renderVariantChips(p.code, p.variant, p.uuid);
-html += '<td class="px-4 py-3 text-sm font-medium"><a class="text-blue-600 hover:underline" href="/projects/' + p.uuid + '">' + escapeHtml(p.code || '—') + '</a></td>';
-html += '<td class="px-4 py-3 text-sm text-gray-700">' + escapeHtml(displayName) + '</td>';
-html += '<td class="px-4 py-3 text-sm text-gray-600">' + escapeHtml(createdLabel) + '</td>';
-html += '<td class="px-4 py-3 text-sm text-gray-600">' + escapeHtml(updatedLabel) + '</td>';
-html += '<td class="px-4 py-3 text-sm">' + variantChips + '</td>';
+html += '<td class="px-4 py-3 text-sm font-medium align-top"><a class="inline-block max-w-full text-blue-600 hover:underline whitespace-nowrap" href="/projects/' + p.uuid + '">' + escapeHtml(p.code || '—') + '</a></td>';
+html += '<td class="px-4 py-3 text-sm text-gray-700 align-top"><div class="truncate" title="' + escapeHtml(displayName) + '">' + escapeHtml(displayName || '—') + '</div></td>';
+html += '<td class="px-4 py-3 text-sm text-gray-600 align-top">' + renderAuditCell(p.created_at, createdBy) + '</td>';
+html += '<td class="px-4 py-3 text-sm text-gray-600 align-top">' + renderAuditCell(p.updated_at, updatedBy) + '</td>';
+html += '<td class="px-4 py-3 text-sm align-top"><div class="flex flex-wrap gap-1">' + variantChips + '</div></td>';
html += '<td class="px-4 py-3 text-sm text-right"><div class="inline-flex items-center gap-2">';
if (p.is_active) {
@@ -283,7 +315,7 @@ async function loadProjects() {
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 7l-.867 12.142A2 2 0 0116.138 21H7.862a2 2 0 01-1.995-1.858L5 7m5 4v6m4-6v6m1-10V4a1 1 0 00-1-1h-4a1 1 0 00-1 1v3M4 7h16"></path></svg>';
html += '</button>';
-html += '<button onclick="addConfigToProject(\'' + p.uuid + '\')" class="text-indigo-700 hover:text-indigo-900" title="Добавить квоту">';
+html += '<button onclick="addConfigToProject(\'' + p.uuid + '\')" class="text-indigo-700 hover:text-indigo-900" title="Добавить конфигурацию">';
html += '<svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 4v16m8-8H4"></path></svg>';
html += '</button>';
} else {
@@ -440,7 +472,7 @@ async function reactivateProject(projectUUID) {
}
async function addConfigToProject(projectUUID) {
-const name = prompt('Название новой квоты');
+const name = prompt('Название новой конфигурации');
if (!name || !name.trim()) return;
const resp = await fetch('/api/projects/' + projectUUID + '/configs', {
method: 'POST',
@@ -448,7 +480,7 @@ async function addConfigToProject(projectUUID) {
body: JSON.stringify({name: name.trim(), items: [], notes: '', server_count: 1})
});
if (!resp.ok) {
-alert('Не удалось создать квоту');
+alert('Не удалось создать конфигурацию');
return;
}
loadProjects();
@@ -478,7 +510,7 @@ async function copyProject(projectUUID, projectName) {
const listResp = await fetch('/api/projects/' + projectUUID + '/configs');
if (!listResp.ok) {
-alert('Проект скопирован без квот (не удалось загрузить исходные квоты)');
+alert('Проект скопирован без конфигураций (не удалось загрузить исходные конфигурации)');
loadProjects();
return;
}