# Compare commits: 65871a8b04...v1.3.1

57 commits:

9b5d57902d, 4e1a46bd71, 857ec7a0e5, 01f21fa5ac, a1edca3be9, 7fbf813952, e58fd35ee4, e3559035f7, 5edffe822b, 99fd80bca7, d8edd5d5f0, 9cb17ee03f, 8f596cec68, 8fd27d11a7, 600f842b82, acf7c8a4da, 5984a57a8b, 84dda8cf0a, abeb26d82d, 29edd73744, e8d0e28415, 08feda9af6, af79b6f3bf, bca82f9dc0, 17969277e6, 0dbfe45353, f609d2ce35, 593280de99, eb8555c11a, 7523a7d887, 95b5f8bf65, b629af9742, 72ff842f5d, 5f2969a85a, eb8ac34d83, 104a26d907, b965c6bb95, 29035ddc5a, 2f0ac2f6d2, 8a8ea10dc2, 51e2d1fc83, 3d5ab63970, c02a7eac73, 651427e0dd, f665e9b08c, 994eec53e7, 2f3c20fea6, 80ec7bc6b8, 8e5c4f5a7c, 1744e6a3b8, 726dccb07c, 38d7332a38, c0beed021c, 08b95c293c, c418d6cfc3, 548a256d04, 77c00de97a
## .githooks/pre-commit (new executable file, 5 changes)

```diff
@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+repo_root="$(git rev-parse --show-toplevel)"
+"$repo_root/scripts/check-secrets.sh"
```
## .gitignore (vendored, 17 changes)

```diff
@@ -1,5 +1,16 @@
 # QuoteForge
 config.yaml
+.env
+.env.*
+*.pem
+*.key
+*.p12
+*.pfx
+*.crt
+id_rsa
+id_rsa.*
+secrets.yaml
+secrets.yml

 # Local SQLite database (contains encrypted credentials)
 /data/*.db
@@ -12,6 +23,7 @@ config.yaml
 /importer
 /cron
 /bin/
+qfs

 # Local Go build cache used in sandboxed runs
 .gocache/
@@ -63,4 +75,7 @@ Network Trash Folder
 Temporary Items
 .apdisk

-releases/
+# Release artifacts (binaries, archives, checksums), but DO track releases/memory/ for changelog
+releases/*
+!releases/memory/
+!releases/memory/**
```
## CLAUDE.md (204 changes)

````diff
@@ -1,163 +1,83 @@
 # QuoteForge - Claude Code Instructions

 ## Overview
-Corporate server configurator and quote (KP) builder. MariaDB (RFQ_LOG) + SQLite for offline use.
+Corporate server configurator with an offline-first architecture.
+The application works against a local SQLite database; synchronization with MariaDB runs in the background.

-## Development Phases
+## Product Scope
+- Component configurator and quote calculation
+- Projects and configurations
+- Read-only viewing of pricelists from the local cache
+- Sync (pull components/pricelists, push local changes)

-### Phase 1: Pricelists in MariaDB ✅ DONE
-### Phase 2: Local SQLite Database ✅ DONE
+Out of scope:
+- admin pricing UI/API
+- stock import
+- alerts
+- cron/importer utilities

-### Phase 2.5: Full Offline Mode 🔶 IN PROGRESS
-**Local-first architecture:** the application ALWAYS works with SQLite; MariaDB is used only for synchronization.
+## Architecture
+- Local-first: reads and writes go to SQLite
+- MariaDB serves as the synchronization server
+- Background worker: periodic sync push+pull

-**How it works:**
-- ALL operations (CRUD) run against SQLite
-- When creating a configuration:
-  1. If online → check the server for new pricelists → download if available
-  2. Then work with local_pricelists (identically online and offline)
-- Background sync: push pending_changes → pull updates
+## Guardrails
+- Do not reintroduce removed legacy areas: cron jobs, importer utility, admin pricing, alerts, stock import.
+- Runtime config is read from user state (`config.yaml`) or via `-config` / `QFS_CONFIG_PATH`; do not keep a working `config.yaml` in the repository.
+- `config.example.yaml` remains the only configuration template in the repo.
+- Any sync changes must preserve local-first behavior: local CRUD must not block when MariaDB is unavailable.

-**DONE:**
-- ✅ Sync queue table (pending_changes) - `internal/localdb/models.go`
-- ✅ Model converters: MariaDB ↔ SQLite - `internal/localdb/converters.go`
-- ✅ LocalConfigurationService: all CRUD via SQLite - `internal/services/local_configuration.go`
-- ✅ Pre-create pricelist check: `SyncPricelistsIfNeeded()` - `internal/services/sync/service.go`
-- ✅ Push pending changes: `PushPendingChanges()` - sync service + handlers
-- ✅ Sync API endpoints: `/api/sync/push`, `/pending/count`, `/pending`
-- ✅ Integrate LocalConfigurationService in main.go (replace ConfigurationService)
-- ✅ Add routes for new sync endpoints (`/api/sync/push`, `/pending/count`, `/pending`)
-- ✅ ConfigurationGetter interface for handler compatibility
-- ✅ Background sync worker: auto-sync every 5min (push + pull) - `internal/services/sync/worker.go`
-- ✅ UI: sync status indicator (pending badge + sync button + offline/online dot) - `web/templates/partials/sync_status.html`
-- ✅ RefreshPrices for local mode:
-  - `RefreshPrices()` / `RefreshPricesNoAuth()` in `local_configuration.go`
-  - Takes prices from `local_components.current_price`
-  - Graceful degradation when a component is missing
-  - Added `price_updated_at` field to `LocalConfiguration` (models.go:72)
-  - Updated converters for PriceUpdatedAt
-  - The "Recalculate price" UI button works offline/online
-- ✅ Fixed sync bugs:
-  - Duplicate entry error when updating configurations (`sync/service.go:334-365`)
-  - pushConfigurationUpdate now checks for server_id before updating
-  - If no ID → takes it from LocalConfiguration.ServerID or looks it up on the server
-  - Fixed setup.go: `settings.Password` → `settings.PasswordEncrypted`
+## Key SQLite Data
+- `connection_settings`
+- `local_components`
+- `local_pricelists`, `local_pricelist_items`
+- `local_configurations`
+- `local_projects`
+- `pending_changes`

-**TODO:**
-- ❌ Conflict resolution (Phase 4, last-write-wins default)
-
-### UI Improvements ✅ MOSTLY DONE
-
-**1. Sync UI + pricelist badge: ✅ DONE**
-- ✅ `sync_status.html`: Online/Offline SVG icons (clickable → open the modal)
-- ✅ Sync button → circular-arrows icon (full sync only)
-- ✅ "System status" modal in `base.html` (DB info, sync errors)
-- ✅ `configs.html`: badge with the active pricelist version
-- ✅ Loaded via `/api/pricelists/latest` on DOMContentLoaded
-- ✅ Removed the Push changes dropdown (UI simplification)
-
-**2. Pricelists → a tab in "Price Admin": ✅ DONE**
-- ✅ `base.html`: removed the "Pricelists" link from the navigation
-- ✅ `admin_pricing.html`: added a "Pricelists" tab
-- ✅ Logic moved over from `pricelists.html` (table, create modal, CRUD)
-- ✅ Route `/pricelists` → redirects to `/admin/pricing?tab=pricelists`
-- ✅ Support for the `?tab=pricelists` URL param
-
-**3. "Price setup" modal - quote count for the selected period: ❌ TODO**
-- Current: shows only the total quote count
-- New: show `N (total: M)`, where N is the count for the selected period and M is the overall total
-- ❌ `admin_pricing.html`: update `#modal-quote-count`
-- ❌ `admin_pricing_handler.go`: return `quote_count_period` + `quote_count_total` from `/api/admin/pricing/preview`
-
-**4. Settings page: ❌ POSTPONED**
-- Moved to Phase 3 (after the main UI improvements)
-
-### Phase 3: Projects and Specifications
-- qt_projects, qt_specifications tables (MariaDB)
-- Replace qt_configurations → Project/Specification hierarchy
-- Fields: opty, customer_requirement, variant, qty, rev
-- Local projects/specs with server sync
-
-### Phase 4: Price Versioning
-- Bind specifications to pricelist versions
-- Price diff comparison
-- Auto-cleanup expired pricelists (>1 year, usage_count=0)
-
-## Tech Stack
-Go 1.22+ | Gin | GORM | MariaDB 11 | SQLite (glebarez/sqlite) | htmx + Tailwind CDN | excelize
-
-## Key Tables
-
-### READ-ONLY (external systems)
-- `lot` (lot_name PK, lot_description)
-- `lot_log` (lot, supplier, date, price, quality, comments)
-- `supplier` (supplier_name PK)
-
-### MariaDB (qt_* prefix)
-- `qt_lot_metadata` - component prices, methods, popularity
-- `qt_categories` - category codes and names
-- `qt_pricelists` - version snapshots (YYYY-MM-DD-NNN format)
-- `qt_pricelist_items` - prices per pricelist
-- `qt_projects` - uuid, opty, customer_requirement, name (Phase 3)
-- `qt_specifications` - project_id, pricelist_id, variant, rev, qty, items JSON (Phase 3)
-
-### SQLite (data/quoteforge.db)
-- `connection_settings` - encrypted DB credentials (PasswordEncrypted field)
-- `local_pricelists/items` - cached from server
-- `local_components` - lot cache for offline search (with current_price)
-- `local_configurations` - UUID, items, price_updated_at, sync_status (pending/synced/conflict), server_id
-- `local_projects/specifications` - Phase 3
-- `pending_changes` - sync queue (entity_type, uuid, op, payload, created_at, attempts, last_error)
-
-## Business Logic
-
-**Part number parsing:** `CPU_AMD_9654` → category=`CPU`, model=`AMD_9654`
-
-**Price methods:** manual | median | average | weighted_median
-
-**Price freshness:** fresh (<30d, ≥3 quotes) | normal (<60d) | stale (<90d) | critical
-
-**Pricelist version:** `YYYY-MM-DD-NNN` (e.g., `2024-01-31-001`)

 ## API Endpoints

 | Group | Endpoints |
 |-------|-----------|
-| Setup | GET/POST /setup, POST /setup/test |
-| Components | GET /api/components, /api/categories |
-| Pricelists | CRUD /api/pricelists, GET /latest, POST /compare |
-| Projects | CRUD /api/projects/:uuid (Phase 3) |
-| Specs | CRUD /api/specs/:uuid, POST /upgrade, GET /diff (Phase 3) |
-| Configs | POST /:uuid/refresh-prices (refresh prices from local_components) |
-| Sync | GET /status, POST /components, /pricelists, /push, /pull, /resolve-conflict |
-| Export | GET /api/specs/:uuid/export, /api/projects/:uuid/export |
+| Setup | `GET /setup`, `POST /setup`, `POST /setup/test`, `GET /setup/status` |
+| Components | `GET /api/components`, `GET /api/components/:lot_name`, `GET /api/categories` |
+| Quote | `POST /api/quote/validate`, `POST /api/quote/calculate`, `POST /api/quote/price-levels` |
+| Pricelists (read-only) | `GET /api/pricelists`, `GET /api/pricelists/latest`, `GET /api/pricelists/:id`, `GET /api/pricelists/:id/items`, `GET /api/pricelists/:id/lots` |
+| Configs | CRUD + refresh/clone/reactivate/rename/project binding via `/api/configs/*` |
+| Projects | CRUD + nested configs via `/api/projects/*` |
+| Sync | `GET /api/sync/status`, `GET /api/sync/readiness`, `GET /api/sync/info`, `GET /api/sync/users-status`, `POST /api/sync/components`, `POST /api/sync/pricelists`, `POST /api/sync/all`, `POST /api/sync/push`, `GET /api/sync/pending`, `GET /api/sync/pending/count` |
+| Export | `POST /api/export/csv` |

+## Web Routes
+- `/configs`
+- `/configurator`
+- `/projects`
+- `/projects/:uuid`
+- `/pricelists`
+- `/pricelists/:id`
+- `/setup`
+
+## Release Notes & Change Log
+Release notes are maintained in the `releases/memory/` directory, organized by version tags (e.g., `v1.2.1.md`).
+Before working on the codebase, review the most recent release notes to understand recent changes.
+- Check `releases/memory/` for the detailed changelog between tags
+- Each release file documents commits, breaking changes, and migration notes

 ## Commands
 ```bash
 # Development
-go run ./cmd/qfs          # Dev server
-make run                  # Dev server (via Makefile)
+go run ./cmd/qfs
+make run

-# Production build
-make build-release        # Optimized build with version (recommended)
-VERSION=$(git describe --tags --always --dirty)
-CGO_ENABLED=0 go build -ldflags="-s -w -X main.Version=$VERSION" -o bin/qfs ./cmd/qfs
+# Build
+make build-release
+CGO_ENABLED=0 go build -o bin/qfs ./cmd/qfs

-# Cron jobs
-go run ./cmd/cron -job=cleanup-pricelists   # Remove old unused pricelists
-go run ./cmd/cron -job=update-prices        # Recalculate all prices
-go run ./cmd/cron -job=update-popularity    # Update popularity scores
-
-# Check version
-./bin/qfs -version
+# Verification
+go build ./cmd/qfs
+go vet ./...
 ```

 ## Code Style
-- gofmt, structured logging (slog), wrap errors with context
-- snake_case files, PascalCase types
-- RBAC disabled: DB username = user_id via `models.EnsureDBUser()`
-
-## UI Guidelines
-- htmx (hx-get/post/target/swap), Tailwind CDN
-- Freshness colors: green (fresh) → yellow → orange → red (critical)
-- Sync status + offline indicator in header
+- gofmt
+- structured logging (`slog`)
+- explicit error wrapping with context
````
## Deleted file (−178 lines)

````diff
@@ -1,178 +0,0 @@
-# Local-First Architecture Integration Guide
-
-## Overview
-
-QuoteForge now supports a local-first architecture: the application ALWAYS works with SQLite (localdb); MariaDB is used only for synchronization.
-
-## Implemented Components
-
-### 1. Model converters (`internal/localdb/converters.go`)
-
-Converters between the MariaDB and SQLite models:
-- `ConfigurationToLocal()` / `LocalToConfiguration()`
-- `PricelistToLocal()` / `LocalToPricelist()`
-- `ComponentToLocal()` / `LocalToComponent()`
-
-### 2. LocalDB methods (`internal/localdb/localdb.go`)
-
-Added methods for working with pending changes:
-- `MarkChangesSynced(ids []int64)` - marks changes as synced
-- `GetPendingCount()` - returns the number of unsynced changes
-
-### 3. Sync Service extensions (`internal/services/sync/service.go`)
-
-New methods:
-- `SyncPricelistsIfNeeded()` - checks for and downloads new pricelists when needed
-- `PushPendingChanges()` - pushes all pending changes to the server
-- `pushSingleChange()` - processes a single pending change
-- `pushConfigurationCreate/Update/Delete()` - configuration-specific methods
-
-**IMPORTANT**: the constructor changed - it now requires a `ConfigurationRepository`:
-```go
-syncService := sync.NewService(pricelistRepo, configRepo, local)
-```
-
-### 4. LocalConfigurationService (`internal/services/local_configuration.go`)
-
-A new service for working with configurations in local-first mode:
-- All CRUD operations go through SQLite
-- Automatically appends changes to pending_changes
-- On configuration create (if online) checks for new pricelists
-
-```go
-localConfigService := services.NewLocalConfigurationService(
-	localDB,
-	syncService,
-	quoteService,
-	isOnlineFunc,
-)
-```
-
-### 5. Sync Handler extensions (`internal/handlers/sync.go`)
-
-New endpoints:
-- `POST /api/sync/push` - push pending changes to the server
-- `GET /api/sync/pending/count` - get the number of pending changes
-- `GET /api/sync/pending` - get the list of pending changes
-
-## Integration
-
-### Step 1: Update main.go
-
-```go
-// In cmd/qfs/main.go
-syncService := sync.NewService(pricelistRepo, configRepo, local)
-
-// Create the isOnline function
-isOnlineFunc := func() bool {
-	sqlDB, err := db.DB()
-	if err != nil {
-		return false
-	}
-	return sqlDB.Ping() == nil
-}
-
-// Create the LocalConfigurationService
-localConfigService := services.NewLocalConfigurationService(
-	local,
-	syncService,
-	quoteService,
-	isOnlineFunc,
-)
-```
-
-### Step 2: Update ConfigurationHandler
-
-Replace `ConfigurationService` with `LocalConfigurationService` in handlers:
-
-```go
-// Before:
-configHandler := handlers.NewConfigurationHandler(configService, exportService)
-
-// After:
-configHandler := handlers.NewConfigurationHandler(localConfigService, exportService)
-```
-
-### Step 3: Add sync endpoints
-
-In the router, add:
-```go
-syncGroup := router.Group("/api/sync")
-{
-	syncGroup.POST("/push", syncHandler.PushPendingChanges)
-	syncGroup.GET("/pending/count", syncHandler.GetPendingCount)
-	syncGroup.GET("/pending", syncHandler.GetPendingChanges)
-}
-```
-
-## How It Works
-
-### Creating a configuration
-
-1. The user creates a configuration
-2. `LocalConfigurationService.Create()`:
-   - If online → `SyncPricelistsIfNeeded()` checks for new pricelists
-   - Saves the configuration to SQLite
-   - Appends to `pending_changes` with operation="create"
-3. The configuration is available locally immediately
-
-### Synchronizing with the server
-
-**Manual sync:**
-```bash
-POST /api/sync/push
-```
-
-**Background sync (TODO):**
-- A periodic worker calls `syncService.PushPendingChanges()`
-- Checks online status
-- Pushes all pending changes to the server
-- Deletes successfully synced records
-
-### Offline mode
-
-1. All operations work normally through SQLite
-2. Changes accumulate in `pending_changes`
-3. When the connection is restored, they are synced automatically
-
-## Pending Changes Queue
-
-The `pending_changes` table:
-```go
-type PendingChange struct {
-	ID         int64     // Auto-increment
-	EntityType string    // "configuration", "project", "specification"
-	EntityUUID string    // Entity UUID
-	Operation  string    // "create", "update", "delete"
-	Payload    string    // JSON snapshot of the entity
-	CreatedAt  time.Time
-	Attempts   int       // Sync attempt counter
-	LastError  string    // Last sync error
-}
-```
-
-## TODO for Phase 2.5
-
-- [ ] Background sync worker (automatic sync every N minutes)
-- [ ] Conflict resolution (on update conflicts)
-- [ ] UI: pending counter in the header
-- [ ] UI: manual sync button
-- [ ] UI: conflict alerts
-- [ ] Retry logic for failed pending changes
-- [ ] RefreshPrices for local mode (via local_components)
-
-## Testing
-
-```bash
-# Compile
-go build ./cmd/qfs
-
-# Run
-./quoteforge
-
-# Check pending changes
-curl http://localhost:8080/api/sync/pending/count
-
-# Manual sync
-curl -X POST http://localhost:8080/api/sync/push
-```
````
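The removed guide documents the `PendingChange` struct but not how a queue row is built from an entity. A small hedged sketch of enqueueing a change as a JSON snapshot; the `enqueue` helper is hypothetical, only the struct fields come from the guide:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// PendingChange mirrors the sync-queue row documented in the removed
// integration guide.
type PendingChange struct {
	ID         int64
	EntityType string // "configuration", "project", "specification"
	EntityUUID string
	Operation  string // "create", "update", "delete"
	Payload    string // JSON snapshot of the entity
	CreatedAt  time.Time
	Attempts   int
	LastError  string
}

// enqueue is a hypothetical helper: it snapshots the entity as JSON so the
// sync service can replay the operation against the server later.
func enqueue(entityType, uuid, op string, entity any) (PendingChange, error) {
	raw, err := json.Marshal(entity)
	if err != nil {
		return PendingChange{}, err
	}
	return PendingChange{
		EntityType: entityType,
		EntityUUID: uuid,
		Operation:  op,
		Payload:    string(raw),
		CreatedAt:  time.Now(),
	}, nil
}

func main() {
	ch, err := enqueue("configuration", "abc-123", "create",
		map[string]any{"name": "Config 1"})
	if err != nil {
		panic(err)
	}
	fmt.Println(ch.Operation, ch.Payload) // create {"name":"Config 1"}
}
```

Storing a full JSON snapshot (rather than a field delta) is what makes last-write-wins conflict resolution, the stated Phase 4 default, straightforward to apply on the server.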
## Deleted file (−121 lines)

````diff
@@ -1,121 +0,0 @@
-# Migration: Price recalculation in the configurator
-
-## Summary of changes
-
-Added automatic refreshing of component prices in saved configurations.
-
-### New features
-
-1. **"Recalculate price" button** on the configurator page
-   - Updates the prices of all components in the configuration to the current values from the database
-   - Preserves component quantities, updating only prices
-   - Shows the time of the last price refresh
-
-2. **`price_updated_at` field** in the configurations table
-   - Stores the date and time of the last price refresh
-   - Shown on the configurator page in a friendly format ("5 min ago", "2 h ago", etc.)
-
-### Database changes
-
-A new column was added to `qt_configurations`:
-```sql
-ALTER TABLE qt_configurations
-ADD COLUMN price_updated_at TIMESTAMP NULL DEFAULT NULL
-AFTER server_count;
-```
-
-### New API endpoint
-
-```
-POST /api/configs/:uuid/refresh-prices
-```
-
-**Requirements:**
-- Authorization: Bearer Token
-- Role: editor or higher
-
-**Response:**
-```json
-{
-  "id": 1,
-  "uuid": "...",
-  "name": "Configuration 1",
-  "items": [
-    {
-      "lot_name": "CPU_AMD_9654",
-      "quantity": 2,
-      "unit_price": 11500.00
-    }
-  ],
-  "total_price": 23000.00,
-  "price_updated_at": "2026-01-31T12:34:56Z",
-  ...
-}
-```
-
-## Applying the changes
-
-### 1. Database update
-
-Run the server with the migration flag:
-```bash
-./quoteforge -migrate -config config.yaml
-```
-
-Or apply the SQL migration manually:
-```bash
-mysql -u user -p RFQ_LOG < migrations/004_add_price_updated_at.sql
-```
-
-### 2. Server restart
-
-After applying the migration, restart the server:
-```bash
-./quoteforge -config config.yaml
-```
-
-## Usage
-
-1. Open any saved configuration in the configurator
-2. Click the **"Recalculate price"** button next to "Save"
-3. All component prices are updated to the current values
-4. The configuration is saved automatically with the updated prices
-5. The time of the last price refresh is shown under the buttons
-
-## Technical details
-
-### Changed files
-
-- `internal/models/configuration.go` - added the `PriceUpdatedAt` field
-- `internal/services/configuration.go` - added the `RefreshPrices()` method
-- `internal/handlers/configuration.go` - added the `RefreshPrices()` handler
-- `cmd/qfs/main.go` - added the `/api/configs/:uuid/refresh-prices` route
-- `web/templates/index.html` - added the button and JavaScript functions
-- `migrations/004_add_price_updated_at.sql` - SQL migration
-- `CLAUDE.md` - updated documentation
-
-### Price refresh logic
-
-1. Fetch the configuration by UUID
-2. Check access rights (the user must be the owner)
-3. For each component in the configuration:
-   - Fetch the current price from `qt_lot_metadata.current_price`
-   - Update `unit_price` in items
-4. Recompute `total_price`, taking `server_count` into account
-5. Set `price_updated_at` to the current time
-6. Save the configuration
-
-### Error handling
-
-- If a component is missing or has no price, the old price is kept
-- Access errors return 403 Forbidden
-- A missing configuration returns 404 Not Found
-
-## Rollback
-
-To undo the migration, run:
-```sql
-ALTER TABLE qt_configurations DROP COLUMN price_updated_at;
-```
-
-**Note:** after rolling back the migration, the price-recalculation feature will stop working correctly.
````
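The refresh steps in the removed migration doc (look up each component's current price, keep the stale price if it is missing, recompute the total with `server_count`, stamp `price_updated_at`) can be sketched as a small pure function. Type and field names are simplified assumptions; the real method is in `internal/services/configuration.go`:

```go
package main

import (
	"fmt"
	"time"
)

// Item and Configuration are simplified stand-ins for the project's models.
type Item struct {
	LotName   string
	Quantity  int
	UnitPrice float64
}

type Configuration struct {
	Items          []Item
	ServerCount    int
	TotalPrice     float64
	PriceUpdatedAt time.Time
}

// refreshPrices updates each item's unit price from the current-price map,
// keeps the old price when a component is missing or unpriced, recomputes
// the total (per-server sum × server count), and stamps the refresh time.
func refreshPrices(cfg *Configuration, current map[string]float64) {
	total := 0.0
	for i := range cfg.Items {
		if p, ok := current[cfg.Items[i].LotName]; ok {
			cfg.Items[i].UnitPrice = p
		} // else: graceful degradation — the stale price is kept
		total += cfg.Items[i].UnitPrice * float64(cfg.Items[i].Quantity)
	}
	cfg.TotalPrice = total * float64(cfg.ServerCount)
	cfg.PriceUpdatedAt = time.Now()
}

func main() {
	cfg := &Configuration{
		Items:       []Item{{LotName: "CPU_AMD_9654", Quantity: 2, UnitPrice: 10000}},
		ServerCount: 1,
	}
	refreshPrices(cfg, map[string]float64{"CPU_AMD_9654": 11500})
	fmt.Println(cfg.TotalPrice) // 23000, matching the example response above
}
```

Note the quantities are untouched, as the doc specifies: only `unit_price`, the total, and the timestamp change.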
## Makefile (9 changes)

```diff
@@ -1,4 +1,4 @@
-.PHONY: build build-release clean test run version
+.PHONY: build build-release clean test run version install-hooks

 # Get version from git
 VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev")
@@ -72,6 +72,12 @@ deps:
	go mod download
	go mod tidy

+# Install local git hooks
+install-hooks:
+	git config core.hooksPath .githooks
+	chmod +x .githooks/pre-commit scripts/check-secrets.sh
+	@echo "Installed git hooks from .githooks/"
+
 # Help
 help:
	@echo "QuoteForge Server (qfs) - Build Commands"
@@ -92,6 +98,7 @@ help:
	@echo "  run            Run development server"
	@echo "  watch          Run with auto-restart (requires entr)"
	@echo "  deps           Install/update dependencies"
+	@echo "  install-hooks  Install local git hooks (secret scan on commit)"
	@echo "  help           Show this help"
	@echo ""
	@echo "Current version: $(VERSION)"
```
261
README.md
261
README.md
@@ -2,7 +2,8 @@
|
|||||||
|
|
||||||
**Server Configuration & Quotation Tool**
|
**Server Configuration & Quotation Tool**
|
||||||
|
|
||||||
QuoteForge — корпоративный инструмент для конфигурирования серверов и формирования коммерческих предложений (КП). Приложение интегрируется с существующей базой данных RFQ_LOG.
|
QuoteForge — корпоративный инструмент для конфигурирования серверов и формирования коммерческих предложений (КП).
|
||||||
|
Приложение работает в strict local-first режиме: пользовательские операции выполняются через локальную SQLite, MariaDB используется для синхронизации и серверного администрирования прайслистов.
|
||||||
|
|
||||||

|

|
||||||

|

|
||||||
@@ -16,6 +17,8 @@ QuoteForge — корпоративный инструмент для конфи
|
|||||||
- 💰 **Автоматический расчёт цен** — актуальные цены на основе истории закупок
|
- 💰 **Автоматический расчёт цен** — актуальные цены на основе истории закупок
|
||||||
- 📊 **Экспорт в CSV/XLSX** — готовые спецификации для клиентов
|
- 📊 **Экспорт в CSV/XLSX** — готовые спецификации для клиентов
|
||||||
- 💾 **Сохранение конфигураций** — история и шаблоны для повторного использования
|
- 💾 **Сохранение конфигураций** — история и шаблоны для повторного использования
|
||||||
|
- 🔌 **Полная офлайн-работа** — можно продолжать работу без сети и синхронизировать позже
|
||||||
|
- 🛡️ **Защищенная синхронизация** — sync блокируется preflight-проверкой, если локальная схема не готова
|
||||||
|
|
||||||
### Для ценовых администраторов
|
### Для ценовых администраторов
|
||||||
- 📈 **Умный расчёт цен** — медиана, взвешенная медиана, среднее
|
- 📈 **Умный расчёт цен** — медиана, взвешенная медиана, среднее
|
||||||
@@ -35,7 +38,7 @@ QuoteForge — корпоративный инструмент для конфи
|
|||||||
|
|
||||||
- **Backend:** Go 1.22+, Gin, GORM
|
- **Backend:** Go 1.22+, Gin, GORM
|
||||||
- **Frontend:** HTML, Tailwind CSS, htmx
|
- **Frontend:** HTML, Tailwind CSS, htmx
|
||||||
- **Database:** MariaDB 11+
|
- **Database:** SQLite (runtime/local-first), MariaDB 11+ (sync + server admin)
|
||||||
- **Export:** excelize (XLSX), encoding/csv
|
- **Export:** excelize (XLSX), encoding/csv
|
||||||
|
|
||||||
## Требования
|
## Требования
|
||||||
@@ -53,13 +56,13 @@ git clone https://github.com/your-company/quoteforge.git
|
|||||||
cd quoteforge
|
cd quoteforge
|
||||||
```
|
```
|
||||||
|
|
||||||
### 2. Настройка конфигурации
|
### 2. Настройка runtime-конфига (опционально)
|
||||||
|
|
||||||
```bash
|
`config.yaml` создаётся автоматически при первом старте в той же user-state папке, где находится `qfs.db`.
|
||||||
cp config.example.yaml config.yaml
|
Если найден старый формат, приложение автоматически мигрирует файл в актуальный runtime-формат
|
||||||
```
|
(оставляя только используемые секции `server` и `logging`).
|
||||||
|
|
||||||
Отредактируйте `config.yaml`:
|
При необходимости можно создать/отредактировать файл вручную:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
server:
|
server:
|
||||||
@@ -67,16 +70,10 @@ server:
|
|||||||
port: 8080
|
port: 8080
|
||||||
mode: "release"
|
mode: "release"
|
||||||
|
|
||||||
database:
|
logging:
|
||||||
host: "localhost"
|
level: "info"
|
||||||
port: 3306
|
format: "json"
|
||||||
name: "RFQ_LOG"
|
output: "stdout"
|
||||||
user: "quoteforge"
|
|
||||||
password: "your-secure-password"
|
|
||||||
|
|
||||||
auth:
|
|
||||||
jwt_secret: "your-jwt-secret-min-32-chars"
|
|
||||||
token_expiry: "24h"
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 3. Миграции базы данных
|
### 3. Миграции базы данных
|
||||||
@@ -93,73 +90,100 @@ go run ./cmd/qfs -migrate
 Сначала всегда смотрите preview:

 ```bash
-go run ./cmd/migrate_ops_projects -config config.yaml
+go run ./cmd/migrate_ops_projects
 ```

 Применение изменений:

 ```bash
-go run ./cmd/migrate_ops_projects -config config.yaml -apply
+go run ./cmd/migrate_ops_projects -apply
 ```

 Без интерактивного подтверждения:

 ```bash
-go run ./cmd/migrate_ops_projects -config config.yaml -apply -yes
+go run ./cmd/migrate_ops_projects -apply -yes
 ```

-### Минимальные права БД для пользователя квотаций
+### Права БД для пользователя приложения

-Если нужен пользователь, который может работать с конфигурациями, но не может создавать/удалять прайслисты:
+#### Полный набор прав для обычного пользователя
+
+Чтобы выдать существующему пользователю все необходимые права (без пересоздания):

 ```sql
--- 1) Создать пользователя (если его ещё нет)
-CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY 'StrongPassword!';
+-- Справочные таблицы (только чтение)
+GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
+GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
+GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
+GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
+GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';

--- 2) Если пользователь уже существовал, принудительно обновить пароль
-ALTER USER 'quote_user'@'%' IDENTIFIED BY 'StrongPassword!';
+-- Таблицы конфигураций и проектов (чтение и запись)
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';

--- 3) (Опционально, но рекомендуется) удалить дубли пользователя с другими host,
--- чтобы не возникало конфликтов вида user@localhost vs user@'%'
-DROP USER IF EXISTS 'quote_user'@'localhost';
-DROP USER IF EXISTS 'quote_user'@'127.0.0.1';
-DROP USER IF EXISTS 'quote_user'@'::1';
+-- Таблицы синхронизации (только чтение для миграций, чтение+запись для статуса)
+GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO '<DB_USER>'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';

--- 4) Сбросить лишние права
-REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'quote_user'@'%';
-
--- 5) Чтение данных для конфигуратора и синка
-GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
-GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
-GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
-GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
-GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
-
--- 6) Работа с конфигурациями
-GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
-
+-- Применить изменения
 FLUSH PRIVILEGES;

-SHOW GRANTS FOR 'quote_user'@'%';
-SHOW CREATE USER 'quote_user'@'%';
+-- Проверка выданных прав
+SHOW GRANTS FOR '<DB_USER>'@'%';
 ```

-Полный набор прав для пользователя квотаций:
+#### Таблицы и их назначение

+| Таблица | Назначение | Права | Примечание |
+|---------|-----------|-------|-----------|
+| `lot` | Справочник компонентов | SELECT | Существующая таблица |
+| `qt_lot_metadata` | Расширенные данные компонентов | SELECT | Метаданные компонентов |
+| `qt_categories` | Категории компонентов | SELECT | Справочник |
+| `qt_pricelists` | Прайслисты | SELECT | Управляется сервером |
+| `qt_pricelist_items` | Позиции прайслистов | SELECT | Управляется сервером |
+| `qt_configurations` | Сохранённые конфигурации | SELECT, INSERT, UPDATE | Основная таблица работы |
+| `qt_projects` | Проекты | SELECT, INSERT, UPDATE | Для группировки конфигураций |
+| `qt_client_local_migrations` | Справочник миграций БД | SELECT | Только чтение (управляется админом) |
+| `qt_client_schema_state` | Состояние локальной схемы | SELECT, INSERT, UPDATE | Отслеживание применённых миграций |
+| `qt_pricelist_sync_status` | Статус синхронизации | SELECT, INSERT, UPDATE | Отслеживание активности синхронизации |
+
+#### При создании нового пользователя
+
+Если нужно создать нового пользователя с нуля:

 ```sql
-GRANT USAGE ON *.* TO 'quote_user'@'%' IDENTIFIED BY 'StrongPassword!';
+-- 1) Создать пользователя
+CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';
+
+-- 2) Выдать все необходимые права
 GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
 GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
 GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
 GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
 GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
 GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
+GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO 'quote_user'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
+GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';
+
+-- 3) Применить изменения
+FLUSH PRIVILEGES;
+
+-- 4) Проверить права
+SHOW GRANTS FOR 'quote_user'@'%';
 ```

-Важно:
-- не выдавайте `INSERT/UPDATE/DELETE` на `qt_pricelists` и `qt_pricelist_items`, если пользователь не должен управлять прайслистами;
-- если видите ошибку `Access denied for user ...@'<ip>'`, проверьте, что не осталось других записей `quote_user@host` кроме `quote_user@'%'`;
-- после смены DB-настроек через `/setup` приложение перезапускается автоматически и подхватывает нового пользователя.
+#### Важные замечания
+- **Таблицы синхронизации** должны быть созданы администратором БД один раз. Приложение не требует прав CREATE TABLE.
+- **Прайслисты** (`qt_pricelists`, `qt_pricelist_items`) — справочные таблицы, управляются сервером, пользователь имеет только SELECT.
+- **Конфигурации и проекты** — таблицы, в которые пишет само приложение (INSERT, UPDATE при сохранении изменений).
+- **Таблицы миграций** нужны для синхронизации: приложение читает список миграций и отчитывается о применённых.
+- Если видите ошибку `Access denied for user ...@'<ip>'`, проверьте наличие конфликтующих записей пользователя с разными хостами (user@localhost vs user@'%').

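Выданные права можно проверить и скриптом по выводу `SHOW GRANTS`. Набросок (строка с грантом ниже — иллюстративный пример вывода, а не реальный запрос; в реальности возьмите вывод `mysql -e "SHOW GRANTS FOR CURRENT_USER();"`):

```shell
# Набросок: проверяем, что среди грантов есть право записи в qt_configurations.
# Переменная grants — пример одной строки вывода SHOW GRANTS, не реальный запрос.
grants='GRANT SELECT, INSERT, UPDATE ON `RFQ_LOG`.`qt_configurations` TO `quote_user`@`%`'

if printf '%s\n' "$grants" | grep -q 'INSERT.*qt_configurations'; then
  echo "write access to qt_configurations: OK"
else
  echo "write access to qt_configurations: MISSING"
fi
```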
 ### 4. Импорт метаданных компонентов

@@ -190,6 +214,7 @@ make build-all # Сборка для всех платформ (Linux, mac
 make build-windows # Только для Windows
 make run # Запуск dev сервера
 make test # Запуск тестов
+make install-hooks # Установить git hooks (блокировка коммита с секретами)
 make clean # Очистка bin/
 make help # Показать все команды
 ```
@@ -207,6 +232,56 @@ make help # Показать все команды

 Можно переопределить путь через `-localdb` или переменную окружения `QFS_DB_PATH`.

+#### Sync readiness guard
+
+Перед `push/pull` выполняется preflight-проверка:
+- доступен ли сервер (MariaDB);
+- можно ли проверить и применить централизованные миграции локальной БД;
+- подходит ли версия приложения под `min_app_version` миграций.
+
+Если проверка не пройдена:
+- локальная работа (CRUD) продолжается;
+- sync API возвращает `423 Locked` с `reason_code` и `reason_text`;
+- в UI показывается красный индикатор и причина блокировки в модалке синхронизации.
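Набросок обработки readiness-статуса в скрипте. Имена полей JSON (`status`, `reason_code`, `reason_text`) — предположение по описанию выше; сверяйтесь с фактическим ответом `/api/sync/readiness`:

```shell
# Гипотетический ответ readiness-эндпоинта: имена полей — предположение,
# а не подтверждённая схема API.
resp='{"status":"blocked","reason_code":"min_app_version","reason_text":"update required"}'

# Достаём поле status средствами sed, без зависимости от jq.
status=$(printf '%s' "$resp" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')

if [ "$status" = "ready" ]; then
  echo "sync allowed"
else
  echo "sync blocked: $status"
fi
```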
+
+#### Схема потоков данных синхронизации
+
+```text
+[ SERVER / MariaDB ]
+┌───────────────────────────┐
+│ qt_projects │
+│ qt_configurations │
+│ qt_pricelists │
+│ qt_pricelist_items │
+│ qt_pricelist_sync_status │
+└─────────────┬─────────────┘
+│
+pull (projects/configs/pricelists)
+│
+┌──────────────────┴──────────────────┐
+│ │
+[ CLIENT A / local SQLite ] [ CLIENT B / local SQLite ]
+┌───────────────────────────────┐ ┌───────────────────────────────┐
+│ local_projects │ │ local_projects │
+│ local_configurations │ │ local_configurations │
+│ local_pricelists │ │ local_pricelists │
+│ local_pricelist_items │ │ local_pricelist_items │
+│ pending_changes (proj/config) │ │ pending_changes (proj/config) │
+└───────────────┬───────────────┘ └───────────────┬───────────────┘
+│ │
+push (projects/configurations only) push (projects/configurations only)
+│ │
+└──────────────────┬────────────────────┘
+│
+[ SERVER / MariaDB ]
+```
+
+По сущностям:
+- Конфигурации: `Client <-> Server <-> Other Clients`
+- Проекты: `Client <-> Server <-> Other Clients`
+- Прайслисты: `Server -> Clients only` (локальный push отсутствует)
+- Локальная очистка прайслистов на клиенте: удаляются записи, которых нет на сервере и которые не используются активными локальными конфигурациями

 ### Версионность конфигураций (local-first)

 Для `local_configurations` используется append-only versioning через полные snapshot-версии:
@@ -240,6 +315,7 @@ POST /api/configs/:uuid/rollback
 ### Локальный config.yaml

 По умолчанию `qfs` ищет `config.yaml` в той же user-state папке, где лежит `qfs.db` (а не рядом с бинарником).
+Если файла нет, он создаётся автоматически. Если формат устарел, он автоматически мигрируется в runtime-формат (`server` + `logging`).
 Можно переопределить путь через `-config` или `QFS_CONFIG_PATH`.

 ## Docker
@@ -270,12 +346,23 @@ quoteforge/
 │ ├── templates/ # HTML шаблоны
 │ └── static/ # CSS, JS, изображения
 ├── migrations/ # SQL миграции
-├── config.yaml # Конфигурация
-├── Dockerfile
-├── docker-compose.yml
+├── config.example.yaml # Пример конфигурации
+├── releases/
+│   └── memory/ # Changelog между тегами (v1.2.1.md, v1.2.2.md, ...)
 └── go.mod
 ```

+## Releases & Changelog
+
+Changelog между версиями хранится в каталоге `releases/memory/` в файлах вида `v{major}.{minor}.{patch}.md`.
+
+Каждый файл содержит:
+- Список коммитов между версиями
+- Описание изменений и их влияния
+- Breaking changes и заметки о миграции
+
+**Перед работой над кодом проверьте последний файл в этой папке, чтобы понять текущее состояние проекта.**

 ## Роли пользователей

 | Роль | Описание |
@@ -301,8 +388,26 @@ GET /api/configs/:uuid/versions # Список версий конф
 GET /api/configs/:uuid/versions/:version # Получить конкретную версию
 POST /api/configs/:uuid/rollback # Rollback на указанную версию
 POST /api/configs/:uuid/reactivate # Вернуть архивную конфигурацию в активные
+GET /api/sync/readiness # Статус readiness guard (ready|blocked|unknown)
+GET /api/sync/status # Сводный статус синхронизации
+GET /api/sync/info # Данные для модалки синхронизации
+POST /api/sync/push # Push pending changes (423, если blocked)
+POST /api/sync/all # Full sync push+pull (423, если blocked)
+POST /api/sync/components # Pull components (423, если blocked)
+POST /api/sync/pricelists # Pull pricelists (423, если blocked)
 ```

+### Краткая карта sync API
+
+| Endpoint | Назначение | Поток |
+|----------|------------|-------|
+| `POST /api/sync/push` | Отправить локальные pending-изменения | `SQLite -> MariaDB` |
+| `POST /api/sync/components` | Подтянуть справочник компонентов | `MariaDB -> SQLite` |
+| `POST /api/sync/pricelists` | Подтянуть прайслисты и позиции | `MariaDB -> SQLite` |
+| `POST /api/sync/all` | Полный цикл: push + pull + импорт проектов/конфигураций | `двунаправленно` |
+| `GET /api/sync/readiness` | Статус preflight/readiness | `read-only` |
+| `GET /api/sync/status` / `GET /api/sync/info` | Сводка статуса и данных синхронизации | `read-only` |

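Sync-эндпоинты можно дёргать и из скриптов. Набросок (базовый адрес и порт — предположение по примеру конфига; код `423` означает блокировку readiness guard):

```shell
# Набросок: запустить полный sync у локального qfs и разобрать результат.
# Базовый адрес — предположение (совпадает с примером server-конфига).
base="${QFS_BASE:-http://127.0.0.1:8080}"

# При полной недоступности сервера curl печатает код 000.
code=$(curl -s -o /dev/null -w '%{http_code}' -X POST "$base/api/sync/all" || true)

case "$code" in
  200) echo "full sync completed" ;;
  423) echo "sync blocked by readiness guard" ;;
  *)   echo "sync failed: HTTP $code" ;;
esac
```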
 #### Sync payload для versioning

 События в `pending_changes` для конфигураций содержат:
@@ -314,50 +419,6 @@ POST /api/configs/:uuid/reactivate # Вернуть архивную к

 Это позволяет push-слою отправлять на сервер актуальное состояние и готовит основу для будущего conflict resolution.

-## Cron Jobs
-
-QuoteForge now includes automated cron jobs for maintenance tasks. These can be run using the built-in cron functionality in the Docker container.
-
-### Docker Compose Setup
-
-The Docker setup includes a dedicated cron service that runs the following jobs:
-
-- **Alerts check**: Every hour (0 * * * *)
-- **Price updates**: Daily at 2 AM (0 2 * * *)
-- **Usage counter reset**: Weekly on Sunday at 1 AM (0 1 * * 0)
-- **Popularity score updates**: Daily at 3 AM (0 3 * * *)
-
-To enable cron jobs in Docker, run:
-
-```bash
-docker-compose up -d
-```
-
-### Manual Cron Job Execution
-
-You can also run cron jobs manually using the quoteforge-cron binary:
-
-```bash
-# Check and generate alerts
-go run ./cmd/cron -job=alerts
-
-# Recalculate all prices
-go run ./cmd/cron -job=update-prices
-
-# Reset usage counters
-go run ./cmd/cron -job=reset-counters
-
-# Update popularity scores
-go run ./cmd/cron -job=update-popularity
-```
-
-### Cron Job Details
-
-- **Alerts check**: Generates alerts for components with high demand and stale prices, trending components without prices, and components with no recent quotes
-- **Price updates**: Recalculates prices for all components using configured methods (median, weighted median, average)
-- **Usage counter reset**: Resets weekly and monthly usage counters for components
-- **Popularity score updates**: Recalculates popularity scores based on supplier quote activity
-
 ## Разработка

 ```bash
@@ -385,6 +446,8 @@ CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/qfs ./cmd/qfs
 | `QFS_DB_PATH` | Полный путь к локальной SQLite БД | OS-specific user state dir |
 | `QFS_STATE_DIR` | Каталог state (если `QFS_DB_PATH` не задан) | OS-specific user state dir |
 | `QFS_CONFIG_PATH` | Полный путь к `config.yaml` | OS-specific user state dir |
+| `QFS_BACKUP_DIR` | Каталог для ротационных бэкапов локальных данных | `<db dir>/backups` |
+| `QFS_BACKUP_DISABLE` | Отключить автоматические бэкапы (`1/true/yes`) | — |

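Переменные из таблицы выше позволяют поднять изолированный экземпляр со своим state-каталогом. Набросок (пути здесь — иллюстративные, а не значения по умолчанию):

```shell
# Набросок: изолированный dev-экземпляр со своим state-каталогом и без бэкапов.
# Пути иллюстративные; по умолчанию используется OS-specific user state dir.
export QFS_STATE_DIR="/tmp/quoteforge-dev"
export QFS_DB_PATH="$QFS_STATE_DIR/qfs.db"
export QFS_CONFIG_PATH="$QFS_STATE_DIR/config.yaml"
export QFS_BACKUP_DISABLE=1

echo "db: $QFS_DB_PATH"
echo "config: $QFS_CONFIG_PATH"
```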
 ## Интеграция с существующей БД

@@ -1,84 +0,0 @@
-package main
-
-import (
-	"flag"
-	"log"
-
-	"git.mchus.pro/mchus/quoteforge/internal/config"
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"git.mchus.pro/mchus/quoteforge/internal/services/alerts"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricing"
-	"gorm.io/driver/mysql"
-	"gorm.io/gorm"
-	"gorm.io/gorm/logger"
-)
-
-func main() {
-	configPath := flag.String("config", "config.yaml", "path to config file")
-	cronJob := flag.String("job", "", "type of cron job to run (alerts, update-prices)")
-	flag.Parse()
-
-	cfg, err := config.Load(*configPath)
-	if err != nil {
-		log.Fatalf("Failed to load config: %v", err)
-	}
-
-	db, err := gorm.Open(mysql.Open(cfg.Database.DSN()), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
-	if err != nil {
-		log.Fatalf("Failed to connect to database: %v", err)
-	}
-
-	// Ensure tables exist
-	if err := models.Migrate(db); err != nil {
-		log.Fatalf("Migration failed: %v", err)
-	}
-
-	// Initialize repositories
-	statsRepo := repository.NewStatsRepository(db)
-	alertRepo := repository.NewAlertRepository(db)
-	componentRepo := repository.NewComponentRepository(db)
-	priceRepo := repository.NewPriceRepository(db)
-
-	// Initialize services
-	alertService := alerts.NewService(alertRepo, componentRepo, priceRepo, statsRepo, cfg.Alerts, cfg.Pricing)
-	pricingService := pricing.NewService(componentRepo, priceRepo, cfg.Pricing)
-
-	switch *cronJob {
-	case "alerts":
-		log.Println("Running alerts check...")
-		if err := alertService.CheckAndGenerateAlerts(); err != nil {
-			log.Printf("Error running alerts check: %v", err)
-		} else {
-			log.Println("Alerts check completed successfully")
-		}
-	case "update-prices":
-		log.Println("Recalculating all prices...")
-		updated, errors := pricingService.RecalculateAllPrices()
-		log.Printf("Prices recalculated: %d updated, %d errors", updated, errors)
-	case "reset-counters":
-		log.Println("Resetting usage counters...")
-		if err := statsRepo.ResetWeeklyCounters(); err != nil {
-			log.Printf("Error resetting weekly counters: %v", err)
-		}
-		if err := statsRepo.ResetMonthlyCounters(); err != nil {
-			log.Printf("Error resetting monthly counters: %v", err)
-		}
-		log.Println("Usage counters reset completed")
-	case "update-popularity":
-		log.Println("Updating popularity scores...")
-		if err := statsRepo.UpdatePopularityScores(); err != nil {
-			log.Printf("Error updating popularity scores: %v", err)
-		} else {
-			log.Println("Popularity scores updated successfully")
-		}
-	default:
-		log.Println("No valid cron job specified. Available jobs:")
-		log.Println(" - alerts: Check and generate alerts")
-		log.Println(" - update-prices: Recalculate all prices")
-		log.Println(" - reset-counters: Reset usage counters")
-		log.Println(" - update-popularity: Update popularity scores")
-	}
-}
@@ -1,160 +0,0 @@
-package main
-
-import (
-	"flag"
-	"log"
-	"strings"
-
-	"git.mchus.pro/mchus/quoteforge/internal/config"
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"gorm.io/driver/mysql"
-	"gorm.io/gorm"
-	"gorm.io/gorm/logger"
-)
-
-func main() {
-	configPath := flag.String("config", "config.yaml", "path to config file")
-	flag.Parse()
-
-	cfg, err := config.Load(*configPath)
-	if err != nil {
-		log.Fatalf("Failed to load config: %v", err)
-	}
-
-	db, err := gorm.Open(mysql.Open(cfg.Database.DSN()), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
-	if err != nil {
-		log.Fatalf("Failed to connect to database: %v", err)
-	}
-
-	log.Println("Connected to database")
-
-	// Ensure tables exist
-	if err := models.Migrate(db); err != nil {
-		log.Fatalf("Migration failed: %v", err)
-	}
-	if err := models.SeedCategories(db); err != nil {
-		log.Fatalf("Seeding categories failed: %v", err)
-	}
-
-	// Load categories for lookup
-	var categories []models.Category
-	db.Find(&categories)
-	categoryMap := make(map[string]uint)
-	for _, c := range categories {
-		categoryMap[c.Code] = c.ID
-	}
-	log.Printf("Loaded %d categories", len(categories))
-
-	// Get all lots
-	var lots []models.Lot
-	if err := db.Find(&lots).Error; err != nil {
-		log.Fatalf("Failed to load lots: %v", err)
-	}
-	log.Printf("Found %d lots to import", len(lots))
-
-	// Import each lot
-	var imported, skipped, updated int
-	for _, lot := range lots {
-		category, model := ParsePartNumber(lot.LotName)
-
-		var categoryID *uint
-		if id, ok := categoryMap[category]; ok && id > 0 {
-			categoryID = &id
-		} else {
-			// Try to find by prefix match
-			for code, id := range categoryMap {
-				if strings.HasPrefix(category, code) {
-					categoryID = &id
-					break
-				}
-			}
-		}
-
-		// Check if already exists
-		var existing models.LotMetadata
-		result := db.Where("lot_name = ?", lot.LotName).First(&existing)
-
-		if result.Error == gorm.ErrRecordNotFound {
-			// Check if there are prices in the last 90 days
-			var recentPriceCount int64
-			db.Model(&models.LotLog{}).
-				Where("lot = ? AND date >= DATE_SUB(NOW(), INTERVAL 90 DAY)", lot.LotName).
-				Count(&recentPriceCount)
-
-			// Default to 90 days, but use "all time" (0) if no recent prices
-			periodDays := 90
-			if recentPriceCount == 0 {
-				periodDays = 0
-			}
-
-			// Create new
-			metadata := models.LotMetadata{
-				LotName:         lot.LotName,
-				CategoryID:      categoryID,
-				Model:           model,
-				PricePeriodDays: periodDays,
-			}
-			if err := db.Create(&metadata).Error; err != nil {
-				log.Printf("Failed to create metadata for %s: %v", lot.LotName, err)
-				continue
-			}
-			imported++
-		} else if result.Error == nil {
-			// Update if needed
-			needsUpdate := false
-
-			if existing.Model == "" {
-				existing.Model = model
-				needsUpdate = true
-			}
-			if existing.CategoryID == nil {
-				existing.CategoryID = categoryID
-				needsUpdate = true
-			}
-
-			// Check if using default period (90 days) but no recent prices
-			if existing.PricePeriodDays == 90 {
-				var recentPriceCount int64
-				db.Model(&models.LotLog{}).
-					Where("lot = ? AND date >= DATE_SUB(NOW(), INTERVAL 90 DAY)", lot.LotName).
-					Count(&recentPriceCount)
-
-				if recentPriceCount == 0 {
-					existing.PricePeriodDays = 0
-					needsUpdate = true
-				}
-			}
-
-			if needsUpdate {
-				db.Save(&existing)
-				updated++
-			} else {
-				skipped++
-			}
-		}
-	}
-
-	log.Printf("Import complete: %d imported, %d updated, %d skipped", imported, updated, skipped)
-
-	// Show final counts
-	var metadataCount int64
-	db.Model(&models.LotMetadata{}).Count(&metadataCount)
-	log.Printf("Total metadata records: %d", metadataCount)
-}
-
-// ParsePartNumber extracts category and model from lot_name
-// Examples:
-// "CPU_AMD_9654" → category="CPU", model="AMD_9654"
-// "MB_INTEL_4.Sapphire_2S" → category="MB", model="INTEL_4.Sapphire_2S"
-func ParsePartNumber(lotName string) (category, model string) {
-	parts := strings.SplitN(lotName, "_", 2)
-	if len(parts) >= 1 {
-		category = parts[0]
-	}
-	if len(parts) >= 2 {
-		model = parts[1]
-	}
-	return
-}
@@ -7,7 +7,6 @@ import (
 	"time"

 	"git.mchus.pro/mchus/quoteforge/internal/appstate"
-	"git.mchus.pro/mchus/quoteforge/internal/config"
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/driver/mysql"
@@ -16,7 +15,6 @@ import (
 )

 func main() {
-	configPath := flag.String("config", "config.yaml", "path to config file")
 	defaultLocalDBPath, err := appstate.ResolveDBPath("")
 	if err != nil {
 		log.Fatalf("Failed to resolve default local SQLite path: %v", err)
@@ -28,22 +26,6 @@ func main() {
 	log.Println("QuoteForge Configuration Migration Tool")
 	log.Println("========================================")

-	// Load config for MariaDB connection
-	cfg, err := config.Load(*configPath)
-	if err != nil {
-		log.Fatalf("Failed to load config: %v", err)
-	}
-
-	// Connect to MariaDB
-	log.Printf("Connecting to MariaDB at %s:%d...", cfg.Database.Host, cfg.Database.Port)
-	mariaDB, err := gorm.Open(mysql.Open(cfg.Database.DSN()), &gorm.Config{
-		Logger: logger.Default.LogMode(logger.Silent),
-	})
-	if err != nil {
-		log.Fatalf("Failed to connect to MariaDB: %v", err)
-	}
-	log.Println("Connected to MariaDB")
-
 	// Initialize local SQLite
 	log.Printf("Opening local SQLite at %s...", *localDBPath)
 	local, err := localdb.New(*localDBPath)
@@ -51,6 +33,28 @@
 	log.Fatalf("Failed to initialize local database: %v", err)
 	}
 	log.Println("Local SQLite initialized")
+	if !local.HasSettings() {
+		log.Fatalf("SQLite connection settings are not configured. Run qfs setup first.")
+	}
+
+	settings, err := local.GetSettings()
+	if err != nil {
+		log.Fatalf("Failed to load SQLite connection settings: %v", err)
+	}
+	dsn, err := local.GetDSN()
+	if err != nil {
+		log.Fatalf("Failed to build DSN from SQLite settings: %v", err)
+	}
+
+	// Connect to MariaDB
+	log.Printf("Connecting to MariaDB at %s:%d...", settings.Host, settings.Port)
+	mariaDB, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		log.Fatalf("Failed to connect to MariaDB: %v", err)
+	}
+	log.Println("Connected to MariaDB")
+
 	// Count configurations in MariaDB
 	var serverCount int64
@@ -149,23 +153,7 @@
 	log.Printf(" Skipped: %d", skipped)
 	log.Printf(" Errors: %d", errors)

-	// Save connection settings to local SQLite if not exists
-	if !local.HasSettings() {
-		log.Println("\nSaving connection settings to local SQLite...")
-		if err := local.SaveSettings(
-			cfg.Database.Host,
-			cfg.Database.Port,
-			cfg.Database.Name,
-			cfg.Database.User,
-			cfg.Database.Password,
-		); err != nil {
-			log.Printf("Warning: Failed to save settings: %v", err)
-		} else {
-			log.Println("Connection settings saved")
-		}
-	}
-
-	fmt.Println("\nDone! You can now run the server with: go run ./cmd/server")
+	fmt.Println("\nDone! You can now run the server with: go run ./cmd/qfs")
 }

 func derefUint(v *uint) uint {
@@ -10,7 +10,8 @@ import (
 	"sort"
 	"strings"
 
-	"git.mchus.pro/mchus/quoteforge/internal/config"
+	"git.mchus.pro/mchus/quoteforge/internal/appstate"
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"github.com/google/uuid"
 	"gorm.io/driver/mysql"
@@ -38,17 +39,29 @@ type migrationAction struct {
 }
 
 func main() {
-	configPath := flag.String("config", "config.yaml", "path to config file")
+	defaultLocalDBPath, err := appstate.ResolveDBPath("")
+	if err != nil {
+		log.Fatalf("failed to resolve default local SQLite path: %v", err)
+	}
+	localDBPath := flag.String("localdb", defaultLocalDBPath, "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
 	apply := flag.Bool("apply", false, "apply migration (default is preview only)")
 	yes := flag.Bool("yes", false, "skip interactive confirmation (works only with -apply)")
 	flag.Parse()
 
-	cfg, err := config.Load(*configPath)
+	local, err := localdb.New(*localDBPath)
 	if err != nil {
-		log.Fatalf("failed to load config: %v", err)
+		log.Fatalf("failed to initialize local database: %v", err)
 	}
+	if !local.HasSettings() {
+		log.Fatalf("SQLite connection settings are not configured. Run qfs setup first.")
+	}
+	dsn, err := local.GetDSN()
+	if err != nil {
+		log.Fatalf("failed to build DSN from SQLite settings: %v", err)
+	}
+	dbUser := strings.TrimSpace(local.GetDBUser())
 
-	db, err := gorm.Open(mysql.Open(cfg.Database.DSN()), &gorm.Config{
+	db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
 		Logger: logger.Default.LogMode(logger.Silent),
 	})
 	if err != nil {
@@ -59,7 +72,7 @@ func main() {
 		log.Fatalf("precheck failed: %v", err)
 	}
 
-	actions, existingProjects, err := buildPlan(db, cfg.Database.User)
+	actions, existingProjects, err := buildPlan(db, dbUser)
 	if err != nil {
 		log.Fatalf("failed to build migration plan: %v", err)
 	}
@@ -150,7 +163,7 @@ func buildPlan(db *gorm.DB, fallbackOwner string) ([]migrationAction, map[string
 		}
 		for i := range projects {
 			p := projects[i]
-			existingProjects[projectKey(p.OwnerUsername, p.Name)] = &p
+			existingProjects[projectKey(p.OwnerUsername, derefString(p.Name))] = &p
 		}
 	}
 
@@ -240,12 +253,13 @@ func executePlan(db *gorm.DB, actions []migrationAction, existingProjects map[st
 
 	for _, action := range actions {
 		key := projectKey(action.OwnerUsername, action.TargetProjectName)
 		project := projectCache[key]
 		if project == nil {
 			project = &models.Project{
 				UUID:          uuid.NewString(),
 				OwnerUsername: action.OwnerUsername,
-				Name:          action.TargetProjectName,
+				Code:          action.TargetProjectName,
+				Name:          ptrString(action.TargetProjectName),
 				IsActive:      true,
 				IsSystem:      false,
 			}
@@ -255,7 +269,7 @@ func executePlan(db *gorm.DB, actions []migrationAction, existingProjects map[st
 			projectCache[key] = project
 		} else if !project.IsActive {
 			if err := tx.Model(&models.Project{}).Where("uuid = ?", project.UUID).Update("is_active", true).Error; err != nil {
-				return fmt.Errorf("reactivate project %s (%s): %w", project.Name, project.UUID, err)
+				return fmt.Errorf("reactivate project %s (%s): %w", derefString(project.Name), project.UUID, err)
 			}
 			project.IsActive = true
 		}
@@ -281,3 +295,14 @@ func setKeys(set map[string]struct{}) []string {
 func projectKey(owner, name string) string {
 	return owner + "||" + name
 }
+
+func derefString(value *string) string {
+	if value == nil {
+		return ""
+	}
+	return *value
+}
+
+func ptrString(value string) *string {
+	return &value
+}
66
cmd/qfs/config_migration_test.go
Normal file
@@ -0,0 +1,66 @@
+package main
+
+import (
+	"os"
+	"path/filepath"
+	"strings"
+	"testing"
+
+	"git.mchus.pro/mchus/quoteforge/internal/config"
+)
+
+func TestMigrateConfigFileToRuntimeShapeDropsDeprecatedSections(t *testing.T) {
+	t.Helper()
+	dir := t.TempDir()
+	path := filepath.Join(dir, "config.yaml")
+
+	legacy := `server:
+  host: "0.0.0.0"
+  port: 9191
+database:
+  host: "legacy-db"
+  port: 3306
+  name: "RFQ_LOG"
+  user: "old"
+  password: "REDACTED_TEST_PASSWORD"
+pricing:
+  default_method: "median"
+logging:
+  level: "debug"
+  format: "text"
+  output: "stdout"
+`
+	if err := os.WriteFile(path, []byte(legacy), 0644); err != nil {
+		t.Fatalf("write legacy config: %v", err)
+	}
+
+	cfg, err := config.Load(path)
+	if err != nil {
+		t.Fatalf("load legacy config: %v", err)
+	}
+	setConfigDefaults(cfg)
+	if err := migrateConfigFileToRuntimeShape(path, cfg); err != nil {
+		t.Fatalf("migrate config: %v", err)
+	}
+
+	got, err := os.ReadFile(path)
+	if err != nil {
+		t.Fatalf("read migrated config: %v", err)
+	}
+	text := string(got)
+	if strings.Contains(text, "database:") {
+		t.Fatalf("migrated config still contains deprecated database section:\n%s", text)
+	}
+	if strings.Contains(text, "pricing:") {
+		t.Fatalf("migrated config still contains deprecated pricing section:\n%s", text)
+	}
+	if !strings.Contains(text, "server:") || !strings.Contains(text, "logging:") {
+		t.Fatalf("migrated config missing required sections:\n%s", text)
+	}
+	if !strings.Contains(text, "port: 9191") {
+		t.Fatalf("migrated config did not preserve server port:\n%s", text)
+	}
+	if !strings.Contains(text, "level: debug") {
+		t.Fatalf("migrated config did not preserve logging level:\n%s", text)
+	}
+}
522
cmd/qfs/main.go
@@ -1,6 +1,7 @@
 package main
 
 import (
+	"bytes"
 	"context"
 	"errors"
 	"flag"
@@ -17,6 +18,7 @@ import (
 	"sort"
 	"strconv"
 	"strings"
+	syncpkg "sync"
 	"syscall"
 	"time"
 
@@ -31,11 +33,9 @@ import (
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"git.mchus.pro/mchus/quoteforge/internal/repository"
 	"git.mchus.pro/mchus/quoteforge/internal/services"
-	"git.mchus.pro/mchus/quoteforge/internal/services/alerts"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricelist"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricing"
 	"git.mchus.pro/mchus/quoteforge/internal/services/sync"
 	"github.com/gin-gonic/gin"
+	"gopkg.in/yaml.v3"
 	"gorm.io/driver/mysql"
 	"gorm.io/gorm"
 	"gorm.io/gorm/logger"
@@ -45,10 +45,12 @@ import (
 var Version = "dev"
 
 const backgroundSyncInterval = 5 * time.Minute
+const onDemandPullCooldown = 30 * time.Second
 
 func main() {
 	configPath := flag.String("config", "", "path to config file (default: user state dir or QFS_CONFIG_PATH)")
 	localDBPath := flag.String("localdb", "", "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
+	resetLocalDB := flag.Bool("reset-localdb", false, "reset local SQLite data on startup (keeps connection settings)")
 	migrate := flag.Bool("migrate", false, "run database migrations")
 	version := flag.Bool("version", false, "show version information")
 	flag.Parse()
@@ -63,18 +65,18 @@ func main() {
 	slog.Info("starting qfs", "version", Version, "executable", exePath)
 	appmeta.SetVersion(Version)
 
-	resolvedConfigPath, err := appstate.ResolveConfigPath(*configPath)
-	if err != nil {
-		slog.Error("failed to resolve config path", "error", err)
-		os.Exit(1)
-	}
-
 	resolvedLocalDBPath, err := appstate.ResolveDBPath(*localDBPath)
 	if err != nil {
 		slog.Error("failed to resolve local database path", "error", err)
 		os.Exit(1)
 	}
 
+	resolvedConfigPath, err := appstate.ResolveConfigPathNearDB(*configPath, resolvedLocalDBPath)
+	if err != nil {
+		slog.Error("failed to resolve config path", "error", err)
+		os.Exit(1)
+	}
+
 	// Migrate legacy project-local config path to the user state directory when using defaults.
 	if *configPath == "" && os.Getenv("QFS_CONFIG_PATH") == "" {
 		migratedFrom, migrateErr := appstate.MigrateLegacyFile(resolvedConfigPath, []string{"config.yaml"})
@@ -99,6 +101,13 @@ func main() {
 		}
 	}
 
+	if shouldResetLocalDB(*resetLocalDB) {
+		if err := localdb.ResetData(resolvedLocalDBPath); err != nil {
+			slog.Error("failed to reset local database", "error", err)
+			os.Exit(1)
+		}
+	}
+
 	// Initialize local SQLite database (always used)
 	local, err := localdb.New(resolvedLocalDBPath)
 	if err != nil {
@@ -114,6 +123,10 @@ func main() {
 	}
 
 	// Load config for server settings (optional)
+	if err := ensureDefaultConfigFile(resolvedConfigPath); err != nil {
+		slog.Error("failed to ensure default config file", "path", resolvedConfigPath, "error", err)
+		os.Exit(1)
+	}
 	cfg, err := config.Load(resolvedConfigPath)
 	if err != nil {
 		if errors.Is(err, fs.ErrNotExist) {
@@ -126,6 +139,10 @@ func main() {
 		}
 	}
 	setConfigDefaults(cfg)
+	if err := migrateConfigFileToRuntimeShape(resolvedConfigPath, cfg); err != nil {
+		slog.Error("failed to migrate config file format", "path", resolvedConfigPath, "error", err)
+		os.Exit(1)
+	}
 	slog.Info("resolved runtime files", "config_path", resolvedConfigPath, "localdb_path", resolvedLocalDBPath)
 
 	setupLogger(cfg.Logging)
@@ -207,6 +224,15 @@ func main() {
 		os.Exit(1)
 	}
 
+	if readiness, readinessErr := syncService.GetReadiness(); readinessErr != nil {
+		slog.Warn("sync readiness check failed on startup", "error", readinessErr)
+	} else if readiness != nil && readiness.Blocked {
+		slog.Warn("sync readiness blocked on startup",
+			"reason_code", readiness.ReasonCode,
+			"reason_text", readiness.ReasonText,
+		)
+	}
+
 	// Start background sync worker (will auto-skip when offline)
 	workerCtx, workerCancel := context.WithCancel(context.Background())
 	defer workerCancel()
@@ -214,6 +240,10 @@ func main() {
 	syncWorker := sync.NewWorker(syncService, connMgr, backgroundSyncInterval)
 	go syncWorker.Start(workerCtx)
 
+	backupCtx, backupCancel := context.WithCancel(context.Background())
+	defer backupCancel()
+	go startBackupScheduler(backupCtx, cfg, resolvedLocalDBPath, resolvedConfigPath)
+
 	srv := &http.Server{
 		Addr:    cfg.Address(),
 		Handler: router,
@@ -256,6 +286,7 @@ func main() {
 	// Stop background sync worker first
 	syncWorker.Stop()
 	workerCancel()
+	backupCancel()
 
 	// Then shutdown HTTP server
 	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -272,6 +303,31 @@ func main() {
 	}
 }
 
+func shouldResetLocalDB(flagValue bool) bool {
+	if flagValue {
+		return true
+	}
+	value := strings.TrimSpace(os.Getenv("QFS_RESET_LOCAL_DB"))
+	if value == "" {
+		return false
+	}
+	switch strings.ToLower(value) {
+	case "1", "true", "yes", "y":
+		return true
+	default:
+		return false
+	}
+}
+
+func derefString(value *string) string {
+	if value == nil {
+		return ""
+	}
+	return *value
+}
+
+
 func setConfigDefaults(cfg *config.Config) {
 	if cfg.Server.Host == "" {
 		cfg.Server.Host = "127.0.0.1"
@@ -306,6 +362,173 @@ func setConfigDefaults(cfg *config.Config) {
 	if cfg.Pricing.MinQuotesForMedian == 0 {
 		cfg.Pricing.MinQuotesForMedian = 3
 	}
+	if cfg.Backup.Time == "" {
+		cfg.Backup.Time = "00:00"
+	}
+}
+
+func ensureDefaultConfigFile(configPath string) error {
+	if strings.TrimSpace(configPath) == "" {
+		return fmt.Errorf("config path is empty")
+	}
+	if _, err := os.Stat(configPath); err == nil {
+		return nil
+	} else if !errors.Is(err, os.ErrNotExist) {
+		return err
+	}
+
+	if err := os.MkdirAll(filepath.Dir(configPath), 0755); err != nil {
+		return err
+	}
+
+	const defaultConfigYAML = `server:
+  host: "127.0.0.1"
+  port: 8080
+  mode: "release"
+  read_timeout: 30s
+  write_timeout: 30s
+
+backup:
+  time: "00:00"
+
+logging:
+  level: "info"
+  format: "json"
+  output: "stdout"
+`
+	if err := os.WriteFile(configPath, []byte(defaultConfigYAML), 0644); err != nil {
+		return err
+	}
+	slog.Info("created default config file", "path", configPath)
+	return nil
+}
+
+type runtimeServerConfig struct {
+	Host         string        `yaml:"host"`
+	Port         int           `yaml:"port"`
+	Mode         string        `yaml:"mode"`
+	ReadTimeout  time.Duration `yaml:"read_timeout"`
+	WriteTimeout time.Duration `yaml:"write_timeout"`
+}
+
+type runtimeLoggingConfig struct {
+	Level  string `yaml:"level"`
+	Format string `yaml:"format"`
+	Output string `yaml:"output"`
+}
+
+type runtimeBackupConfig struct {
+	Time string `yaml:"time"`
+}
+
+type runtimeConfigFile struct {
+	Server  runtimeServerConfig  `yaml:"server"`
+	Logging runtimeLoggingConfig `yaml:"logging"`
+	Backup  runtimeBackupConfig  `yaml:"backup"`
+}
+
+// migrateConfigFileToRuntimeShape rewrites config.yaml in a minimal runtime format.
+// Deprecated sections from legacy configs are intentionally dropped.
+func migrateConfigFileToRuntimeShape(configPath string, cfg *config.Config) error {
+	if cfg == nil {
+		return fmt.Errorf("config is nil")
+	}
+
+	runtimeCfg := runtimeConfigFile{
+		Server: runtimeServerConfig{
+			Host:         cfg.Server.Host,
+			Port:         cfg.Server.Port,
+			Mode:         cfg.Server.Mode,
+			ReadTimeout:  cfg.Server.ReadTimeout,
+			WriteTimeout: cfg.Server.WriteTimeout,
+		},
+		Logging: runtimeLoggingConfig{
+			Level:  cfg.Logging.Level,
+			Format: cfg.Logging.Format,
+			Output: cfg.Logging.Output,
+		},
+		Backup: runtimeBackupConfig{
+			Time: cfg.Backup.Time,
+		},
+	}
+
+	rendered, err := yaml.Marshal(&runtimeCfg)
+	if err != nil {
+		return fmt.Errorf("marshal runtime config: %w", err)
+	}
+
+	current, err := os.ReadFile(configPath)
+	if err == nil && bytes.Equal(bytes.TrimSpace(current), bytes.TrimSpace(rendered)) {
+		return nil
+	}
+	if err := os.WriteFile(configPath, rendered, 0644); err != nil {
+		return fmt.Errorf("write runtime config: %w", err)
+	}
+	slog.Info("migrated config.yaml to runtime format", "path", configPath)
+	return nil
+}
+
+func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string) {
+	if cfg == nil {
+		return
+	}
+
+	hour, minute, err := parseBackupTime(cfg.Backup.Time)
+	if err != nil {
+		slog.Warn("invalid backup time; using 00:00", "value", cfg.Backup.Time, "error", err)
+		hour = 0
+		minute = 0
+	}
+
+	if created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath); backupErr != nil {
+		slog.Error("local backup failed", "error", backupErr)
+	} else if len(created) > 0 {
+		for _, path := range created {
+			slog.Info("local backup completed", "archive", path)
+		}
+	}
+
+	for {
+		next := nextBackupTime(time.Now(), hour, minute)
+		timer := time.NewTimer(time.Until(next))
+
+		select {
+		case <-ctx.Done():
+			timer.Stop()
+			return
+		case <-timer.C:
+			start := time.Now()
+			created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath)
+			duration := time.Since(start)
+			if backupErr != nil {
+				slog.Error("local backup failed", "error", backupErr, "duration", duration)
+			} else {
+				for _, path := range created {
+					slog.Info("local backup completed", "archive", path, "duration", duration)
+				}
+			}
+		}
+	}
+}
+
+func parseBackupTime(value string) (int, int, error) {
+	if strings.TrimSpace(value) == "" {
+		return 0, 0, fmt.Errorf("empty backup time")
+	}
+	parsed, err := time.Parse("15:04", value)
+	if err != nil {
+		return 0, 0, err
+	}
+	return parsed.Hour(), parsed.Minute(), nil
+}
+
+func nextBackupTime(now time.Time, hour, minute int) time.Time {
+	location := now.Location()
+	target := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, location)
+	if !now.Before(target) {
+		target = target.Add(24 * time.Hour)
+	}
+	return target
 }
 
 // runSetupMode starts a minimal server that only serves the setup page
@@ -446,8 +669,6 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	// Repositories
 	var componentRepo *repository.ComponentRepository
 	var categoryRepo *repository.CategoryRepository
-	var priceRepo *repository.PriceRepository
-	var alertRepo *repository.AlertRepository
 	var statsRepo *repository.StatsRepository
 	var pricelistRepo *repository.PricelistRepository
 
@@ -455,8 +676,6 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	if mariaDB != nil {
 		componentRepo = repository.NewComponentRepository(mariaDB)
 		categoryRepo = repository.NewCategoryRepository(mariaDB)
-		priceRepo = repository.NewPriceRepository(mariaDB)
-		alertRepo = repository.NewAlertRepository(mariaDB)
 		statsRepo = repository.NewStatsRepository(mariaDB)
 		pricelistRepo = repository.NewPricelistRepository(mariaDB)
 	} else {
@@ -465,12 +684,9 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	}
 
 	// Services
-	var pricingService *pricing.Service
 	var componentService *services.ComponentService
 	var quoteService *services.QuoteService
 	var exportService *services.ExportService
-	var alertService *alerts.Service
-	var pricelistService *pricelist.Service
 	var syncService *sync.Service
 	var projectService *services.ProjectService
 
@@ -478,20 +694,14 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	syncService = sync.NewService(connMgr, local)
 
 	if mariaDB != nil {
-		pricingService = pricing.NewService(componentRepo, priceRepo, cfg.Pricing)
 		componentService = services.NewComponentService(componentRepo, categoryRepo, statsRepo)
-		quoteService = services.NewQuoteService(componentRepo, statsRepo, pricelistRepo, local, pricingService)
+		quoteService = services.NewQuoteService(componentRepo, statsRepo, pricelistRepo, local, nil)
 		exportService = services.NewExportService(cfg.Export, categoryRepo)
-		alertService = alerts.NewService(alertRepo, componentRepo, priceRepo, statsRepo, cfg.Alerts, cfg.Pricing)
-		pricelistService = pricelist.NewService(mariaDB, pricelistRepo, componentRepo, pricingService)
 	} else {
-		// In offline mode, we still need to create services that don't require DB
-		pricingService = pricing.NewService(nil, nil, cfg.Pricing)
+		// In offline mode, we still need to create services that don't require DB.
 		componentService = services.NewComponentService(nil, nil, nil)
-		quoteService = services.NewQuoteService(nil, nil, nil, local, pricingService)
+		quoteService = services.NewQuoteService(nil, nil, nil, local, nil)
 		exportService = services.NewExportService(cfg.Export, nil)
-		alertService = alerts.NewService(nil, nil, nil, nil, cfg.Alerts, cfg.Pricing)
-		pricelistService = pricelist.NewService(nil, nil, nil, nil)
 	}
 
 	// isOnline function for local-first architecture
@@ -523,20 +733,75 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		}
 	}
 
-	syncProjectsFromServer := func() {
-		if !connMgr.IsOnline() {
+	type pullState struct {
+		mu          syncpkg.Mutex
+		running     bool
+		lastStarted time.Time
+	}
+	triggerPull := func(label string, state *pullState, pullFn func() error) {
+		state.mu.Lock()
+		if state.running {
+			state.mu.Unlock()
 			return
 		}
-		if _, err := syncService.ImportProjectsToLocal(); err != nil && !errors.Is(err, sync.ErrOffline) {
-			slog.Warn("failed to sync projects from server", "error", err)
+		if !state.lastStarted.IsZero() && time.Since(state.lastStarted) < onDemandPullCooldown {
+			state.mu.Unlock()
+			return
 		}
+		state.running = true
+		state.lastStarted = time.Now()
+		state.mu.Unlock()
+
+		go func() {
+			defer func() {
+				state.mu.Lock()
+				state.running = false
+				state.mu.Unlock()
+			}()
+			if err := pullFn(); err != nil {
+				slog.Warn("on-demand pull failed", "scope", label, "error", err)
+			}
+		}()
 	}
 
-	syncConfigurationsFromServer := func() {
+	var projectsPullState pullState
+	var configsPullState pullState
+
+	syncProjectsFromServer := func() error {
 		if !connMgr.IsOnline() {
-			return
+			return nil
 		}
-		_, _ = configService.ImportFromServer()
+		if readiness, err := syncService.EnsureReadinessForSync(); err != nil {
+			slog.Warn("skipping project pull: sync readiness blocked",
+				"error", err,
+				"reason_code", readiness.ReasonCode,
+				"reason_text", readiness.ReasonText,
+			)
+			return nil
+		}
+		if _, err := syncService.ImportProjectsToLocal(); err != nil && !errors.Is(err, sync.ErrOffline) {
+			return err
+		}
+		return nil
+	}
+
+	syncConfigurationsFromServer := func() error {
+		if !connMgr.IsOnline() {
+			return nil
+		}
+		if readiness, err := syncService.EnsureReadinessForSync(); err != nil {
+			slog.Warn("skipping configuration pull: sync readiness blocked",
+				"error", err,
+				"reason_code", readiness.ReasonCode,
+				"reason_text", readiness.ReasonText,
+			)
+			return nil
+		}
+		_, err := configService.ImportFromServer()
+		if err != nil && !errors.Is(err, sync.ErrOffline) {
+			return err
+		}
+		return nil
 	}
 
 	// Use filepath.Join for cross-platform path compatibility
@@ -545,9 +810,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	// Handlers
 	componentHandler := handlers.NewComponentHandler(componentService, local)
 	quoteHandler := handlers.NewQuoteHandler(quoteService)
-	exportHandler := handlers.NewExportHandler(exportService, configService, componentService)
-	pricingHandler := handlers.NewPricingHandler(mariaDB, pricingService, alertService, componentRepo, priceRepo, statsRepo)
-	pricelistHandler := handlers.NewPricelistHandler(pricelistService, local)
+	exportHandler := handlers.NewExportHandler(exportService, configService, componentService, projectService)
+	pricelistHandler := handlers.NewPricelistHandler(local)
 	syncHandler, err := handlers.NewSyncHandler(local, syncService, connMgr, templatesPath, backgroundSyncInterval)
 	if err != nil {
 		return nil, nil, fmt.Errorf("creating sync handler: %w", err)
@@ -603,28 +867,39 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	// DB status endpoint
 	router.GET("/api/db-status", func(c *gin.Context) {
 		var lotCount, lotLogCount, metadataCount int64
-		var dbOK bool = false
+		var dbOK bool
 		var dbError string
+		includeCounts := c.Query("include_counts") == "true"
+
-		// Check if connection exists (fast check, no reconnect attempt)
+		// Fast status path: do not execute heavy COUNT queries unless requested.
 		status := connMgr.GetStatus()
-		if status.IsConnected {
-			// Already connected, safe to use
-			if db, err := connMgr.GetDB(); err == nil && db != nil {
-				dbOK = true
-				db.Table("lot").Count(&lotCount)
-				db.Table("lot_log").Count(&lotLogCount)
-				db.Table("qt_lot_metadata").Count(&metadataCount)
-			}
-		} else {
-			// Not connected - don't try to reconnect on status check
-			// This prevents 3s timeout on every request
+		dbOK = status.IsConnected
+		if !status.IsConnected {
 			dbError = "Database not connected (offline mode)"
 			if status.LastError != "" {
 				dbError = status.LastError
 			}
 		}
 
+		// Optional diagnostics mode with server table counts.
+		if includeCounts && status.IsConnected {
+			if db, err := connMgr.GetDB(); err == nil && db != nil {
+				_ = db.Table("lot").Count(&lotCount)
+				_ = db.Table("lot_log").Count(&lotLogCount)
+				_ = db.Table("qt_lot_metadata").Count(&metadataCount)
+			} else if err != nil {
+				dbOK = false
+				dbError = err.Error()
+			} else {
+				dbOK = false
+				dbError = "Database not connected (offline mode)"
+			}
+		} else {
+			lotCount = 0
+			lotLogCount = 0
+			metadataCount = 0
+		}
+
 		c.JSON(http.StatusOK, gin.H{
 			"connected": dbOK,
 			"error":     dbError,
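The hunk above splits `/api/db-status` into a cheap default path and an opt-in diagnostics path. The handler itself depends on gin and the connection manager, so the following is only a hypothetical distillation of that branching into a plain function (`buildStatus`, `dbStatus`, and the `count` callback are illustrative names, not part of the repository):

```go
package main

import "fmt"

// dbStatus mirrors the response shape of the /api/db-status handler.
type dbStatus struct {
	Connected bool
	Error     string
	LotCount  int64
}

// buildStatus sketches the handler's logic: the connection flag is always
// cheap to report, while table counts run only when the caller opts in
// via include_counts=true, so the default status check stays fast.
func buildStatus(connected bool, lastError string, includeCounts bool, count func(table string) int64) dbStatus {
	s := dbStatus{Connected: connected}
	if !connected {
		s.Error = "Database not connected (offline mode)"
		if lastError != "" {
			s.Error = lastError
		}
	}
	if includeCounts && connected {
		s.LotCount = count("lot")
	}
	return s
}

func main() {
	fast := buildStatus(true, "", false, func(string) int64 { return 42 })
	full := buildStatus(true, "", true, func(string) int64 { return 42 })
	fmt.Println(fast.LotCount, full.LotCount) // prints: 0 42
}
```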
@@ -655,12 +930,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	router.GET("/configurator", webHandler.Configurator)
 	router.GET("/projects", webHandler.Projects)
 	router.GET("/projects/:uuid", webHandler.ProjectDetail)
-	router.GET("/pricelists", func(c *gin.Context) {
-		// Redirect to admin/pricing with pricelists tab
-		c.Redirect(http.StatusFound, "/admin/pricing?tab=pricelists")
-	})
+	router.GET("/pricelists", webHandler.Pricelists)
 	router.GET("/pricelists/:id", webHandler.PricelistDetail)
-	router.GET("/admin/pricing", webHandler.AdminPricing)
 
 	// htmx partials
 	partials := router.Group("/partials")
@@ -704,21 +975,17 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 	pricelists := api.Group("/pricelists")
 	{
 		pricelists.GET("", pricelistHandler.List)
-		pricelists.GET("/can-write", pricelistHandler.CanWrite)
 		pricelists.GET("/latest", pricelistHandler.GetLatest)
 		pricelists.GET("/:id", pricelistHandler.Get)
 		pricelists.GET("/:id/items", pricelistHandler.GetItems)
-		pricelists.POST("", pricelistHandler.Create)
-		pricelists.POST("/create-with-progress", pricelistHandler.CreateWithProgress)
-		pricelists.PATCH("/:id/active", pricelistHandler.SetActive)
-		pricelists.DELETE("/:id", pricelistHandler.Delete)
+		pricelists.GET("/:id/lots", pricelistHandler.GetLotNames)
 	}
 
 	// Configurations (public - RBAC disabled)
 	configs := api.Group("/configs")
 	{
 		configs.GET("", func(c *gin.Context) {
-			syncConfigurationsFromServer()
+			triggerPull("configs", &configsPullState, syncConfigurationsFromServer)
 
 			page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
 			perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
@@ -774,6 +1041,23 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			c.JSON(http.StatusCreated, config)
 		})
 
+		configs.POST("/preview-article", func(c *gin.Context) {
+			var req services.ArticlePreviewRequest
+			if err := c.ShouldBindJSON(&req); err != nil {
+				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				return
+			}
+			result, err := configService.BuildArticlePreview(&req)
+			if err != nil {
+				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+				return
+			}
+			c.JSON(http.StatusOK, gin.H{
+				"article":  result.Article,
+				"warnings": result.Warnings,
+			})
+		})
+
 		configs.GET("/:uuid", func(c *gin.Context) {
 			uuid := c.Param("uuid")
 			config, err := configService.GetByUUIDNoAuth(uuid)
@@ -1000,19 +1284,22 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				"current_version": currentVersion,
 			})
 		})
 
+		configs.GET("/:uuid/export", exportHandler.ExportConfigCSV)
 	}
 
 	projects := api.Group("/projects")
 	{
 		projects.GET("", func(c *gin.Context) {
-			syncProjectsFromServer()
-			syncConfigurationsFromServer()
+			triggerPull("projects", &projectsPullState, syncProjectsFromServer)
+			triggerPull("configs", &configsPullState, syncConfigurationsFromServer)
 
 			status := c.DefaultQuery("status", "active")
 			search := strings.ToLower(strings.TrimSpace(c.Query("search")))
 			author := strings.ToLower(strings.TrimSpace(c.Query("author")))
 			page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
-			perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "10"))
+			// Return all projects by default (set high limit for configs to reference)
+			perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "1000"))
 			sortField := strings.ToLower(strings.TrimSpace(c.DefaultQuery("sort", "created_at")))
 			sortDir := strings.ToLower(strings.TrimSpace(c.DefaultQuery("dir", "desc")))
 			if status != "active" && status != "archived" && status != "all" {
@@ -1052,7 +1339,10 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				if status == "archived" && p.IsActive {
 					continue
 				}
-				if search != "" && !strings.Contains(strings.ToLower(p.Name), search) {
+				if search != "" &&
+					!strings.Contains(strings.ToLower(derefString(p.Name)), search) &&
+					!strings.Contains(strings.ToLower(p.Code), search) &&
+					!strings.Contains(strings.ToLower(p.Variant), search) {
 					continue
 				}
 				if author != "" && !strings.Contains(strings.ToLower(strings.TrimSpace(p.OwnerUsername)), author) {
@@ -1065,8 +1355,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				left := filtered[i]
 				right := filtered[j]
 				if sortField == "name" {
-					leftName := strings.ToLower(strings.TrimSpace(left.Name))
-					rightName := strings.ToLower(strings.TrimSpace(right.Name))
+					leftName := strings.ToLower(strings.TrimSpace(derefString(left.Name)))
+					rightName := strings.ToLower(strings.TrimSpace(derefString(right.Name)))
 					if leftName == rightName {
 						if sortDir == "asc" {
 							return left.CreatedAt.Before(right.CreatedAt)
@@ -1079,8 +1369,8 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 					return leftName > rightName
 				}
 				if left.CreatedAt.Equal(right.CreatedAt) {
-					leftName := strings.ToLower(strings.TrimSpace(left.Name))
-					rightName := strings.ToLower(strings.TrimSpace(right.Name))
+					leftName := strings.ToLower(strings.TrimSpace(derefString(left.Name)))
+					rightName := strings.ToLower(strings.TrimSpace(derefString(right.Name)))
 					if sortDir == "asc" {
 						return leftName < rightName
 					}
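The sort and filter hunks above wrap every `p.Name` access in `derefString`, which suggests the project name became a nullable `*string`; the helper itself is not shown in this compare view. A nil-safe dereference of that shape would presumably read as follows (the body is an assumption, only the name and call sites come from the diff):

```go
package main

import "fmt"

// derefString (body assumed, not shown in the diff) returns the pointed-to
// string, or "" for a nil pointer, so nullable names sort and filter safely.
func derefString(s *string) string {
	if s == nil {
		return ""
	}
	return *s
}

func main() {
	name := "Alpha"
	fmt.Printf("%q %q\n", derefString(&name), derefString(nil)) // prints: "Alpha" ""
}
```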
@@ -1115,29 +1405,40 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				paged = filtered[start:end]
 			}
 
+			// Build per-project active config stats in one pass (avoid N+1 scans).
+			projectConfigCount := map[string]int{}
+			projectConfigTotal := map[string]float64{}
+			if localConfigs, cfgErr := local.GetConfigurations(); cfgErr == nil {
+				for i := range localConfigs {
+					cfg := localConfigs[i]
+					if !cfg.IsActive || cfg.ProjectUUID == nil || *cfg.ProjectUUID == "" {
+						continue
+					}
+					projectUUID := *cfg.ProjectUUID
+					projectConfigCount[projectUUID]++
+					if cfg.TotalPrice != nil {
+						projectConfigTotal[projectUUID] += *cfg.TotalPrice
+					}
+				}
+			}
+
 			projectRows := make([]gin.H, 0, len(paged))
 			for i := range paged {
 				p := paged[i]
-				configs, err := projectService.ListConfigurations(p.UUID, dbUsername, "active")
-				if err != nil {
-					configs = &services.ProjectConfigurationsResult{
-						ProjectUUID: p.UUID,
-						Configs:     []models.Configuration{},
-						Total:       0,
-					}
-				}
 				projectRows = append(projectRows, gin.H{
 					"id":             p.ID,
 					"uuid":           p.UUID,
 					"owner_username": p.OwnerUsername,
+					"code":           p.Code,
+					"variant":        p.Variant,
 					"name":           p.Name,
 					"tracker_url":    p.TrackerURL,
 					"is_active":      p.IsActive,
 					"is_system":      p.IsSystem,
 					"created_at":     p.CreatedAt,
 					"updated_at":     p.UpdatedAt,
-					"config_count":   len(configs.Configs),
-					"total":          configs.Total,
+					"config_count":   projectConfigCount[p.UUID],
+					"total":          projectConfigTotal[p.UUID],
 				})
 			}
 
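The hunk above trades one `ListConfigurations` call per project (the N+1 pattern) for a single scan that aggregates counts and totals into two maps. The aggregation step can be isolated as follows; `configRow` and `aggregate` are illustrative stand-ins for the repository's configuration model, but the skip conditions and map updates mirror the diff:

```go
package main

import "fmt"

// configRow stands in for the local configuration record used above.
type configRow struct {
	ProjectUUID *string
	IsActive    bool
	TotalPrice  *float64
}

// aggregate performs the single-pass stats build: one scan over all
// configurations yields per-project counts and price totals, instead of
// one query per project.
func aggregate(rows []configRow) (map[string]int, map[string]float64) {
	count := map[string]int{}
	total := map[string]float64{}
	for _, r := range rows {
		if !r.IsActive || r.ProjectUUID == nil || *r.ProjectUUID == "" {
			continue
		}
		count[*r.ProjectUUID]++
		if r.TotalPrice != nil {
			total[*r.ProjectUUID] += *r.TotalPrice
		}
	}
	return count, total
}

func main() {
	p := "p1"
	price := 10.5
	rows := []configRow{
		{ProjectUUID: &p, IsActive: true, TotalPrice: &price},
		{ProjectUUID: &p, IsActive: true},                     // no price: counted, not totaled
		{ProjectUUID: &p, IsActive: false, TotalPrice: &price}, // inactive: skipped
	}
	count, total := aggregate(rows)
	fmt.Println(count["p1"], total["p1"]) // prints: 2 10.5
}
```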
@@ -1155,19 +1456,55 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 			})
 		})
 
+		// GET /api/projects/all - Returns all projects without pagination for UI dropdowns
+		projects.GET("/all", func(c *gin.Context) {
+			allProjects, err := projectService.ListByUser(dbUsername, true)
+			if err != nil {
+				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				return
+			}
+
+			// Return simplified list of all projects (UUID + Name only)
+			type ProjectSimple struct {
+				UUID     string `json:"uuid"`
+				Code     string `json:"code"`
+				Variant  string `json:"variant"`
+				Name     string `json:"name"`
+				IsActive bool   `json:"is_active"`
+			}
+
+			simplified := make([]ProjectSimple, 0, len(allProjects))
+			for _, p := range allProjects {
+				simplified = append(simplified, ProjectSimple{
+					UUID:     p.UUID,
+					Code:     p.Code,
+					Variant:  p.Variant,
+					Name:     derefString(p.Name),
+					IsActive: p.IsActive,
+				})
+			}
+
+			c.JSON(http.StatusOK, simplified)
+		})
+
 		projects.POST("", func(c *gin.Context) {
 			var req services.CreateProjectRequest
 			if err := c.ShouldBindJSON(&req); err != nil {
 				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
 				return
 			}
-			if strings.TrimSpace(req.Name) == "" {
-				c.JSON(http.StatusBadRequest, gin.H{"error": "project name is required"})
+			if strings.TrimSpace(req.Code) == "" {
+				c.JSON(http.StatusBadRequest, gin.H{"error": "project code is required"})
 				return
 			}
 			project, err := projectService.Create(dbUsername, &req)
 			if err != nil {
-				c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				switch {
+				case errors.Is(err, services.ErrProjectCodeExists):
+					c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
+				default:
+					c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+				}
 				return
 			}
 			c.JSON(http.StatusCreated, project)
@@ -1195,13 +1532,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 				c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
 				return
 			}
-			if strings.TrimSpace(req.Name) == "" {
-				c.JSON(http.StatusBadRequest, gin.H{"error": "project name is required"})
-				return
-			}
 			project, err := projectService.Update(c.Param("uuid"), dbUsername, &req)
 			if err != nil {
 				switch {
+				case errors.Is(err, services.ErrProjectCodeExists):
+					c.JSON(http.StatusConflict, gin.H{"error": err.Error()})
 				case errors.Is(err, services.ErrProjectNotFound):
 					c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
 				case errors.Is(err, services.ErrProjectForbidden):
@@ -1245,7 +1580,7 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		})
 
 		projects.GET("/:uuid/configs", func(c *gin.Context) {
-			syncConfigurationsFromServer()
+			triggerPull("configs", &configsPullState, syncConfigurationsFromServer)
 
 			status := c.DefaultQuery("status", "active")
 			if status != "active" && status != "archived" && status != "all" {
@@ -1305,26 +1640,11 @@ func setupRouter(cfg *config.Config, local *localdb.LocalDB, connMgr *db.Connect
 		})
 	}
 
-	// Pricing admin (public - RBAC disabled)
-	pricingAdmin := api.Group("/admin/pricing")
-	{
-		pricingAdmin.GET("/stats", pricingHandler.GetStats)
-		pricingAdmin.GET("/components", pricingHandler.ListComponents)
-		pricingAdmin.GET("/components/:lot_name", pricingHandler.GetComponentPricing)
-		pricingAdmin.POST("/update", pricingHandler.UpdatePrice)
-		pricingAdmin.POST("/preview", pricingHandler.PreviewPrice)
-		pricingAdmin.POST("/recalculate-all", pricingHandler.RecalculateAll)
-
-		pricingAdmin.GET("/alerts", pricingHandler.ListAlerts)
-		pricingAdmin.POST("/alerts/:id/acknowledge", pricingHandler.AcknowledgeAlert)
-		pricingAdmin.POST("/alerts/:id/resolve", pricingHandler.ResolveAlert)
-		pricingAdmin.POST("/alerts/:id/ignore", pricingHandler.IgnoreAlert)
-	}
-
 	// Sync API (for offline mode)
 	syncAPI := api.Group("/sync")
 	{
 		syncAPI.GET("/status", syncHandler.GetStatus)
+		syncAPI.GET("/readiness", syncHandler.GetReadiness)
 		syncAPI.GET("/info", syncHandler.GetInfo)
 		syncAPI.GET("/users-status", syncHandler.GetUsersStatus)
 		syncAPI.POST("/components", syncHandler.SyncComponents)
@@ -149,7 +149,7 @@ func TestProjectArchiveHidesConfigsAndCloneIntoProject(t *testing.T) {
 		t.Fatalf("setup router: %v", err)
 	}
 
-	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"P1"}`)))
+	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"P1","code":"P1"}`)))
 	createProjectReq.Header.Set("Content-Type", "application/json")
 	createProjectRec := httptest.NewRecorder()
 	router.ServeHTTP(createProjectRec, createProjectReq)
@@ -243,7 +243,7 @@ func TestConfigMoveToProjectEndpoint(t *testing.T) {
 		t.Fatalf("setup router: %v", err)
 	}
 
-	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"Move Project"}`)))
+	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"Move Project","code":"MOVE"}`)))
 	createProjectReq.Header.Set("Content-Type", "application/json")
 	createProjectRec := httptest.NewRecorder()
 	router.ServeHTTP(createProjectRec, createProjectReq)
@@ -37,6 +37,9 @@ export:
   max_file_age: "1h"
   company_name: "Your Company Name"
 
+backup:
+  time: "00:00"
+
 alerts:
   enabled: true
   check_interval: "1h"
crontab (deleted file, 15 lines)
@@ -1,15 +0,0 @@
-# Cron jobs for QuoteForge
-# Run alerts check every hour
-0 * * * * /app/quoteforge-cron -job=alerts
-
-# Run price updates daily at 2 AM
-0 2 * * * /app/quoteforge-cron -job=update-prices
-
-# Reset weekly counters every Sunday at 1 AM
-0 1 * * 0 /app/quoteforge-cron -job=reset-counters
-
-# Update popularity scores daily at 3 AM
-0 3 * * * /app/quoteforge-cron -job=update-popularity
-
-# Log rotation (optional)
-# 0 0 * * * /usr/bin/logrotate /etc/logrotate.conf
BIN  dist/qfs-darwin-amd64 (vendored, executable file; binary not shown)
BIN  dist/qfs-darwin-arm64 (vendored, executable file; binary not shown)
BIN  dist/qfs-linux-amd64 (vendored, executable file; binary not shown)
BIN  dist/qfs-windows-amd64.exe (vendored, executable file; binary not shown)
internal/appstate/backup.go (new file, 273 lines)
@@ -0,0 +1,273 @@
package appstate

import (
	"archive/zip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

type backupPeriod struct {
	name      string
	retention int
	key       func(time.Time) string
	date      func(time.Time) string
}

var backupPeriods = []backupPeriod{
	{
		name:      "daily",
		retention: 7,
		key: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "weekly",
		retention: 4,
		key: func(t time.Time) string {
			y, w := t.ISOWeek()
			return fmt.Sprintf("%04d-W%02d", y, w)
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "monthly",
		retention: 12,
		key: func(t time.Time) string {
			return t.Format("2006-01")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "yearly",
		retention: 10,
		key: func(t time.Time) string {
			return t.Format("2006")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
}

const (
	envBackupDisable = "QFS_BACKUP_DISABLE"
	envBackupDir     = "QFS_BACKUP_DIR"
)

var backupNow = time.Now

// EnsureRotatingLocalBackup creates or refreshes daily/weekly/monthly/yearly backups
// for the local database and config. It keeps a limited number per period.
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
	if isBackupDisabled() {
		return nil, nil
	}
	if dbPath == "" {
		return nil, nil
	}

	if _, err := os.Stat(dbPath); err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("stat db: %w", err)
	}

	root := resolveBackupRoot(dbPath)
	now := backupNow()

	created := make([]string, 0)
	for _, period := range backupPeriods {
		newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
		if err != nil {
			return created, err
		}
		if len(newFiles) > 0 {
			created = append(created, newFiles...)
		}
	}

	return created, nil
}

func resolveBackupRoot(dbPath string) string {
	if fromEnv := strings.TrimSpace(os.Getenv(envBackupDir)); fromEnv != "" {
		return filepath.Clean(fromEnv)
	}
	return filepath.Join(filepath.Dir(dbPath), "backups")
}

func isBackupDisabled() bool {
	val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
	return val == "1" || val == "true" || val == "yes"
}

func ensurePeriodBackup(root string, period backupPeriod, now time.Time, dbPath, configPath string) ([]string, error) {
	key := period.key(now)
	periodDir := filepath.Join(root, period.name)
	if err := os.MkdirAll(periodDir, 0755); err != nil {
		return nil, fmt.Errorf("create %s backup dir: %w", period.name, err)
	}

	if hasBackupForKey(periodDir, key) {
		return nil, nil
	}

	archiveName := fmt.Sprintf("qfs-backp-%s.zip", period.date(now))
	archivePath := filepath.Join(periodDir, archiveName)

	if err := createBackupArchive(archivePath, dbPath, configPath); err != nil {
		return nil, fmt.Errorf("create %s backup archive: %w", period.name, err)
	}

	if err := writePeriodMarker(periodDir, key); err != nil {
		return []string{archivePath}, err
	}

	if err := pruneOldBackups(periodDir, period.retention); err != nil {
		return []string{archivePath}, err
	}

	return []string{archivePath}, nil
}

func hasBackupForKey(periodDir, key string) bool {
	marker := periodMarker{Key: ""}
	data, err := os.ReadFile(periodMarkerPath(periodDir))
	if err != nil {
		return false
	}
	if err := json.Unmarshal(data, &marker); err != nil {
		return false
	}
	return marker.Key == key
}

type periodMarker struct {
	Key string `json:"key"`
}

func periodMarkerPath(periodDir string) string {
	return filepath.Join(periodDir, ".period.json")
}

func writePeriodMarker(periodDir, key string) error {
	data, err := json.MarshalIndent(periodMarker{Key: key}, "", " ")
	if err != nil {
		return err
	}
	return os.WriteFile(periodMarkerPath(periodDir), data, 0644)
}

func pruneOldBackups(periodDir string, keep int) error {
	entries, err := os.ReadDir(periodDir)
	if err != nil {
		return fmt.Errorf("read backups dir: %w", err)
	}

	files := make([]os.DirEntry, 0, len(entries))
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		if strings.HasSuffix(entry.Name(), ".zip") {
			files = append(files, entry)
		}
	}

	if len(files) <= keep {
		return nil
	}

	sort.Slice(files, func(i, j int) bool {
		infoI, errI := files[i].Info()
		infoJ, errJ := files[j].Info()
		if errI != nil || errJ != nil {
			return files[i].Name() < files[j].Name()
		}
		return infoI.ModTime().Before(infoJ.ModTime())
	})

	for i := 0; i < len(files)-keep; i++ {
		path := filepath.Join(periodDir, files[i].Name())
		if err := os.Remove(path); err != nil {
			return fmt.Errorf("remove old backup %s: %w", path, err)
		}
	}

	return nil
}

func createBackupArchive(destPath, dbPath, configPath string) error {
	file, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer file.Close()

	zipWriter := zip.NewWriter(file)
	if err := addZipFile(zipWriter, dbPath); err != nil {
		_ = zipWriter.Close()
		return err
	}
	_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
	_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

	if strings.TrimSpace(configPath) != "" {
		_ = addZipOptionalFile(zipWriter, configPath)
	}

	if err := zipWriter.Close(); err != nil {
		return err
	}
	return file.Sync()
}

func addZipOptionalFile(writer *zip.Writer, path string) error {
	if _, err := os.Stat(path); err != nil {
		return nil
	}
	return addZipFile(writer, path)
}

func addZipFile(writer *zip.Writer, path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = filepath.Base(path)
	header.Method = zip.Deflate

	out, err := writer.CreateHeader(header)
	if err != nil {
		return err
	}

	_, err = io.Copy(out, in)
	return err
}
83 internal/appstate/backup_test.go Normal file
@@ -0,0 +1,83 @@
package appstate

import (
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
	temp := t.TempDir()
	dbPath := filepath.Join(temp, "qfs.db")
	cfgPath := filepath.Join(temp, "config.yaml")

	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
		t.Fatalf("write db: %v", err)
	}
	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
		t.Fatalf("write config: %v", err)
	}

	prevNow := backupNow
	defer func() { backupNow = prevNow }()
	backupNow = func() time.Time { return time.Date(2026, 2, 11, 10, 0, 0, 0, time.UTC) }

	created, err := EnsureRotatingLocalBackup(dbPath, cfgPath)
	if err != nil {
		t.Fatalf("backup: %v", err)
	}
	if len(created) == 0 {
		t.Fatalf("expected backup to be created")
	}

	dailyArchive := filepath.Join(temp, "backups", "daily", "qfs-backp-2026-02-11.zip")
	if _, err := os.Stat(dailyArchive); err != nil {
		t.Fatalf("daily archive missing: %v", err)
	}

	backupNow = func() time.Time { return time.Date(2026, 2, 12, 10, 0, 0, 0, time.UTC) }
	created, err = EnsureRotatingLocalBackup(dbPath, cfgPath)
	if err != nil {
		t.Fatalf("backup rotate: %v", err)
	}
	if len(created) == 0 {
		t.Fatalf("expected backup to be created for new day")
	}

	dailyArchive = filepath.Join(temp, "backups", "daily", "qfs-backp-2026-02-12.zip")
	if _, err := os.Stat(dailyArchive); err != nil {
		t.Fatalf("daily archive missing after rotate: %v", err)
	}
}

func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
	temp := t.TempDir()
	dbPath := filepath.Join(temp, "qfs.db")
	cfgPath := filepath.Join(temp, "config.yaml")

	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
		t.Fatalf("write db: %v", err)
	}
	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
		t.Fatalf("write config: %v", err)
	}

	backupRoot := filepath.Join(temp, "custom_backups")
	t.Setenv(envBackupDir, backupRoot)

	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
		t.Fatalf("backup with env: %v", err)
	}
	if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
		t.Fatalf("expected backup in custom dir: %v", err)
	}

	t.Setenv(envBackupDisable, "1")
	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
		t.Fatalf("backup disabled: %v", err)
	}
	if _, err := os.Stat(filepath.Join(backupRoot, "daily", "meta.json")); err != nil {
		t.Fatalf("backup should remain from previous run: %v", err)
	}
}
@@ -6,6 +6,7 @@ import (
 	"os"
 	"path/filepath"
 	"runtime"
+	"strings"
 )

 const (
@@ -55,6 +56,25 @@ func ResolveConfigPath(explicitPath string) (string, error) {
 	return filepath.Join(dir, defaultCfg), nil
 }

+// ResolveConfigPathNearDB returns config path using priority:
+// explicit CLI path > QFS_CONFIG_PATH > directory of resolved local DB path.
+// Falls back to ResolveConfigPath when dbPath is empty.
+func ResolveConfigPathNearDB(explicitPath, dbPath string) (string, error) {
+	if explicitPath != "" {
+		return filepath.Clean(explicitPath), nil
+	}
+
+	if fromEnv := os.Getenv(envCfgPath); fromEnv != "" {
+		return filepath.Clean(fromEnv), nil
+	}
+
+	if strings.TrimSpace(dbPath) != "" {
+		return filepath.Join(filepath.Dir(filepath.Clean(dbPath)), defaultCfg), nil
+	}
+
+	return ResolveConfigPath("")
+}
+
 // MigrateLegacyDB copies an existing legacy DB (and optional SQLite sidecars)
 // to targetPath if targetPath does not already exist.
 // Returns source path if migration happened.
124 internal/article/categories.go Normal file
@@ -0,0 +1,124 @@
package article

import (
	"errors"
	"fmt"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

// ErrMissingCategoryForLot is returned when a lot has no category in local_pricelist_items.lot_category.
var ErrMissingCategoryForLot = errors.New("missing_category_for_lot")

type MissingCategoryForLotError struct {
	LotName string
}

func (e *MissingCategoryForLotError) Error() string {
	if e == nil || strings.TrimSpace(e.LotName) == "" {
		return ErrMissingCategoryForLot.Error()
	}
	return fmt.Sprintf("%s: %s", ErrMissingCategoryForLot.Error(), e.LotName)
}

func (e *MissingCategoryForLotError) Unwrap() error {
	return ErrMissingCategoryForLot
}

type Group string

const (
	GroupCPU  Group = "CPU"
	GroupMEM  Group = "MEM"
	GroupGPU  Group = "GPU"
	GroupDISK Group = "DISK"
	GroupNET  Group = "NET"
	GroupPSU  Group = "PSU"
)

// GroupForLotCategory maps pricelist lot_category codes into article groups.
// Unknown/unrelated categories return ok=false.
func GroupForLotCategory(cat string) (group Group, ok bool) {
	c := strings.ToUpper(strings.TrimSpace(cat))
	switch c {
	case "CPU":
		return GroupCPU, true
	case "MEM":
		return GroupMEM, true
	case "GPU":
		return GroupGPU, true
	case "M2", "SSD", "HDD", "EDSFF", "HHHL":
		return GroupDISK, true
	case "NIC", "HCA", "DPU":
		return GroupNET, true
	case "HBA":
		return GroupNET, true
	case "PSU", "PS":
		return GroupPSU, true
	default:
		return "", false
	}
}

// ResolveLotCategoriesStrict resolves categories for lotNames using local_pricelist_items.lot_category
// for a given server pricelist id. If any lot is missing or has empty category, returns an error.
func ResolveLotCategoriesStrict(local *localdb.LocalDB, serverPricelistID uint, lotNames []string) (map[string]string, error) {
	if local == nil {
		return nil, fmt.Errorf("local db is nil")
	}
	cats, err := local.GetLocalLotCategoriesByServerPricelistID(serverPricelistID, lotNames)
	if err != nil {
		return nil, err
	}
	missing := make([]string, 0)
	for _, lot := range lotNames {
		cat := strings.TrimSpace(cats[lot])
		if cat == "" {
			missing = append(missing, lot)
			continue
		}
		cats[lot] = cat
	}
	if len(missing) > 0 {
		fallback, err := local.GetLocalComponentCategoriesByLotNames(missing)
		if err != nil {
			return nil, err
		}
		for _, lot := range missing {
			if cat := strings.TrimSpace(fallback[lot]); cat != "" {
				cats[lot] = cat
			}
		}
		for _, lot := range missing {
			if strings.TrimSpace(cats[lot]) == "" {
				return nil, &MissingCategoryForLotError{LotName: lot}
			}
		}
	}
	return cats, nil
}

// NormalizeServerModel produces a stable article segment for the server model.
func NormalizeServerModel(model string) string {
	trimmed := strings.TrimSpace(model)
	if trimmed == "" {
		return ""
	}
	upper := strings.ToUpper(trimmed)
	var b strings.Builder
	for _, r := range upper {
		if r >= 'A' && r <= 'Z' {
			b.WriteRune(r)
			continue
		}
		if r >= '0' && r <= '9' {
			b.WriteRune(r)
			continue
		}
		if r == '.' {
			b.WriteRune(r)
		}
	}
	return b.String()
}
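The mapping above collapses several storage form factors (M2/SSD/HDD/EDSFF/HHHL) into one DISK group and the network cards (NIC/HCA/DPU/HBA) into NET, while anything else is excluded. A standalone sketch of the same lookup table (mirroring, not importing, `GroupForLotCategory`):

```go
package main

import (
	"fmt"
	"strings"
)

// groupFor mirrors the GroupForLotCategory switch from the patch; it returns
// the article group and false for categories the article generator ignores.
func groupFor(cat string) (string, bool) {
	switch strings.ToUpper(strings.TrimSpace(cat)) {
	case "CPU":
		return "CPU", true
	case "MEM":
		return "MEM", true
	case "GPU":
		return "GPU", true
	case "M2", "SSD", "HDD", "EDSFF", "HHHL":
		return "DISK", true // all drive form factors collapse into DISK
	case "NIC", "HCA", "DPU", "HBA":
		return "NET", true // all network/storage adapters collapse into NET
	case "PSU", "PS":
		return "PSU", true
	default:
		return "", false // e.g. SFP modules are excluded from the article
	}
}

func main() {
	for _, c := range []string{" ssd ", "dpu", "SFP"} {
		g, ok := groupFor(c)
		fmt.Println(c, g, ok)
	}
}
```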
98 internal/article/categories_test.go Normal file
@@ -0,0 +1,98 @@
package article

import (
	"errors"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

func TestResolveLotCategoriesStrict_MissingCategoryReturnsError(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  1,
		Source:    "estimate",
		Version:   "S-2026-02-11-001",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(1)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "CPU_A", LotCategory: "", Price: 10},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}

	_, err = ResolveLotCategoriesStrict(local, 1, []string{"CPU_A"})
	if err == nil {
		t.Fatalf("expected error")
	}
	if !errors.Is(err, ErrMissingCategoryForLot) {
		t.Fatalf("expected ErrMissingCategoryForLot, got %v", err)
	}
}

func TestResolveLotCategoriesStrict_FallbackToLocalComponents(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  2,
		Source:    "estimate",
		Version:   "S-2026-02-11-002",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(2)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "CPU_B", LotCategory: "", Price: 10},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}
	if err := local.DB().Create(&localdb.LocalComponent{
		LotName:        "CPU_B",
		Category:       "CPU",
		LotDescription: "cpu",
	}).Error; err != nil {
		t.Fatalf("save local components: %v", err)
	}

	cats, err := ResolveLotCategoriesStrict(local, 2, []string{"CPU_B"})
	if err != nil {
		t.Fatalf("expected fallback, got error: %v", err)
	}
	if cats["CPU_B"] != "CPU" {
		t.Fatalf("expected CPU, got %q", cats["CPU_B"])
	}
}

func TestGroupForLotCategory(t *testing.T) {
	if g, ok := GroupForLotCategory("cpu"); !ok || g != GroupCPU {
		t.Fatalf("expected cpu -> GroupCPU")
	}
	if g, ok := GroupForLotCategory("SFP"); ok || g != "" {
		t.Fatalf("expected SFP to be excluded")
	}
}
602 internal/article/generator.go Normal file
@@ -0,0 +1,602 @@
package article

import (
	"fmt"
	"regexp"
	"sort"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
)

type BuildOptions struct {
	ServerModel     string
	SupportCode     string
	ServerPricelist *uint
}

type BuildResult struct {
	Article  string
	Warnings []string
}

var (
	reMemGiB    = regexp.MustCompile(`(?i)(\d+)\s*(GB|G)`)
	reMemTiB    = regexp.MustCompile(`(?i)(\d+)\s*(TB|T)`)
	reCapacityT = regexp.MustCompile(`(?i)(\d+(?:[.,]\d+)?)T`)
	reCapacityG = regexp.MustCompile(`(?i)(\d+(?:[.,]\d+)?)G`)
	rePortSpeed = regexp.MustCompile(`(?i)(\d+)p(\d+)(GbE|G)`)
	rePortFC    = regexp.MustCompile(`(?i)(\d+)pFC(\d+)`)
	reWatts     = regexp.MustCompile(`(?i)(\d{3,5})\s*W`)
)

func Build(local *localdb.LocalDB, items []models.ConfigItem, opts BuildOptions) (BuildResult, error) {
	segments := make([]string, 0, 8)
	warnings := make([]string, 0)

	model := NormalizeServerModel(opts.ServerModel)
	if model == "" {
		return BuildResult{}, fmt.Errorf("server_model required")
	}
	segments = append(segments, model)

	lotNames := make([]string, 0, len(items))
	for _, it := range items {
		lotNames = append(lotNames, it.LotName)
	}

	if opts.ServerPricelist == nil || *opts.ServerPricelist == 0 {
		return BuildResult{}, fmt.Errorf("pricelist_id required for article")
	}

	cats, err := ResolveLotCategoriesStrict(local, *opts.ServerPricelist, lotNames)
	if err != nil {
		return BuildResult{}, err
	}

	cpuSeg := buildCPUSegment(items, cats)
	if cpuSeg != "" {
		segments = append(segments, cpuSeg)
	}
	memSeg, memWarn := buildMemSegment(items, cats)
	if memWarn != "" {
		warnings = append(warnings, memWarn)
	}
	if memSeg != "" {
		segments = append(segments, memSeg)
	}
	gpuSeg := buildGPUSegment(items, cats)
	if gpuSeg != "" {
		segments = append(segments, gpuSeg)
	}
	diskSeg, diskWarn := buildDiskSegment(items, cats)
	if diskWarn != "" {
		warnings = append(warnings, diskWarn)
	}
	if diskSeg != "" {
		segments = append(segments, diskSeg)
	}
	netSeg, netWarn := buildNetSegment(items, cats)
	if netWarn != "" {
		warnings = append(warnings, netWarn)
	}
	if netSeg != "" {
		segments = append(segments, netSeg)
	}
	psuSeg, psuWarn := buildPSUSegment(items, cats)
	if psuWarn != "" {
		warnings = append(warnings, psuWarn)
	}
	if psuSeg != "" {
		segments = append(segments, psuSeg)
	}

	if strings.TrimSpace(opts.SupportCode) != "" {
		code := strings.TrimSpace(opts.SupportCode)
		if !isSupportCodeValid(code) {
			return BuildResult{}, fmt.Errorf("invalid_support_code")
		}
		segments = append(segments, code)
	}

	article := strings.Join(segments, "-")
	if len([]rune(article)) > 80 {
		article = compressArticle(segments)
		warnings = append(warnings, "compressed")
	}
	if len([]rune(article)) > 80 {
		return BuildResult{}, fmt.Errorf("article_overflow")
	}

	return BuildResult{Article: article, Warnings: warnings}, nil
}
func isSupportCodeValid(code string) bool {
	if len(code) < 3 {
		return false
	}
	if !strings.Contains(code, "y") {
		return false
	}
	parts := strings.Split(code, "y")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return false
	}
	for _, r := range parts[0] {
		if r < '0' || r > '9' {
			return false
		}
	}
	switch parts[1] {
	case "W", "B", "S", "P":
		return true
	default:
		return false
	}
}

func buildCPUSegment(items []models.ConfigItem, cats map[string]string) string {
	type agg struct {
		qty int
	}
	models := map[string]*agg{}
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupCPU {
			continue
		}
		model := parseCPUModel(it.LotName)
		if model == "" {
			model = "UNK"
		}
		if _, ok := models[model]; !ok {
			models[model] = &agg{}
		}
		models[model].qty += it.Quantity
	}
	if len(models) == 0 {
		return ""
	}
	parts := make([]string, 0, len(models))
	for model, a := range models {
		parts = append(parts, fmt.Sprintf("%dx%s", a.qty, model))
	}
	sort.Strings(parts)
	return strings.Join(parts, "+")
}

func buildMemSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
	totalGiB := 0
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupMEM {
			continue
		}
		per := parseMemGiB(it.LotName)
		if per <= 0 {
			return "", "mem_unknown"
		}
		totalGiB += per * it.Quantity
	}
	if totalGiB == 0 {
		return "", ""
	}
	if totalGiB%1024 == 0 {
		return fmt.Sprintf("%dT", totalGiB/1024), ""
	}
	return fmt.Sprintf("%dG", totalGiB), ""
}

func buildGPUSegment(items []models.ConfigItem, cats map[string]string) string {
	models := map[string]int{}
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupGPU {
			continue
		}
		model := parseGPUModel(it.LotName)
		if model == "" {
			model = "UNK"
		}
		models[model] += it.Quantity
	}
	if len(models) == 0 {
		return ""
	}
	parts := make([]string, 0, len(models))
	for model, qty := range models {
		parts = append(parts, fmt.Sprintf("%dx%s", qty, model))
	}
	sort.Strings(parts)
	return strings.Join(parts, "+")
}

func buildDiskSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
	type key struct {
		t string
		c string
	}
	groupQty := map[key]int{}
	warn := ""
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupDISK {
			continue
		}
		capToken := parseCapacity(it.LotName)
		if capToken == "" {
			warn = "disk_unknown"
		}
		typeCode := diskTypeCode(cats[it.LotName], it.LotName)
		k := key{t: typeCode, c: capToken}
		groupQty[k] += it.Quantity
	}
	if len(groupQty) == 0 {
		return "", ""
	}
	parts := make([]string, 0, len(groupQty))
	for k, qty := range groupQty {
		if k.c == "" {
			parts = append(parts, fmt.Sprintf("%dx%s", qty, k.t))
		} else {
			parts = append(parts, fmt.Sprintf("%dx%s%s", qty, k.c, k.t))
		}
	}
	sort.Strings(parts)
	return strings.Join(parts, "+"), warn
}

func buildNetSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
	groupQty := map[string]int{}
	warn := ""
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupNET {
			continue
		}
		profile := parsePortSpeed(it.LotName)
		if profile == "" {
			profile = "UNKNET"
			warn = "net_unknown"
		}
		groupQty[profile] += it.Quantity
	}
	if len(groupQty) == 0 {
		return "", ""
	}
	parts := make([]string, 0, len(groupQty))
	for profile, qty := range groupQty {
		parts = append(parts, fmt.Sprintf("%dx%s", qty, profile))
	}
	sort.Strings(parts)
	return strings.Join(parts, "+"), warn
}

func buildPSUSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
	groupQty := map[string]int{}
	warn := ""
	for _, it := range items {
		group, ok := GroupForLotCategory(cats[it.LotName])
		if !ok || group != GroupPSU {
			continue
		}
		rating := parseWatts(it.LotName)
		if rating == "" {
			rating = "UNKPSU"
			warn = "psu_unknown"
		}
		groupQty[rating] += it.Quantity
	}
	if len(groupQty) == 0 {
		return "", ""
	}
	parts := make([]string, 0, len(groupQty))
	for rating, qty := range groupQty {
		parts = append(parts, fmt.Sprintf("%dx%s", qty, rating))
	}
	sort.Strings(parts)
	return strings.Join(parts, "+"), warn
}
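`isSupportCodeValid` accepts codes of the form `<digits>y<level>` with level one of W/B/S/P (e.g. `1yW`); the level letters are treated as opaque codes here since the patch does not define them. A standalone re-implementation for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// validSupportCode mirrors isSupportCodeValid above: one or more ASCII
// digits, a literal 'y', then exactly one of the four level codes.
func validSupportCode(code string) bool {
	if len(code) < 3 {
		return false
	}
	parts := strings.Split(code, "y")
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return false
	}
	for _, r := range parts[0] {
		if r < '0' || r > '9' {
			return false
		}
	}
	switch parts[1] {
	case "W", "B", "S", "P":
		return true
	}
	return false
}

func main() {
	fmt.Println(validSupportCode("1yW"), validSupportCode("5y"), validSupportCode("ayW"))
}
```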
func normalizeModelToken(lotName string) string {
	if idx := strings.Index(lotName, "_"); idx >= 0 && idx+1 < len(lotName) {
		lotName = lotName[idx+1:]
	}
	parts := strings.Split(lotName, "_")
	token := parts[len(parts)-1]
	return strings.ToUpper(strings.TrimSpace(token))
}

func parseCPUModel(lotName string) string {
	parts := strings.Split(lotName, "_")
	if len(parts) >= 2 {
		last := strings.ToUpper(strings.TrimSpace(parts[len(parts)-1]))
		if last != "" {
			return last
		}
	}
	return normalizeModelToken(lotName)
}

func parseGPUModel(lotName string) string {
	upper := strings.ToUpper(lotName)
	if idx := strings.Index(upper, "GPU_"); idx >= 0 {
		upper = upper[idx+4:]
	}
	parts := strings.Split(upper, "_")
	model := ""
	mem := ""
	for i, p := range parts {
		if p == "" {
			continue
		}
		switch p {
		case "NV", "NVIDIA", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
			continue
		default:
			if strings.Contains(p, "GB") {
				mem = p
				continue
			}
			if model == "" && (i > 0) {
				model = p
			}
		}
	}
	if model != "" && mem != "" {
		return model + "_" + mem
	}
	if model != "" {
		return model
	}
	return normalizeModelToken(lotName)
}

func parseMemGiB(lotName string) int {
	if m := reMemTiB.FindStringSubmatch(lotName); len(m) == 3 {
		return atoi(m[1]) * 1024
	}
	if m := reMemGiB.FindStringSubmatch(lotName); len(m) == 3 {
		return atoi(m[1])
	}
	return 0
}

func parseCapacity(lotName string) string {
	if m := reCapacityT.FindStringSubmatch(lotName); len(m) == 2 {
		return normalizeTToken(strings.ReplaceAll(m[1], ",", ".")) + "T"
	}
	if m := reCapacityG.FindStringSubmatch(lotName); len(m) == 2 {
		return normalizeNumberToken(strings.ReplaceAll(m[1], ",", ".")) + "G"
	}
	return ""
}

func diskTypeCode(cat string, lotName string) string {
	c := strings.ToUpper(strings.TrimSpace(cat))
	if c == "M2" {
		return "M2"
	}
	upper := strings.ToUpper(lotName)
	if strings.Contains(upper, "NVME") {
		return "NV"
	}
	if strings.Contains(upper, "SAS") {
		return "SAS"
	}
	if strings.Contains(upper, "SATA") {
		return "SAT"
	}
	return c
}

func parsePortSpeed(lotName string) string {
	if m := rePortSpeed.FindStringSubmatch(lotName); len(m) == 4 {
		return fmt.Sprintf("%sp%sG", m[1], m[2])
	}
	if m := rePortFC.FindStringSubmatch(lotName); len(m) == 3 {
		return fmt.Sprintf("%spFC%s", m[1], m[2])
	}
	return ""
}

func parseWatts(lotName string) string {
	if m := reWatts.FindStringSubmatch(lotName); len(m) == 2 {
		w := atoi(m[1])
		if w >= 1000 {
			kw := fmt.Sprintf("%.1f", float64(w)/1000.0)
			kw = strings.TrimSuffix(kw, ".0")
			return fmt.Sprintf("%skW", kw)
		}
		return fmt.Sprintf("%dW", w)
	}
	return ""
}

func normalizeNumberToken(raw string) string {
	raw = strings.TrimSpace(raw)
	raw = strings.TrimLeft(raw, "0")
	if raw == "" || raw[0] == '.' {
		raw = "0" + raw
	}
	return raw
}

func normalizeTToken(raw string) string {
	raw = normalizeNumberToken(raw)
	parts := strings.SplitN(raw, ".", 2)
	intPart := parts[0]
	frac := ""
	if len(parts) == 2 {
		frac = parts[1]
	}
	if frac == "" {
		frac = "0"
	}
	if len(intPart) >= 2 {
		return intPart + "." + frac
	}
	if len(frac) > 1 {
		frac = frac[:1]
	}
	return intPart + "." + frac
}

func atoi(v string) int {
	n := 0
	for _, r := range v {
		if r < '0' || r > '9' {
			continue
		}
		n = n*10 + int(r-'0')
	}
	return n
}
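`parseWatts` normalizes PSU ratings: at 1000 W and above the value switches to kilowatts with one decimal place, and a trailing `.0` is trimmed (1000 renders as `1kW`, 1600 as `1.6kW`). A standalone sketch using the same regex (the `formatWatts` name is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// reW matches a 3-5 digit wattage followed by optional spaces and 'W',
// case-insensitively, same as reWatts in the patch.
var reW = regexp.MustCompile(`(?i)(\d{3,5})\s*W`)

// formatWatts mirrors parseWatts: >= 1000 W becomes kilowatts with one
// decimal, with a trailing ".0" dropped; smaller ratings stay in watts.
func formatWatts(lot string) string {
	m := reW.FindStringSubmatch(lot)
	if len(m) != 2 {
		return ""
	}
	w, _ := strconv.Atoi(m[1])
	if w >= 1000 {
		kw := strings.TrimSuffix(fmt.Sprintf("%.1f", float64(w)/1000.0), ".0")
		return kw + "kW"
	}
	return fmt.Sprintf("%dW", w)
}

func main() {
	fmt.Println(formatWatts("PS_1600W_Titanium")) // 1.6kW
	fmt.Println(formatWatts("PS_1000W_Platinum")) // 1kW
	fmt.Println(formatWatts("PS_800W_Gold"))      // 800W
}
```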
func compressArticle(segments []string) string {
	if len(segments) == 0 {
		return ""
	}
	normalized := make([]string, 0, len(segments))
	for _, s := range segments {
		normalized = append(normalized, strings.ReplaceAll(s, "GbE", "G"))
	}
	segments = normalized
	article := strings.Join(segments, "-")
	if len([]rune(article)) <= 80 {
		return article
	}

	// segment order: model, cpu, mem, gpu, disk, net, psu, support
	index := func(i int) (int, bool) {
		if i >= 0 && i < len(segments) {
			return i, true
		}
		return -1, false
	}

	// 1) remove PSU
	if i, ok := index(6); ok {
		segments = append(segments[:i], segments[i+1:]...)
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 2) compress NET/HBA/HCA
	if i, ok := index(5); ok {
		segments[i] = compressNetSegment(segments[i])
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 3) compress DISK
	if i, ok := index(4); ok {
		segments[i] = compressDiskSegment(segments[i])
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 4) compress GPU to vendor only (GPU_NV)
	if i, ok := index(3); ok {
		segments[i] = compressGPUSegment(segments[i])
	}
	return strings.Join(segments, "-")
}

func compressNetSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		profile := p
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
			profile = x[1]
		}
		upper := strings.ToUpper(profile)
		label := "NIC"
		if strings.Contains(upper, "FC") {
			label = "HBA"
		} else if strings.Contains(upper, "HCA") || strings.Contains(upper, "IB") {
			label = "HCA"
		}
		out = append(out, fmt.Sprintf("%sx%s", qty, label))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}

func compressDiskSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		spec := p
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
			spec = x[1]
		}
		upper := strings.ToUpper(spec)
		label := "DSK"
		for _, t := range []string{"M2", "NV", "SAS", "SAT", "SSD", "HDD", "EDS", "HHH"} {
			if strings.Contains(upper, t) {
				label = t
				break
			}
		}
		out = append(out, fmt.Sprintf("%sx%s", qty, label))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}

func compressGPUSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
		}
		out = append(out, fmt.Sprintf("%sxGPU_NV", qty))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}
66
internal/article/generator_test.go
Normal file
66
internal/article/generator_test.go
Normal file
@@ -0,0 +1,66 @@
package article

import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
)

func TestBuild_ParsesNetAndPSU(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  1,
		Source:    "estimate",
		Version:   "S-2026-02-11-001",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(1)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}

	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "NIC_2p25G_MCX512A-AC", LotCategory: "NIC", Price: 1},
		{PricelistID: localPL.ID, LotName: "HBA_2pFC32_Gen6", LotCategory: "HBA", Price: 1},
		{PricelistID: localPL.ID, LotName: "PS_1000W_Platinum", LotCategory: "PS", Price: 1},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}

	items := models.ConfigItems{
		{LotName: "NIC_2p25G_MCX512A-AC", Quantity: 1},
		{LotName: "HBA_2pFC32_Gen6", Quantity: 1},
		{LotName: "PS_1000W_Platinum", Quantity: 2},
	}
	result, err := Build(local, items, BuildOptions{
		ServerModel:     "DL380GEN11",
		SupportCode:     "1yW",
		ServerPricelist: &localPL.ServerID,
	})
	if err != nil {
		t.Fatalf("build article: %v", err)
	}
	if result.Article == "" {
		t.Fatalf("expected article to be non-empty")
	}
	if contains(result.Article, "UNKNET") || contains(result.Article, "UNKPSU") {
		t.Fatalf("unexpected UNK in article: %s", result.Article)
	}
}

func contains(s, sub string) bool {
	return strings.Contains(s, sub)
}
@@ -20,6 +20,7 @@ type Config struct {
 	Alerts        AlertsConfig        `yaml:"alerts"`
 	Notifications NotificationsConfig `yaml:"notifications"`
 	Logging       LoggingConfig       `yaml:"logging"`
+	Backup        BackupConfig        `yaml:"backup"`
 }
 
 type ServerConfig struct {
@@ -101,6 +102,10 @@ type LoggingConfig struct {
 	FilePath string `yaml:"file_path"`
 }
 
+type BackupConfig struct {
+	Time string `yaml:"time"`
+}
+
 func Load(path string) (*Config, error) {
 	data, err := os.ReadFile(path)
 	if err != nil {
@@ -182,6 +187,10 @@ func (c *Config) setDefaults() {
 	if c.Logging.Output == "" {
 		c.Logging.Output = "stdout"
 	}
+
+	if c.Backup.Time == "" {
+		c.Backup.Time = "00:00"
+	}
 }
 
 func (c *Config) Address() string {
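The new `BackupConfig` struct maps a top-level `backup` key in `config.yaml`. A minimal fragment would look like the sketch below; the `03:30` value is purely illustrative, and an empty or missing `time` falls back to `"00:00"` via `setDefaults`.

```yaml
# Daily backup schedule; defaults to "00:00" when omitted
backup:
  time: "03:30"
```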
@@ -3,8 +3,10 @@ package handlers
 import (
 	"net/http"
 	"strconv"
+	"strings"
 
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"git.mchus.pro/mchus/quoteforge/internal/repository"
 	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
@@ -25,6 +27,12 @@ func NewComponentHandler(componentService *services.ComponentService, localDB *l
 func (h *ComponentHandler) List(c *gin.Context) {
 	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
 	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
+	if page < 1 {
+		page = 1
+	}
+	if perPage < 1 {
+		perPage = 20
+	}
 
 	filter := repository.ComponentFilter{
 		Category: c.Query("category"),
@@ -33,73 +41,68 @@ func (h *ComponentHandler) List(c *gin.Context) {
 		ExcludeHidden: c.Query("include_hidden") != "true", // hidden items are not shown by default
 	}
 
-	result, err := h.componentService.List(filter, page, perPage)
+	localFilter := localdb.ComponentFilter{
+		Category: filter.Category,
+		Search:   filter.Search,
+		HasPrice: filter.HasPrice,
+	}
+	offset := (page - 1) * perPage
+	localComps, total, err := h.localDB.ListComponents(localFilter, offset, perPage)
 	if err != nil {
 		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 		return
 	}
 
-	// If offline mode (empty result), fallback to local components
-	isOffline := false
-	if v, ok := c.Get("is_offline"); ok {
-		if b, ok := v.(bool); ok {
-			isOffline = b
-		}
-	}
-	if isOffline && result.Total == 0 && h.localDB != nil {
-		localFilter := localdb.ComponentFilter{
-			Category: filter.Category,
-			Search:   filter.Search,
-			HasPrice: filter.HasPrice,
-		}
-
-		offset := (page - 1) * perPage
-		localComps, total, err := h.localDB.ListComponents(localFilter, offset, perPage)
-		if err == nil && len(localComps) > 0 {
-			// Convert local components to ComponentView format
-			components := make([]services.ComponentView, len(localComps))
-			for i, lc := range localComps {
-				components[i] = services.ComponentView{
-					LotName:      lc.LotName,
-					Description:  lc.LotDescription,
-					Category:     lc.Category,
-					CategoryName: lc.Category, // No translation in local mode
-					Model:        lc.Model,
-					CurrentPrice: lc.CurrentPrice,
-				}
-			}
-
-			c.JSON(http.StatusOK, &services.ComponentListResult{
-				Components: components,
-				Total:      total,
-				Page:       page,
-				PerPage:    perPage,
-			})
-			return
-		}
-	}
+	components := make([]services.ComponentView, len(localComps))
+	for i, lc := range localComps {
+		components[i] = services.ComponentView{
+			LotName:      lc.LotName,
+			Description:  lc.LotDescription,
+			Category:     lc.Category,
+			CategoryName: lc.Category,
+			Model:        lc.Model,
+		}
+	}
 
-	c.JSON(http.StatusOK, result)
+	c.JSON(http.StatusOK, &services.ComponentListResult{
+		Components: components,
+		Total:      total,
+		Page:       page,
+		PerPage:    perPage,
+	})
 }
 
 func (h *ComponentHandler) Get(c *gin.Context) {
 	lotName := c.Param("lot_name")
-	component, err := h.componentService.GetByLotName(lotName)
+	component, err := h.localDB.GetLocalComponent(lotName)
 	if err != nil {
 		c.JSON(http.StatusNotFound, gin.H{"error": "component not found"})
 		return
 	}
 
-	c.JSON(http.StatusOK, component)
+	c.JSON(http.StatusOK, services.ComponentView{
+		LotName:      component.LotName,
+		Description:  component.LotDescription,
+		Category:     component.Category,
+		CategoryName: component.Category,
+		Model:        component.Model,
+	})
 }
 
 func (h *ComponentHandler) GetCategories(c *gin.Context) {
-	categories, err := h.componentService.GetCategories()
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+	codes, err := h.localDB.GetLocalComponentCategories()
+	if err == nil && len(codes) > 0 {
+		categories := make([]models.Category, 0, len(codes))
+		for _, code := range codes {
+			trimmed := strings.TrimSpace(code)
+			if trimmed == "" {
+				continue
+			}
+			categories = append(categories, models.Category{Code: trimmed, Name: trimmed})
+		}
+		c.JSON(http.StatusOK, categories)
 		return
 	}
 
-	c.JSON(http.StatusOK, categories)
+	c.JSON(http.StatusOK, models.DefaultCategories)
 }
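The paging guards added to `List` can be sketched in isolation: query values below 1 fall back to defaults before the offset is computed, so a malicious `page=-3` can never produce a negative SQL offset. The function name and sample values here are illustrative, not part of the repository.

```go
package main

import "fmt"

// clampPaging mirrors the guards added to the handler: page and per_page
// below 1 are reset to their defaults before computing the query offset.
func clampPaging(page, perPage int) (p, pp, offset int) {
	if page < 1 {
		page = 1
	}
	if perPage < 1 {
		perPage = 20
	}
	return page, perPage, (page - 1) * perPage
}

func main() {
	p, pp, off := clampPaging(-3, 0)
	fmt.Println(p, pp, off) // → 1 20 0
	p, pp, off = clampPaging(3, 25)
	fmt.Println(p, pp, off) // → 3 25 50
}
```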
@@ -3,6 +3,7 @@ package handlers
 import (
 	"fmt"
 	"net/http"
+	"strings"
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/middleware"
@@ -14,23 +15,29 @@ type ExportHandler struct {
 	exportService    *services.ExportService
 	configService    services.ConfigurationGetter
 	componentService *services.ComponentService
+	projectService   *services.ProjectService
 }
 
 func NewExportHandler(
 	exportService *services.ExportService,
 	configService services.ConfigurationGetter,
 	componentService *services.ComponentService,
+	projectService *services.ProjectService,
 ) *ExportHandler {
 	return &ExportHandler{
 		exportService:    exportService,
 		configService:    configService,
 		componentService: componentService,
+		projectService:   projectService,
 	}
 }
 
 type ExportRequest struct {
 	Name string `json:"name" binding:"required"`
-	Items []struct {
+	ProjectName string `json:"project_name"`
+	ProjectUUID string `json:"project_uuid"`
+	Article     string `json:"article"`
+	Items       []struct {
 		LotName   string  `json:"lot_name" binding:"required"`
 		Quantity  int     `json:"quantity" binding:"required,min=1"`
 		UnitPrice float64 `json:"unit_price"`
@@ -47,15 +54,47 @@ func (h *ExportHandler) ExportCSV(c *gin.Context) {
 
 	data := h.buildExportData(&req)
 
-	csvData, err := h.exportService.ToCSV(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+	// Validate before streaming (can return JSON error)
+	if len(data.Items) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no items to export"})
 		return
 	}
 
-	filename := fmt.Sprintf("%s %s SPEC.csv", time.Now().Format("2006-01-02"), req.Name)
+	// Get project name if available
+	projectName := req.ProjectName
+	if projectName == "" && req.ProjectUUID != "" {
+		// Try to load project name from database
+		username := middleware.GetUsername(c)
+		if project, err := h.projectService.GetByUUID(req.ProjectUUID, username); err == nil && project != nil {
+			projectName = derefString(project.Name)
+		}
+	}
+	if projectName == "" {
+		projectName = req.Name
+	}
+
+	// Set headers before streaming
+	exportDate := data.CreatedAt
+	articleSegment := sanitizeFilenameSegment(req.Article)
+	if articleSegment == "" {
+		articleSegment = "BOM"
+	}
+	filename := fmt.Sprintf("%s (%s) %s %s.csv", exportDate.Format("2006-01-02"), projectName, req.Name, articleSegment)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
 	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
-	c.Data(http.StatusOK, "text/csv; charset=utf-8", csvData)
+
+	// Stream CSV (cannot return JSON after this point)
+	if err := h.exportService.ToCSV(c.Writer, data); err != nil {
+		c.Error(err) // Log only
+		return
+	}
+}
+
+func derefString(value *string) string {
+	if value == nil {
+		return ""
+	}
+	return *value
 }
 
 func (h *ExportHandler) buildExportData(req *ExportRequest) *services.ExportData {
@@ -90,6 +129,7 @@ func (h *ExportHandler) buildExportData(req *ExportRequest) *services.ExportData
 
 	return &services.ExportData{
 		Name:    req.Name,
+		Article: req.Article,
 		Items:   items,
 		Total:   total,
 		Notes:   req.Notes,
@@ -97,10 +137,29 @@ func (h *ExportHandler) buildExportData(req *ExportRequest) *services.ExportData
 	}
 }
 
+func sanitizeFilenameSegment(value string) string {
+	if strings.TrimSpace(value) == "" {
+		return ""
+	}
+	replacer := strings.NewReplacer(
+		"/", "_",
+		"\\", "_",
+		":", "_",
+		"*", "_",
+		"?", "_",
+		"\"", "_",
+		"<", "_",
+		">", "_",
+		"|", "_",
+	)
+	return strings.TrimSpace(replacer.Replace(value))
+}
+
 func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
 	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")
 
+	// Get config before streaming (can return JSON error)
 	config, err := h.configService.GetByUUID(uuid, username)
 	if err != nil {
 		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
@@ -109,13 +168,33 @@ func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
 
 	data := h.exportService.ConfigToExportData(config, h.componentService)
 
-	csvData, err := h.exportService.ToCSV(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+	// Validate before streaming (can return JSON error)
+	if len(data.Items) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no items to export"})
 		return
 	}
 
-	filename := fmt.Sprintf("%s %s SPEC.csv", config.CreatedAt.Format("2006-01-02"), config.Name)
+	// Get project name if configuration belongs to a project
+	projectName := config.Name // fallback: use config name if no project
+	if config.ProjectUUID != nil && *config.ProjectUUID != "" {
+		if project, err := h.projectService.GetByUUID(*config.ProjectUUID, username); err == nil && project != nil {
+			projectName = derefString(project.Name)
+		}
+	}
+
+	// Set headers before streaming
+	// Use price update time if available, otherwise creation time
+	exportDate := config.CreatedAt
+	if config.PriceUpdatedAt != nil {
+		exportDate = *config.PriceUpdatedAt
+	}
+	filename := fmt.Sprintf("%s (%s) %s BOM.csv", exportDate.Format("2006-01-02"), projectName, config.Name)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
 	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
-	c.Data(http.StatusOK, "text/csv; charset=utf-8", csvData)
+
+	// Stream CSV (cannot return JSON after this point)
+	if err := h.exportService.ToCSV(c.Writer, data); err != nil {
+		c.Error(err) // Log only
+		return
+	}
 }
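The `sanitizeFilenameSegment` helper added above is a plain `strings.NewReplacer` pass over the characters that are unsafe in `Content-Disposition` filenames. A standalone sketch (the sample article string is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFilenameSegment mirrors the helper from the diff: characters that
// are illegal or risky in attachment filenames become underscores, and
// surrounding whitespace is trimmed.
func sanitizeFilenameSegment(value string) string {
	if strings.TrimSpace(value) == "" {
		return ""
	}
	replacer := strings.NewReplacer(
		"/", "_", "\\", "_", ":", "_", "*", "_", "?", "_",
		"\"", "_", "<", "_", ">", "_", "|", "_",
	)
	return strings.TrimSpace(replacer.Replace(value))
}

func main() {
	fmt.Println(sanitizeFilenameSegment("SRV/2U:2xCPU*")) // → SRV_2U_2xCPU_
	fmt.Println(sanitizeFilenameSegment("   ") == "")     // → true
}
```

`strings.NewReplacer` performs all substitutions in a single scan, which keeps the helper allocation-light compared to chained `strings.ReplaceAll` calls.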
314	internal/handlers/export_test.go	Normal file
@@ -0,0 +1,314 @@
package handlers

import (
	"bytes"
	"encoding/csv"
	"encoding/json"
	"errors"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/config"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/services"
	"github.com/gin-gonic/gin"
)

// Mock services for testing
type mockConfigService struct {
	config *models.Configuration
	err    error
}

func (m *mockConfigService) GetByUUID(uuid string, ownerUsername string) (*models.Configuration, error) {
	return m.config, m.err
}

func TestExportCSV_Success(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Create a basic mock component service that doesn't panic
	mockComponentService := &services.ComponentService{}

	// Create handler with mocks
	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{},
		mockComponentService,
		nil,
	)

	// Create JSON request body
	jsonBody := `{
		"name": "Test Export",
		"items": [
			{
				"lot_name": "LOT-001",
				"quantity": 2,
				"unit_price": 100.50
			}
		],
		"notes": "Test notes"
	}`

	// Create HTTP request
	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(jsonBody))
	req.Header.Set("Content-Type", "application/json")

	// Create response recorder
	w := httptest.NewRecorder()

	// Create Gin context
	c, _ := gin.CreateTestContext(w)
	c.Request = req

	// Call handler
	handler.ExportCSV(c)

	// Check status code
	if w.Code != http.StatusOK {
		t.Errorf("Expected status 200, got %d", w.Code)
	}

	// Check Content-Type header
	contentType := w.Header().Get("Content-Type")
	if contentType != "text/csv; charset=utf-8" {
		t.Errorf("Expected Content-Type 'text/csv; charset=utf-8', got %q", contentType)
	}

	// Check for BOM
	responseBody := w.Body.Bytes()
	if len(responseBody) < 3 {
		t.Fatalf("Response too short to contain BOM")
	}

	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
	actualBOM := responseBody[:3]
	if !bytes.Equal(actualBOM, expectedBOM) {
		t.Errorf("UTF-8 BOM mismatch. Expected %v, got %v", expectedBOM, actualBOM)
	}

	// Check semicolon delimiter in CSV
	reader := csv.NewReader(bytes.NewReader(responseBody[3:]))
	reader.Comma = ';'

	header, err := reader.Read()
	if err != nil {
		t.Errorf("Failed to parse CSV header: %v", err)
	}

	if len(header) != 6 {
		t.Errorf("Expected 6 columns, got %d", len(header))
	}
}

func TestExportCSV_InvalidRequest(t *testing.T) {
	gin.SetMode(gin.TestMode)

	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{},
		&services.ComponentService{},
		nil,
	)

	// Create invalid request (missing required field)
	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(`{"name": "Test"}`))
	req.Header.Set("Content-Type", "application/json")

	w := httptest.NewRecorder()
	c, _ := gin.CreateTestContext(w)
	c.Request = req

	handler.ExportCSV(c)

	// Should return 400 Bad Request
	if w.Code != http.StatusBadRequest {
		t.Errorf("Expected status 400, got %d", w.Code)
	}

	// Should return JSON error
	var errResp map[string]interface{}
	_ = json.Unmarshal(w.Body.Bytes(), &errResp)
	if _, hasError := errResp["error"]; !hasError {
		t.Errorf("Expected error in JSON response")
	}
}

func TestExportCSV_EmptyItems(t *testing.T) {
	gin.SetMode(gin.TestMode)

	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{},
		&services.ComponentService{},
		nil,
	)

	// Create request with empty items array - should fail binding validation
	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(`{"name":"Empty Export","items":[],"notes":""}`))
	req.Header.Set("Content-Type", "application/json")

	w := httptest.NewRecorder()
	c, _ := gin.CreateTestContext(w)
	c.Request = req

	handler.ExportCSV(c)

	// Should return 400 Bad Request (validation error from gin binding)
	if w.Code != http.StatusBadRequest {
		t.Logf("Status code: %d (expected 400 for empty items)", w.Code)
	}
}

func TestExportConfigCSV_Success(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Mock configuration
	mockConfig := &models.Configuration{
		UUID:          "test-uuid",
		Name:          "Test Config",
		OwnerUsername: "testuser",
		Items: models.ConfigItems{
			{
				LotName:   "LOT-001",
				Quantity:  1,
				UnitPrice: 100.0,
			},
		},
		CreatedAt: time.Now(),
	}

	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{config: mockConfig},
		&services.ComponentService{},
		nil,
	)

	// Create HTTP request
	req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
	w := httptest.NewRecorder()

	c, _ := gin.CreateTestContext(w)
	c.Request = req
	c.Params = gin.Params{
		{Key: "uuid", Value: "test-uuid"},
	}

	// Mock middleware.GetUsername
	c.Set("username", "testuser")

	handler.ExportConfigCSV(c)

	// Check status code
	if w.Code != http.StatusOK {
		t.Errorf("Expected status 200, got %d", w.Code)
	}

	// Check Content-Type header
	contentType := w.Header().Get("Content-Type")
	if contentType != "text/csv; charset=utf-8" {
		t.Errorf("Expected Content-Type 'text/csv; charset=utf-8', got %q", contentType)
	}

	// Check for BOM
	responseBody := w.Body.Bytes()
	if len(responseBody) < 3 {
		t.Fatalf("Response too short to contain BOM")
	}

	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
	actualBOM := responseBody[:3]
	if !bytes.Equal(actualBOM, expectedBOM) {
		t.Errorf("UTF-8 BOM mismatch")
	}
}

func TestExportConfigCSV_NotFound(t *testing.T) {
	gin.SetMode(gin.TestMode)

	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{err: errors.New("config not found")},
		&services.ComponentService{},
		nil,
	)

	req, _ := http.NewRequest("GET", "/api/configs/nonexistent-uuid/export", nil)
	w := httptest.NewRecorder()

	c, _ := gin.CreateTestContext(w)
	c.Request = req
	c.Params = gin.Params{
		{Key: "uuid", Value: "nonexistent-uuid"},
	}
	c.Set("username", "testuser")

	handler.ExportConfigCSV(c)

	// Should return 404 Not Found
	if w.Code != http.StatusNotFound {
		t.Errorf("Expected status 404, got %d", w.Code)
	}

	// Should return JSON error
	var errResp map[string]interface{}
	_ = json.Unmarshal(w.Body.Bytes(), &errResp)
	if _, hasError := errResp["error"]; !hasError {
		t.Errorf("Expected error in JSON response")
	}
}

func TestExportConfigCSV_EmptyItems(t *testing.T) {
	gin.SetMode(gin.TestMode)

	// Mock configuration with empty items
	mockConfig := &models.Configuration{
		UUID:          "test-uuid",
		Name:          "Empty Config",
		OwnerUsername: "testuser",
		Items:         models.ConfigItems{},
		CreatedAt:     time.Now(),
	}

	exportSvc := services.NewExportService(config.ExportConfig{}, nil)
	handler := NewExportHandler(
		exportSvc,
		&mockConfigService{config: mockConfig},
		&services.ComponentService{},
		nil,
	)

	req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
	w := httptest.NewRecorder()

	c, _ := gin.CreateTestContext(w)
	c.Request = req
	c.Params = gin.Params{
		{Key: "uuid", Value: "test-uuid"},
	}
	c.Set("username", "testuser")

	handler.ExportConfigCSV(c)

	// Should return 400 Bad Request
	if w.Code != http.StatusBadRequest {
		t.Errorf("Expected status 400, got %d", w.Code)
	}

	// Should return JSON error
	var errResp map[string]interface{}
	_ = json.Unmarshal(w.Body.Bytes(), &errResp)
	if _, hasError := errResp["error"]; !hasError {
		t.Errorf("Expected error in JSON response")
	}
}
@@ -1,99 +1,94 @@
 package handlers
 
 import (
-	"errors"
-	"fmt"
-	"io"
 	"net/http"
+	"sort"
 	"strconv"
+	"strings"
 
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricelist"
 	"github.com/gin-gonic/gin"
 )
 
 type PricelistHandler struct {
-	service *pricelist.Service
 	localDB *localdb.LocalDB
 }
 
-func NewPricelistHandler(service *pricelist.Service, localDB *localdb.LocalDB) *PricelistHandler {
-	return &PricelistHandler{service: service, localDB: localDB}
+func NewPricelistHandler(localDB *localdb.LocalDB) *PricelistHandler {
+	return &PricelistHandler{localDB: localDB}
 }
 
-// List returns all pricelists with pagination
+// List returns all pricelists with pagination.
 func (h *PricelistHandler) List(c *gin.Context) {
 	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
 	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
-	activeOnly := c.DefaultQuery("active_only", "false") == "true"
-	source := c.Query("source")
-
-	var (
-		pricelists any
-		total      int64
-		err        error
-	)
-
-	if activeOnly {
-		pricelists, total, err = h.service.ListActiveBySource(page, perPage, source)
-	} else {
-		pricelists, total, err = h.service.ListBySource(page, perPage, source)
-	}
+	if page < 1 {
+		page = 1
+	}
+	if perPage < 1 {
+		perPage = 20
+	}
+	source := c.Query("source")
+	activeOnly := c.DefaultQuery("active_only", "false") == "true"
+
+	localPLs, err := h.localDB.GetLocalPricelists()
 	if err != nil {
 		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 		return
 	}
-
-	// If offline (empty list), fallback to local pricelists
-	if total == 0 && h.localDB != nil {
-		localPLs, err := h.localDB.GetLocalPricelists()
-		if err == nil && len(localPLs) > 0 {
-			if source != "" {
-				filtered := localPLs[:0]
-				for _, lpl := range localPLs {
-					if lpl.Source == source {
-						filtered = append(filtered, lpl)
-					}
-				}
-				localPLs = filtered
-			}
-			// Convert to PricelistSummary format
-			summaries := make([]map[string]interface{}, len(localPLs))
-			for i, lpl := range localPLs {
-				summaries[i] = map[string]interface{}{
-					"id":          lpl.ServerID,
-					"source":      lpl.Source,
-					"version":     lpl.Version,
-					"created_by":  "sync",
-					"item_count":  0, // Not tracked
-					"usage_count": 0, // Not tracked in local
-					"is_active":   true,
-					"created_at":  lpl.CreatedAt,
-					"synced_from": "local",
-				}
-			}
-
-			c.JSON(http.StatusOK, gin.H{
-				"pricelists": summaries,
-				"total":      len(summaries),
-				"page":       page,
-				"per_page":   perPage,
-				"offline":    true,
-			})
-			return
-		}
-	}
+	if source != "" {
+		filtered := localPLs[:0]
+		for _, lpl := range localPLs {
+			if strings.EqualFold(lpl.Source, source) {
+				filtered = append(filtered, lpl)
+			}
+		}
+		localPLs = filtered
+	}
+	if activeOnly {
+		// Local cache stores only active snapshots for normal operations.
+	}
+	sort.SliceStable(localPLs, func(i, j int) bool { return localPLs[i].CreatedAt.After(localPLs[j].CreatedAt) })
+	total := len(localPLs)
+	start := (page - 1) * perPage
+	if start > total {
+		start = total
+	}
+	end := start + perPage
+	if end > total {
+		end = total
+	}
+	pageSlice := localPLs[start:end]
+	summaries := make([]map[string]interface{}, 0, len(pageSlice))
+	for _, lpl := range pageSlice {
+		itemCount := h.localDB.CountLocalPricelistItems(lpl.ID)
+		usageCount := 0
+		if lpl.IsUsed {
+			usageCount = 1
+		}
+		summaries = append(summaries, map[string]interface{}{
+			"id":          lpl.ServerID,
+			"source":      lpl.Source,
+			"version":     lpl.Version,
+			"created_by":  "sync",
+			"item_count":  itemCount,
+			"usage_count": usageCount,
+			"is_active":   true,
+			"created_at":  lpl.CreatedAt,
+			"synced_from": "local",
+		})
 	}
 
 	c.JSON(http.StatusOK, gin.H{
|
c.JSON(http.StatusOK, gin.H{
|
||||||
"pricelists": pricelists,
|
"pricelists": summaries,
|
||||||
"total": total,
|
"total": total,
|
||||||
"page": page,
|
"page": page,
|
||||||
"per_page": perPage,
|
"per_page": perPage,
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
// Get returns a single pricelist by ID
|
// Get returns a single pricelist by ID.
|
||||||
func (h *PricelistHandler) Get(c *gin.Context) {
|
func (h *PricelistHandler) Get(c *gin.Context) {
|
||||||
idStr := c.Param("id")
|
idStr := c.Param("id")
|
||||||
id, err := strconv.ParseUint(idStr, 10, 32)
|
id, err := strconv.ParseUint(idStr, 10, 32)
|
||||||
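The new `List` above clamps `page`/`per_page` and then pages over the in-memory cached slice by clamping `start` and `end` against the total. A minimal standalone sketch of that bounds-safe slicing (the `paginate` helper and its inputs are hypothetical, not part of the repository):

```go
package main

import "fmt"

// paginate mirrors the start/end clamping used in List: an out-of-range
// page yields an empty slice instead of panicking on slice bounds.
func paginate(items []string, page, perPage int) []string {
	if page < 1 {
		page = 1
	}
	if perPage < 1 {
		perPage = 20
	}
	total := len(items)
	start := (page - 1) * perPage
	if start > total {
		start = total
	}
	end := start + perPage
	if end > total {
		end = total
	}
	return items[start:end]
}

func main() {
	items := []string{"a", "b", "c", "d", "e"}
	fmt.Println(paginate(items, 2, 2)) // → [c d]
	fmt.Println(paginate(items, 9, 2)) // → []
}
```

Clamping `start` to `total` is what keeps `items[start:end]` legal even when the client asks for a page past the end.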
```diff
@@ -102,210 +97,25 @@ func (h *PricelistHandler) Get(c *gin.Context) {
         return
     }
 
-    pl, err := h.service.GetByID(uint(id))
+    localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
     if err != nil {
         c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
         return
     }
 
-    c.JSON(http.StatusOK, pl)
-}
-
-// Create creates a new pricelist from current prices
-func (h *PricelistHandler) Create(c *gin.Context) {
-    canWrite, debugInfo := h.service.CanWriteDebug()
-    if !canWrite {
-        c.JSON(http.StatusForbidden, gin.H{
-            "error": "pricelist write is not allowed",
-            "debug": debugInfo,
-        })
-        return
-    }
-
-    var req struct {
-        Source string `json:"source"`
-        Items  []struct {
-            LotName string  `json:"lot_name"`
-            Price   float64 `json:"price"`
-        } `json:"items"`
-    }
-    if err := c.ShouldBindJSON(&req); err != nil && !errors.Is(err, io.EOF) {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-    source := string(models.NormalizePricelistSource(req.Source))
-
-    // Get the database username as the creator
-    createdBy := h.localDB.GetDBUser()
-    if createdBy == "" {
-        createdBy = "unknown"
-    }
-    sourceItems := make([]pricelist.CreateItemInput, 0, len(req.Items))
-    for _, item := range req.Items {
-        sourceItems = append(sourceItems, pricelist.CreateItemInput{
-            LotName: item.LotName,
-            Price:   item.Price,
-        })
-    }
-
-    pl, err := h.service.CreateForSourceWithProgress(createdBy, source, sourceItems, nil)
-    if err != nil {
-        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-        return
-    }
-
-    c.JSON(http.StatusCreated, pl)
-}
-
-// CreateWithProgress creates a pricelist and streams progress updates over SSE.
-func (h *PricelistHandler) CreateWithProgress(c *gin.Context) {
-    canWrite, debugInfo := h.service.CanWriteDebug()
-    if !canWrite {
-        c.JSON(http.StatusForbidden, gin.H{
-            "error": "pricelist write is not allowed",
-            "debug": debugInfo,
-        })
-        return
-    }
-
-    var req struct {
-        Source string `json:"source"`
-        Items  []struct {
-            LotName string  `json:"lot_name"`
-            Price   float64 `json:"price"`
-        } `json:"items"`
-    }
-    if err := c.ShouldBindJSON(&req); err != nil && !errors.Is(err, io.EOF) {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-    source := string(models.NormalizePricelistSource(req.Source))
-
-    createdBy := h.localDB.GetDBUser()
-    if createdBy == "" {
-        createdBy = "unknown"
-    }
-    sourceItems := make([]pricelist.CreateItemInput, 0, len(req.Items))
-    for _, item := range req.Items {
-        sourceItems = append(sourceItems, pricelist.CreateItemInput{
-            LotName: item.LotName,
-            Price:   item.Price,
-        })
-    }
-
-    c.Header("Content-Type", "text/event-stream")
-    c.Header("Cache-Control", "no-cache")
-    c.Header("Connection", "keep-alive")
-    c.Header("X-Accel-Buffering", "no")
-
-    flusher, ok := c.Writer.(http.Flusher)
-    if !ok {
-        pl, err := h.service.CreateForSourceWithProgress(createdBy, source, sourceItems, nil)
-        if err != nil {
-            c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-            return
-        }
-        c.JSON(http.StatusCreated, pl)
-        return
-    }
-
-    sendProgress := func(payload gin.H) {
-        c.SSEvent("progress", payload)
-        flusher.Flush()
-    }
-
-    sendProgress(gin.H{"current": 0, "total": 4, "status": "starting", "message": "Запуск..."})
-    pl, err := h.service.CreateForSourceWithProgress(createdBy, source, sourceItems, func(p pricelist.CreateProgress) {
-        sendProgress(gin.H{
-            "current":  p.Current,
-            "total":    p.Total,
-            "status":   p.Status,
-            "message":  p.Message,
-            "updated":  p.Updated,
-            "errors":   p.Errors,
-            "lot_name": p.LotName,
-        })
-    })
-    if err != nil {
-        sendProgress(gin.H{
-            "current": 0,
-            "total":   4,
-            "status":  "error",
-            "message": fmt.Sprintf("Ошибка: %v", err),
-        })
-        return
-    }
-
-    sendProgress(gin.H{
-        "current":   4,
-        "total":     4,
-        "status":    "completed",
-        "message":   "Готово",
-        "pricelist": pl,
-    })
+    c.JSON(http.StatusOK, gin.H{
+        "id":          localPL.ServerID,
+        "source":      localPL.Source,
+        "version":     localPL.Version,
+        "created_by":  "sync",
+        "item_count":  h.localDB.CountLocalPricelistItems(localPL.ID),
+        "is_active":   true,
+        "created_at":  localPL.CreatedAt,
+        "synced_from": "local",
+    })
 }
 
-// Delete deletes a pricelist by ID
-func (h *PricelistHandler) Delete(c *gin.Context) {
-    canWrite, debugInfo := h.service.CanWriteDebug()
-    if !canWrite {
-        c.JSON(http.StatusForbidden, gin.H{
-            "error": "pricelist write is not allowed",
-            "debug": debugInfo,
-        })
-        return
-    }
-
-    idStr := c.Param("id")
-    id, err := strconv.ParseUint(idStr, 10, 32)
-    if err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
-        return
-    }
-
-    if err := h.service.Delete(uint(id)); err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-
-    c.JSON(http.StatusOK, gin.H{"message": "pricelist deleted"})
-}
-
-// SetActive toggles active flag on a pricelist.
-func (h *PricelistHandler) SetActive(c *gin.Context) {
-    canWrite, debugInfo := h.service.CanWriteDebug()
-    if !canWrite {
-        c.JSON(http.StatusForbidden, gin.H{
-            "error": "pricelist write is not allowed",
-            "debug": debugInfo,
-        })
-        return
-    }
-
-    idStr := c.Param("id")
-    id, err := strconv.ParseUint(idStr, 10, 32)
-    if err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
-        return
-    }
-
-    var req struct {
-        IsActive bool `json:"is_active"`
-    }
-    if err := c.ShouldBindJSON(&req); err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-
-    if err := h.service.SetActive(uint(id), req.IsActive); err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-
-    c.JSON(http.StatusOK, gin.H{"message": "updated", "is_active": req.IsActive})
-}
-
-// GetItems returns items for a pricelist with pagination
+// GetItems returns items for a pricelist with pagination.
 func (h *PricelistHandler) GetItems(c *gin.Context) {
     idStr := c.Param("id")
     id, err := strconv.ParseUint(idStr, 10, 32)
```
```diff
@@ -318,67 +128,102 @@ func (h *PricelistHandler) GetItems(c *gin.Context) {
     perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "50"))
     search := c.Query("search")
 
-    items, total, err := h.service.GetItems(uint(id), page, perPage, search)
+    localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
     if err != nil {
+        c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
+        return
+    }
+    if page < 1 {
+        page = 1
+    }
+    if perPage < 1 {
+        perPage = 50
+    }
+    var items []localdb.LocalPricelistItem
+    dbq := h.localDB.DB().Model(&localdb.LocalPricelistItem{}).Where("pricelist_id = ?", localPL.ID)
+    if strings.TrimSpace(search) != "" {
+        dbq = dbq.Where("lot_name LIKE ?", "%"+strings.TrimSpace(search)+"%")
+    }
+    var total int64
+    if err := dbq.Count(&total).Error; err != nil {
         c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
         return
     }
-    pl, _ := h.service.GetByID(uint(id))
-    source := ""
-    if pl != nil {
-        source = pl.Source
-    }
+    offset := (page - 1) * perPage
+
+    if err := dbq.Order("lot_name").Offset(offset).Limit(perPage).Find(&items).Error; err != nil {
+        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+        return
+    }
+    resultItems := make([]gin.H, 0, len(items))
+    for _, item := range items {
+        resultItems = append(resultItems, gin.H{
+            "id":            item.ID,
+            "lot_name":      item.LotName,
+            "price":         item.Price,
+            "category":      item.LotCategory,
+            "available_qty": item.AvailableQty,
+            "partnumbers":   []string(item.Partnumbers),
+        })
+    }
 
     c.JSON(http.StatusOK, gin.H{
-        "source":   source,
-        "items":    items,
+        "source":   localPL.Source,
+        "items":    resultItems,
         "total":    total,
         "page":     page,
         "per_page": perPage,
     })
 }
 
-// CanWrite returns whether the current user can create pricelists
-func (h *PricelistHandler) CanWrite(c *gin.Context) {
-    canWrite, debugInfo := h.service.CanWriteDebug()
-    c.JSON(http.StatusOK, gin.H{"can_write": canWrite, "debug": debugInfo})
-}
+func (h *PricelistHandler) GetLotNames(c *gin.Context) {
+    idStr := c.Param("id")
+    id, err := strconv.ParseUint(idStr, 10, 32)
+    if err != nil {
+        c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
+        return
+    }
+
+    localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
+    if err != nil {
+        c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
+        return
+    }
+    items, err := h.localDB.GetLocalPricelistItems(localPL.ID)
+    if err != nil {
+        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+        return
+    }
+    lotNames := make([]string, 0, len(items))
+    for _, item := range items {
+        lotNames = append(lotNames, item.LotName)
+    }
+    sort.Strings(lotNames)
+
+    c.JSON(http.StatusOK, gin.H{
+        "lot_names": lotNames,
+        "total":     len(lotNames),
+    })
+}
 
-// GetLatest returns the most recent active pricelist
+// GetLatest returns the most recent active pricelist.
 func (h *PricelistHandler) GetLatest(c *gin.Context) {
     source := c.DefaultQuery("source", string(models.PricelistSourceEstimate))
     source = string(models.NormalizePricelistSource(source))
 
-    // Try to get from server first
-    pl, err := h.service.GetLatestActiveBySource(source)
+    localPL, err := h.localDB.GetLatestLocalPricelistBySource(source)
     if err != nil {
-        // If offline or no server pricelists, try to get from local cache
-        if h.localDB == nil {
-            c.JSON(http.StatusNotFound, gin.H{"error": "no database available"})
-            return
-        }
-        localPL, localErr := h.localDB.GetLatestLocalPricelistBySource(source)
-        if localErr != nil {
-            // No local pricelists either
-            c.JSON(http.StatusNotFound, gin.H{
-                "error":       "no pricelists available",
-                "local_error": localErr.Error(),
-            })
-            return
-        }
-        // Return local pricelist
-        c.JSON(http.StatusOK, gin.H{
-            "id":          localPL.ServerID,
-            "source":      localPL.Source,
-            "version":     localPL.Version,
-            "created_by":  "sync",
-            "item_count":  0, // Not tracked in local pricelists
-            "is_active":   true,
-            "created_at":  localPL.CreatedAt,
-            "synced_from": "local",
-        })
+        c.JSON(http.StatusNotFound, gin.H{"error": "no pricelists available"})
         return
     }
 
-    c.JSON(http.StatusOK, pl)
+    c.JSON(http.StatusOK, gin.H{
+        "id":          localPL.ServerID,
+        "source":      localPL.Source,
+        "version":     localPL.Version,
+        "created_by":  "sync",
+        "item_count":  h.localDB.CountLocalPricelistItems(localPL.ID),
+        "is_active":   true,
+        "created_at":  localPL.CreatedAt,
+        "synced_from": "local",
+    })
 }
```
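`GetItems` above filters items with a `lot_name LIKE '%search%'` clause before counting and paging. A small in-memory sketch of that substring match (SQLite's `LIKE` is case-insensitive for ASCII, so case is folded here too; the `filterByName` helper and the sample names are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// filterByName keeps the names whose lowercase form contains the
// trimmed, lowercased search string; a blank search matches everything,
// mirroring the handler's TrimSpace guard around the LIKE clause.
func filterByName(names []string, search string) []string {
	search = strings.ToLower(strings.TrimSpace(search))
	if search == "" {
		return names
	}
	out := make([]string, 0, len(names))
	for _, n := range names {
		if strings.Contains(strings.ToLower(n), search) {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	names := []string{"CPU_XEON_4314", "RAM_DDR4_32G", "cpu_epyc_7543"}
	fmt.Println(filterByName(names, "cpu")) // → [CPU_XEON_4314 cpu_epyc_7543]
}
```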
internal/handlers/pricelist_test.go — new file (+84)

```diff
@@ -0,0 +1,84 @@
+package handlers
+
+import (
+    "encoding/json"
+    "net/http"
+    "net/http/httptest"
+    "path/filepath"
+    "testing"
+    "time"
+
+    "git.mchus.pro/mchus/quoteforge/internal/localdb"
+    "github.com/gin-gonic/gin"
+)
+
+func TestPricelistGetItems_ReturnsLotCategoryFromLocalPricelistItems(t *testing.T) {
+    gin.SetMode(gin.TestMode)
+
+    local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
+    if err != nil {
+        t.Fatalf("init local db: %v", err)
+    }
+    t.Cleanup(func() { _ = local.Close() })
+
+    if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
+        ServerID:  1,
+        Source:    "estimate",
+        Version:   "S-2026-02-11-001",
+        Name:      "test",
+        CreatedAt: time.Now(),
+        SyncedAt:  time.Now(),
+        IsUsed:    false,
+    }); err != nil {
+        t.Fatalf("save local pricelist: %v", err)
+    }
+    localPL, err := local.GetLocalPricelistByServerID(1)
+    if err != nil {
+        t.Fatalf("get local pricelist: %v", err)
+    }
+    if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
+        {
+            PricelistID: localPL.ID,
+            LotName:     "NO_UNDERSCORE_NAME",
+            LotCategory: "CPU",
+            Price:       10,
+        },
+    }); err != nil {
+        t.Fatalf("save local pricelist items: %v", err)
+    }
+
+    h := NewPricelistHandler(local)
+
+    req, _ := http.NewRequest("GET", "/api/pricelists/1/items?page=1&per_page=50", nil)
+    w := httptest.NewRecorder()
+    c, _ := gin.CreateTestContext(w)
+    c.Request = req
+    c.Params = gin.Params{{Key: "id", Value: "1"}}
+
+    h.GetItems(c)
+
+    if w.Code != http.StatusOK {
+        t.Fatalf("expected status 200, got %d: %s", w.Code, w.Body.String())
+    }
+
+    var resp struct {
+        Items []struct {
+            LotName   string `json:"lot_name"`
+            Category  string `json:"category"`
+            UnitPrice any    `json:"price"`
+        } `json:"items"`
+    }
+    if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
+        t.Fatalf("unmarshal response: %v", err)
+    }
+    if len(resp.Items) != 1 {
+        t.Fatalf("expected 1 item, got %d", len(resp.Items))
+    }
+    if resp.Items[0].LotName != "NO_UNDERSCORE_NAME" {
+        t.Fatalf("expected lot_name NO_UNDERSCORE_NAME, got %q", resp.Items[0].LotName)
+    }
+    if resp.Items[0].Category != "CPU" {
+        t.Fatalf("expected category CPU, got %q", resp.Items[0].Category)
+    }
+}
```
```diff
@@ -1,938 +0,0 @@
-package handlers
-
-import (
-    "net/http"
-    "sort"
-    "strconv"
-    "strings"
-    "time"
-
-    "github.com/gin-gonic/gin"
-    "git.mchus.pro/mchus/quoteforge/internal/models"
-    "git.mchus.pro/mchus/quoteforge/internal/repository"
-    "git.mchus.pro/mchus/quoteforge/internal/services/alerts"
-    "git.mchus.pro/mchus/quoteforge/internal/services/pricing"
-    "gorm.io/gorm"
-)
-
-// calculateMedian returns the median of a sorted slice of prices
-func calculateMedian(prices []float64) float64 {
-    if len(prices) == 0 {
-        return 0
-    }
-    sort.Float64s(prices)
-    n := len(prices)
-    if n%2 == 0 {
-        return (prices[n/2-1] + prices[n/2]) / 2
-    }
-    return prices[n/2]
-}
-
-// calculateAverage returns the arithmetic mean of prices
-func calculateAverage(prices []float64) float64 {
-    if len(prices) == 0 {
-        return 0
-    }
-    var sum float64
-    for _, p := range prices {
-        sum += p
-    }
-    return sum / float64(len(prices))
-}
-
```
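The removed `calculateMedian` takes the middle element after sorting, or the mean of the two middle elements for even-length input. A self-contained sketch of the same behavior (unlike the removed helper, this copy avoids mutating the caller's slice; that copy is an addition for illustration):

```go
package main

import (
	"fmt"
	"sort"
)

// median sorts a copy of the input and returns the middle element,
// averaging the two central values when the length is even.
// Empty input yields 0, matching the removed helper.
func median(prices []float64) float64 {
	if len(prices) == 0 {
		return 0
	}
	sorted := append([]float64(nil), prices...) // copy before sorting
	sort.Float64s(sorted)
	n := len(sorted)
	if n%2 == 0 {
		return (sorted[n/2-1] + sorted[n/2]) / 2
	}
	return sorted[n/2]
}

func main() {
	fmt.Println(median([]float64{30, 10, 20}))     // → 20
	fmt.Println(median([]float64{40, 10, 20, 30})) // → 25
}
```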
```diff
-type PricingHandler struct {
-    db             *gorm.DB
-    pricingService *pricing.Service
-    alertService   *alerts.Service
-    componentRepo  *repository.ComponentRepository
-    priceRepo      *repository.PriceRepository
-    statsRepo      *repository.StatsRepository
-}
-
-func NewPricingHandler(
-    db *gorm.DB,
-    pricingService *pricing.Service,
-    alertService *alerts.Service,
-    componentRepo *repository.ComponentRepository,
-    priceRepo *repository.PriceRepository,
-    statsRepo *repository.StatsRepository,
-) *PricingHandler {
-    return &PricingHandler{
-        db:             db,
-        pricingService: pricingService,
-        alertService:   alertService,
-        componentRepo:  componentRepo,
-        priceRepo:      priceRepo,
-        statsRepo:      statsRepo,
-    }
-}
-
-func (h *PricingHandler) GetStats(c *gin.Context) {
-    // Check if we're in offline mode
-    if h.statsRepo == nil || h.alertService == nil {
-        c.JSON(http.StatusOK, gin.H{
-            "new_alerts_count":    0,
-            "top_components":      []interface{}{},
-            "trending_components": []interface{}{},
-            "offline":             true,
-        })
-        return
-    }
-
-    newAlerts, _ := h.alertService.GetNewAlertsCount()
-    topComponents, _ := h.statsRepo.GetTopComponents(10)
-    trendingComponents, _ := h.statsRepo.GetTrendingComponents(10)
-
-    c.JSON(http.StatusOK, gin.H{
-        "new_alerts_count":    newAlerts,
-        "top_components":      topComponents,
-        "trending_components": trendingComponents,
-    })
-}
-
-type ComponentWithCount struct {
-    models.LotMetadata
-    QuoteCount int64    `json:"quote_count"`
-    UsedInMeta []string `json:"used_in_meta,omitempty"` // List of meta-articles that use this component
-}
-
-func (h *PricingHandler) ListComponents(c *gin.Context) {
-    // Check if we're in offline mode
-    if h.componentRepo == nil {
-        c.JSON(http.StatusOK, gin.H{
-            "components": []ComponentWithCount{},
-            "total":      0,
-            "page":       1,
-            "per_page":   20,
-            "offline":    true,
-            "message":    "Управление ценами доступно только в онлайн режиме",
-        })
-        return
-    }
-
-    page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
-    perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
-
-    filter := repository.ComponentFilter{
-        Category:  c.Query("category"),
-        Search:    c.Query("search"),
-        SortField: c.Query("sort"),
-        SortDir:   c.Query("dir"),
-    }
-
-    if page < 1 {
-        page = 1
-    }
-    if perPage < 1 || perPage > 100 {
-        perPage = 20
-    }
-    offset := (page - 1) * perPage
-
-    components, total, err := h.componentRepo.List(filter, offset, perPage)
-    if err != nil {
-        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-        return
-    }
-
-    // Get quote counts
-    lotNames := make([]string, len(components))
-    for i, comp := range components {
-        lotNames[i] = comp.LotName
-    }
-
-    counts, _ := h.priceRepo.GetQuoteCounts(lotNames)
-
-    // Get meta usage information
-    metaUsage := h.getMetaUsageMap(lotNames)
-
-    // Combine components with counts
-    result := make([]ComponentWithCount, len(components))
-    for i, comp := range components {
-        result[i] = ComponentWithCount{
-            LotMetadata: comp,
-            QuoteCount:  counts[comp.LotName],
-            UsedInMeta:  metaUsage[comp.LotName],
-        }
-    }
-
-    c.JSON(http.StatusOK, gin.H{
-        "components": result,
-        "total":      total,
-        "page":       page,
-        "per_page":   perPage,
-    })
-}
-
-// getMetaUsageMap returns a map of lot_name -> list of meta-articles that use this component
-func (h *PricingHandler) getMetaUsageMap(lotNames []string) map[string][]string {
-    result := make(map[string][]string)
-
-    // Get all components with meta_prices
-    var metaComponents []models.LotMetadata
-    h.db.Where("meta_prices IS NOT NULL AND meta_prices != ''").Find(&metaComponents)
-
-    // Build reverse lookup: which components are used in which meta-articles
-    for _, meta := range metaComponents {
-        sources := strings.Split(meta.MetaPrices, ",")
-        for _, source := range sources {
-            source = strings.TrimSpace(source)
-            if source == "" {
-                continue
-            }
-
-            // Handle wildcard patterns
-            if strings.HasSuffix(source, "*") {
-                prefix := strings.TrimSuffix(source, "*")
-                for _, lotName := range lotNames {
-                    if strings.HasPrefix(lotName, prefix) && lotName != meta.LotName {
-                        result[lotName] = append(result[lotName], meta.LotName)
-                    }
-                }
-            } else {
-                // Direct match
-                for _, lotName := range lotNames {
-                    if lotName == source && lotName != meta.LotName {
-                        result[lotName] = append(result[lotName], meta.LotName)
-                    }
-                }
-            }
-        }
-    }
-
-    return result
-}
-
-// expandMetaPrices expands meta_prices string to list of actual lot names
-func (h *PricingHandler) expandMetaPrices(metaPrices, excludeLot string) []string {
-    sources := strings.Split(metaPrices, ",")
-    var result []string
-    seen := make(map[string]bool)
-
-    for _, source := range sources {
-        source = strings.TrimSpace(source)
-        if source == "" {
-            continue
-        }
-
-        if strings.HasSuffix(source, "*") {
-            // Wildcard pattern - find matching lots
-            prefix := strings.TrimSuffix(source, "*")
-            var matchingLots []string
-            h.db.Model(&models.LotMetadata{}).
-                Where("lot_name LIKE ? AND lot_name != ?", prefix+"%", excludeLot).
-                Pluck("lot_name", &matchingLots)
-            for _, lot := range matchingLots {
-                if !seen[lot] {
-                    result = append(result, lot)
-                    seen[lot] = true
-                }
-            }
-        } else if source != excludeLot && !seen[source] {
-            result = append(result, source)
-            seen[source] = true
-        }
-    }
-
-    return result
-}
-
-func (h *PricingHandler) GetComponentPricing(c *gin.Context) {
-    // Check if we're in offline mode
-    if h.componentRepo == nil || h.pricingService == nil {
-        c.JSON(http.StatusServiceUnavailable, gin.H{
-            "error":   "Управление ценами доступно только в онлайн режиме",
-            "offline": true,
-        })
-        return
-    }
-
-    lotName := c.Param("lot_name")
-
-    component, err := h.componentRepo.GetByLotName(lotName)
-    if err != nil {
-        c.JSON(http.StatusNotFound, gin.H{"error": "component not found"})
-        return
-    }
-
-    stats, err := h.pricingService.GetPriceStats(lotName, 0)
-    if err != nil {
-        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-        return
-    }
-
-    c.JSON(http.StatusOK, gin.H{
-        "component":   component,
-        "price_stats": stats,
-    })
-}
-
-type UpdatePriceRequest struct {
-    LotName     string             `json:"lot_name" binding:"required"`
-    Method      models.PriceMethod `json:"method"`
-    PeriodDays  int                `json:"period_days"`
-    Coefficient float64            `json:"coefficient"`
-    ManualPrice *float64           `json:"manual_price"`
-    ClearManual bool               `json:"clear_manual"`
-    MetaEnabled bool               `json:"meta_enabled"`
-    MetaPrices  string             `json:"meta_prices"`
-    MetaMethod  string             `json:"meta_method"`
-    MetaPeriod  int                `json:"meta_period"`
-    IsHidden    bool               `json:"is_hidden"`
-}
-
-func (h *PricingHandler) UpdatePrice(c *gin.Context) {
-    // Check if we're in offline mode
-    if h.db == nil {
-        c.JSON(http.StatusServiceUnavailable, gin.H{
-            "error":   "Обновление цен доступно только в онлайн режиме",
-            "offline": true,
-        })
-        return
-    }
-
-    var req UpdatePriceRequest
-    if err := c.ShouldBindJSON(&req); err != nil {
-        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-        return
-    }
-
-    updates := map[string]interface{}{}
-
-    // Update method if specified
-    if req.Method != "" {
-        updates["price_method"] = req.Method
-    }
-
-    // Update period days
-    if req.PeriodDays >= 0 {
-        updates["price_period_days"] = req.PeriodDays
-    }
-
-    // Update coefficient
-    updates["price_coefficient"] = req.Coefficient
-
-    // Handle meta prices
-    if req.MetaEnabled && req.MetaPrices != "" {
-        updates["meta_prices"] = req.MetaPrices
-    } else {
-        updates["meta_prices"] = ""
-    }
-
-    // Handle hidden flag
-    updates["is_hidden"] = req.IsHidden
-
-    // Handle manual price
-    if req.ClearManual {
-        updates["manual_price"] = nil
-    } else if req.ManualPrice != nil {
-        updates["manual_price"] = *req.ManualPrice
-        // Also update current price immediately when setting manual
-        updates["current_price"] = *req.ManualPrice
-        updates["price_updated_at"] = time.Now()
-    }
-
-    err := h.db.Model(&models.LotMetadata{}).
-        Where("lot_name = ?", req.LotName).
-        Updates(updates).Error
-
-    if err != nil {
-        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-        return
-    }
-
-    // Recalculate price if not using manual price
-    if req.ManualPrice == nil {
-        h.recalculateSinglePrice(req.LotName)
-    }
-
-    // Get updated component to return new price
-    var comp models.LotMetadata
-    h.db.Where("lot_name = ?", req.LotName).First(&comp)
-
-    c.JSON(http.StatusOK, gin.H{
-        "message":       "price updated",
-        "current_price": comp.CurrentPrice,
-    })
-}
-
-func (h *PricingHandler) recalculateSinglePrice(lotName string) {
-    var comp models.LotMetadata
-    if err := h.db.Where("lot_name = ?", lotName).First(&comp).Error; err != nil {
-        return
-    }
-
-    // Skip if manual price is set
-    if comp.ManualPrice != nil && *comp.ManualPrice > 0 {
-        return
-    }
-
-    periodDays := comp.PricePeriodDays
-    method := comp.PriceMethod
-    if method == "" {
-        method = models.PriceMethodMedian
-    }
-
-    // Determine which lot names to use for price calculation
-    lotNames := []string{lotName}
-    if comp.MetaPrices != "" {
-        lotNames = h.expandMetaPrices(comp.MetaPrices, lotName)
-    }
-
-    // Get prices based on period from all relevant lots
-    var prices []float64
-    for _, ln := range lotNames {
-        var lotPrices []float64
-        if strings.HasSuffix(ln, "*") {
-            pattern := strings.TrimSuffix(ln, "*") + "%"
-            if periodDays > 0 {
-                h.db.Raw(`SELECT price FROM lot_log WHERE lot LIKE ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
-                    pattern, periodDays).Pluck("price", &lotPrices)
-            } else {
-                h.db.Raw(`SELECT price FROM lot_log WHERE lot LIKE ? ORDER BY price`, pattern).Pluck("price", &lotPrices)
-            }
-        } else {
-            if periodDays > 0 {
-                h.db.Raw(`SELECT price FROM lot_log WHERE lot = ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
-                    ln, periodDays).Pluck("price", &lotPrices)
-            } else {
-                h.db.Raw(`SELECT price FROM lot_log WHERE lot = ? ORDER BY price`, ln).Pluck("price", &lotPrices)
-            }
```
|
||||||
}
|
|
||||||
prices = append(prices, lotPrices...)
|
|
||||||
}
|
|
||||||
|
|
||||||
// If no prices in period, try all time
|
|
||||||
if len(prices) == 0 && periodDays > 0 {
|
|
||||||
for _, ln := range lotNames {
|
|
||||||
var lotPrices []float64
|
|
||||||
if strings.HasSuffix(ln, "*") {
|
|
||||||
pattern := strings.TrimSuffix(ln, "*") + "%"
|
|
||||||
h.db.Raw(`SELECT price FROM lot_log WHERE lot LIKE ? ORDER BY price`, pattern).Pluck("price", &lotPrices)
|
|
||||||
} else {
|
|
||||||
h.db.Raw(`SELECT price FROM lot_log WHERE lot = ? ORDER BY price`, ln).Pluck("price", &lotPrices)
|
|
||||||
}
|
|
||||||
prices = append(prices, lotPrices...)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(prices) == 0 {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Calculate price based on method
|
|
||||||
sortFloat64s(prices)
|
|
||||||
var finalPrice float64
|
|
||||||
switch method {
|
|
||||||
case models.PriceMethodMedian:
|
|
||||||
finalPrice = calculateMedian(prices)
|
|
||||||
case models.PriceMethodAverage:
|
|
||||||
finalPrice = calculateAverage(prices)
|
|
||||||
default:
|
|
||||||
finalPrice = calculateMedian(prices)
|
|
||||||
}
|
|
||||||
|
|
||||||
if finalPrice <= 0 {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Apply coefficient
|
|
||||||
if comp.PriceCoefficient != 0 {
|
|
||||||
finalPrice = finalPrice * (1 + comp.PriceCoefficient/100)
|
|
||||||
}
|
|
||||||
|
|
||||||
now := time.Now()
|
|
||||||
// Only update price, preserve all user settings
|
|
||||||
h.db.Model(&models.LotMetadata{}).
|
|
||||||
Where("lot_name = ?", lotName).
|
|
||||||
Updates(map[string]interface{}{
|
|
||||||
"current_price": finalPrice,
|
|
||||||
"price_updated_at": now,
|
|
||||||
})
|
|
||||||
}
|
|
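The pricing handlers delegate to `calculateMedian` and `calculateAverage`, which are defined elsewhere in this package and not shown in this excerpt. A minimal sketch of what they are assumed to compute (the callers sort the slice first, so the median can index directly):

```go
package main

import "fmt"

// calculateMedian returns the middle value of an ascending-sorted slice,
// or the mean of the two middle values for an even length.
func calculateMedian(sorted []float64) float64 {
	n := len(sorted)
	if n == 0 {
		return 0
	}
	if n%2 == 1 {
		return sorted[n/2]
	}
	return (sorted[n/2-1] + sorted[n/2]) / 2
}

// calculateAverage returns the arithmetic mean of the slice.
func calculateAverage(prices []float64) float64 {
	if len(prices) == 0 {
		return 0
	}
	sum := 0.0
	for _, p := range prices {
		sum += p
	}
	return sum / float64(len(prices))
}

func main() {
	fmt.Println(calculateMedian([]float64{1, 2, 10})) // 2
	fmt.Println(calculateAverage([]float64{1, 2, 10}))
}
```

Because a single outlier quote barely moves the median, median is the safer default here, which matches the `default:` branch in the switch above.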

func (h *PricingHandler) RecalculateAll(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"error":   "Price recalculation is only available in online mode",
			"offline": true,
		})
		return
	}

	// Set headers for SSE
	c.Header("Content-Type", "text/event-stream")
	c.Header("Cache-Control", "no-cache")
	c.Header("Connection", "keep-alive")

	// Get all components with their settings
	var components []models.LotMetadata
	h.db.Find(&components)
	total := int64(len(components))

	// Pre-load all lot names for efficient wildcard matching
	var allLotNames []string
	h.db.Model(&models.LotMetadata{}).Pluck("lot_name", &allLotNames)
	lotNameSet := make(map[string]bool, len(allLotNames))
	for _, ln := range allLotNames {
		lotNameSet[ln] = true
	}

	// Pre-load latest quote dates for all lots (for checking updates)
	type LotDate struct {
		Lot  string
		Date time.Time
	}
	var latestDates []LotDate
	h.db.Raw(`SELECT lot, MAX(date) as date FROM lot_log GROUP BY lot`).Scan(&latestDates)
	lotLatestDate := make(map[string]time.Time, len(latestDates))
	for _, ld := range latestDates {
		lotLatestDate[ld.Lot] = ld.Date
	}

	// Send initial progress
	c.SSEvent("progress", gin.H{"current": 0, "total": total, "status": "starting"})
	c.Writer.Flush()

	// Process components individually to respect their settings
	var updated, skipped, manual, unchanged, errors int
	now := time.Now()
	progressCounter := 0

	for _, comp := range components {
		progressCounter++

		// If manual price is set, skip recalculation
		if comp.ManualPrice != nil && *comp.ManualPrice > 0 {
			manual++
			goto sendProgress
		}

		// Calculate price based on component's individual settings
		{
			periodDays := comp.PricePeriodDays
			method := comp.PriceMethod
			if method == "" {
				method = models.PriceMethodMedian
			}

			// Determine source lots for price calculation (using cached lot names)
			var sourceLots []string
			if comp.MetaPrices != "" {
				sourceLots = expandMetaPricesWithCache(comp.MetaPrices, comp.LotName, allLotNames)
			} else {
				sourceLots = []string{comp.LotName}
			}

			if len(sourceLots) == 0 {
				skipped++
				goto sendProgress
			}

			// Check if there are new quotes since last update (using cached dates)
			if comp.PriceUpdatedAt != nil {
				hasNewData := false
				for _, lot := range sourceLots {
					if latestDate, ok := lotLatestDate[lot]; ok {
						if latestDate.After(*comp.PriceUpdatedAt) {
							hasNewData = true
							break
						}
					}
				}
				if !hasNewData {
					unchanged++
					goto sendProgress
				}
			}

			// Get prices from source lots
			var prices []float64
			if periodDays > 0 {
				h.db.Raw(`SELECT price FROM lot_log WHERE lot IN ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
					sourceLots, periodDays).Pluck("price", &prices)
			} else {
				h.db.Raw(`SELECT price FROM lot_log WHERE lot IN ? ORDER BY price`,
					sourceLots).Pluck("price", &prices)
			}

			// If no prices in period, try all time
			if len(prices) == 0 && periodDays > 0 {
				h.db.Raw(`SELECT price FROM lot_log WHERE lot IN ? ORDER BY price`, sourceLots).Pluck("price", &prices)
			}

			if len(prices) == 0 {
				skipped++
				goto sendProgress
			}

			// Calculate price based on method
			var basePrice float64
			switch method {
			case models.PriceMethodMedian:
				basePrice = calculateMedian(prices)
			case models.PriceMethodAverage:
				basePrice = calculateAverage(prices)
			default:
				basePrice = calculateMedian(prices)
			}

			if basePrice <= 0 {
				skipped++
				goto sendProgress
			}

			finalPrice := basePrice

			// Apply coefficient
			if comp.PriceCoefficient != 0 {
				finalPrice = finalPrice * (1 + comp.PriceCoefficient/100)
			}

			// Update only price fields
			err := h.db.Model(&models.LotMetadata{}).
				Where("lot_name = ?", comp.LotName).
				Updates(map[string]interface{}{
					"current_price":    finalPrice,
					"price_updated_at": now,
				}).Error
			if err != nil {
				errors++
			} else {
				updated++
			}
		}

	sendProgress:
		// Send progress update every 10 components to reduce overhead
		if progressCounter%10 == 0 || progressCounter == int(total) {
			c.SSEvent("progress", gin.H{
				"current":   updated + skipped + manual + unchanged + errors,
				"total":     total,
				"updated":   updated,
				"skipped":   skipped,
				"manual":    manual,
				"unchanged": unchanged,
				"errors":    errors,
				"status":    "processing",
				"lot_name":  comp.LotName,
			})
			c.Writer.Flush()
		}
	}

	// Update popularity scores
	h.statsRepo.UpdatePopularityScores()

	// Send completion
	c.SSEvent("progress", gin.H{
		"current":   updated + skipped + manual + unchanged + errors,
		"total":     total,
		"updated":   updated,
		"skipped":   skipped,
		"manual":    manual,
		"unchanged": unchanged,
		"errors":    errors,
		"status":    "completed",
	})
	c.Writer.Flush()
}
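RecalculateAll streams its progress as server-sent events. On the wire, each SSE event is an `event:` line, a `data:` line with the payload, and a blank line terminating the frame; a stdlib-only sketch of that framing (an illustration of the protocol, not the gin implementation used above):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// sseFrame renders one server-sent event: an "event:" line naming the event,
// a "data:" line carrying the JSON payload, and a blank line ending the frame.
func sseFrame(event string, payload any) (string, error) {
	data, err := json.Marshal(payload)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	fmt.Fprintf(&b, "event: %s\n", event)
	fmt.Fprintf(&b, "data: %s\n\n", data)
	return b.String(), nil
}

func main() {
	frame, _ := sseFrame("progress", map[string]any{"current": 0, "status": "starting"})
	fmt.Print(frame)
}
```

This is why the handler must call `c.Writer.Flush()` after each event: without flushing, the frames sit in the response buffer and the client sees no progress until the handler returns.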

func (h *PricingHandler) ListAlerts(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusOK, gin.H{
			"alerts":   []interface{}{},
			"total":    0,
			"page":     1,
			"per_page": 20,
			"offline":  true,
		})
		return
	}

	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))

	filter := repository.AlertFilter{
		Status:   models.AlertStatus(c.Query("status")),
		Severity: models.AlertSeverity(c.Query("severity")),
		Type:     models.AlertType(c.Query("type")),
		LotName:  c.Query("lot_name"),
	}

	alertsList, total, err := h.alertService.List(filter, page, perPage)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"alerts":   alertsList,
		"total":    total,
		"page":     page,
		"per_page": perPage,
	})
}

func (h *PricingHandler) AcknowledgeAlert(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"error":   "Alert management is only available in online mode",
			"offline": true,
		})
		return
	}

	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Acknowledge(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "acknowledged"})
}

func (h *PricingHandler) ResolveAlert(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"error":   "Alert management is only available in online mode",
			"offline": true,
		})
		return
	}

	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Resolve(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "resolved"})
}

func (h *PricingHandler) IgnoreAlert(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"error":   "Alert management is only available in online mode",
			"offline": true,
		})
		return
	}

	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Ignore(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "ignored"})
}

type PreviewPriceRequest struct {
	LotName     string  `json:"lot_name" binding:"required"`
	Method      string  `json:"method"`
	PeriodDays  int     `json:"period_days"`
	Coefficient float64 `json:"coefficient"`
	MetaEnabled bool    `json:"meta_enabled"`
	MetaPrices  string  `json:"meta_prices"`
}

func (h *PricingHandler) PreviewPrice(c *gin.Context) {
	// Check if we're in offline mode
	if h.db == nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"error":   "Price preview is only available in online mode",
			"offline": true,
		})
		return
	}

	var req PreviewPriceRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	// Get component
	var comp models.LotMetadata
	if err := h.db.Where("lot_name = ?", req.LotName).First(&comp).Error; err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "component not found"})
		return
	}

	// Determine which lot names to use for price calculation
	lotNames := []string{req.LotName}
	if req.MetaEnabled && req.MetaPrices != "" {
		lotNames = h.expandMetaPrices(req.MetaPrices, req.LotName)
	}

	// Get all prices for calculations (from all relevant lots)
	var allPrices []float64
	for _, lotName := range lotNames {
		var lotPrices []float64
		if strings.HasSuffix(lotName, "*") {
			// Wildcard pattern
			pattern := strings.TrimSuffix(lotName, "*") + "%"
			h.db.Raw(`SELECT price FROM lot_log WHERE lot LIKE ? ORDER BY price`, pattern).Pluck("price", &lotPrices)
		} else {
			h.db.Raw(`SELECT price FROM lot_log WHERE lot = ? ORDER BY price`, lotName).Pluck("price", &lotPrices)
		}
		allPrices = append(allPrices, lotPrices...)
	}

	// Calculate median for all time
	var medianAllTime *float64
	if len(allPrices) > 0 {
		sortFloat64s(allPrices)
		median := calculateMedian(allPrices)
		medianAllTime = &median
	}

	// Get quote count (from all relevant lots) - total count
	var quoteCountTotal int64
	for _, lotName := range lotNames {
		var count int64
		if strings.HasSuffix(lotName, "*") {
			pattern := strings.TrimSuffix(lotName, "*") + "%"
			h.db.Model(&models.LotLog{}).Where("lot LIKE ?", pattern).Count(&count)
		} else {
			h.db.Model(&models.LotLog{}).Where("lot = ?", lotName).Count(&count)
		}
		quoteCountTotal += count
	}

	// Get quote count for specified period (if period is > 0)
	var quoteCountPeriod int64
	if req.PeriodDays > 0 {
		for _, lotName := range lotNames {
			var count int64
			if strings.HasSuffix(lotName, "*") {
				pattern := strings.TrimSuffix(lotName, "*") + "%"
				h.db.Raw(`SELECT COUNT(*) FROM lot_log WHERE lot LIKE ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY)`, pattern, req.PeriodDays).Scan(&count)
			} else {
				h.db.Raw(`SELECT COUNT(*) FROM lot_log WHERE lot = ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY)`, lotName, req.PeriodDays).Scan(&count)
			}
			quoteCountPeriod += count
		}
	} else {
		// If no period specified, period count equals total count
		quoteCountPeriod = quoteCountTotal
	}

	// Get last received price (from the main lot only)
	var lastPrice struct {
		Price *float64
		Date  *time.Time
	}
	h.db.Raw(`SELECT price, date FROM lot_log WHERE lot = ? ORDER BY date DESC, lot_log_id DESC LIMIT 1`, req.LotName).Scan(&lastPrice)

	// Calculate new price based on parameters (method, period, coefficient)
	method := req.Method
	if method == "" {
		method = "median"
	}

	var prices []float64
	if req.PeriodDays > 0 {
		for _, lotName := range lotNames {
			var lotPrices []float64
			if strings.HasSuffix(lotName, "*") {
				pattern := strings.TrimSuffix(lotName, "*") + "%"
				h.db.Raw(`SELECT price FROM lot_log WHERE lot LIKE ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
					pattern, req.PeriodDays).Pluck("price", &lotPrices)
			} else {
				h.db.Raw(`SELECT price FROM lot_log WHERE lot = ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
					lotName, req.PeriodDays).Pluck("price", &lotPrices)
			}
			prices = append(prices, lotPrices...)
		}
		// Fall back to all time if no prices in period
		if len(prices) == 0 {
			prices = allPrices
		}
	} else {
		prices = allPrices
	}

	var newPrice *float64
	if len(prices) > 0 {
		sortFloat64s(prices)
		var basePrice float64
		if method == "average" {
			basePrice = calculateAverage(prices)
		} else {
			basePrice = calculateMedian(prices)
		}

		if req.Coefficient != 0 {
			basePrice = basePrice * (1 + req.Coefficient/100)
		}
		newPrice = &basePrice
	}

	c.JSON(http.StatusOK, gin.H{
		"lot_name":           req.LotName,
		"current_price":      comp.CurrentPrice,
		"median_all_time":    medianAllTime,
		"new_price":          newPrice,
		"quote_count_total":  quoteCountTotal,
		"quote_count_period": quoteCountPeriod,
		"manual_price":       comp.ManualPrice,
		"last_price":         lastPrice.Price,
		"last_price_date":    lastPrice.Date,
	})
}

// sortFloat64s sorts a slice of float64 in ascending order
func sortFloat64s(data []float64) {
	sort.Float64s(data)
}

// expandMetaPricesWithCache expands meta_prices using pre-loaded lot names (no DB queries)
func expandMetaPricesWithCache(metaPrices, excludeLot string, allLotNames []string) []string {
	sources := strings.Split(metaPrices, ",")
	var result []string
	seen := make(map[string]bool)

	for _, source := range sources {
		source = strings.TrimSpace(source)
		if source == "" || source == excludeLot {
			continue
		}

		if strings.HasSuffix(source, "*") {
			// Wildcard pattern - find matching lots from cache
			prefix := strings.TrimSuffix(source, "*")
			for _, lot := range allLotNames {
				if strings.HasPrefix(lot, prefix) && lot != excludeLot && !seen[lot] {
					result = append(result, lot)
					seen[lot] = true
				}
			}
		} else if !seen[source] {
			result = append(result, source)
			seen[source] = true
		}
	}

	return result
}
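Since `expandMetaPricesWithCache` is a pure function over its inputs, its wildcard and de-duplication behaviour is easy to exercise in isolation. A self-contained sketch (the lot names are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// expandMetaPricesWithCache mirrors the package function: it splits a
// comma-separated meta_prices string, expands trailing-* wildcards against
// the cached lot-name list, skips the excluded lot, and de-duplicates.
func expandMetaPricesWithCache(metaPrices, excludeLot string, allLotNames []string) []string {
	sources := strings.Split(metaPrices, ",")
	var result []string
	seen := make(map[string]bool)
	for _, source := range sources {
		source = strings.TrimSpace(source)
		if source == "" || source == excludeLot {
			continue
		}
		if strings.HasSuffix(source, "*") {
			prefix := strings.TrimSuffix(source, "*")
			for _, lot := range allLotNames {
				if strings.HasPrefix(lot, prefix) && lot != excludeLot && !seen[lot] {
					result = append(result, lot)
					seen[lot] = true
				}
			}
		} else if !seen[source] {
			result = append(result, source)
			seen[source] = true
		}
	}
	return result
}

func main() {
	all := []string{"CPU-A", "CPU-B", "GPU-X"}
	// CPU-A is excluded as the component's own lot; the wildcard picks up CPU-B.
	fmt.Println(expandMetaPricesWithCache("CPU-*, GPU-X", "CPU-A", all)) // [CPU-B GPU-X]
}
```

Note the trailing `*` maps onto a SQL `LIKE` prefix (`CPU-%`) in the single-lot path above, so both code paths agree on what a wildcard matches.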
||||||
@@ -1,11 +1,14 @@
|
|||||||
package handlers
|
package handlers
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
"html/template"
|
"html/template"
|
||||||
"log/slog"
|
"log/slog"
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
|
stdsync "sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
qfassets "git.mchus.pro/mchus/quoteforge"
|
qfassets "git.mchus.pro/mchus/quoteforge"
|
||||||
@@ -23,6 +26,9 @@ type SyncHandler struct {
|
|||||||
autoSyncInterval time.Duration
|
autoSyncInterval time.Duration
|
||||||
onlineGraceFactor float64
|
onlineGraceFactor float64
|
||||||
tmpl *template.Template
|
tmpl *template.Template
|
||||||
|
readinessMu stdsync.Mutex
|
||||||
|
readinessCached *sync.SyncReadiness
|
||||||
|
readinessCachedAt time.Time
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewSyncHandler creates a new sync handler
|
// NewSyncHandler creates a new sync handler
|
||||||
@@ -52,14 +58,24 @@ func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr
|
|||||||
|
|
||||||
// SyncStatusResponse represents the sync status
|
// SyncStatusResponse represents the sync status
|
||||||
type SyncStatusResponse struct {
|
type SyncStatusResponse struct {
|
||||||
LastComponentSync *time.Time `json:"last_component_sync"`
|
LastComponentSync *time.Time `json:"last_component_sync"`
|
||||||
LastPricelistSync *time.Time `json:"last_pricelist_sync"`
|
LastPricelistSync *time.Time `json:"last_pricelist_sync"`
|
||||||
IsOnline bool `json:"is_online"`
|
IsOnline bool `json:"is_online"`
|
||||||
ComponentsCount int64 `json:"components_count"`
|
ComponentsCount int64 `json:"components_count"`
|
||||||
PricelistsCount int64 `json:"pricelists_count"`
|
PricelistsCount int64 `json:"pricelists_count"`
|
||||||
ServerPricelists int `json:"server_pricelists"`
|
ServerPricelists int `json:"server_pricelists"`
|
||||||
NeedComponentSync bool `json:"need_component_sync"`
|
NeedComponentSync bool `json:"need_component_sync"`
|
||||||
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
||||||
|
Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
type SyncReadinessResponse struct {
|
||||||
|
Status string `json:"status"`
|
||||||
|
Blocked bool `json:"blocked"`
|
||||||
|
ReasonCode string `json:"reason_code,omitempty"`
|
||||||
|
ReasonText string `json:"reason_text,omitempty"`
|
||||||
|
RequiredMinAppVersion *string `json:"required_min_app_version,omitempty"`
|
||||||
|
LastCheckedAt *time.Time `json:"last_checked_at,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetStatus returns current sync status
|
// GetStatus returns current sync status
|
||||||
@@ -89,6 +105,7 @@ func (h *SyncHandler) GetStatus(c *gin.Context) {
|
|||||||
|
|
||||||
// Check if component sync is needed (older than 24 hours)
|
// Check if component sync is needed (older than 24 hours)
|
||||||
needComponentSync := h.localDB.NeedComponentSync(24)
|
needComponentSync := h.localDB.NeedComponentSync(24)
|
||||||
|
readiness := h.getReadinessCached(10 * time.Second)
|
||||||
|
|
||||||
c.JSON(http.StatusOK, SyncStatusResponse{
|
c.JSON(http.StatusOK, SyncStatusResponse{
|
||||||
LastComponentSync: lastComponentSync,
|
LastComponentSync: lastComponentSync,
|
||||||
@@ -99,9 +116,63 @@ func (h *SyncHandler) GetStatus(c *gin.Context) {
|
|||||||
ServerPricelists: serverPricelists,
|
ServerPricelists: serverPricelists,
|
||||||
NeedComponentSync: needComponentSync,
|
NeedComponentSync: needComponentSync,
|
||||||
NeedPricelistSync: needPricelistSync,
|
NeedPricelistSync: needPricelistSync,
|
||||||
|
Readiness: readiness,
|
||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// GetReadiness returns sync readiness guard status.
|
||||||
|
// GET /api/sync/readiness
|
||||||
|
func (h *SyncHandler) GetReadiness(c *gin.Context) {
|
||||||
|
readiness, err := h.syncService.GetReadiness()
|
||||||
|
if err != nil && readiness == nil {
|
||||||
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
|
"error": err.Error(),
|
||||||
|
})
|
||||||
|
return
|
||||||
|
}
|
||||||
|
if readiness == nil {
|
||||||
|
c.JSON(http.StatusOK, SyncReadinessResponse{Status: sync.ReadinessUnknown, Blocked: false})
|
||||||
|
return
|
||||||
|
}
|
||||||
|
c.JSON(http.StatusOK, SyncReadinessResponse{
|
||||||
|
Status: readiness.Status,
|
||||||
|
Blocked: readiness.Blocked,
|
||||||
|
ReasonCode: readiness.ReasonCode,
|
||||||
|
ReasonText: readiness.ReasonText,
|
||||||
|
RequiredMinAppVersion: readiness.RequiredMinAppVersion,
|
||||||
|
LastCheckedAt: readiness.LastCheckedAt,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func (h *SyncHandler) ensureSyncReadiness(c *gin.Context) bool {
|
||||||
|
readiness, err := h.syncService.EnsureReadinessForSync()
|
||||||
|
if err == nil {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
blocked := &sync.SyncBlockedError{}
|
||||||
|
if errors.As(err, &blocked) {
|
||||||
|
c.JSON(http.StatusLocked, gin.H{
|
||||||
|
"success": false,
|
||||||
|
"error": blocked.Error(),
|
||||||
|
"reason_code": blocked.Readiness.ReasonCode,
|
||||||
|
"reason_text": blocked.Readiness.ReasonText,
|
||||||
|
"required_min_app_version": blocked.Readiness.RequiredMinAppVersion,
|
||||||
|
"status": blocked.Readiness.Status,
|
||||||
|
"blocked": true,
|
||||||
|
"last_checked_at": blocked.Readiness.LastCheckedAt,
|
||||||
|
})
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
c.JSON(http.StatusInternalServerError, gin.H{
|
||||||
|
"success": false,
|
||||||
|
"error": err.Error(),
|
||||||
|
})
|
||||||
|
_ = readiness
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
// SyncResultResponse represents sync operation result
|
// SyncResultResponse represents sync operation result
|
||||||
type SyncResultResponse struct {
|
type SyncResultResponse struct {
|
||||||
Success bool `json:"success"`
|
Success bool `json:"success"`
|
||||||
@@ -113,11 +184,7 @@ type SyncResultResponse struct {
|
|||||||
// SyncComponents syncs components from MariaDB to local SQLite
|
// SyncComponents syncs components from MariaDB to local SQLite
|
||||||
// POST /api/sync/components
|
// POST /api/sync/components
|
||||||
func (h *SyncHandler) SyncComponents(c *gin.Context) {
|
func (h *SyncHandler) SyncComponents(c *gin.Context) {
|
||||||
if !h.checkOnline() {
|
if !h.ensureSyncReadiness(c) {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
|
||||||
"success": false,
|
|
||||||
"error": "Database is offline",
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -152,11 +219,7 @@ func (h *SyncHandler) SyncComponents(c *gin.Context) {
|
|||||||
// SyncPricelists syncs pricelists from MariaDB to local SQLite
|
// SyncPricelists syncs pricelists from MariaDB to local SQLite
|
||||||
// POST /api/sync/pricelists
|
// POST /api/sync/pricelists
|
||||||
func (h *SyncHandler) SyncPricelists(c *gin.Context) {
|
func (h *SyncHandler) SyncPricelists(c *gin.Context) {
|
||||||
if !h.checkOnline() {
|
if !h.ensureSyncReadiness(c) {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
|
||||||
"success": false,
|
|
||||||
"error": "Database is offline",
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -201,11 +264,7 @@ type SyncAllResponse struct {
|
|||||||
// - pull components, pricelists, projects, and configurations from server
|
// - pull components, pricelists, projects, and configurations from server
|
||||||
// POST /api/sync/all
|
// POST /api/sync/all
|
||||||
func (h *SyncHandler) SyncAll(c *gin.Context) {
|
func (h *SyncHandler) SyncAll(c *gin.Context) {
|
||||||
if !h.checkOnline() {
|
if !h.ensureSyncReadiness(c) {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
|
||||||
"success": false,
|
|
||||||
"error": "Database is offline",
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -311,11 +370,7 @@ func (h *SyncHandler) checkOnline() bool {
|
|||||||
// PushPendingChanges pushes all pending changes to the server
|
// PushPendingChanges pushes all pending changes to the server
|
||||||
// POST /api/sync/push
|
// POST /api/sync/push
|
||||||
func (h *SyncHandler) PushPendingChanges(c *gin.Context) {
|
func (h *SyncHandler) PushPendingChanges(c *gin.Context) {
|
||||||
if !h.checkOnline() {
|
if !h.ensureSyncReadiness(c) {
|
||||||
c.JSON(http.StatusServiceUnavailable, gin.H{
|
|
||||||
"success": false,
|
|
||||||
"error": "Database is offline",
|
|
||||||
})
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -364,12 +419,32 @@ func (h *SyncHandler) GetPendingChanges(c *gin.Context) {
|
|||||||
})
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
// SyncInfoResponse represents sync information
|
// SyncInfoResponse represents sync information for the modal
|
||||||
type SyncInfoResponse struct {
|
type SyncInfoResponse struct {
|
||||||
LastSyncAt *time.Time `json:"last_sync_at"`
|
// Connection
|
||||||
IsOnline bool `json:"is_online"`
|
DBHost string `json:"db_host"`
|
||||||
|
DBUser string `json:"db_user"`
|
||||||
|
DBName string `json:"db_name"`
|
||||||
|
|
||||||
|
// Status
|
||||||
|
IsOnline bool `json:"is_online"`
|
||||||
|
LastSyncAt *time.Time `json:"last_sync_at"`
|
||||||
|
|
||||||
|
// Statistics
|
||||||
|
LotCount int64 `json:"lot_count"`
|
||||||
|
LotLogCount int64 `json:"lot_log_count"`
|
||||||
|
ConfigCount int64 `json:"config_count"`
|
||||||
|
ProjectCount int64 `json:"project_count"`
|
||||||
|
|
||||||
|
// Pending changes
|
||||||
|
PendingChanges []localdb.PendingChange `json:"pending_changes"`
|
||||||
|
|
||||||
|
// Errors
|
||||||
ErrorCount int `json:"error_count"`
|
ErrorCount int `json:"error_count"`
|
||||||
Errors []SyncError `json:"errors,omitempty"`
|
Errors []SyncError `json:"errors,omitempty"`
|
||||||
|
|
||||||
|
// Readiness guard
|
||||||
|
Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
|
||||||
}
|
}
|
||||||
|
|
||||||
type SyncUsersStatusResponse struct {
|
type SyncUsersStatusResponse struct {
|
||||||
@@ -392,31 +467,44 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
 	// Check online status by pinging MariaDB
 	isOnline := h.checkOnline()
 
+	// Get DB connection info
+	var dbHost, dbUser, dbName string
+	if settings, err := h.localDB.GetSettings(); err == nil {
+		dbHost = settings.Host + ":" + fmt.Sprintf("%d", settings.Port)
+		dbUser = settings.User
+		dbName = settings.Database
+	}
+
 	// Get sync times
 	lastPricelistSync := h.localDB.GetLastSyncTime()
 
+	// Get MariaDB counts (if online)
+	var lotCount, lotLogCount int64
+	if isOnline {
+		if mariaDB, err := h.connMgr.GetDB(); err == nil {
+			mariaDB.Table("lot").Count(&lotCount)
+			mariaDB.Table("lot_log").Count(&lotLogCount)
+		}
+	}
+
+	// Get local counts
+	configCount := h.localDB.CountConfigurations()
+	projectCount := h.localDB.CountProjects()
+
 	// Get error count (only changes with LastError != "")
 	errorCount := int(h.localDB.CountErroredChanges())
 
-	// Get recent errors (last 10)
+	// Get pending changes
 	changes, err := h.localDB.GetPendingChanges()
 	if err != nil {
 		slog.Error("failed to get pending changes for sync info", "error", err)
-		// Even if we can't get changes, we can still return the error count
-		c.JSON(http.StatusOK, SyncInfoResponse{
-			LastSyncAt: lastPricelistSync,
-			IsOnline:   isOnline,
-			ErrorCount: errorCount,
-			Errors:     []SyncError{}, // Return empty errors list
-		})
-		return
+		changes = []localdb.PendingChange{}
 	}
 
-	var errors []SyncError
+	var syncErrors []SyncError
 	for _, change := range changes {
-		// Check if there's a last error and it's not empty
 		if change.LastError != "" {
-			errors = append(errors, SyncError{
+			syncErrors = append(syncErrors, SyncError{
 				Timestamp: change.CreatedAt,
 				Message:   change.LastError,
 			})
@@ -424,15 +512,26 @@ func (h *SyncHandler) GetInfo(c *gin.Context) {
 	}
 
 	// Limit to last 10 errors
-	if len(errors) > 10 {
-		errors = errors[:10]
+	if len(syncErrors) > 10 {
+		syncErrors = syncErrors[:10]
 	}
 
+	readiness := h.getReadinessCached(10 * time.Second)
+
 	c.JSON(http.StatusOK, SyncInfoResponse{
-		LastSyncAt: lastPricelistSync,
-		IsOnline:   isOnline,
-		ErrorCount: errorCount,
-		Errors:     errors,
+		DBHost:         dbHost,
+		DBUser:         dbUser,
+		DBName:         dbName,
+		IsOnline:       isOnline,
+		LastSyncAt:     lastPricelistSync,
+		LotCount:       lotCount,
+		LotLogCount:    lotLogCount,
+		ConfigCount:    configCount,
+		ProjectCount:   projectCount,
+		PendingChanges: changes,
+		ErrorCount:     errorCount,
+		Errors:         syncErrors,
+		Readiness:      readiness,
 	})
 }
 
@@ -489,12 +588,21 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
 
 	// Get pending count
 	pendingCount := h.localDB.GetPendingCount()
+	readiness := h.getReadinessCached(10 * time.Second)
+	isBlocked := readiness != nil && readiness.Blocked
 
-	slog.Debug("rendering sync status", "is_offline", isOffline, "pending_count", pendingCount)
+	slog.Debug("rendering sync status", "is_offline", isOffline, "pending_count", pendingCount, "sync_blocked", isBlocked)
 
 	data := gin.H{
 		"IsOffline":    isOffline,
 		"PendingCount": pendingCount,
+		"IsBlocked":    isBlocked,
+		"BlockedReason": func() string {
+			if readiness == nil {
+				return ""
+			}
+			return readiness.ReasonText
+		}(),
 	}
 
 	c.Header("Content-Type", "text/html; charset=utf-8")
@@ -503,3 +611,24 @@ func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
 		c.String(http.StatusInternalServerError, "Template error: "+err.Error())
 	}
 }
+
+func (h *SyncHandler) getReadinessCached(maxAge time.Duration) *sync.SyncReadiness {
+	h.readinessMu.Lock()
+	if h.readinessCached != nil && time.Since(h.readinessCachedAt) < maxAge {
+		cached := *h.readinessCached
+		h.readinessMu.Unlock()
+		return &cached
+	}
+	h.readinessMu.Unlock()
+
+	readiness, err := h.syncService.GetReadiness()
+	if err != nil && readiness == nil {
+		return nil
+	}
+
+	h.readinessMu.Lock()
+	h.readinessCached = readiness
+	h.readinessCachedAt = time.Now()
+	h.readinessMu.Unlock()
+	return readiness
+}
internal/handlers/sync_readiness_test.go (new file, 64 lines)
@@ -0,0 +1,64 @@
+package handlers
+
+import (
+	"encoding/json"
+	"net/http"
+	"net/http/httptest"
+	"path/filepath"
+	"testing"
+	"time"
+
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
+	"github.com/gin-gonic/gin"
+)
+
+func TestSyncReadinessOfflineBlocked(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	dir := t.TempDir()
+	local, err := localdb.New(filepath.Join(dir, "qfs.db"))
+	if err != nil {
+		t.Fatalf("init local db: %v", err)
+	}
+
+	service := syncsvc.NewService(nil, local)
+	h, err := NewSyncHandler(local, service, nil, filepath.Join("web", "templates"), 5*time.Minute)
+	if err != nil {
+		t.Fatalf("new sync handler: %v", err)
+	}
+
+	router := gin.New()
+	router.GET("/api/sync/readiness", h.GetReadiness)
+	router.POST("/api/sync/push", h.PushPendingChanges)
+
+	readinessResp := httptest.NewRecorder()
+	readinessReq, _ := http.NewRequest(http.MethodGet, "/api/sync/readiness", nil)
+	router.ServeHTTP(readinessResp, readinessReq)
+	if readinessResp.Code != http.StatusOK {
+		t.Fatalf("unexpected readiness status: %d", readinessResp.Code)
+	}
+
+	var readinessBody map[string]any
+	if err := json.Unmarshal(readinessResp.Body.Bytes(), &readinessBody); err != nil {
+		t.Fatalf("decode readiness body: %v", err)
+	}
+	if blocked, _ := readinessBody["blocked"].(bool); !blocked {
+		t.Fatalf("expected blocked readiness, got %v", readinessBody["blocked"])
+	}
+
+	pushResp := httptest.NewRecorder()
+	pushReq, _ := http.NewRequest(http.MethodPost, "/api/sync/push", nil)
+	router.ServeHTTP(pushResp, pushReq)
+	if pushResp.Code != http.StatusLocked {
+		t.Fatalf("expected 423 for blocked sync push, got %d body=%s", pushResp.Code, pushResp.Body.String())
+	}
+
+	var pushBody map[string]any
+	if err := json.Unmarshal(pushResp.Body.Bytes(), &pushBody); err != nil {
+		t.Fatalf("decode push body: %v", err)
+	}
+	if pushBody["reason_text"] == nil || pushBody["reason_text"] == "" {
+		t.Fatalf("expected reason_text in blocked response, got %v", pushBody)
+	}
+}
@@ -67,7 +67,7 @@ func NewWebHandler(templatesPath string, componentService *services.ComponentSer
 	}
 
 	// Load each page template with base
-	simplePages := []string{"login.html", "configs.html", "projects.html", "project_detail.html", "admin_pricing.html", "pricelists.html", "pricelist_detail.html"}
+	simplePages := []string{"login.html", "configs.html", "projects.html", "project_detail.html", "pricelists.html", "pricelist_detail.html"}
 	for _, page := range simplePages {
 		pagePath := filepath.Join(templatesPath, page)
 		var tmpl *template.Template
@@ -147,8 +147,8 @@ func (h *WebHandler) render(c *gin.Context, name string, data gin.H) {
 }
 
 func (h *WebHandler) Index(c *gin.Context) {
-	// Redirect to configs page - configurator is accessed via /configurator?uuid=...
-	c.Redirect(302, "/configs")
+	// Redirect to projects page - configurator is accessed via /configurator?uuid=...
+	c.Redirect(302, "/projects")
 }
 
 func (h *WebHandler) Configurator(c *gin.Context) {
@@ -197,10 +197,6 @@ func (h *WebHandler) ProjectDetail(c *gin.Context) {
 	})
 }
 
-func (h *WebHandler) AdminPricing(c *gin.Context) {
-	h.render(c, "admin_pricing.html", gin.H{"ActivePage": "admin"})
-}
-
 func (h *WebHandler) Pricelists(c *gin.Context) {
 	h.render(c, "pricelists.html", gin.H{"ActivePage": "pricelists"})
 }
|||||||
@@ -28,14 +28,13 @@ type ComponentSyncResult struct {
|
|||||||
func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error) {
|
func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error) {
|
||||||
startTime := time.Now()
|
startTime := time.Now()
|
||||||
|
|
||||||
// Query to join lot with qt_lot_metadata
|
// Query to join lot with qt_lot_metadata (metadata only, no pricing)
|
||||||
// Use LEFT JOIN to include lots without metadata
|
// Use LEFT JOIN to include lots without metadata
|
||||||
type componentRow struct {
|
type componentRow struct {
|
||||||
LotName string
|
LotName string
|
||||||
LotDescription string
|
LotDescription string
|
||||||
Category *string
|
Category *string
|
||||||
Model *string
|
Model *string
|
||||||
CurrentPrice *float64
|
|
||||||
}
|
}
|
||||||
|
|
||||||
var rows []componentRow
|
var rows []componentRow
|
||||||
@@ -44,8 +43,7 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)
 		l.lot_name,
 		l.lot_description,
 		COALESCE(c.code, SUBSTRING_INDEX(l.lot_name, '_', 1)) as category,
-		m.model,
-		m.current_price
+		m.model
 	FROM lot l
 	LEFT JOIN qt_lot_metadata m ON l.lot_name = m.lot_name
 	LEFT JOIN qt_categories c ON m.category_id = c.id
@@ -100,8 +98,6 @@ func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error)
 			LotDescription: row.LotDescription,
 			Category:       category,
 			Model:          model,
-			CurrentPrice:   row.CurrentPrice,
-			SyncedAt:       syncTime,
 		}
 		components = append(components, comp)
 
@@ -221,11 +217,6 @@ func (l *LocalDB) ListComponents(filter ComponentFilter, offset, limit int) ([]L
 		)
 	}
 
-	// Apply price filter
-	if filter.HasPrice {
-		db = db.Where("current_price IS NOT NULL")
-	}
-
 	// Get total count
 	var total int64
 	if err := db.Model(&LocalComponent{}).Count(&total).Error; err != nil {
@@ -251,6 +242,31 @@ func (l *LocalDB) GetLocalComponent(lotName string) (*LocalComponent, error) {
 	return &component, nil
 }
 
+// GetLocalComponentCategoriesByLotNames returns category for each lot_name in the local component cache.
+// Missing lots are not included in the map; caller is responsible for strict validation.
+func (l *LocalDB) GetLocalComponentCategoriesByLotNames(lotNames []string) (map[string]string, error) {
+	result := make(map[string]string, len(lotNames))
+	if len(lotNames) == 0 {
+		return result, nil
+	}
+
+	type row struct {
+		LotName  string `gorm:"column:lot_name"`
+		Category string `gorm:"column:category"`
+	}
+	var rows []row
+	if err := l.db.Model(&LocalComponent{}).
+		Select("lot_name, category").
+		Where("lot_name IN ?", lotNames).
+		Find(&rows).Error; err != nil {
+		return nil, err
+	}
+	for _, r := range rows {
+		result[r.LotName] = r.Category
+	}
+	return result, nil
+}
+
 // GetLocalComponentCategories returns distinct categories from local components
 func (l *LocalDB) GetLocalComponentCategories() ([]string, error) {
 	var categories []string
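The new `GetLocalComponentCategoriesByLotNames` deliberately omits unknown lots from its result map, pushing strict validation onto the caller. A minimal sketch of that caller-side check (`validateLots` and the map literal are illustrative, not code from the repository):

```go
package main

import "fmt"

// validateLots reports which requested lot names are absent from the
// category map, mirroring the "missing lots are not included" contract.
func validateLots(requested []string, categories map[string]string) []string {
	var missing []string
	for _, name := range requested {
		if _, ok := categories[name]; !ok {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	// e.g. the result of GetLocalComponentCategoriesByLotNames
	categories := map[string]string{"CPU_A": "CPU"}
	missing := validateLots([]string{"CPU_A", "RAM_B"}, categories)
	fmt.Println(missing) // → [RAM_B]
}
```

Returning a sparse map instead of erroring keeps the query function reusable: one caller can fail hard on any missing lot, another can treat missing entries as "uncategorized".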
@@ -311,99 +327,3 @@ func (l *LocalDB) NeedComponentSync(maxAgeHours int) bool {
 	}
 	return time.Since(*syncTime).Hours() > float64(maxAgeHours)
 }
-
-// UpdateComponentPricesFromPricelist updates current_price in local_components from pricelist items
-// This allows offline price updates using synced pricelists without MariaDB connection
-func (l *LocalDB) UpdateComponentPricesFromPricelist(pricelistID uint) (int, error) {
-	// Get all items from the specified pricelist
-	var items []LocalPricelistItem
-	if err := l.db.Where("pricelist_id = ?", pricelistID).Find(&items).Error; err != nil {
-		return 0, fmt.Errorf("fetching pricelist items: %w", err)
-	}
-
-	if len(items) == 0 {
-		slog.Warn("no items found in pricelist", "pricelist_id", pricelistID)
-		return 0, nil
-	}
-
-	// Update current_price for each component
-	updated := 0
-	err := l.db.Transaction(func(tx *gorm.DB) error {
-		for _, item := range items {
-			result := tx.Model(&LocalComponent{}).
-				Where("lot_name = ?", item.LotName).
-				Update("current_price", item.Price)
-
-			if result.Error != nil {
-				return fmt.Errorf("updating price for %s: %w", item.LotName, result.Error)
-			}
-
-			if result.RowsAffected > 0 {
-				updated++
-			}
-		}
-		return nil
-	})
-
-	if err != nil {
-		return 0, err
-	}
-
-	slog.Info("updated component prices from pricelist",
-		"pricelist_id", pricelistID,
-		"total_items", len(items),
-		"updated_components", updated)
-
-	return updated, nil
-}
-
-// EnsureComponentPricesFromPricelists loads prices from the latest pricelist into local_components
-// if no components exist or all current prices are NULL
-func (l *LocalDB) EnsureComponentPricesFromPricelists() error {
-	// Check if we have any components with prices
-	var count int64
-	if err := l.db.Model(&LocalComponent{}).Where("current_price IS NOT NULL").Count(&count).Error; err != nil {
-		return fmt.Errorf("checking component prices: %w", err)
-	}
-
-	// If we have components with prices, don't load from pricelists
-	if count > 0 {
-		return nil
-	}
-
-	// Check if we have any components at all
-	var totalComponents int64
-	if err := l.db.Model(&LocalComponent{}).Count(&totalComponents).Error; err != nil {
-		return fmt.Errorf("counting components: %w", err)
-	}
-
-	// If we have no components, we need to load them from pricelists
-	if totalComponents == 0 {
-		slog.Info("no components found in local database, loading from latest pricelist")
-		// This would typically be called from the sync service or setup process
-		// For now, we'll just return nil to indicate no action needed
-		return nil
-	}
-
-	// If we have components but no prices, load from latest estimate pricelist.
-	var latestPricelist LocalPricelist
-	if err := l.db.Where("source = ?", "estimate").Order("created_at DESC").First(&latestPricelist).Error; err != nil {
-		if err == gorm.ErrRecordNotFound {
-			slog.Warn("no pricelists found in local database")
-			return nil
-		}
-		return fmt.Errorf("finding latest pricelist: %w", err)
-	}
-
-	// Update prices from the latest pricelist
-	updated, err := l.UpdateComponentPricesFromPricelist(latestPricelist.ID)
-	if err != nil {
-		return fmt.Errorf("updating component prices from pricelist: %w", err)
-	}
-
-	slog.Info("loaded component prices from latest pricelist",
-		"pricelist_id", latestPricelist.ID,
-		"updated_components", updated)
-
-	return nil
-}
@@ -28,7 +28,11 @@ func ConfigurationToLocal(cfg *models.Configuration) *LocalConfiguration {
 		Notes:          cfg.Notes,
 		IsTemplate:     cfg.IsTemplate,
 		ServerCount:    cfg.ServerCount,
+		ServerModel:    cfg.ServerModel,
+		SupportCode:    cfg.SupportCode,
+		Article:        cfg.Article,
 		PricelistID:    cfg.PricelistID,
+		OnlyInStock:    cfg.OnlyInStock,
 		PriceUpdatedAt: cfg.PriceUpdatedAt,
 		CreatedAt:      cfg.CreatedAt,
 		UpdatedAt:      time.Now(),
@@ -71,7 +75,11 @@ func LocalToConfiguration(local *LocalConfiguration) *models.Configuration {
 		Notes:          local.Notes,
 		IsTemplate:     local.IsTemplate,
 		ServerCount:    local.ServerCount,
+		ServerModel:    local.ServerModel,
+		SupportCode:    local.SupportCode,
+		Article:        local.Article,
 		PricelistID:    local.PricelistID,
+		OnlyInStock:    local.OnlyInStock,
 		PriceUpdatedAt: local.PriceUpdatedAt,
 		CreatedAt:      local.CreatedAt,
 	}
@@ -98,6 +106,8 @@ func ProjectToLocal(project *models.Project) *LocalProject {
 	local := &LocalProject{
 		UUID:          project.UUID,
 		OwnerUsername: project.OwnerUsername,
+		Code:          project.Code,
+		Variant:       project.Variant,
 		Name:          project.Name,
 		TrackerURL:    project.TrackerURL,
 		IsActive:      project.IsActive,
@@ -117,6 +127,8 @@ func LocalToProject(local *LocalProject) *models.Project {
 	project := &models.Project{
 		UUID:          local.UUID,
 		OwnerUsername: local.OwnerUsername,
+		Code:          local.Code,
+		Variant:       local.Variant,
 		Name:          local.Name,
 		TrackerURL:    local.TrackerURL,
 		IsActive:      local.IsActive,
@@ -162,20 +174,30 @@ func LocalToPricelist(local *LocalPricelist) *models.Pricelist {
 
 // PricelistItemToLocal converts models.PricelistItem to LocalPricelistItem
 func PricelistItemToLocal(item *models.PricelistItem, localPricelistID uint) *LocalPricelistItem {
+	partnumbers := make(LocalStringList, 0, len(item.Partnumbers))
+	partnumbers = append(partnumbers, item.Partnumbers...)
 	return &LocalPricelistItem{
 		PricelistID:  localPricelistID,
 		LotName:      item.LotName,
-		Price:        item.Price,
+		LotCategory:  item.LotCategory,
+		Price:        item.Price,
+		AvailableQty: item.AvailableQty,
+		Partnumbers:  partnumbers,
 	}
 }
 
 // LocalToPricelistItem converts LocalPricelistItem to models.PricelistItem
 func LocalToPricelistItem(local *LocalPricelistItem, serverPricelistID uint) *models.PricelistItem {
+	partnumbers := make([]string, 0, len(local.Partnumbers))
+	partnumbers = append(partnumbers, local.Partnumbers...)
 	return &models.PricelistItem{
 		ID:           local.ID,
 		PricelistID:  serverPricelistID,
 		LotName:      local.LotName,
-		Price:        local.Price,
+		LotCategory:  local.LotCategory,
+		Price:        local.Price,
+		AvailableQty: local.AvailableQty,
+		Partnumbers:  partnumbers,
 	}
 }
 
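Both converters above copy `Partnumbers` into a freshly allocated slice instead of assigning the source slice directly, so the local and server structs never share a backing array. A small sketch of why that copy matters, with simplified stand-in types (`item`, `toLocalShared`, `toLocalCopied` are illustrative):

```go
package main

import "fmt"

type item struct{ partnumbers []string }

// toLocalShared aliases the source slice; a later mutation of the source
// is visible through the converted value.
func toLocalShared(src item) item { return item{partnumbers: src.partnumbers} }

// toLocalCopied detaches the slice, the way the converters in the diff do.
func toLocalCopied(src item) item {
	out := make([]string, 0, len(src.partnumbers))
	out = append(out, src.partnumbers...)
	return item{partnumbers: out}
}

func main() {
	src := item{partnumbers: []string{"PN-1"}}

	shared := toLocalShared(src)
	copied := toLocalCopied(src)
	src.partnumbers[0] = "PN-X" // mutate the source after conversion

	fmt.Println(shared.partnumbers[0], copied.partnumbers[0]) // → PN-X PN-1
}
```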
@@ -203,17 +225,14 @@ func ComponentToLocal(meta *models.LotMetadata) *LocalComponent {
 		LotDescription: lotDesc,
 		Category:       category,
 		Model:          meta.Model,
-		CurrentPrice:   meta.CurrentPrice,
-		SyncedAt:       time.Now(),
 	}
 }
 
 // LocalToComponent converts LocalComponent to models.LotMetadata
 func LocalToComponent(local *LocalComponent) *models.LotMetadata {
 	return &models.LotMetadata{
 		LotName: local.LotName,
 		Model:   local.Model,
-		CurrentPrice: local.CurrentPrice,
 		Lot: &models.Lot{
 			LotName:        local.LotName,
 			LotDescription: local.LotDescription,
internal/localdb/converters_test.go (new file, 34 lines)
@@ -0,0 +1,34 @@
+package localdb
+
+import (
+	"testing"
+
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+)
+
+func TestPricelistItemToLocal_PreservesLotCategory(t *testing.T) {
+	item := &models.PricelistItem{
+		LotName:     "CPU_A",
+		LotCategory: "CPU",
+		Price:       10,
+	}
+
+	local := PricelistItemToLocal(item, 123)
+	if local.LotCategory != "CPU" {
+		t.Fatalf("expected LotCategory=CPU, got %q", local.LotCategory)
+	}
+}
+
+func TestLocalToPricelistItem_PreservesLotCategory(t *testing.T) {
+	local := &LocalPricelistItem{
+		LotName:     "CPU_A",
+		LotCategory: "CPU",
+		Price:       10,
+	}
+
+	item := LocalToPricelistItem(local, 456)
+	if item.LotCategory != "CPU" {
+		t.Fatalf("expected LotCategory=CPU, got %q", item.LotCategory)
+	}
+}
@@ -12,6 +12,7 @@ import (
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
+	"git.mchus.pro/mchus/quoteforge/internal/appstate"
 	"github.com/glebarez/sqlite"
 	mysqlDriver "github.com/go-sql-driver/mysql"
 	uuidpkg "github.com/google/uuid"
@@ -41,6 +42,49 @@ type LocalDB struct {
 	path string
 }
 
+// ResetData clears local data tables while keeping connection settings.
+// It does not drop schema or connection_settings.
+func ResetData(dbPath string) error {
+	if strings.TrimSpace(dbPath) == "" {
+		return nil
+	}
+	if _, err := os.Stat(dbPath); err != nil {
+		if errors.Is(err, os.ErrNotExist) {
+			return nil
+		}
+		return fmt.Errorf("stat local db: %w", err)
+	}
+
+	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
+		Logger: logger.Default.LogMode(logger.Silent),
+	})
+	if err != nil {
+		return fmt.Errorf("opening sqlite database: %w", err)
+	}
+
+	// Order does not matter because we use DELETEs without FK constraints in SQLite.
+	tables := []string{
+		"local_projects",
+		"local_configurations",
+		"local_configuration_versions",
+		"local_pricelists",
+		"local_pricelist_items",
+		"local_components",
+		"local_remote_migrations_applied",
+		"local_sync_guard_state",
+		"pending_changes",
+		"app_settings",
+	}
+	for _, table := range tables {
+		if err := db.Exec("DELETE FROM " + table).Error; err != nil {
+			return fmt.Errorf("clear %s: %w", table, err)
+		}
+	}
+
+	slog.Info("local database data reset", "path", dbPath)
+	return nil
+}
+
 // New creates a new LocalDB instance
 func New(dbPath string) (*LocalDB, error) {
 	// Ensure directory exists
@@ -49,6 +93,14 @@ func New(dbPath string) (*LocalDB, error) {
 		return nil, fmt.Errorf("creating data directory: %w", err)
 	}
 
+	if cfgPath, err := appstate.ResolveConfigPathNearDB("", dbPath); err == nil {
+		if _, err := appstate.EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
+			return nil, fmt.Errorf("backup local data: %w", err)
+		}
+	} else {
+		return nil, fmt.Errorf("resolve config path: %w", err)
+	}
+
 	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{
 		Logger: logger.Default.LogMode(logger.Silent),
 	})
@@ -56,16 +108,39 @@ func New(dbPath string) (*LocalDB, error) {
 		return nil, fmt.Errorf("opening sqlite database: %w", err)
 	}
 
+	if err := ensureLocalProjectsTable(db); err != nil {
+		return nil, fmt.Errorf("ensure local_projects table: %w", err)
+	}
+
+	// Preflight: ensure local_projects has non-null UUIDs before AutoMigrate rebuilds tables.
+	if db.Migrator().HasTable(&LocalProject{}) {
+		if !db.Migrator().HasColumn(&LocalProject{}, "uuid") {
+			if err := db.Exec(`ALTER TABLE local_projects ADD COLUMN uuid TEXT`).Error; err != nil {
+				return nil, fmt.Errorf("adding local_projects.uuid: %w", err)
+			}
+		}
+		var ids []uint
+		if err := db.Raw(`SELECT id FROM local_projects WHERE uuid IS NULL OR uuid = ''`).Scan(&ids).Error; err != nil {
+			return nil, fmt.Errorf("finding local_projects without uuid: %w", err)
+		}
+		for _, id := range ids {
+			if err := db.Exec(`UPDATE local_projects SET uuid = ? WHERE id = ?`, uuidpkg.New().String(), id).Error; err != nil {
+				return nil, fmt.Errorf("backfilling local_projects.uuid: %w", err)
+			}
+		}
+	}
+
 	// Auto-migrate all local tables
 	if err := db.AutoMigrate(
 		&ConnectionSettings{},
-		&LocalProject{},
 		&LocalConfiguration{},
 		&LocalConfigurationVersion{},
 		&LocalPricelist{},
 		&LocalPricelistItem{},
 		&LocalComponent{},
 		&AppSetting{},
+		&LocalRemoteMigrationApplied{},
+		&LocalSyncGuardState{},
 		&PendingChange{},
 	); err != nil {
 		return nil, fmt.Errorf("migrating sqlite database: %w", err)
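The preflight above backfills empty `local_projects.uuid` values before the schema enforces `NOT NULL UNIQUE` on that column. A dependency-free sketch of the same backfill idea over an in-memory slice (the real code runs SQL and uses `github.com/google/uuid`; `projectRow`, `newUUIDv4`, and `backfillUUIDs` here are illustrative):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

type projectRow struct {
	ID   uint
	UUID string
}

// newUUIDv4 builds a random version-4 UUID string from crypto/rand.
func newUUIDv4() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

// backfillUUIDs assigns a fresh UUID to every row that has none, mirroring
// the SELECT ... WHERE uuid IS NULL OR uuid = '' loop in the migration.
func backfillUUIDs(rows []projectRow) int {
	filled := 0
	for i := range rows {
		if rows[i].UUID == "" {
			rows[i].UUID = newUUIDv4()
			filled++
		}
	}
	return filled
}

func main() {
	rows := []projectRow{{ID: 1}, {ID: 2, UUID: "existing"}}
	fmt.Println(backfillUUIDs(rows), rows[1].UUID) // → 1 existing
}
```

Doing the backfill row by row (rather than one bulk UPDATE) matters because each row must receive a distinct value before a UNIQUE constraint lands on the column.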
@@ -82,6 +157,38 @@ func New(dbPath string) (*LocalDB, error) {
 	}, nil
 }
 
+func ensureLocalProjectsTable(db *gorm.DB) error {
+	if db.Migrator().HasTable(&LocalProject{}) {
+		return nil
+	}
+
+	if err := db.Exec(`
+CREATE TABLE local_projects (
+	id INTEGER PRIMARY KEY AUTOINCREMENT,
+	uuid TEXT NOT NULL UNIQUE,
+	server_id INTEGER NULL,
+	owner_username TEXT NOT NULL,
+	code TEXT NOT NULL,
+	variant TEXT NOT NULL DEFAULT '',
+	name TEXT NULL,
+	tracker_url TEXT NULL,
+	is_active INTEGER NOT NULL DEFAULT 1,
+	is_system INTEGER NOT NULL DEFAULT 0,
+	created_at DATETIME,
+	updated_at DATETIME,
+	synced_at DATETIME NULL,
+	sync_status TEXT DEFAULT 'local'
+)`).Error; err != nil {
+		return err
+	}
+
+	_ = db.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_owner_username ON local_projects(owner_username)`).Error
+	_ = db.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_active ON local_projects(is_active)`).Error
+	_ = db.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_system ON local_projects(is_system)`).Error
+	_ = db.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code_variant ON local_projects(code, variant)`).Error
+	return nil
+}
+
 // HasSettings returns true if connection settings exist
 func (l *LocalDB) HasSettings() bool {
 	var count int64
@@ -256,7 +363,8 @@ func (l *LocalDB) EnsureDefaultProject(ownerUsername string) (*LocalProject, err
         project = &LocalProject{
             UUID:          uuidpkg.NewString(),
             OwnerUsername: "",
-            Name:          "Без проекта",
+            Code:          "Без проекта",
+            Name:          ptrString("Без проекта"),
             IsActive:      true,
             IsSystem:      true,
             CreatedAt:     now,
@@ -284,7 +392,8 @@ func (l *LocalDB) ConsolidateSystemProjects() (int64, error) {
         canonical = LocalProject{
             UUID:          uuidpkg.NewString(),
             OwnerUsername: "",
-            Name:          "Без проекта",
+            Code:          "Без проекта",
+            Name:          ptrString("Без проекта"),
             IsActive:      true,
             IsSystem:      true,
             CreatedAt:     now,
@@ -365,6 +474,10 @@ WHERE (
     return tx.RowsAffected, tx.Error
 }
+
+func ptrString(value string) *string {
+    return &value
+}
 
 // BackfillConfigurationProjects ensures every configuration has project_uuid set.
 // If missing, it assigns system project "Без проекта" for configuration owner.
 func (l *LocalDB) BackfillConfigurationProjects(defaultOwner string) error {
@@ -418,6 +531,55 @@ func (l *LocalDB) GetConfigurationByUUID(uuid string) (*LocalConfiguration, erro
     return &config, err
 }
 
+// ListConfigurationsWithFilters returns configurations with DB-level filtering and pagination.
+func (l *LocalDB) ListConfigurationsWithFilters(status string, search string, offset, limit int) ([]LocalConfiguration, int64, error) {
+    query := l.db.Model(&LocalConfiguration{})
+    switch status {
+    case "active":
+        query = query.Where("local_configurations.is_active = ?", true)
+    case "archived":
+        query = query.Where("local_configurations.is_active = ?", false)
+    case "all", "":
+        // no-op
+    default:
+        query = query.Where("local_configurations.is_active = ?", true)
+    }
+
+    search = strings.TrimSpace(search)
+    if search != "" {
+        needle := "%" + strings.ToLower(search) + "%"
+        hasProjectsTable := l.db.Migrator().HasTable(&LocalProject{})
+        hasServerModel := l.db.Migrator().HasColumn(&LocalConfiguration{}, "server_model")
+
+        conditions := []string{"LOWER(local_configurations.name) LIKE ?"}
+        args := []interface{}{needle}
+
+        if hasProjectsTable {
+            query = query.Joins("LEFT JOIN local_projects lp ON lp.uuid = local_configurations.project_uuid")
+            conditions = append(conditions, "LOWER(COALESCE(lp.name, '')) LIKE ?")
+            args = append(args, needle)
+        }
+
+        if hasServerModel {
+            conditions = append(conditions, "LOWER(COALESCE(local_configurations.server_model, '')) LIKE ?")
+            args = append(args, needle)
+        }
+
+        query = query.Where(strings.Join(conditions, " OR "), args...)
+    }
+
+    var total int64
+    if err := query.Count(&total).Error; err != nil {
+        return nil, 0, err
+    }
+
+    var configs []LocalConfiguration
+    if err := query.Order("local_configurations.created_at DESC").Offset(offset).Limit(limit).Find(&configs).Error; err != nil {
+        return nil, 0, err
+    }
+    return configs, total, nil
+}
+
 // DeleteConfiguration deletes a configuration by UUID
 func (l *LocalDB) DeleteConfiguration(uuid string) error {
     return l.DeactivateConfiguration(uuid)
@@ -483,6 +645,13 @@ func (l *LocalDB) CountConfigurations() int64 {
     return count
 }
+
+// CountProjects returns the number of local projects
+func (l *LocalDB) CountProjects() int64 {
+    var count int64
+    l.db.Model(&LocalProject{}).Count(&count)
+    return count
+}
 
 // Pricelist methods
 
 // GetLastSyncTime returns the last sync timestamp
@@ -605,6 +774,17 @@ func (l *LocalDB) CountLocalPricelistItems(pricelistID uint) int64 {
     return count
 }
+
+// CountLocalPricelistItemsWithEmptyCategory returns the number of items for a pricelist with missing lot_category.
+func (l *LocalDB) CountLocalPricelistItemsWithEmptyCategory(pricelistID uint) (int64, error) {
+    var count int64
+    if err := l.db.Model(&LocalPricelistItem{}).
+        Where("pricelist_id = ? AND (lot_category IS NULL OR TRIM(lot_category) = '')", pricelistID).
+        Count(&count).Error; err != nil {
+        return 0, err
+    }
+    return count, nil
+}
 
 // SaveLocalPricelistItems saves pricelist items to local SQLite
 func (l *LocalDB) SaveLocalPricelistItems(items []LocalPricelistItem) error {
     if len(items) == 0 {
@@ -625,6 +805,30 @@ func (l *LocalDB) SaveLocalPricelistItems(items []LocalPricelistItem) error {
     return nil
 }
+
+// ReplaceLocalPricelistItems atomically replaces all items for a pricelist.
+func (l *LocalDB) ReplaceLocalPricelistItems(pricelistID uint, items []LocalPricelistItem) error {
+    return l.db.Transaction(func(tx *gorm.DB) error {
+        if err := tx.Where("pricelist_id = ?", pricelistID).Delete(&LocalPricelistItem{}).Error; err != nil {
+            return err
+        }
+        if len(items) == 0 {
+            return nil
+        }
+
+        batchSize := 500
+        for i := 0; i < len(items); i += batchSize {
+            end := i + batchSize
+            if end > len(items) {
+                end = len(items)
+            }
+            if err := tx.CreateInBatches(items[i:end], batchSize).Error; err != nil {
+                return err
+            }
+        }
+        return nil
+    })
+}
 
 // GetLocalPricelistItems returns items for a local pricelist
 func (l *LocalDB) GetLocalPricelistItems(pricelistID uint) ([]LocalPricelistItem, error) {
     var items []LocalPricelistItem
@@ -644,6 +848,36 @@ func (l *LocalDB) GetLocalPriceForLot(pricelistID uint, lotName string) (float64
     return item.Price, nil
 }
+
+// GetLocalLotCategoriesByServerPricelistID returns lot_category for each lot_name from a local pricelist resolved by server ID.
+// Missing lots are not included in the map; caller is responsible for strict validation.
+func (l *LocalDB) GetLocalLotCategoriesByServerPricelistID(serverPricelistID uint, lotNames []string) (map[string]string, error) {
+    result := make(map[string]string, len(lotNames))
+    if serverPricelistID == 0 || len(lotNames) == 0 {
+        return result, nil
+    }
+
+    localPL, err := l.GetLocalPricelistByServerID(serverPricelistID)
+    if err != nil {
+        return nil, err
+    }
+
+    type row struct {
+        LotName     string `gorm:"column:lot_name"`
+        LotCategory string `gorm:"column:lot_category"`
+    }
+    var rows []row
+    if err := l.db.Model(&LocalPricelistItem{}).
+        Select("lot_name, lot_category").
+        Where("pricelist_id = ? AND lot_name IN ?", localPL.ID, lotNames).
+        Find(&rows).Error; err != nil {
+        return nil, err
+    }
+    for _, r := range rows {
+        result[r.LotName] = r.LotCategory
+    }
+    return result, nil
+}
 
 // MarkPricelistAsUsed marks a pricelist as used by a configuration
 func (l *LocalDB) MarkPricelistAsUsed(pricelistID uint, isUsed bool) error {
     return l.db.Model(&LocalPricelist{}).Where("id = ?", pricelistID).
@@ -679,6 +913,47 @@ func (l *LocalDB) DeleteLocalPricelist(id uint) error {
     return l.db.Delete(&LocalPricelist{}, id).Error
 }
+
+// DeleteUnusedLocalPricelistsMissingOnServer removes local pricelists that are absent on server
+// and not referenced by active local configurations.
+func (l *LocalDB) DeleteUnusedLocalPricelistsMissingOnServer(serverPricelistIDs []uint) (int, error) {
+    returned := 0
+    err := l.db.Transaction(func(tx *gorm.DB) error {
+        var candidates []LocalPricelist
+        query := tx.Model(&LocalPricelist{})
+        if len(serverPricelistIDs) > 0 {
+            query = query.Where("server_id NOT IN ?", serverPricelistIDs)
+        }
+        if err := query.Find(&candidates).Error; err != nil {
+            return err
+        }
+
+        for i := range candidates {
+            pl := candidates[i]
+            var refs int64
+            if err := tx.Model(&LocalConfiguration{}).
+                Where("pricelist_id = ? AND is_active = 1", pl.ServerID).
+                Count(&refs).Error; err != nil {
+                return err
+            }
+            if refs > 0 {
+                continue
+            }
+            if err := tx.Where("pricelist_id = ?", pl.ID).Delete(&LocalPricelistItem{}).Error; err != nil {
+                return err
+            }
+            if err := tx.Delete(&LocalPricelist{}, pl.ID).Error; err != nil {
+                return err
+            }
+            returned++
+        }
+        return nil
+    })
+    if err != nil {
+        return 0, err
+    }
+    return returned, nil
+}
 
 // PendingChange methods
 
 // AddPendingChange adds a change to the sync queue
@@ -765,3 +1040,71 @@ func (l *LocalDB) PurgeOrphanConfigurationPendingChanges() (int64, error) {
 func (l *LocalDB) GetPendingCount() int64 {
     return l.CountPendingChanges()
 }
+
+// GetRemoteMigrationApplied returns a locally applied remote migration by ID.
+func (l *LocalDB) GetRemoteMigrationApplied(id string) (*LocalRemoteMigrationApplied, error) {
+    var migration LocalRemoteMigrationApplied
+    if err := l.db.Where("id = ?", id).First(&migration).Error; err != nil {
+        return nil, err
+    }
+    return &migration, nil
+}
+
+// UpsertRemoteMigrationApplied writes applied migration metadata.
+func (l *LocalDB) UpsertRemoteMigrationApplied(id, checksum, appVersion string, appliedAt time.Time) error {
+    record := &LocalRemoteMigrationApplied{
+        ID:         id,
+        Checksum:   checksum,
+        AppVersion: appVersion,
+        AppliedAt:  appliedAt,
+    }
+    return l.db.Clauses(clause.OnConflict{
+        Columns: []clause.Column{{Name: "id"}},
+        DoUpdates: clause.Assignments(map[string]interface{}{
+            "checksum":    checksum,
+            "app_version": appVersion,
+            "applied_at":  appliedAt,
+        }),
+    }).Create(record).Error
+}
+
+// GetLatestAppliedRemoteMigrationID returns last applied remote migration id.
+func (l *LocalDB) GetLatestAppliedRemoteMigrationID() (string, error) {
+    var record LocalRemoteMigrationApplied
+    if err := l.db.Order("applied_at DESC").First(&record).Error; err != nil {
+        return "", err
+    }
+    return record.ID, nil
+}
+
+// GetSyncGuardState returns the latest readiness guard state.
+func (l *LocalDB) GetSyncGuardState() (*LocalSyncGuardState, error) {
+    var state LocalSyncGuardState
+    if err := l.db.Order("id DESC").First(&state).Error; err != nil {
+        return nil, err
+    }
+    return &state, nil
+}
+
+// SetSyncGuardState upserts readiness guard state (single-row logical table).
+func (l *LocalDB) SetSyncGuardState(status, reasonCode, reasonText string, requiredMinAppVersion *string, checkedAt *time.Time) error {
+    state := &LocalSyncGuardState{
+        ID:                    1,
+        Status:                status,
+        ReasonCode:            reasonCode,
+        ReasonText:            reasonText,
+        RequiredMinAppVersion: requiredMinAppVersion,
+        LastCheckedAt:         checkedAt,
+    }
+    return l.db.Clauses(clause.OnConflict{
+        Columns: []clause.Column{{Name: "id"}},
+        DoUpdates: clause.Assignments(map[string]interface{}{
+            "status":                   status,
+            "reason_code":              reasonCode,
+            "reason_text":              reasonText,
+            "required_min_app_version": requiredMinAppVersion,
+            "last_checked_at":          checkedAt,
+            "updated_at":               time.Now(),
+        }),
+    }).Create(state).Error
+}
@@ -51,8 +51,8 @@ func TestRunLocalMigrationsBackfillsDefaultProject(t *testing.T) {
     if err != nil {
         t.Fatalf("get system project: %v", err)
     }
-    if project.Name != "Без проекта" {
-        t.Fatalf("expected system project name, got %q", project.Name)
+    if project.Name == nil || *project.Name != "Без проекта" {
+        t.Fatalf("expected system project name, got %v", project.Name)
     }
     if !project.IsSystem {
         t.Fatalf("expected system project flag")
@@ -58,6 +58,51 @@ var localMigrations = []localMigration{
         name: "Backfill source for local pricelists and create source indexes",
         run:  backfillLocalPricelistSource,
     },
+    {
+        id:   "2026_02_09_drop_component_unused_fields",
+        name: "Remove current_price and synced_at from local_components (unused fields)",
+        run:  dropComponentUnusedFields,
+    },
+    {
+        id:   "2026_02_09_add_warehouse_competitor_pricelists",
+        name: "Add warehouse_pricelist_id and competitor_pricelist_id to local_configurations",
+        run:  addWarehouseCompetitorPriceLists,
+    },
+    {
+        id:   "2026_02_11_local_pricelist_item_category",
+        name: "Add lot_category to local_pricelist_items and create indexes",
+        run:  addLocalPricelistItemCategoryAndIndexes,
+    },
+    {
+        id:   "2026_02_11_local_config_article",
+        name: "Add article to local_configurations",
+        run:  addLocalConfigurationArticle,
+    },
+    {
+        id:   "2026_02_11_local_config_server_model",
+        name: "Add server_model to local_configurations",
+        run:  addLocalConfigurationServerModel,
+    },
+    {
+        id:   "2026_02_11_local_config_support_code",
+        name: "Add support_code to local_configurations",
+        run:  addLocalConfigurationSupportCode,
+    },
+    {
+        id:   "2026_02_13_local_project_code",
+        name: "Add project code to local_projects and backfill",
+        run:  addLocalProjectCode,
+    },
+    {
+        id:   "2026_02_13_local_project_variant",
+        name: "Add project variant to local_projects and backfill",
+        run:  addLocalProjectVariant,
+    },
+    {
+        id:   "2026_02_13_local_project_name_nullable",
+        name: "Allow NULL project names in local_projects",
+        run:  allowLocalProjectNameNull,
+    },
 }
 
 func runLocalMigrations(db *gorm.DB) error {
@@ -194,7 +239,8 @@ func ensureDefaultProjectTx(tx *gorm.DB, ownerUsername string) (*LocalProject, e
     project = LocalProject{
         UUID:          uuid.NewString(),
         OwnerUsername: ownerUsername,
-        Name:          "Без проекта",
+        Code:          "Без проекта",
+        Name:          ptrString("Без проекта"),
         IsActive:      true,
         IsSystem:      true,
         CreatedAt:     now,
@@ -208,6 +254,139 @@ func ensureDefaultProjectTx(tx *gorm.DB, ownerUsername string) (*LocalProject, e
     return &project, nil
 }
+
+func addLocalProjectCode(tx *gorm.DB) error {
+    if err := tx.Exec(`ALTER TABLE local_projects ADD COLUMN code TEXT`).Error; err != nil {
+        if !strings.Contains(strings.ToLower(err.Error()), "duplicate") &&
+            !strings.Contains(strings.ToLower(err.Error()), "exists") {
+            return err
+        }
+    }
+
+    // Drop unique index if it already exists to allow de-duplication updates.
+    if err := tx.Exec(`DROP INDEX IF EXISTS idx_local_projects_code`).Error; err != nil {
+        return err
+    }
+
+    // Copy code from current project name.
+    if err := tx.Exec(`
+        UPDATE local_projects
+        SET code = TRIM(COALESCE(name, ''))`).Error; err != nil {
+        return err
+    }
+
+    // Ensure any remaining blanks have a unique fallback.
+    if err := tx.Exec(`
+        UPDATE local_projects
+        SET code = 'P-' || uuid
+        WHERE code IS NULL OR TRIM(code) = ''`).Error; err != nil {
+        return err
+    }
+
+    // De-duplicate codes: OPS-1948-2, OPS-1948-3...
+    if err := tx.Exec(`
+        WITH ranked AS (
+            SELECT id, code,
+                ROW_NUMBER() OVER (PARTITION BY code ORDER BY id) AS rn
+            FROM local_projects
+        )
+        UPDATE local_projects
+        SET code = code || '-' || (SELECT rn FROM ranked WHERE ranked.id = local_projects.id)
+        WHERE id IN (SELECT id FROM ranked WHERE rn > 1)`).Error; err != nil {
+        return err
+    }
+
+    // Create unique index for project codes (ignore if exists).
+    if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code ON local_projects(code)`).Error; err != nil {
+        return err
+    }
+
+    return nil
+}
+
+func addLocalProjectVariant(tx *gorm.DB) error {
+    if err := tx.Exec(`ALTER TABLE local_projects ADD COLUMN variant TEXT NOT NULL DEFAULT ''`).Error; err != nil {
+        if !strings.Contains(strings.ToLower(err.Error()), "duplicate") &&
+            !strings.Contains(strings.ToLower(err.Error()), "exists") {
+            return err
+        }
+    }
+
+    // Drop legacy code index if present.
+    if err := tx.Exec(`DROP INDEX IF EXISTS idx_local_projects_code`).Error; err != nil {
+        return err
+    }
+
+    // Reset code from name and clear variant.
+    if err := tx.Exec(`
+        UPDATE local_projects
+        SET code = TRIM(COALESCE(name, '')),
+            variant = ''`).Error; err != nil {
+        return err
+    }
+
+    // De-duplicate by assigning variant numbers: 2,3...
+    if err := tx.Exec(`
+        WITH ranked AS (
+            SELECT id, code,
+                ROW_NUMBER() OVER (PARTITION BY code ORDER BY id) AS rn
+            FROM local_projects
+        )
+        UPDATE local_projects
+        SET variant = CASE
+            WHEN (SELECT rn FROM ranked WHERE ranked.id = local_projects.id) = 1 THEN ''
+            ELSE '-' || CAST((SELECT rn FROM ranked WHERE ranked.id = local_projects.id) AS TEXT)
+        END`).Error; err != nil {
+        return err
+    }
+
+    if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code_variant ON local_projects(code, variant)`).Error; err != nil {
+        return err
+    }
+
+    return nil
+}
+
+func allowLocalProjectNameNull(tx *gorm.DB) error {
+    if err := tx.Exec(`ALTER TABLE local_projects RENAME TO local_projects_old`).Error; err != nil {
+        return err
+    }
+
+    if err := tx.Exec(`
+        CREATE TABLE local_projects (
+            id INTEGER PRIMARY KEY AUTOINCREMENT,
+            uuid TEXT NOT NULL UNIQUE,
+            server_id INTEGER NULL,
+            owner_username TEXT NOT NULL,
+            code TEXT NOT NULL,
+            variant TEXT NOT NULL DEFAULT '',
+            name TEXT NULL,
+            tracker_url TEXT NULL,
+            is_active INTEGER NOT NULL DEFAULT 1,
+            is_system INTEGER NOT NULL DEFAULT 0,
+            created_at DATETIME,
+            updated_at DATETIME,
+            synced_at DATETIME NULL,
+            sync_status TEXT DEFAULT 'local'
+        )`).Error; err != nil {
+        return err
+    }
+
+    _ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_owner_username ON local_projects(owner_username)`).Error
+    _ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_active ON local_projects(is_active)`).Error
+    _ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_system ON local_projects(is_system)`).Error
+    _ = tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code_variant ON local_projects(code, variant)`).Error
+
+    if err := tx.Exec(`
+        INSERT INTO local_projects (id, uuid, server_id, owner_username, code, variant, name, tracker_url, is_active, is_system, created_at, updated_at, synced_at, sync_status)
+        SELECT id, uuid, server_id, owner_username, code, variant, name, tracker_url, is_active, is_system, created_at, updated_at, synced_at, sync_status
+        FROM local_projects_old`).Error; err != nil {
+        return err
+    }
+
+    _ = tx.Exec(`DROP TABLE local_projects_old`).Error
+    return nil
+}
 
 func backfillConfigurationPricelists(tx *gorm.DB) error {
     var latest LocalPricelist
     if err := tx.Where("source = ?", "estimate").Order("created_at DESC").First(&latest).Error; err != nil {
@@ -249,6 +428,7 @@ func chooseNonZeroTime(candidate time.Time, fallback time.Time) time.Time {
     return candidate
 }
 
+
 func fixLocalPricelistIndexes(tx *gorm.DB) error {
     type indexRow struct {
         Name string `gorm:"column:name"`
@@ -316,3 +496,222 @@ func backfillLocalPricelistSource(tx *gorm.DB) error {
|
|||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func dropComponentUnusedFields(tx *gorm.DB) error {
|
||||||
|
// Check if columns exist
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_components')
|
||||||
|
WHERE name IN ('current_price', 'synced_at')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check columns existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 0 {
|
||||||
|
slog.Info("unused fields already removed from local_components")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SQLite: recreate table without current_price and synced_at
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE TABLE local_components_new (
|
||||||
|
lot_name TEXT PRIMARY KEY,
|
||||||
|
lot_description TEXT,
|
||||||
|
category TEXT,
|
||||||
|
model TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create new local_components table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
INSERT INTO local_components_new (lot_name, lot_description, category, model)
|
||||||
|
SELECT lot_name, lot_description, category, model
|
||||||
|
FROM local_components
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("copy data to new table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`DROP TABLE local_components`).Error; err != nil {
|
||||||
|
return fmt.Errorf("drop old table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`ALTER TABLE local_components_new RENAME TO local_components`).Error; err != nil {
|
||||||
|
return fmt.Errorf("rename new table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("dropped current_price and synced_at columns from local_components")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addWarehouseCompetitorPriceLists(tx *gorm.DB) error {
|
||||||
|
// Check if columns exist
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_configurations')
|
||||||
|
WHERE name IN ('warehouse_pricelist_id', 'competitor_pricelist_id')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check columns existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 2 {
|
||||||
|
slog.Info("warehouse and competitor pricelist columns already exist")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add columns if they don't exist
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN warehouse_pricelist_id INTEGER
|
||||||
|
`).Error; err != nil {
|
||||||
|
// Column might already exist, ignore
|
||||||
|
if !strings.Contains(err.Error(), "duplicate column") {
|
||||||
|
return fmt.Errorf("add warehouse_pricelist_id column: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN competitor_pricelist_id INTEGER
|
||||||
|
`).Error; err != nil {
|
||||||
|
// Column might already exist, ignore
|
||||||
|
if !strings.Contains(err.Error(), "duplicate column") {
|
||||||
|
return fmt.Errorf("add competitor_pricelist_id column: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create indexes
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_configurations_warehouse_pricelist
|
||||||
|
ON local_configurations(warehouse_pricelist_id)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create warehouse pricelist index: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_configurations_competitor_pricelist
|
||||||
|
ON local_configurations(competitor_pricelist_id)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create competitor pricelist index: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("added warehouse and competitor pricelist fields to local_configurations")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalPricelistItemCategoryAndIndexes(tx *gorm.DB) error {
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_pricelist_items')
|
||||||
|
WHERE name IN ('lot_category')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check local_pricelist_items(lot_category) existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 0 {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_pricelist_items
|
||||||
|
ADD COLUMN lot_category TEXT
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("add local_pricelist_items.lot_category: %w", err)
|
||||||
|
}
|
||||||
|
slog.Info("added lot_category to local_pricelist_items")
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_pricelist_items_pricelist_lot
|
||||||
|
ON local_pricelist_items(pricelist_id, lot_name)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("ensure idx_local_pricelist_items_pricelist_lot: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_pricelist_items_lot_category
|
||||||
|
ON local_pricelist_items(lot_category)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("ensure idx_local_pricelist_items_lot_category: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalConfigurationArticle(tx *gorm.DB) error {
	type columnInfo struct {
		Name string `gorm:"column:name"`
	}
	var columns []columnInfo
	if err := tx.Raw(`
		SELECT name FROM pragma_table_info('local_configurations')
		WHERE name IN ('article')
	`).Scan(&columns).Error; err != nil {
		return fmt.Errorf("check local_configurations(article) existence: %w", err)
	}
	if len(columns) == 0 {
		if err := tx.Exec(`
			ALTER TABLE local_configurations
			ADD COLUMN article TEXT
		`).Error; err != nil {
			return fmt.Errorf("add local_configurations.article: %w", err)
		}
		slog.Info("added article to local_configurations")
	}
	return nil
}
func addLocalConfigurationServerModel(tx *gorm.DB) error {
	type columnInfo struct {
		Name string `gorm:"column:name"`
	}
	var columns []columnInfo
	if err := tx.Raw(`
		SELECT name FROM pragma_table_info('local_configurations')
		WHERE name IN ('server_model')
	`).Scan(&columns).Error; err != nil {
		return fmt.Errorf("check local_configurations(server_model) existence: %w", err)
	}
	if len(columns) == 0 {
		if err := tx.Exec(`
			ALTER TABLE local_configurations
			ADD COLUMN server_model TEXT
		`).Error; err != nil {
			return fmt.Errorf("add local_configurations.server_model: %w", err)
		}
		slog.Info("added server_model to local_configurations")
	}
	return nil
}
func addLocalConfigurationSupportCode(tx *gorm.DB) error {
	type columnInfo struct {
		Name string `gorm:"column:name"`
	}
	var columns []columnInfo
	if err := tx.Raw(`
		SELECT name FROM pragma_table_info('local_configurations')
		WHERE name IN ('support_code')
	`).Scan(&columns).Error; err != nil {
		return fmt.Errorf("check local_configurations(support_code) existence: %w", err)
	}
	if len(columns) == 0 {
		if err := tx.Exec(`
			ALTER TABLE local_configurations
			ADD COLUMN support_code TEXT
		`).Error; err != nil {
			return fmt.Errorf("add local_configurations.support_code: %w", err)
		}
		slog.Info("added support_code to local_configurations")
	}
	return nil
}
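The three `addLocalConfiguration*` migrations above are identical except for the column name. A minimal standalone sketch of the shared pattern (guard on `pragma_table_info`, then `ALTER TABLE` only when the column is missing) — the helper names `columnExists` and `ensureColumnSQL` are illustrative, not part of the repo:

```go
package main

import "fmt"

// columnExists reports whether name is among the column names that
// SQLite's pragma_table_info returned for a table.
func columnExists(columns []string, name string) bool {
	for _, c := range columns {
		if c == name {
			return true
		}
	}
	return false
}

// ensureColumnSQL returns the ALTER TABLE statement to run, or "" when the
// column is already present, mirroring the idempotency guard used above.
func ensureColumnSQL(table, column, typ string, existing []string) string {
	if columnExists(existing, column) {
		return ""
	}
	return fmt.Sprintf("ALTER TABLE %s ADD COLUMN %s %s", table, column, typ)
}

func main() {
	// Column missing: emit the ALTER statement.
	fmt.Println(ensureColumnSQL("local_configurations", "article", "TEXT", []string{"id", "notes"}))
	// Column present: nothing to do on a re-run.
	fmt.Printf("%q\n", ensureColumnSQL("local_configurations", "article", "TEXT", []string{"id", "article"}))
}
```

The guard matters because SQLite's `ALTER TABLE ... ADD COLUMN` has no `IF NOT EXISTS` clause, so re-running an unguarded migration would fail.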
@@ -57,6 +57,30 @@ func (c LocalConfigItems) Total() float64 {
 	return total
 }
 
+// LocalStringList is a JSON-encoded list of strings stored as TEXT in SQLite.
+type LocalStringList []string
+
+func (s LocalStringList) Value() (driver.Value, error) {
+	return json.Marshal(s)
+}
+
+func (s *LocalStringList) Scan(value interface{}) error {
+	if value == nil {
+		*s = make(LocalStringList, 0)
+		return nil
+	}
+	var bytes []byte
+	switch v := value.(type) {
+	case []byte:
+		bytes = v
+	case string:
+		bytes = []byte(v)
+	default:
+		return errors.New("type assertion failed for LocalStringList")
+	}
+	return json.Unmarshal(bytes, s)
+}
+
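`LocalStringList` implements `driver.Valuer` and `sql.Scanner` so GORM can store a `[]string` in a TEXT column. A self-contained round-trip of exactly that code, showing that a NULL column scans to an empty (not nil) slice:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"fmt"
)

// LocalStringList is a JSON-encoded list of strings stored as TEXT in SQLite.
type LocalStringList []string

func (s LocalStringList) Value() (driver.Value, error) {
	return json.Marshal(s)
}

func (s *LocalStringList) Scan(value interface{}) error {
	if value == nil {
		*s = make(LocalStringList, 0)
		return nil
	}
	var bytes []byte
	switch v := value.(type) {
	case []byte:
		bytes = v
	case string:
		bytes = []byte(v)
	default:
		return errors.New("type assertion failed for LocalStringList")
	}
	return json.Unmarshal(bytes, s)
}

func main() {
	// Writing: Value() produces the JSON that lands in the TEXT column.
	v, _ := LocalStringList{"PN-1", "PN-2"}.Value()
	fmt.Printf("stored as: %s\n", v) // stored as: ["PN-1","PN-2"]

	// Reading: the driver hands back []byte (or string); Scan decodes it.
	var out LocalStringList
	_ = out.Scan(v)
	fmt.Println(out) // [PN-1 PN-2]

	// A NULL column becomes an empty, non-nil slice.
	var empty LocalStringList
	_ = empty.Scan(nil)
	fmt.Println(empty == nil, len(empty)) // false 0
}
```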
 // LocalConfiguration stores configurations in local SQLite
 type LocalConfiguration struct {
 	ID uint `gorm:"primaryKey;autoIncrement" json:"id"`
@@ -72,7 +96,13 @@ type LocalConfiguration struct {
 	Notes       string `json:"notes"`
 	IsTemplate  bool   `gorm:"default:false" json:"is_template"`
 	ServerCount int    `gorm:"default:1" json:"server_count"`
+	ServerModel string `gorm:"size:100" json:"server_model,omitempty"`
+	SupportCode string `gorm:"size:20" json:"support_code,omitempty"`
+	Article     string `gorm:"size:80" json:"article,omitempty"`
 	PricelistID *uint  `gorm:"index" json:"pricelist_id,omitempty"`
+	WarehousePricelistID  *uint `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
+	CompetitorPricelistID *uint `gorm:"index" json:"competitor_pricelist_id,omitempty"`
+	OnlyInStock bool `gorm:"default:false" json:"only_in_stock"`
 	PriceUpdatedAt *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
 	CreatedAt      time.Time  `json:"created_at"`
 	UpdatedAt      time.Time  `json:"updated_at"`
@@ -93,7 +123,9 @@ type LocalProject struct {
 	UUID          string `gorm:"uniqueIndex;not null" json:"uuid"`
 	ServerID      *uint  `json:"server_id,omitempty"`
 	OwnerUsername string `gorm:"not null;index" json:"owner_username"`
-	Name string `gorm:"not null" json:"name"`
+	Code    string  `gorm:"not null;index:idx_local_projects_code_variant,priority:1" json:"code"`
+	Variant string  `gorm:"default:'';index:idx_local_projects_code_variant,priority:2" json:"variant"`
+	Name    *string `json:"name,omitempty"`
 	TrackerURL string `json:"tracker_url"`
 	IsActive   bool   `gorm:"default:true;index" json:"is_active"`
 	IsSystem   bool   `gorm:"default:false;index" json:"is_system"`
@@ -142,30 +174,59 @@ func (LocalPricelist) TableName() string {
 
 // LocalPricelistItem stores pricelist items
 type LocalPricelistItem struct {
 	ID          uint   `gorm:"primaryKey;autoIncrement" json:"id"`
 	PricelistID uint   `gorm:"not null;index" json:"pricelist_id"`
 	LotName     string `gorm:"not null" json:"lot_name"`
-	Price float64 `gorm:"not null" json:"price"`
+	LotCategory  string          `gorm:"column:lot_category" json:"lot_category,omitempty"`
+	Price        float64         `gorm:"not null" json:"price"`
+	AvailableQty *float64        `json:"available_qty,omitempty"`
+	Partnumbers  LocalStringList `gorm:"type:text" json:"partnumbers,omitempty"`
 }
 
 func (LocalPricelistItem) TableName() string {
 	return "local_pricelist_items"
 }
 
-// LocalComponent stores cached components for offline search
+// LocalComponent stores cached components for offline search (metadata only)
+// All pricing is now sourced from local_pricelist_items based on configuration pricelist selection
 type LocalComponent struct {
 	LotName        string `gorm:"primaryKey" json:"lot_name"`
 	LotDescription string `json:"lot_description"`
 	Category       string `json:"category"`
 	Model          string `json:"model"`
-	CurrentPrice *float64  `json:"current_price"`
-	SyncedAt     time.Time `json:"synced_at"`
 }
 
 func (LocalComponent) TableName() string {
 	return "local_components"
 }
 
+// LocalRemoteMigrationApplied tracks remote SQLite migrations received from server and applied locally.
+type LocalRemoteMigrationApplied struct {
+	ID         string    `gorm:"primaryKey;size:128" json:"id"`
+	Checksum   string    `gorm:"size:128;not null" json:"checksum"`
+	AppVersion string    `gorm:"size:64" json:"app_version,omitempty"`
+	AppliedAt  time.Time `gorm:"not null" json:"applied_at"`
+}
+
+func (LocalRemoteMigrationApplied) TableName() string {
+	return "local_remote_migrations_applied"
+}
+
+// LocalSyncGuardState stores latest sync readiness decision for UI and preflight checks.
+type LocalSyncGuardState struct {
+	ID                    uint       `gorm:"primaryKey;autoIncrement" json:"id"`
+	Status                string     `gorm:"size:32;not null;index" json:"status"` // ready|blocked|unknown
+	ReasonCode            string     `gorm:"size:128" json:"reason_code,omitempty"`
+	ReasonText            string     `gorm:"type:text" json:"reason_text,omitempty"`
+	RequiredMinAppVersion *string    `gorm:"size:64" json:"required_min_app_version,omitempty"`
+	LastCheckedAt         *time.Time `json:"last_checked_at,omitempty"`
+	UpdatedAt             time.Time  `json:"updated_at"`
+}
+
+func (LocalSyncGuardState) TableName() string {
+	return "local_sync_guard_state"
+}
+
 // PendingChange stores changes that need to be synced to the server
 type PendingChange struct {
 	ID int64 `gorm:"primaryKey;autoIncrement" json:"id"`
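`LocalRemoteMigrationApplied` records a `Checksum` per applied remote migration, which suggests the client verifies migration payloads before (re-)applying them. The model does not say how the checksum is computed; a plausible sketch using a SHA-256 hex digest of the migration SQL — both helper names and the digest scheme are assumptions, not taken from the repo:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// migrationChecksum is a hypothetical scheme: a SHA-256 hex digest of the
// migration SQL, sized to fit the 128-char Checksum column.
func migrationChecksum(sql string) string {
	sum := sha256.Sum256([]byte(sql))
	return hex.EncodeToString(sum[:])
}

// alreadyApplied (also assumed) compares an incoming payload against the
// recorded checksum to detect tampered or altered re-sends of a migration.
func alreadyApplied(recorded, incomingSQL string) bool {
	return recorded == migrationChecksum(incomingSQL)
}

func main() {
	sql := "ALTER TABLE local_configurations ADD COLUMN article TEXT"
	c := migrationChecksum(sql)
	fmt.Println(len(c))                      // 64 hex characters
	fmt.Println(alreadyApplied(c, sql))      // true
	fmt.Println(alreadyApplied(c, sql+";"))  // false: payload changed
}
```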
@@ -22,7 +22,11 @@ func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
|
|||||||
"notes": localCfg.Notes,
|
"notes": localCfg.Notes,
|
||||||
"is_template": localCfg.IsTemplate,
|
"is_template": localCfg.IsTemplate,
|
||||||
"server_count": localCfg.ServerCount,
|
"server_count": localCfg.ServerCount,
|
||||||
|
"server_model": localCfg.ServerModel,
|
||||||
|
"support_code": localCfg.SupportCode,
|
||||||
|
"article": localCfg.Article,
|
||||||
"pricelist_id": localCfg.PricelistID,
|
"pricelist_id": localCfg.PricelistID,
|
||||||
|
"only_in_stock": localCfg.OnlyInStock,
|
||||||
"price_updated_at": localCfg.PriceUpdatedAt,
|
"price_updated_at": localCfg.PriceUpdatedAt,
|
||||||
"created_at": localCfg.CreatedAt,
|
"created_at": localCfg.CreatedAt,
|
||||||
"updated_at": localCfg.UpdatedAt,
|
"updated_at": localCfg.UpdatedAt,
|
||||||
@@ -51,7 +55,11 @@ func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
|
|||||||
Notes string `json:"notes"`
|
Notes string `json:"notes"`
|
||||||
IsTemplate bool `json:"is_template"`
|
IsTemplate bool `json:"is_template"`
|
||||||
ServerCount int `json:"server_count"`
|
ServerCount int `json:"server_count"`
|
||||||
|
ServerModel string `json:"server_model"`
|
||||||
|
SupportCode string `json:"support_code"`
|
||||||
|
Article string `json:"article"`
|
||||||
PricelistID *uint `json:"pricelist_id"`
|
PricelistID *uint `json:"pricelist_id"`
|
||||||
|
OnlyInStock bool `json:"only_in_stock"`
|
||||||
PriceUpdatedAt *time.Time `json:"price_updated_at"`
|
PriceUpdatedAt *time.Time `json:"price_updated_at"`
|
||||||
OriginalUserID uint `json:"original_user_id"`
|
OriginalUserID uint `json:"original_user_id"`
|
||||||
OriginalUsername string `json:"original_username"`
|
OriginalUsername string `json:"original_username"`
|
||||||
@@ -76,7 +84,11 @@ func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
|
|||||||
Notes: snapshot.Notes,
|
Notes: snapshot.Notes,
|
||||||
IsTemplate: snapshot.IsTemplate,
|
IsTemplate: snapshot.IsTemplate,
|
||||||
ServerCount: snapshot.ServerCount,
|
ServerCount: snapshot.ServerCount,
|
||||||
|
ServerModel: snapshot.ServerModel,
|
||||||
|
SupportCode: snapshot.SupportCode,
|
||||||
|
Article: snapshot.Article,
|
||||||
PricelistID: snapshot.PricelistID,
|
PricelistID: snapshot.PricelistID,
|
||||||
|
OnlyInStock: snapshot.OnlyInStock,
|
||||||
PriceUpdatedAt: snapshot.PriceUpdatedAt,
|
PriceUpdatedAt: snapshot.PriceUpdatedAt,
|
||||||
OriginalUserID: snapshot.OriginalUserID,
|
OriginalUserID: snapshot.OriginalUserID,
|
||||||
OriginalUsername: snapshot.OriginalUsername,
|
OriginalUsername: snapshot.OriginalUsername,
|
||||||
|
|||||||
238	internal/lotmatch/resolver.go	Normal file
@@ -0,0 +1,238 @@
package lotmatch

import (
	"errors"
	"regexp"
	"sort"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)

var (
	ErrResolveConflict = errors.New("multiple lot matches")
	ErrResolveNotFound = errors.New("lot not found")
)

type LotResolver struct {
	partnumberToLots map[string][]string
	exactLots        map[string]string
	allLots          []string
}

type MappingMatcher struct {
	exact    map[string][]string
	exactLot map[string]string
	wildcard []wildcardMapping
}

type wildcardMapping struct {
	lotName string
	re      *regexp.Regexp
}

func NewLotResolverFromDB(db *gorm.DB) (*LotResolver, error) {
	mappings, lots, err := loadMappingsAndLots(db)
	if err != nil {
		return nil, err
	}
	return NewLotResolver(mappings, lots), nil
}

func NewMappingMatcherFromDB(db *gorm.DB) (*MappingMatcher, error) {
	mappings, lots, err := loadMappingsAndLots(db)
	if err != nil {
		return nil, err
	}
	return NewMappingMatcher(mappings, lots), nil
}

func NewLotResolver(mappings []models.LotPartnumber, lots []models.Lot) *LotResolver {
	partnumberToLots := make(map[string][]string, len(mappings))
	for _, m := range mappings {
		pn := NormalizeKey(m.Partnumber)
		lot := strings.TrimSpace(m.LotName)
		if pn == "" || lot == "" {
			continue
		}
		partnumberToLots[pn] = append(partnumberToLots[pn], lot)
	}
	for key := range partnumberToLots {
		partnumberToLots[key] = uniqueCaseInsensitive(partnumberToLots[key])
	}

	exactLots := make(map[string]string, len(lots))
	allLots := make([]string, 0, len(lots))
	for _, l := range lots {
		name := strings.TrimSpace(l.LotName)
		if name == "" {
			continue
		}
		exactLots[NormalizeKey(name)] = name
		allLots = append(allLots, name)
	}
	sort.Slice(allLots, func(i, j int) bool {
		li := len([]rune(allLots[i]))
		lj := len([]rune(allLots[j]))
		if li == lj {
			return allLots[i] < allLots[j]
		}
		return li > lj
	})

	return &LotResolver{
		partnumberToLots: partnumberToLots,
		exactLots:        exactLots,
		allLots:          allLots,
	}
}

func NewMappingMatcher(mappings []models.LotPartnumber, lots []models.Lot) *MappingMatcher {
	exact := make(map[string][]string, len(mappings))
	wildcards := make([]wildcardMapping, 0, len(mappings))
	for _, m := range mappings {
		pn := NormalizeKey(m.Partnumber)
		lot := strings.TrimSpace(m.LotName)
		if pn == "" || lot == "" {
			continue
		}
		if strings.Contains(pn, "*") {
			pattern := "^" + regexp.QuoteMeta(pn) + "$"
			pattern = strings.ReplaceAll(pattern, "\\*", ".*")
			re, err := regexp.Compile(pattern)
			if err != nil {
				continue
			}
			wildcards = append(wildcards, wildcardMapping{lotName: lot, re: re})
			continue
		}
		exact[pn] = append(exact[pn], lot)
	}
	for key := range exact {
		exact[key] = uniqueCaseInsensitive(exact[key])
	}

	exactLot := make(map[string]string, len(lots))
	for _, l := range lots {
		name := strings.TrimSpace(l.LotName)
		if name == "" {
			continue
		}
		exactLot[NormalizeKey(name)] = name
	}

	return &MappingMatcher{
		exact:    exact,
		exactLot: exactLot,
		wildcard: wildcards,
	}
}

func (r *LotResolver) Resolve(partnumber string) (string, string, error) {
	key := NormalizeKey(partnumber)
	if key == "" {
		return "", "", ErrResolveNotFound
	}

	if mapped := r.partnumberToLots[key]; len(mapped) > 0 {
		if len(mapped) == 1 {
			return mapped[0], "mapping_table", nil
		}
		return "", "", ErrResolveConflict
	}
	if exact, ok := r.exactLots[key]; ok {
		return exact, "article_exact", nil
	}

	best := ""
	bestLen := -1
	tie := false
	for _, lot := range r.allLots {
		lotKey := NormalizeKey(lot)
		if lotKey == "" {
			continue
		}
		if strings.HasPrefix(key, lotKey) {
			l := len([]rune(lotKey))
			if l > bestLen {
				best = lot
				bestLen = l
				tie = false
			} else if l == bestLen && !strings.EqualFold(best, lot) {
				tie = true
			}
		}
	}
	if best == "" {
		return "", "", ErrResolveNotFound
	}
	if tie {
		return "", "", ErrResolveConflict
	}
	return best, "prefix", nil
}

func (m *MappingMatcher) MatchLots(partnumber string) []string {
	if m == nil {
		return nil
	}
	key := NormalizeKey(partnumber)
	if key == "" {
		return nil
	}

	lots := make([]string, 0, 2)
	if exact := m.exact[key]; len(exact) > 0 {
		lots = append(lots, exact...)
	}
	for _, wc := range m.wildcard {
		if wc.re == nil || !wc.re.MatchString(key) {
			continue
		}
		lots = append(lots, wc.lotName)
	}
	if lot, ok := m.exactLot[key]; ok && strings.TrimSpace(lot) != "" {
		lots = append(lots, lot)
	}
	return uniqueCaseInsensitive(lots)
}

func NormalizeKey(v string) string {
	s := strings.ToLower(strings.TrimSpace(v))
	replacer := strings.NewReplacer(" ", "", "-", "", "_", "", ".", "", "/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "")
	return replacer.Replace(s)
}

func loadMappingsAndLots(db *gorm.DB) ([]models.LotPartnumber, []models.Lot, error) {
	var mappings []models.LotPartnumber
	if err := db.Find(&mappings).Error; err != nil {
		return nil, nil, err
	}
	var lots []models.Lot
	if err := db.Select("lot_name").Find(&lots).Error; err != nil {
		return nil, nil, err
	}
	return mappings, lots, nil
}

func uniqueCaseInsensitive(values []string) []string {
	seen := make(map[string]struct{}, len(values))
	out := make([]string, 0, len(values))
	for _, v := range values {
		trimmed := strings.TrimSpace(v)
		if trimmed == "" {
			continue
		}
		key := strings.ToLower(trimmed)
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, trimmed)
	}
	sort.Slice(out, func(i, j int) bool {
		return strings.ToLower(out[i]) < strings.ToLower(out[j])
	})
	return out
}
62	internal/lotmatch/resolver_test.go	Normal file
@@ -0,0 +1,62 @@
package lotmatch

import (
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/models"
)

func TestLotResolverPrecedence(t *testing.T) {
	resolver := NewLotResolver(
		[]models.LotPartnumber{
			{Partnumber: "PN-1", LotName: "LOT_A"},
		},
		[]models.Lot{
			{LotName: "CPU_X_LONG"},
			{LotName: "CPU_X"},
		},
	)

	lot, by, err := resolver.Resolve("PN-1")
	if err != nil || lot != "LOT_A" || by != "mapping_table" {
		t.Fatalf("expected mapping_table LOT_A, got lot=%s by=%s err=%v", lot, by, err)
	}

	lot, by, err = resolver.Resolve("CPU_X")
	if err != nil || lot != "CPU_X" || by != "article_exact" {
		t.Fatalf("expected article_exact CPU_X, got lot=%s by=%s err=%v", lot, by, err)
	}

	lot, by, err = resolver.Resolve("CPU_X_LONG_001")
	if err != nil || lot != "CPU_X_LONG" || by != "prefix" {
		t.Fatalf("expected prefix CPU_X_LONG, got lot=%s by=%s err=%v", lot, by, err)
	}
}

func TestMappingMatcherWildcardAndExactLot(t *testing.T) {
	matcher := NewMappingMatcher(
		[]models.LotPartnumber{
			{Partnumber: "R750*", LotName: "SERVER_R750"},
			{Partnumber: "HDD-01", LotName: "HDD_01"},
		},
		[]models.Lot{
			{LotName: "MEM_DDR5_16G_4800"},
		},
	)

	check := func(partnumber string, want string) {
		t.Helper()
		got := matcher.MatchLots(partnumber)
		if len(got) != 1 || got[0] != want {
			t.Fatalf("partnumber %s: expected [%s], got %#v", partnumber, want, got)
		}
	}

	check("R750XD", "SERVER_R750")
	check("HDD-01", "HDD_01")
	check("MEM_DDR5_16G_4800", "MEM_DDR5_16G_4800")

	if got := matcher.MatchLots("UNKNOWN"); len(got) != 0 {
		t.Fatalf("expected no matches for UNKNOWN, got %#v", got)
	}
}
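Both the resolver's prefix fallback and the matcher's exact table hinge on `NormalizeKey` collapsing part-number spellings. A standalone copy of that function (taken verbatim from `internal/lotmatch/resolver.go`) showing which inputs collide:

```go
package main

import (
	"fmt"
	"strings"
)

// NormalizeKey, as defined in internal/lotmatch/resolver.go: lower-case,
// trim, then strip separator and quoting characters, so that "PN-1",
// "pn 1" and "PN_1" all map to the same lookup key.
func NormalizeKey(v string) string {
	s := strings.ToLower(strings.TrimSpace(v))
	replacer := strings.NewReplacer(" ", "", "-", "", "_", "", ".", "", "/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "")
	return replacer.Replace(s)
}

func main() {
	fmt.Println(NormalizeKey(" PN-1 "))     // pn1
	fmt.Println(NormalizeKey("pn_1"))       // pn1
	fmt.Println(NormalizeKey("CPU.X/Long")) // cpuxlong
}
```

One consequence worth noting: because wildcard patterns are compiled from the normalized string, a mapping like `R750*` matches against normalized keys, not the raw part numbers.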
@@ -1,22 +1,55 @@
 package middleware
 
 import (
+	"net"
+	"net/http"
+	"net/url"
+	"strings"
+
 	"github.com/gin-gonic/gin"
 )
 
 func CORS() gin.HandlerFunc {
 	return func(c *gin.Context) {
-		c.Header("Access-Control-Allow-Origin", "*")
-		c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
-		c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization")
-		c.Header("Access-Control-Expose-Headers", "Content-Length, Content-Disposition")
-		c.Header("Access-Control-Max-Age", "86400")
+		origin := strings.TrimSpace(c.GetHeader("Origin"))
+		if origin != "" {
+			if isLoopbackOrigin(origin) {
+				c.Header("Access-Control-Allow-Origin", origin)
+				c.Header("Vary", "Origin")
+				c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
+				c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization")
+				c.Header("Access-Control-Expose-Headers", "Content-Length, Content-Disposition")
+				c.Header("Access-Control-Max-Age", "86400")
+			} else if c.Request.Method == http.MethodOptions {
+				c.AbortWithStatus(http.StatusForbidden)
+				return
+			}
+		}
 
-		if c.Request.Method == "OPTIONS" {
-			c.AbortWithStatus(204)
+		if c.Request.Method == http.MethodOptions {
+			c.AbortWithStatus(http.StatusNoContent)
 			return
 		}
 
 		c.Next()
 	}
 }
+
+func isLoopbackOrigin(origin string) bool {
+	u, err := url.Parse(origin)
+	if err != nil {
+		return false
+	}
+	if u.Scheme != "http" && u.Scheme != "https" {
+		return false
+	}
+	host := strings.TrimSpace(u.Hostname())
+	if host == "" {
+		return false
+	}
+	if strings.EqualFold(host, "localhost") {
+		return true
+	}
+	ip := net.ParseIP(host)
+	return ip != nil && ip.IsLoopback()
+}
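The change above replaces the wildcard `Access-Control-Allow-Origin: *` with an allow-list limited to loopback origins. Its acceptance rules can be exercised standalone; this is the same `isLoopbackOrigin` body lifted out of the middleware:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strings"
)

// isLoopbackOrigin, copied from the middleware above: only http/https
// origins whose host is "localhost" or a loopback IP are reflected back.
func isLoopbackOrigin(origin string) bool {
	u, err := url.Parse(origin)
	if err != nil {
		return false
	}
	if u.Scheme != "http" && u.Scheme != "https" {
		return false
	}
	host := strings.TrimSpace(u.Hostname())
	if host == "" {
		return false
	}
	if strings.EqualFold(host, "localhost") {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(isLoopbackOrigin("http://localhost:5173")) // true
	fmt.Println(isLoopbackOrigin("http://127.0.0.1:8080")) // true
	fmt.Println(isLoopbackOrigin("http://[::1]:3000"))     // true
	fmt.Println(isLoopbackOrigin("https://example.com"))   // false
	fmt.Println(isLoopbackOrigin("ftp://localhost"))       // false
}
```

Reflecting the request's own origin (rather than `*`) is also what makes `Vary: Origin` necessary, so caches do not serve one origin's preflight answer to another.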
@@ -40,25 +40,29 @@ func (c ConfigItems) Total() float64 {
 }
 
 type Configuration struct {
 	ID            uint        `gorm:"primaryKey;autoIncrement" json:"id"`
 	UUID          string      `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
 	UserID        *uint       `json:"user_id,omitempty"` // Legacy field, no longer required for ownership
 	OwnerUsername string      `gorm:"size:100;not null;default:'';index" json:"owner_username"`
 	ProjectUUID   *string     `gorm:"size:36;index" json:"project_uuid,omitempty"`
 	AppVersion    string      `gorm:"size:64" json:"app_version,omitempty"`
 	Name          string      `gorm:"size:200;not null" json:"name"`
 	Items         ConfigItems `gorm:"type:json;not null" json:"items"`
 	TotalPrice    *float64    `gorm:"type:decimal(12,2)" json:"total_price"`
 	CustomPrice   *float64    `gorm:"type:decimal(12,2)" json:"custom_price"`
 	Notes         string      `gorm:"type:text" json:"notes"`
 	IsTemplate    bool        `gorm:"default:false" json:"is_template"`
 	ServerCount   int         `gorm:"default:1" json:"server_count"`
+	ServerModel   string      `gorm:"size:100" json:"server_model,omitempty"`
+	SupportCode   string      `gorm:"size:20" json:"support_code,omitempty"`
+	Article       string      `gorm:"size:80" json:"article,omitempty"`
 	PricelistID           *uint `gorm:"index" json:"pricelist_id,omitempty"`
 	WarehousePricelistID  *uint `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
 	CompetitorPricelistID *uint `gorm:"index" json:"competitor_pricelist_id,omitempty"`
 	DisablePriceRefresh   bool  `gorm:"default:false" json:"disable_price_refresh"`
+	OnlyInStock           bool  `gorm:"default:false" json:"only_in_stock"`
 	PriceUpdatedAt *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
 	CreatedAt      time.Time  `gorm:"autoCreateTime" json:"created_at"`
 
 	User *User `gorm:"foreignKey:UserID" json:"user,omitempty"`
 }
@@ -37,3 +37,44 @@ type Supplier struct {
 func (Supplier) TableName() string {
 	return "supplier"
 }
+
+// StockLog stores warehouse stock snapshots imported from external files.
+type StockLog struct {
+	StockLogID uint      `gorm:"column:stock_log_id;primaryKey;autoIncrement"`
+	Partnumber string    `gorm:"column:partnumber;size:255;not null"`
+	Supplier   *string   `gorm:"column:supplier;size:255"`
+	Date       time.Time `gorm:"column:date;type:date;not null"`
+	Price      float64   `gorm:"column:price;not null"`
+	Quality    *string   `gorm:"column:quality;size:255"`
+	Comments   *string   `gorm:"column:comments;size:15000"`
+	Vendor     *string   `gorm:"column:vendor;size:255"`
+	Qty        *float64  `gorm:"column:qty"`
+}
+
+func (StockLog) TableName() string {
+	return "stock_log"
+}
+
+// LotPartnumber maps external part numbers to internal lots.
+type LotPartnumber struct {
+	Partnumber  string  `gorm:"column:partnumber;size:255;primaryKey" json:"partnumber"`
+	LotName     string  `gorm:"column:lot_name;size:255;primaryKey" json:"lot_name"`
+	Description *string `gorm:"column:description;size:10000" json:"description,omitempty"`
+}
+
+func (LotPartnumber) TableName() string {
+	return "lot_partnumbers"
+}
+
+// StockIgnoreRule contains import ignore pattern rules.
+type StockIgnoreRule struct {
+	ID        uint      `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
+	Target    string    `gorm:"column:target;size:20;not null" json:"target"`         // partnumber|description
+	MatchType string    `gorm:"column:match_type;size:20;not null" json:"match_type"` // exact|prefix|suffix
+	Pattern   string    `gorm:"column:pattern;size:500;not null" json:"pattern"`
+	CreatedAt time.Time `gorm:"column:created_at;autoCreateTime" json:"created_at"`
+}
+
+func (StockIgnoreRule) TableName() string {
+	return "stock_ignore_rules"
+}
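`StockIgnoreRule` only declares its data shape: a `target` (partnumber or description) and a `match_type` of `exact`, `prefix`, or `suffix`. The evaluator is not shown in this diff; a plausible case-insensitive sketch of it — `ruleMatches` is a hypothetical helper, not a function from the repo:

```go
package main

import (
	"fmt"
	"strings"
)

// ruleMatches sketches the obvious interpretation of a StockIgnoreRule's
// match_type against a candidate value (the rule's target field would pick
// whether the value is the part number or the description). Comparison is
// assumed to be case-insensitive and whitespace-trimmed.
func ruleMatches(matchType, pattern, value string) bool {
	v := strings.ToLower(strings.TrimSpace(value))
	p := strings.ToLower(strings.TrimSpace(pattern))
	switch matchType {
	case "exact":
		return v == p
	case "prefix":
		return strings.HasPrefix(v, p)
	case "suffix":
		return strings.HasSuffix(v, p)
	default:
		return false
	}
}

func main() {
	fmt.Println(ruleMatches("exact", "DEMO-UNIT", "demo-unit")) // true
	fmt.Println(ruleMatches("prefix", "TEST-", "TEST-123"))     // true
	fmt.Println(ruleMatches("suffix", "-REFURB", "HDD-REFURB")) // true
	fmt.Println(ruleMatches("prefix", "TEST-", "X-TEST-1"))     // false
}
```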
@@ -55,6 +55,7 @@ type PricelistItem struct {
 	ID          uint    `gorm:"primaryKey" json:"id"`
 	PricelistID uint    `gorm:"not null;index:idx_pricelist_lot" json:"pricelist_id"`
 	LotName     string  `gorm:"size:255;not null;index:idx_pricelist_lot" json:"lot_name"`
+	LotCategory string  `gorm:"column:lot_category;size:50" json:"lot_category,omitempty"`
 	Price       float64 `gorm:"type:decimal(12,2);not null" json:"price"`
 	PriceMethod string  `gorm:"size:20" json:"price_method"`
 
@@ -65,8 +66,10 @@ type PricelistItem struct {
 	MetaPrices string `gorm:"size:1000" json:"meta_prices,omitempty"`
 
 	// Virtual fields for display
 	LotDescription string   `gorm:"-" json:"lot_description,omitempty"`
 	Category       string   `gorm:"-" json:"category,omitempty"`
+	AvailableQty   *float64 `gorm:"-" json:"available_qty,omitempty"`
+	Partnumbers    []string `gorm:"-" json:"partnumbers,omitempty"`
 }
 
 func (PricelistItem) TableName() string {
|||||||
@@ -6,7 +6,9 @@ type Project struct {
 	ID            uint   `gorm:"primaryKey;autoIncrement" json:"id"`
 	UUID          string `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
 	OwnerUsername string `gorm:"size:100;not null;index" json:"owner_username"`
-	Name          string `gorm:"size:200;not null" json:"name"`
+	Code          string  `gorm:"size:100;not null;index:idx_qt_projects_code_variant,priority:1" json:"code"`
+	Variant       string  `gorm:"size:100;not null;default:'';index:idx_qt_projects_code_variant,priority:2" json:"variant"`
+	Name          *string `gorm:"size:200" json:"name,omitempty"`
 	TrackerURL    string `gorm:"size:500" json:"tracker_url"`
 	IsActive      bool   `gorm:"default:true;index" json:"is_active"`
 	IsSystem      bool   `gorm:"default:false;index" json:"is_system"`
@@ -3,10 +3,12 @@ package repository
 import (
 	"errors"
 	"fmt"
+	"sort"
 	"strconv"
 	"strings"
 	"time"
 
+	"git.mchus.pro/mchus/quoteforge/internal/lotmatch"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
@@ -26,7 +28,8 @@ func (r *PricelistRepository) List(offset, limit int) ([]models.PricelistSummary
 
 // ListBySource returns pricelists filtered by source when provided.
 func (r *PricelistRepository) ListBySource(source string, offset, limit int) ([]models.PricelistSummary, int64, error) {
-	query := r.db.Model(&models.Pricelist{})
+	query := r.db.Model(&models.Pricelist{}).
+		Where("EXISTS (SELECT 1 FROM qt_pricelist_items WHERE qt_pricelist_items.pricelist_id = qt_pricelists.id)")
 	if source != "" {
 		query = query.Where("source = ?", source)
 	}
@@ -51,7 +54,9 @@ func (r *PricelistRepository) ListActive(offset, limit int) ([]models.PricelistS
 
 // ListActiveBySource returns active pricelists filtered by source when provided.
 func (r *PricelistRepository) ListActiveBySource(source string, offset, limit int) ([]models.PricelistSummary, int64, error) {
-	query := r.db.Model(&models.Pricelist{}).Where("is_active = ?", true)
+	query := r.db.Model(&models.Pricelist{}).
+		Where("is_active = ?", true).
+		Where("EXISTS (SELECT 1 FROM qt_pricelist_items WHERE qt_pricelist_items.pricelist_id = qt_pricelists.id)")
 	if source != "" {
 		query = query.Where("source = ?", source)
 	}
@@ -233,16 +238,107 @@ func (r *PricelistRepository) GetItems(pricelistID uint, offset, limit int, sear
 		if err := r.db.Where("lot_name = ?", items[i].LotName).First(&lot).Error; err == nil {
 			items[i].LotDescription = lot.LotDescription
 		}
-		// Parse category from lot_name (e.g., "CPU_AMD_9654" -> "CPU")
-		parts := strings.SplitN(items[i].LotName, "_", 2)
-		if len(parts) >= 1 {
-			items[i].Category = parts[0]
-		}
+		items[i].Category = strings.TrimSpace(items[i].LotCategory)
 	}
+
+	if err := r.enrichItemsWithStock(items); err != nil {
+		return nil, 0, fmt.Errorf("enriching pricelist items with stock: %w", err)
+	}
 
 	return items, total, nil
 }
+
+func (r *PricelistRepository) enrichItemsWithStock(items []models.PricelistItem) error {
+	if len(items) == 0 {
+		return nil
+	}
+
+	resolver, err := lotmatch.NewLotResolverFromDB(r.db)
+	if err != nil {
+		return err
+	}
+
+	type stockRow struct {
+		Partnumber string   `gorm:"column:partnumber"`
+		Qty        *float64 `gorm:"column:qty"`
+	}
+	rows := make([]stockRow, 0)
+	if err := r.db.Raw(`
+		SELECT s.partnumber, s.qty
+		FROM stock_log s
+		INNER JOIN (
+			SELECT partnumber, MAX(date) AS max_date
+			FROM stock_log
+			GROUP BY partnumber
+		) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
+		WHERE s.qty IS NOT NULL
+	`).Scan(&rows).Error; err != nil {
+		return err
+	}
+
+	lotTotals := make(map[string]float64, len(items))
+	lotPartnumbers := make(map[string][]string, len(items))
+	seenPartnumbers := make(map[string]map[string]struct{}, len(items))
+
+	for i := range rows {
+		row := rows[i]
+		if strings.TrimSpace(row.Partnumber) == "" {
+			continue
+		}
+		lotName, _, resolveErr := resolver.Resolve(row.Partnumber)
+		if resolveErr != nil || strings.TrimSpace(lotName) == "" {
+			continue
+		}
+
+		if row.Qty != nil {
+			lotTotals[lotName] += *row.Qty
+		}
+
+		pn := strings.TrimSpace(row.Partnumber)
+		if pn == "" {
+			continue
+		}
+		if _, ok := seenPartnumbers[lotName]; !ok {
+			seenPartnumbers[lotName] = make(map[string]struct{}, 4)
+		}
+		key := strings.ToLower(pn)
+		if _, exists := seenPartnumbers[lotName][key]; exists {
+			continue
+		}
+		seenPartnumbers[lotName][key] = struct{}{}
+		lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
+	}
+
+	for i := range items {
+		lotName := items[i].LotName
+		if qty, ok := lotTotals[lotName]; ok {
+			qtyCopy := qty
+			items[i].AvailableQty = &qtyCopy
+		}
+		if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
+			sort.Slice(partnumbers, func(a, b int) bool {
+				return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
+			})
+			items[i].Partnumbers = partnumbers
+		}
+	}
+
+	return nil
+}
+
+// GetLotNames returns distinct lot names from pricelist items.
+func (r *PricelistRepository) GetLotNames(pricelistID uint) ([]string, error) {
+	var lotNames []string
+	if err := r.db.Model(&models.PricelistItem{}).
+		Where("pricelist_id = ?", pricelistID).
+		Distinct("lot_name").
+		Order("lot_name ASC").
+		Pluck("lot_name", &lotNames).Error; err != nil {
+		return nil, fmt.Errorf("listing pricelist lot names: %w", err)
+	}
+	return lotNames, nil
+}
+
 // GetPriceForLot returns item price for a lot within a pricelist.
 func (r *PricelistRepository) GetPriceForLot(pricelistID uint, lotName string) (float64, error) {
 	var item models.PricelistItem
@@ -252,6 +348,28 @@ func (r *PricelistRepository) GetPriceForLot(pricelistID uint, lotName string) (
 	return item.Price, nil
 }
 
+// GetPricesForLots returns price map for given lots within a pricelist.
+func (r *PricelistRepository) GetPricesForLots(pricelistID uint, lotNames []string) (map[string]float64, error) {
+	result := make(map[string]float64, len(lotNames))
+	if pricelistID == 0 || len(lotNames) == 0 {
+		return result, nil
+	}
+
+	var rows []models.PricelistItem
+	if err := r.db.Select("lot_name, price").
+		Where("pricelist_id = ? AND lot_name IN ?", pricelistID, lotNames).
+		Find(&rows).Error; err != nil {
+		return nil, err
+	}
+
+	for _, row := range rows {
+		if row.Price > 0 {
+			result[row.LotName] = row.Price
+		}
+	}
+	return result, nil
+}
+
 // SetActive toggles active flag on a pricelist.
 func (r *PricelistRepository) SetActive(id uint, isActive bool) error {
 	return r.db.Model(&models.Pricelist{}).Where("id = ?", id).Update("is_active", isActive).Error
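`GetPricesForLots` replaces N per-lot round trips with one `IN (...)` query and keeps only positive prices. The in-memory shape of that post-filter can be sketched as follows (types and names here are illustrative, not the repository's):

```go
package main

import "fmt"

type pricelistItem struct {
	LotName string
	Price   float64
}

// pricesForLots mirrors GetPricesForLots in memory: one pass over the rows
// returned by a single batched query, keeping only positive prices, instead
// of one GetPriceForLot round trip per lot.
func pricesForLots(rows []pricelistItem, lotNames []string) map[string]float64 {
	wanted := make(map[string]bool, len(lotNames))
	for _, n := range lotNames {
		wanted[n] = true
	}
	result := make(map[string]float64, len(lotNames))
	for _, row := range rows {
		if wanted[row.LotName] && row.Price > 0 {
			result[row.LotName] = row.Price
		}
	}
	return result
}

func main() {
	rows := []pricelistItem{{"CPU_AMD_9654", 1200}, {"SSD_NVME_03.2T", 0}}
	// zero-priced lots are simply absent from the result
	fmt.Println(pricesForLots(rows, []string{"CPU_AMD_9654", "SSD_NVME_03.2T"}))
}
```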
@@ -265,17 +383,18 @@ func (r *PricelistRepository) GenerateVersion() (string, error) {
 
 // GenerateVersionBySource generates a new version string in format YYYY-MM-DD-NNN scoped by source.
 func (r *PricelistRepository) GenerateVersionBySource(source string) (string, error) {
 	today := time.Now().Format("2006-01-02")
+	prefix := versionPrefixBySource(source)
+
 	var last models.Pricelist
 	err := r.db.Model(&models.Pricelist{}).
 		Select("version").
-		Where("source = ? AND version LIKE ?", source, today+"-%").
+		Where("source = ? AND version LIKE ?", source, prefix+"-"+today+"-%").
 		Order("version DESC").
 		Limit(1).
 		Take(&last).Error
 	if err != nil {
 		if errors.Is(err, gorm.ErrRecordNotFound) {
-			return fmt.Sprintf("%s-001", today), nil
+			return fmt.Sprintf("%s-%s-001", prefix, today), nil
 		}
 		return "", fmt.Errorf("loading latest today's pricelist version: %w", err)
 	}
@@ -290,7 +409,18 @@ func (r *PricelistRepository) GenerateVersionBySource(source string) (string, er
 		return "", fmt.Errorf("parsing pricelist sequence %q: %w", parts[len(parts)-1], err)
 	}
 
-	return fmt.Sprintf("%s-%03d", today, n+1), nil
+	return fmt.Sprintf("%s-%s-%03d", prefix, today, n+1), nil
+}
+
+func versionPrefixBySource(source string) string {
+	switch models.NormalizePricelistSource(source) {
+	case models.PricelistSourceWarehouse:
+		return "S"
+	case models.PricelistSourceCompetitor:
+		return "B"
+	default:
+		return "E"
+	}
 }
 
 // GetPriceForLotBySource returns item price for a lot from latest active pricelist of source.
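With `versionPrefixBySource`, versions now look like `E-2026-02-07-004`: a source letter (E=estimate, S=warehouse, B=competitor per the switch above), the date, and a per-source, per-day sequence. A standalone sketch of the increment logic — the repository does this against the database, so `nextVersion` here is illustrative only:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextVersion produces the next P-YYYY-MM-DD-NNN version given the latest
// existing version for today ("" when none exists yet). It mirrors the
// parse-trailing-sequence-and-increment step of GenerateVersionBySource.
func nextVersion(prefix, today, last string) (string, error) {
	if last == "" {
		return fmt.Sprintf("%s-%s-001", prefix, today), nil
	}
	parts := strings.Split(last, "-")
	n, err := strconv.Atoi(parts[len(parts)-1])
	if err != nil {
		return "", fmt.Errorf("parsing pricelist sequence %q: %w", parts[len(parts)-1], err)
	}
	return fmt.Sprintf("%s-%s-%03d", prefix, today, n+1), nil
}

func main() {
	v, _ := nextVersion("S", "2026-02-07", "")
	fmt.Println(v) // S-2026-02-07-001
	v, _ = nextVersion("E", "2026-02-07", "E-2026-02-07-003")
	fmt.Println(v) // E-2026-02-07-004
}
```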
@@ -19,7 +19,7 @@ func TestGenerateVersion_FirstOfDay(t *testing.T) {
 	}
 
 	today := time.Now().Format("2006-01-02")
-	want := fmt.Sprintf("%s-001", today)
+	want := fmt.Sprintf("E-%s-001", today)
 	if version != want {
 		t.Fatalf("expected %s, got %s", want, version)
 	}
@@ -30,8 +30,8 @@ func TestGenerateVersion_UsesMaxSuffixNotCount(t *testing.T) {
 	today := time.Now().Format("2006-01-02")
 
 	seed := []models.Pricelist{
-		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("%s-001", today), CreatedBy: "test", IsActive: true},
-		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("%s-003", today), CreatedBy: "test", IsActive: true},
+		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-001", today), CreatedBy: "test", IsActive: true},
+		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-003", today), CreatedBy: "test", IsActive: true},
 	}
 	for _, pl := range seed {
 		if err := repo.Create(&pl); err != nil {
@@ -44,7 +44,7 @@ func TestGenerateVersion_UsesMaxSuffixNotCount(t *testing.T) {
 		t.Fatalf("GenerateVersionBySource returned error: %v", err)
 	}
 
-	want := fmt.Sprintf("%s-004", today)
+	want := fmt.Sprintf("E-%s-004", today)
 	if version != want {
 		t.Fatalf("expected %s, got %s", want, version)
 	}
@@ -55,8 +55,8 @@ func TestGenerateVersion_IsolatedBySource(t *testing.T) {
 	today := time.Now().Format("2006-01-02")
 
 	seed := []models.Pricelist{
-		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("%s-009", today), CreatedBy: "test", IsActive: true},
-		{Source: string(models.PricelistSourceWarehouse), Version: fmt.Sprintf("%s-002", today), CreatedBy: "test", IsActive: true},
+		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-009", today), CreatedBy: "test", IsActive: true},
+		{Source: string(models.PricelistSourceWarehouse), Version: fmt.Sprintf("S-%s-002", today), CreatedBy: "test", IsActive: true},
 	}
 	for _, pl := range seed {
 		if err := repo.Create(&pl); err != nil {
@@ -69,12 +69,63 @@ func TestGenerateVersion_IsolatedBySource(t *testing.T) {
 		t.Fatalf("GenerateVersionBySource returned error: %v", err)
 	}
 
-	want := fmt.Sprintf("%s-003", today)
+	want := fmt.Sprintf("S-%s-003", today)
 	if version != want {
 		t.Fatalf("expected %s, got %s", want, version)
 	}
 }
+
+func TestGetItems_WarehouseAvailableQtyUsesPrefixResolver(t *testing.T) {
+	repo := newTestPricelistRepository(t)
+	db := repo.db
+
+	warehouse := models.Pricelist{
+		Source:    string(models.PricelistSourceWarehouse),
+		Version:   "S-2026-02-07-001",
+		CreatedBy: "test",
+		IsActive:  true,
+	}
+	if err := db.Create(&warehouse).Error; err != nil {
+		t.Fatalf("create pricelist: %v", err)
+	}
+	if err := db.Create(&models.PricelistItem{
+		PricelistID: warehouse.ID,
+		LotName:     "SSD_NVME_03.2T",
+		Price:       100,
+	}).Error; err != nil {
+		t.Fatalf("create pricelist item: %v", err)
+	}
+	if err := db.Create(&models.Lot{LotName: "SSD_NVME_03.2T"}).Error; err != nil {
+		t.Fatalf("create lot: %v", err)
+	}
+	qty := 5.0
+	if err := db.Create(&models.StockLog{
+		Partnumber: "SSD_NVME_03.2T_GEN3_P4610",
+		Date:       time.Now(),
+		Price:      200,
+		Qty:        &qty,
+	}).Error; err != nil {
+		t.Fatalf("create stock log: %v", err)
+	}
+
+	items, total, err := repo.GetItems(warehouse.ID, 0, 20, "")
+	if err != nil {
+		t.Fatalf("GetItems: %v", err)
+	}
+	if total != 1 {
+		t.Fatalf("expected total=1, got %d", total)
+	}
+	if len(items) != 1 {
+		t.Fatalf("expected 1 item, got %d", len(items))
+	}
+	if items[0].AvailableQty == nil {
+		t.Fatalf("expected available qty to be set")
+	}
+	if *items[0].AvailableQty != 5 {
+		t.Fatalf("expected available qty=5, got %v", *items[0].AvailableQty)
+	}
+}
+
 func newTestPricelistRepository(t *testing.T) *PricelistRepository {
 	t.Helper()
@@ -82,7 +133,7 @@ func newTestPricelistRepository(t *testing.T) *PricelistRepository {
 	if err != nil {
 		t.Fatalf("open sqlite: %v", err)
 	}
-	if err := db.AutoMigrate(&models.Pricelist{}); err != nil {
+	if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.LotPartnumber{}, &models.StockLog{}); err != nil {
 		t.Fatalf("migrate: %v", err)
 	}
 	return NewPricelistRepository(db)
@@ -27,6 +27,8 @@ func (r *ProjectRepository) UpsertByUUID(project *models.Project) error {
 		Columns: []clause.Column{{Name: "uuid"}},
 		DoUpdates: clause.AssignmentColumns([]string{
 			"owner_username",
+			"code",
+			"variant",
 			"name",
 			"tracker_url",
 			"is_active",
@@ -83,10 +83,6 @@ func (r *UnifiedRepo) getComponentsOffline(filter ComponentFilter, offset, limit
 		search := "%" + filter.Search + "%"
 		query = query.Where("lot_name LIKE ? OR lot_description LIKE ? OR model LIKE ?", search, search, search)
 	}
-	if filter.HasPrice {
-		query = query.Where("current_price IS NOT NULL AND current_price > 0")
-	}
-
 	var total int64
 	query.Count(&total)
 
@@ -96,8 +92,6 @@ func (r *UnifiedRepo) getComponentsOffline(filter ComponentFilter, offset, limit
 		sortDir = "DESC"
 	}
 	switch filter.SortField {
-	case "current_price":
-		query = query.Order("current_price " + sortDir)
 	case "lot_name":
 		query = query.Order("lot_name " + sortDir)
 	default:
@@ -112,9 +106,8 @@ func (r *UnifiedRepo) getComponentsOffline(filter ComponentFilter, offset, limit
 	result := make([]models.LotMetadata, len(components))
 	for i, comp := range components {
 		result[i] = models.LotMetadata{
 			LotName:      comp.LotName,
 			Model:        comp.Model,
-			CurrentPrice: comp.CurrentPrice,
 			Lot: &models.Lot{
 				LotName:        comp.LotName,
 				LotDescription: comp.LotDescription,
@@ -138,9 +131,8 @@ func (r *UnifiedRepo) GetComponent(lotName string) (*models.LotMetadata, error)
 	}
 
 	return &models.LotMetadata{
 		LotName:      comp.LotName,
 		Model:        comp.Model,
-		CurrentPrice: comp.CurrentPrice,
 		Lot: &models.Lot{
 			LotName:        comp.LotName,
 			LotDescription: comp.LotDescription,
@@ -1,199 +0,0 @@
-package alerts
-
-import (
-	"fmt"
-	"time"
-
-	"git.mchus.pro/mchus/quoteforge/internal/config"
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-)
-
-type Service struct {
-	alertRepo     *repository.AlertRepository
-	componentRepo *repository.ComponentRepository
-	priceRepo     *repository.PriceRepository
-	statsRepo     *repository.StatsRepository
-	config        config.AlertsConfig
-	pricingConfig config.PricingConfig
-}
-
-func NewService(
-	alertRepo *repository.AlertRepository,
-	componentRepo *repository.ComponentRepository,
-	priceRepo *repository.PriceRepository,
-	statsRepo *repository.StatsRepository,
-	alertCfg config.AlertsConfig,
-	pricingCfg config.PricingConfig,
-) *Service {
-	return &Service{
-		alertRepo:     alertRepo,
-		componentRepo: componentRepo,
-		priceRepo:     priceRepo,
-		statsRepo:     statsRepo,
-		config:        alertCfg,
-		pricingConfig: pricingCfg,
-	}
-}
-
-func (s *Service) List(filter repository.AlertFilter, page, perPage int) ([]models.PricingAlert, int64, error) {
-	if page < 1 {
-		page = 1
-	}
-	if perPage < 1 || perPage > 100 {
-		perPage = 20
-	}
-	offset := (page - 1) * perPage
-
-	return s.alertRepo.List(filter, offset, perPage)
-}
-
-func (s *Service) Acknowledge(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusAcknowledged)
-}
-
-func (s *Service) Resolve(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusResolved)
-}
-
-func (s *Service) Ignore(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusIgnored)
-}
-
-func (s *Service) GetNewAlertsCount() (int64, error) {
-	return s.alertRepo.CountByStatus(models.AlertStatusNew)
-}
-
-// CheckAndGenerateAlerts scans components and creates alerts
-func (s *Service) CheckAndGenerateAlerts() error {
-	if !s.config.Enabled {
-		return nil
-	}
-
-	// Get top components by usage
-	topComponents, err := s.statsRepo.GetTopComponents(100)
-	if err != nil {
-		return err
-	}
-
-	for _, stats := range topComponents {
-		component, err := s.componentRepo.GetByLotName(stats.LotName)
-		if err != nil {
-			continue
-		}
-
-		// Check high demand + stale price
-		if err := s.checkHighDemandStalePrice(component, &stats); err != nil {
-			continue
-		}
-
-		// Check trending without price
-		if err := s.checkTrendingNoPrice(component, &stats); err != nil {
-			continue
-		}
-
-		// Check no recent quotes
-		if err := s.checkNoRecentQuotes(component, &stats); err != nil {
-			continue
-		}
-	}
-
-	return nil
-}
-
-func (s *Service) checkHighDemandStalePrice(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// high_demand_stale_price: >= 5 quotes/month AND price > 60 days old
-	if stats.QuotesLast30d < s.config.HighDemandThreshold {
-		return nil
-	}
-
-	if comp.PriceUpdatedAt == nil {
-		return nil
-	}
-
-	daysSinceUpdate := int(time.Since(*comp.PriceUpdatedAt).Hours() / 24)
-	if daysSinceUpdate <= s.pricingConfig.FreshnessYellowDays {
-		return nil
-	}
-
-	// Check if alert already exists
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertHighDemandStalePrice)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertHighDemandStalePrice,
-		Severity:  models.SeverityCritical,
-		Message:   fmt.Sprintf("Component %s: high demand (%d quotes/month), but the price is stale (%d days)", comp.LotName, stats.QuotesLast30d, daysSinceUpdate),
-		Details: models.AlertDetails{
-			"quotes_30d":        stats.QuotesLast30d,
-			"days_since_update": daysSinceUpdate,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
-
-func (s *Service) checkTrendingNoPrice(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// trending_no_price: trend > 50% AND no price
-	if stats.TrendDirection != models.TrendUp || stats.TrendPercent < float64(s.config.TrendingThresholdPercent) {
-		return nil
-	}
-
-	if comp.CurrentPrice != nil && *comp.CurrentPrice > 0 {
-		return nil
-	}
-
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertTrendingNoPrice)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertTrendingNoPrice,
-		Severity:  models.SeverityHigh,
-		Message:   fmt.Sprintf("Component %s: demand growing +%.0f%%, but no price is set", comp.LotName, stats.TrendPercent),
-		Details: models.AlertDetails{
-			"trend_percent": stats.TrendPercent,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
-
-func (s *Service) checkNoRecentQuotes(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// no_recent_quotes: popular component, no supplier quotes > 90 days
-	if stats.QuotesLast30d < 3 {
-		return nil
-	}
-
-	quoteCount, err := s.priceRepo.GetQuoteCount(comp.LotName, s.pricingConfig.FreshnessRedDays)
-	if err != nil {
-		return err
-	}
-
-	if quoteCount > 0 {
-		return nil
-	}
-
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertNoRecentQuotes)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertNoRecentQuotes,
-		Severity:  models.SeverityMedium,
-		Message:   fmt.Sprintf("Component %s: popular (%d quotes), but no new quotes for >%d days", comp.LotName, stats.QuotesLast30d, s.pricingConfig.FreshnessRedDays),
-		Details: models.AlertDetails{
-			"quotes_30d":     stats.QuotesLast30d,
-			"no_quotes_days": s.pricingConfig.FreshnessRedDays,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
@@ -53,7 +53,6 @@ type ComponentView struct {
 	Category        string                `json:"category"`
 	CategoryName    string                `json:"category_name"`
 	Model           string                `json:"model"`
-	CurrentPrice    *float64              `json:"current_price"`
 	PriceFreshness  models.PriceFreshness `json:"price_freshness"`
 	PopularityScore float64               `json:"popularity_score"`
 	Specs           models.Specs          `json:"specs,omitempty"`
@@ -92,7 +91,6 @@ func (s *ComponentService) List(filter repository.ComponentFilter, page, perPage
 		view := ComponentView{
 			LotName:         c.LotName,
 			Model:           c.Model,
-			CurrentPrice:    c.CurrentPrice,
 			PriceFreshness:  c.GetPriceFreshness(30, 60, 90, 3),
 			PopularityScore: c.PopularityScore,
 			Specs:           c.Specs,
@@ -134,7 +132,6 @@ func (s *ComponentService) GetByLotName(lotName string) (*ComponentView, error)
 		view := &ComponentView{
 			LotName:         c.LotName,
 			Model:           c.Model,
-			CurrentPrice:    c.CurrentPrice,
 			PriceFreshness:  c.GetPriceFreshness(30, 60, 90, 3),
 			PopularityScore: c.PopularityScore,
 			Specs:           c.Specs,
@@ -52,6 +52,17 @@ type CreateConfigRequest struct {
 	Notes       string `json:"notes"`
 	IsTemplate  bool   `json:"is_template"`
 	ServerCount int    `json:"server_count"`
+	ServerModel string `json:"server_model,omitempty"`
+	SupportCode string `json:"support_code,omitempty"`
+	Article     string `json:"article,omitempty"`
+	PricelistID *uint  `json:"pricelist_id,omitempty"`
+	OnlyInStock bool   `json:"only_in_stock"`
+}
+
+type ArticlePreviewRequest struct {
+	Items       models.ConfigItems `json:"items"`
+	ServerModel string             `json:"server_model"`
+	SupportCode string             `json:"support_code,omitempty"`
 	PricelistID *uint `json:"pricelist_id,omitempty"`
 }
 
@@ -83,7 +94,11 @@ func (s *ConfigurationService) Create(ownerUsername string, req *CreateConfigReq
 		Notes:       req.Notes,
 		IsTemplate:  req.IsTemplate,
 		ServerCount: req.ServerCount,
+		ServerModel: req.ServerModel,
+		SupportCode: req.SupportCode,
+		Article:     req.Article,
 		PricelistID: pricelistID,
+		OnlyInStock: req.OnlyInStock,
 	}
 
 	if err := s.configRepo.Create(config); err != nil {
@@ -144,7 +159,11 @@ func (s *ConfigurationService) Update(uuid string, ownerUsername string, req *Cr
|
|||||||
config.Notes = req.Notes
|
config.Notes = req.Notes
|
||||||
config.IsTemplate = req.IsTemplate
|
config.IsTemplate = req.IsTemplate
|
||||||
config.ServerCount = req.ServerCount
|
config.ServerCount = req.ServerCount
|
||||||
|
config.ServerModel = req.ServerModel
|
||||||
|
config.SupportCode = req.SupportCode
|
||||||
|
config.Article = req.Article
|
||||||
config.PricelistID = pricelistID
|
config.PricelistID = pricelistID
|
||||||
|
config.OnlyInStock = req.OnlyInStock
|
||||||
|
|
||||||
if err := s.configRepo.Update(config); err != nil {
|
if err := s.configRepo.Update(config); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -222,6 +241,7 @@ func (s *ConfigurationService) CloneToProject(configUUID string, ownerUsername s
|
|||||||
IsTemplate: false, // Clone is never a template
|
IsTemplate: false, // Clone is never a template
|
||||||
ServerCount: original.ServerCount,
|
ServerCount: original.ServerCount,
|
||||||
PricelistID: original.PricelistID,
|
PricelistID: original.PricelistID,
|
||||||
|
OnlyInStock: original.OnlyInStock,
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := s.configRepo.Create(clone); err != nil {
|
if err := s.configRepo.Create(clone); err != nil {
|
||||||
@@ -295,6 +315,7 @@ func (s *ConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigReques
|
|||||||
config.IsTemplate = req.IsTemplate
|
config.IsTemplate = req.IsTemplate
|
||||||
config.ServerCount = req.ServerCount
|
config.ServerCount = req.ServerCount
|
||||||
config.PricelistID = pricelistID
|
config.PricelistID = pricelistID
|
||||||
|
config.OnlyInStock = req.OnlyInStock
|
||||||
|
|
||||||
if err := s.configRepo.Update(config); err != nil {
|
if err := s.configRepo.Update(config); err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
@@ -362,6 +383,7 @@ func (s *ConfigurationService) CloneNoAuthToProject(configUUID string, newName s
|
|||||||
IsTemplate: false,
|
IsTemplate: false,
|
||||||
ServerCount: original.ServerCount,
|
ServerCount: original.ServerCount,
|
||||||
PricelistID: original.PricelistID,
|
PricelistID: original.PricelistID,
|
||||||
|
OnlyInStock: original.OnlyInStock,
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := s.configRepo.Create(clone); err != nil {
|
if err := s.configRepo.Create(clone); err != nil {
|
||||||
@@ -4,6 +4,8 @@ import (
 	"bytes"
 	"encoding/csv"
 	"fmt"
+	"io"
+	"strings"
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/config"
@@ -25,6 +27,7 @@ func NewExportService(cfg config.ExportConfig, categoryRepo *repository.Category
 
 type ExportData struct {
 	Name    string
+	Article string
 	Items   []ExportItem
 	Total   float64
 	Notes   string
@@ -40,14 +43,21 @@ type ExportItem struct {
 	TotalPrice float64
 }
 
-func (s *ExportService) ToCSV(data *ExportData) ([]byte, error) {
-	var buf bytes.Buffer
-	w := csv.NewWriter(&buf)
+func (s *ExportService) ToCSV(w io.Writer, data *ExportData) error {
+	// Write UTF-8 BOM for Excel compatibility
+	if _, err := w.Write([]byte{0xEF, 0xBB, 0xBF}); err != nil {
+		return fmt.Errorf("failed to write BOM: %w", err)
+	}
+
+	csvWriter := csv.NewWriter(w)
+	// Use semicolon as delimiter for Russian Excel locale
+	csvWriter.Comma = ';'
+	defer csvWriter.Flush()
 
 	// Header
 	headers := []string{"Артикул", "Описание", "Категория", "Количество", "Цена за единицу", "Сумма"}
-	if err := w.Write(headers); err != nil {
-		return nil, err
+	if err := csvWriter.Write(headers); err != nil {
+		return fmt.Errorf("failed to write header: %w", err)
 	}
 
 	// Get category hierarchy for sorting
@@ -90,21 +100,35 @@ func (s *ExportService) ToCSV(data *ExportData) ([]byte, error) {
 			item.Description,
 			item.Category,
 			fmt.Sprintf("%d", item.Quantity),
-			fmt.Sprintf("%.2f", item.UnitPrice),
-			fmt.Sprintf("%.2f", item.TotalPrice),
+			strings.ReplaceAll(fmt.Sprintf("%.2f", item.UnitPrice), ".", ","),
+			strings.ReplaceAll(fmt.Sprintf("%.2f", item.TotalPrice), ".", ","),
 		}
-		if err := w.Write(row); err != nil {
-			return nil, err
+		if err := csvWriter.Write(row); err != nil {
+			return fmt.Errorf("failed to write row: %w", err)
 		}
 	}
 
 	// Total row
-	if err := w.Write([]string{"", "", "", "", "ИТОГО:", fmt.Sprintf("%.2f", data.Total)}); err != nil {
-		return nil, err
+	totalStr := strings.ReplaceAll(fmt.Sprintf("%.2f", data.Total), ".", ",")
+	if err := csvWriter.Write([]string{data.Article, "", "", "", "ИТОГО:", totalStr}); err != nil {
+		return fmt.Errorf("failed to write total row: %w", err)
 	}
 
-	w.Flush()
-	return buf.Bytes(), w.Error()
+	csvWriter.Flush()
+	if err := csvWriter.Error(); err != nil {
+		return fmt.Errorf("csv writer error: %w", err)
+	}
+
+	return nil
+}
+
+// ToCSVBytes is a backward-compatible wrapper that returns CSV data as bytes
+func (s *ExportService) ToCSVBytes(data *ExportData) ([]byte, error) {
+	var buf bytes.Buffer
+	if err := s.ToCSV(&buf, data); err != nil {
+		return nil, err
+	}
+	return buf.Bytes(), nil
 }
 
 func (s *ExportService) ConfigToExportData(config *models.Configuration, componentService *ComponentService) *ExportData {
@@ -139,6 +163,7 @@ func (s *ExportService) ConfigToExportData(config *models.Configuration, compone
 
 	return &ExportData{
 		Name:    config.Name,
+		Article: "",
 		Items:   items,
 		Total:   total,
 		Notes:   config.Notes,
internal/services/export_test.go (new file, 343 lines)
@@ -0,0 +1,343 @@
+package services
+
+import (
+	"bytes"
+	"encoding/csv"
+	"io"
+	"testing"
+	"time"
+
+	"git.mchus.pro/mchus/quoteforge/internal/config"
+)
+
+func TestToCSV_UTF8BOM(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:     "LOT-001",
+				Description: "Test Item",
+				Category:    "CAT",
+				Quantity:    1,
+				UnitPrice:   100.0,
+				TotalPrice:  100.0,
+			},
+		},
+		Total:     100.0,
+		CreatedAt: time.Now(),
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToCSV(&buf, data); err != nil {
+		t.Fatalf("ToCSV failed: %v", err)
+	}
+
+	csvBytes := buf.Bytes()
+	if len(csvBytes) < 3 {
+		t.Fatalf("CSV too short to contain BOM")
+	}
+
+	// Check UTF-8 BOM: 0xEF 0xBB 0xBF
+	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
+	actualBOM := csvBytes[:3]
+
+	if bytes.Compare(actualBOM, expectedBOM) != 0 {
+		t.Errorf("UTF-8 BOM mismatch. Expected %v, got %v", expectedBOM, actualBOM)
+	}
+}
+
+func TestToCSV_SemicolonDelimiter(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:     "LOT-001",
+				Description: "Test Item",
+				Category:    "CAT",
+				Quantity:    2,
+				UnitPrice:   100.50,
+				TotalPrice:  201.00,
+			},
+		},
+		Total:     201.00,
+		CreatedAt: time.Now(),
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToCSV(&buf, data); err != nil {
+		t.Fatalf("ToCSV failed: %v", err)
+	}
+
+	// Skip BOM and read CSV with semicolon delimiter
+	csvBytes := buf.Bytes()
+	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
+	reader.Comma = ';'
+
+	// Read header
+	header, err := reader.Read()
+	if err != nil {
+		t.Fatalf("Failed to read header: %v", err)
+	}
+
+	if len(header) != 6 {
+		t.Errorf("Expected 6 columns, got %d", len(header))
+	}
+
+	expectedHeader := []string{"Артикул", "Описание", "Категория", "Количество", "Цена за единицу", "Сумма"}
+	for i, col := range expectedHeader {
+		if i < len(header) && header[i] != col {
+			t.Errorf("Column %d: expected %q, got %q", i, col, header[i])
+		}
+	}
+
+	// Read item row
+	itemRow, err := reader.Read()
+	if err != nil {
+		t.Fatalf("Failed to read item row: %v", err)
+	}
+
+	if itemRow[0] != "LOT-001" {
+		t.Errorf("Lot name mismatch: expected LOT-001, got %s", itemRow[0])
+	}
+
+	if itemRow[3] != "2" {
+		t.Errorf("Quantity mismatch: expected 2, got %s", itemRow[3])
+	}
+
+	if itemRow[4] != "100,50" {
+		t.Errorf("Unit price mismatch: expected 100,50, got %s", itemRow[4])
+	}
+}
+
+func TestToCSV_TotalRow(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:     "LOT-001",
+				Description: "Item 1",
+				Category:    "CAT",
+				Quantity:    1,
+				UnitPrice:   100.0,
+				TotalPrice:  100.0,
+			},
+			{
+				LotName:     "LOT-002",
+				Description: "Item 2",
+				Category:    "CAT",
+				Quantity:    2,
+				UnitPrice:   50.0,
+				TotalPrice:  100.0,
+			},
+		},
+		Total:     200.0,
+		CreatedAt: time.Now(),
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToCSV(&buf, data); err != nil {
+		t.Fatalf("ToCSV failed: %v", err)
+	}
+
+	csvBytes := buf.Bytes()
+	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
+	reader.Comma = ';'
+
+	// Skip header and item rows
+	reader.Read()
+	reader.Read()
+	reader.Read()
+
+	// Read total row
+	totalRow, err := reader.Read()
+	if err != nil {
+		t.Fatalf("Failed to read total row: %v", err)
+	}
+
+	// Total row should have "ИТОГО:" in position 4 and total value in position 5
+	if totalRow[4] != "ИТОГО:" {
+		t.Errorf("Expected 'ИТОГО:' in column 4, got %q", totalRow[4])
+	}
+
+	if totalRow[5] != "200,00" {
+		t.Errorf("Expected total 200,00, got %s", totalRow[5])
+	}
+}
+
+func TestToCSV_CategorySorting(t *testing.T) {
+	// Test category sorting without category repo (items maintain original order)
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:    "LOT-001",
+				Category:   "CAT-A",
+				Quantity:   1,
+				UnitPrice:  100.0,
+				TotalPrice: 100.0,
+			},
+			{
+				LotName:    "LOT-002",
+				Category:   "CAT-C",
+				Quantity:   1,
+				UnitPrice:  100.0,
+				TotalPrice: 100.0,
+			},
+			{
+				LotName:    "LOT-003",
+				Category:   "CAT-B",
+				Quantity:   1,
+				UnitPrice:  100.0,
+				TotalPrice: 100.0,
+			},
+		},
+		Total:     300.0,
+		CreatedAt: time.Now(),
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToCSV(&buf, data); err != nil {
+		t.Fatalf("ToCSV failed: %v", err)
+	}
+
+	csvBytes := buf.Bytes()
+	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
+	reader.Comma = ';'
+
+	// Skip header
+	reader.Read()
+
+	// Without category repo, items maintain original order
+	row1, _ := reader.Read()
+	if row1[0] != "LOT-001" {
+		t.Errorf("Expected LOT-001 first, got %s", row1[0])
+	}
+
+	row2, _ := reader.Read()
+	if row2[0] != "LOT-002" {
+		t.Errorf("Expected LOT-002 second, got %s", row2[0])
+	}
+
+	row3, _ := reader.Read()
+	if row3[0] != "LOT-003" {
+		t.Errorf("Expected LOT-003 third, got %s", row3[0])
+	}
+}
+
+func TestToCSV_EmptyData(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name:      "Test",
+		Items:     []ExportItem{},
+		Total:     0.0,
+		CreatedAt: time.Now(),
+	}
+
+	var buf bytes.Buffer
+	if err := svc.ToCSV(&buf, data); err != nil {
+		t.Fatalf("ToCSV failed: %v", err)
+	}
+
+	csvBytes := buf.Bytes()
+	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
+	reader.Comma = ';'
+
+	// Should have header and total row
+	header, err := reader.Read()
+	if err != nil {
+		t.Fatalf("Failed to read header: %v", err)
+	}
+
+	if len(header) != 6 {
+		t.Errorf("Expected 6 columns, got %d", len(header))
+	}
+
+	totalRow, err := reader.Read()
+	if err != nil {
+		t.Fatalf("Failed to read total row: %v", err)
+	}
+
+	if totalRow[4] != "ИТОГО:" {
+		t.Errorf("Expected ИТОГО: in total row, got %s", totalRow[4])
+	}
+}
+
+func TestToCSVBytes_BackwardCompat(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:     "LOT-001",
+				Description: "Test Item",
+				Category:    "CAT",
+				Quantity:    1,
+				UnitPrice:   100.0,
+				TotalPrice:  100.0,
+			},
+		},
+		Total:     100.0,
+		CreatedAt: time.Now(),
+	}
+
+	csvBytes, err := svc.ToCSVBytes(data)
+	if err != nil {
+		t.Fatalf("ToCSVBytes failed: %v", err)
+	}
+
+	if len(csvBytes) < 3 {
+		t.Fatalf("CSV bytes too short")
+	}
+
+	// Verify BOM is present
+	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
+	actualBOM := csvBytes[:3]
+	if bytes.Compare(actualBOM, expectedBOM) != 0 {
+		t.Errorf("UTF-8 BOM mismatch in ToCSVBytes")
+	}
+}
+
+func TestToCSV_WriterError(t *testing.T) {
+	svc := NewExportService(config.ExportConfig{}, nil)
+
+	data := &ExportData{
+		Name: "Test",
+		Items: []ExportItem{
+			{
+				LotName:     "LOT-001",
+				Description: "Test",
+				Category:    "CAT",
+				Quantity:    1,
+				UnitPrice:   100.0,
+				TotalPrice:  100.0,
+			},
+		},
+		Total:     100.0,
+		CreatedAt: time.Now(),
+	}
+
+	// Use a failing writer
+	failingWriter := &failingWriter{}
+
+	if err := svc.ToCSV(failingWriter, data); err == nil {
+		t.Errorf("Expected error from failing writer, got nil")
+	}
+}
+
+// failingWriter always returns an error
+type failingWriter struct{}
+
+func (fw *failingWriter) Write(p []byte) (int, error) {
+	return 0, io.EOF
+}
@@ -8,6 +8,7 @@ import (
 	"time"
 
 	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
+	"git.mchus.pro/mchus/quoteforge/internal/article"
 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"git.mchus.pro/mchus/quoteforge/internal/services/sync"
@@ -64,6 +65,18 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
 		return nil, err
 	}
+
+	if strings.TrimSpace(req.ServerModel) != "" {
+		articleResult, articleErr := article.Build(s.localDB, req.Items, article.BuildOptions{
+			ServerModel:     req.ServerModel,
+			SupportCode:     req.SupportCode,
+			ServerPricelist: pricelistID,
+		})
+		if articleErr != nil {
+			return nil, articleErr
+		}
+		req.Article = articleResult.Article
+	}
 
 	total := req.Items.Total()
 	if req.ServerCount > 1 {
 		total *= float64(req.ServerCount)
@@ -80,7 +93,11 @@ func (s *LocalConfigurationService) Create(ownerUsername string, req *CreateConf
 		Notes:       req.Notes,
 		IsTemplate:  req.IsTemplate,
 		ServerCount: req.ServerCount,
+		ServerModel: req.ServerModel,
+		SupportCode: req.SupportCode,
+		Article:     req.Article,
 		PricelistID: pricelistID,
+		OnlyInStock: req.OnlyInStock,
 		CreatedAt:   time.Now(),
 	}
 
@@ -141,6 +158,18 @@ func (s *LocalConfigurationService) Update(uuid string, ownerUsername string, re
 		return nil, err
 	}
+
+	if strings.TrimSpace(req.ServerModel) != "" {
+		articleResult, articleErr := article.Build(s.localDB, req.Items, article.BuildOptions{
+			ServerModel:     req.ServerModel,
+			SupportCode:     req.SupportCode,
+			ServerPricelist: pricelistID,
+		})
+		if articleErr != nil {
+			return nil, articleErr
+		}
+		req.Article = articleResult.Article
+	}
 
 	total := req.Items.Total()
 	if req.ServerCount > 1 {
 		total *= float64(req.ServerCount)
@@ -162,7 +191,11 @@ func (s *LocalConfigurationService) Update(uuid string, ownerUsername string, re
 	localCfg.Notes = req.Notes
 	localCfg.IsTemplate = req.IsTemplate
 	localCfg.ServerCount = req.ServerCount
+	localCfg.ServerModel = req.ServerModel
+	localCfg.SupportCode = req.SupportCode
+	localCfg.Article = req.Article
 	localCfg.PricelistID = pricelistID
+	localCfg.OnlyInStock = req.OnlyInStock
 	localCfg.UpdatedAt = time.Now()
 	localCfg.SyncStatus = "pending"
 
@@ -174,6 +207,19 @@ func (s *LocalConfigurationService) Update(uuid string, ownerUsername string, re
 	return cfg, nil
 }
+
+// BuildArticlePreview generates server article based on current items and server_model/support_code.
+func (s *LocalConfigurationService) BuildArticlePreview(req *ArticlePreviewRequest) (article.BuildResult, error) {
+	pricelistID, err := s.resolvePricelistID(req.PricelistID)
+	if err != nil {
+		return article.BuildResult{}, err
+	}
+	return article.Build(s.localDB, req.Items, article.BuildOptions{
+		ServerModel:     req.ServerModel,
+		SupportCode:     req.SupportCode,
+		ServerPricelist: pricelistID,
+	})
+}
 
 // Delete deletes a configuration from local SQLite and queues it for sync
 func (s *LocalConfigurationService) Delete(uuid string, ownerUsername string) error {
 	localCfg, err := s.localDB.GetConfigurationByUUID(uuid)
@@ -267,7 +313,11 @@ func (s *LocalConfigurationService) CloneToProject(configUUID string, ownerUsern
 		Notes:       original.Notes,
 		IsTemplate:  false,
 		ServerCount: original.ServerCount,
+		ServerModel: original.ServerModel,
+		SupportCode: original.SupportCode,
+		Article:     original.Article,
 		PricelistID: original.PricelistID,
+		OnlyInStock: original.OnlyInStock,
 		CreatedAt:   time.Now(),
 	}
 
@@ -344,7 +394,7 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
 	}
 	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
 
-	// Update prices for all items
+	// Update prices for all items from pricelist
 	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
 	for i, item := range localCfg.Items {
 		if latestErr == nil && latestPricelist != nil {
@@ -359,20 +409,8 @@ func (s *LocalConfigurationService) RefreshPrices(uuid string, ownerUsername str
 			}
 		}
 
-		// Fallback to current component price from local cache
-		component, err := s.localDB.GetLocalComponent(item.LotName)
-		if err != nil || component.CurrentPrice == nil {
-			// Keep original item if component not found or no price available
-			updatedItems[i] = item
-			continue
-		}
-
-		// Update item with current price from local cache
-		updatedItems[i] = localdb.LocalConfigItem{
-			LotName:   item.LotName,
-			Quantity:  item.Quantity,
-			UnitPrice: *component.CurrentPrice,
-		}
+		// Keep original item if price not found in pricelist
+		updatedItems[i] = item
 	}
 
 	// Update configuration
@@ -433,6 +471,18 @@ func (s *LocalConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigR
 		return nil, err
 	}
+
+	if strings.TrimSpace(req.ServerModel) != "" {
+		articleResult, articleErr := article.Build(s.localDB, req.Items, article.BuildOptions{
+			ServerModel:     req.ServerModel,
+			SupportCode:     req.SupportCode,
+			ServerPricelist: pricelistID,
+		})
+		if articleErr != nil {
+			return nil, articleErr
+		}
+		req.Article = articleResult.Article
+	}
 
 	total := req.Items.Total()
 	if req.ServerCount > 1 {
 		total *= float64(req.ServerCount)
@@ -453,7 +503,11 @@ func (s *LocalConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigR
 	localCfg.Notes = req.Notes
 	localCfg.IsTemplate = req.IsTemplate
 	localCfg.ServerCount = req.ServerCount
+	localCfg.ServerModel = req.ServerModel
+	localCfg.SupportCode = req.SupportCode
+	localCfg.Article = req.Article
 	localCfg.PricelistID = pricelistID
+	localCfg.OnlyInStock = req.OnlyInStock
 	localCfg.UpdatedAt = time.Now()
 	localCfg.SyncStatus = "pending"
 
@@ -546,6 +600,7 @@ func (s *LocalConfigurationService) CloneNoAuthToProject(configUUID string, newN
 	IsTemplate:  false,
 	ServerCount: original.ServerCount,
 	PricelistID: original.PricelistID,
+	OnlyInStock: original.OnlyInStock,
 	CreatedAt:   time.Now(),
 	}
 
@@ -595,26 +650,6 @@ func (s *LocalConfigurationService) ListAll(page, perPage int) ([]models.Configu
 
 // ListAllWithStatus returns configurations filtered by status: active|archived|all.
 func (s *LocalConfigurationService) ListAllWithStatus(page, perPage int, status string, search string) ([]models.Configuration, int64, error) {
-	localConfigs, err := s.localDB.GetConfigurations()
-	if err != nil {
-		return nil, 0, err
-	}
-
-	search = strings.ToLower(strings.TrimSpace(search))
-	configs := make([]models.Configuration, len(localConfigs))
-	configs = configs[:0]
-	for _, lc := range localConfigs {
-		if !matchesConfigStatus(lc.IsActive, status) {
-			continue
-		}
-		if search != "" && !strings.Contains(strings.ToLower(lc.Name), search) {
-			continue
-		}
-		configs = append(configs, *localdb.LocalToConfiguration(&lc))
-	}
-
-	total := int64(len(configs))
-
 	// Apply pagination
 	if page < 1 {
 		page = 1
@@ -623,17 +658,15 @@ func (s *LocalConfigurationService) ListAllWithStatus(page, perPage int, status
 		perPage = 20
 	}
 	offset := (page - 1) * perPage
-
-	start := offset
-	if start > len(configs) {
-		start = len(configs)
-	}
-	end := start + perPage
-	if end > len(configs) {
-		end = len(configs)
-	}
-
-	return configs[start:end], total
+	localConfigs, total, err := s.localDB.ListConfigurationsWithFilters(status, search, offset, perPage)
+	if err != nil {
+		return nil, 0, err
+	}
+	configs := make([]models.Configuration, 0, len(localConfigs))
+	for _, lc := range localConfigs {
+		configs = append(configs, *localdb.LocalToConfiguration(&lc))
+	}
+	return configs, total, nil
 }
 
 // ListTemplates returns all template configurations
@@ -689,7 +722,7 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
 	}
 	latestPricelist, latestErr := s.localDB.GetLatestLocalPricelist()
 
-	// Update prices for all items
+	// Update prices for all items from pricelist
 	updatedItems := make(localdb.LocalConfigItems, len(localCfg.Items))
 	for i, item := range localCfg.Items {
 		if latestErr == nil && latestPricelist != nil {
@@ -704,20 +737,8 @@ func (s *LocalConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Co
 			}
 		}
 
-		// Fallback to current component price from local cache
-		component, err := s.localDB.GetLocalComponent(item.LotName)
-		if err != nil || component.CurrentPrice == nil {
-			// Keep original item if component not found or no price available
-			updatedItems[i] = item
-			continue
-		}
-
-		// Update item with current price from local cache
-		updatedItems[i] = localdb.LocalConfigItem{
-			LotName:   item.LotName,
-			Quantity:  item.Quantity,
-			UnitPrice: *component.CurrentPrice,
-		}
+		// Keep original item if price not found in pricelist
+		updatedItems[i] = item
 	}
 
 	// Update configuration
@@ -1029,6 +1050,7 @@ func (s *LocalConfigurationService) rollbackToVersion(configurationUUID string,
 	current.IsTemplate = rollbackData.IsTemplate
 	current.ServerCount = rollbackData.ServerCount
 	current.PricelistID = rollbackData.PricelistID
+	current.OnlyInStock = rollbackData.OnlyInStock
 	current.PriceUpdatedAt = rollbackData.PriceUpdatedAt
 	current.UpdatedAt = time.Now()
 	current.SyncStatus = "pending"
@@ -191,7 +191,8 @@ func TestUpdateNoAuthKeepsProjectWhenProjectUUIDOmitted(t *testing.T) {
 	project := &localdb.LocalProject{
 		UUID:          "project-keep",
 		OwnerUsername: "tester",
-		Name:          "Keep Project",
+		Code:          "TEST-KEEP",
+		Name:          ptrString("Keep Project"),
 		IsActive:      true,
 		CreatedAt:     time.Now(),
 		UpdatedAt:     time.Now(),
@@ -227,6 +228,10 @@ func TestUpdateNoAuthKeepsProjectWhenProjectUUIDOmitted(t *testing.T) {
 	}
 }
+
+func ptrString(value string) *string {
+	return &value
+}
 
 func newLocalConfigServiceForTest(t *testing.T) (*LocalConfigurationService, *localdb.LocalDB) {
 	t.Helper()
 
|
|||||||
@@ -1,340 +0,0 @@
-package pricelist
-
-import (
-	"errors"
-	"fmt"
-	"log/slog"
-	"strings"
-	"time"
-
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricing"
-	"gorm.io/gorm"
-)
-
-type Service struct {
-	repo          *repository.PricelistRepository
-	componentRepo *repository.ComponentRepository
-	pricingSvc    *pricing.Service
-	db            *gorm.DB
-}
-
-type CreateProgress struct {
-	Current int
-	Total   int
-	Status  string
-	Message string
-	Updated int
-	Errors  int
-	LotName string
-}
-
-type CreateItemInput struct {
-	LotName string
-	Price   float64
-}
-
-func NewService(db *gorm.DB, repo *repository.PricelistRepository, componentRepo *repository.ComponentRepository, pricingSvc *pricing.Service) *Service {
-	return &Service{
-		repo:          repo,
-		componentRepo: componentRepo,
-		pricingSvc:    pricingSvc,
-		db:            db,
-	}
-}
-
-// CreateFromCurrentPrices creates a new pricelist by taking a snapshot of current prices
-func (s *Service) CreateFromCurrentPrices(createdBy string) (*models.Pricelist, error) {
-	return s.CreateFromCurrentPricesForSource(createdBy, string(models.PricelistSourceEstimate))
-}
-
-// CreateFromCurrentPricesForSource creates a new pricelist snapshot for one source.
-func (s *Service) CreateFromCurrentPricesForSource(createdBy, source string) (*models.Pricelist, error) {
-	return s.CreateForSourceWithProgress(createdBy, source, nil, nil)
-}
-
-// CreateFromCurrentPricesWithProgress creates a pricelist and reports coarse-grained progress.
-func (s *Service) CreateFromCurrentPricesWithProgress(createdBy, source string, onProgress func(CreateProgress)) (*models.Pricelist, error) {
-	return s.CreateForSourceWithProgress(createdBy, source, nil, onProgress)
-}
-
-// CreateForSourceWithProgress creates a source pricelist from current estimate snapshot or explicit item list.
-func (s *Service) CreateForSourceWithProgress(createdBy, source string, sourceItems []CreateItemInput, onProgress func(CreateProgress)) (*models.Pricelist, error) {
-	if s.repo == nil || s.db == nil {
-		return nil, fmt.Errorf("offline mode: cannot create pricelists")
-	}
-	source = string(models.NormalizePricelistSource(source))
-
-	report := func(p CreateProgress) {
-		if onProgress != nil {
-			onProgress(p)
-		}
-	}
-	report(CreateProgress{Current: 0, Total: 100, Status: "starting", Message: "Подготовка"})
-
-	updated, errs := 0, 0
-	if source == string(models.PricelistSourceEstimate) && s.pricingSvc != nil {
-		report(CreateProgress{Current: 1, Total: 100, Status: "recalculating", Message: "Обновление цен компонентов"})
-		updated, errs = s.pricingSvc.RecalculateAllPricesWithProgress(func(p pricing.RecalculateProgress) {
-			if p.Total <= 0 {
-				return
-			}
-			phaseCurrent := 1 + int(float64(p.Current)/float64(p.Total)*90.0)
-			if phaseCurrent > 91 {
-				phaseCurrent = 91
-			}
-			report(CreateProgress{
-				Current: phaseCurrent,
-				Total:   100,
-				Status:  "recalculating",
-				Message: "Обновление цен компонентов",
-				Updated: p.Updated,
-				Errors:  p.Errors,
-				LotName: p.LotName,
-			})
-		})
-	}
-	report(CreateProgress{Current: 92, Total: 100, Status: "recalculated", Message: "Цены обновлены", Updated: updated, Errors: errs})
-
-	report(CreateProgress{Current: 95, Total: 100, Status: "snapshot", Message: "Создание снимка прайслиста"})
-	expiresAt := time.Now().AddDate(1, 0, 0) // +1 year
-	const maxCreateAttempts = 5
-	var pricelist *models.Pricelist
-	for attempt := 1; attempt <= maxCreateAttempts; attempt++ {
-		version, err := s.repo.GenerateVersionBySource(source)
-		if err != nil {
-			return nil, fmt.Errorf("generating version: %w", err)
-		}
-
-		pricelist = &models.Pricelist{
-			Source:    source,
-			Version:   version,
-			CreatedBy: createdBy,
-			IsActive:  true,
-			ExpiresAt: &expiresAt,
-		}
-
-		if err := s.repo.Create(pricelist); err != nil {
-			if isVersionConflictError(err) && attempt < maxCreateAttempts {
-				slog.Warn("pricelist version conflict, retrying",
-					"attempt", attempt,
-					"version", version,
-					"error", err,
-				)
-				time.Sleep(time.Duration(attempt*25) * time.Millisecond)
-				continue
-			}
-			return nil, fmt.Errorf("creating pricelist: %w", err)
-		}
-		break
-	}
-
-	items := make([]models.PricelistItem, 0)
-	if len(sourceItems) > 0 {
-		items = make([]models.PricelistItem, 0, len(sourceItems))
-		for _, srcItem := range sourceItems {
-			if strings.TrimSpace(srcItem.LotName) == "" || srcItem.Price <= 0 {
-				continue
-			}
-			items = append(items, models.PricelistItem{
-				PricelistID: pricelist.ID,
-				LotName:     strings.TrimSpace(srcItem.LotName),
-				Price:       srcItem.Price,
-			})
-		}
-	} else {
-		// Default snapshot source for estimate and backward compatibility.
-		var metadata []models.LotMetadata
-		if err := s.db.Where("current_price IS NOT NULL AND current_price > 0").Find(&metadata).Error; err != nil {
-			return nil, fmt.Errorf("getting lot metadata: %w", err)
-		}
-
-		// Create pricelist items with all price settings
-		items = make([]models.PricelistItem, 0, len(metadata))
-		for _, m := range metadata {
-			if m.CurrentPrice == nil || *m.CurrentPrice <= 0 {
-				continue
-			}
-			items = append(items, models.PricelistItem{
-				PricelistID:      pricelist.ID,
-				LotName:          m.LotName,
-				Price:            *m.CurrentPrice,
-				PriceMethod:      string(m.PriceMethod),
-				PricePeriodDays:  m.PricePeriodDays,
-				PriceCoefficient: m.PriceCoefficient,
-				ManualPrice:      m.ManualPrice,
-				MetaPrices:       m.MetaPrices,
-			})
-		}
-	}
-
-	if err := s.repo.CreateItems(items); err != nil {
-		// Clean up the pricelist if items creation fails
-		s.repo.Delete(pricelist.ID)
-		return nil, fmt.Errorf("creating pricelist items: %w", err)
-	}
-
-	pricelist.ItemCount = len(items)
-
-	slog.Info("pricelist created",
-		"id", pricelist.ID,
-		"version", pricelist.Version,
-		"items", len(items),
-		"created_by", createdBy,
-	)
-	report(CreateProgress{Current: 100, Total: 100, Status: "completed", Message: "Прайслист создан", Updated: updated, Errors: errs})
-
-	return pricelist, nil
-}
-
-func isVersionConflictError(err error) bool {
-	if errors.Is(err, gorm.ErrDuplicatedKey) {
-		return true
-	}
-	msg := strings.ToLower(err.Error())
-	return strings.Contains(msg, "duplicate entry") &&
-		(strings.Contains(msg, "idx_qt_pricelists_source_version") || strings.Contains(msg, "idx_qt_pricelists_version"))
-}
-
-// List returns pricelists with pagination
-func (s *Service) List(page, perPage int) ([]models.PricelistSummary, int64, error) {
-	return s.ListBySource(page, perPage, "")
-}
-
-// ListBySource returns pricelists with optional source filter.
-func (s *Service) ListBySource(page, perPage int, source string) ([]models.PricelistSummary, int64, error) {
-	// If no database connection (offline mode), return empty list
-	if s.repo == nil {
-		return []models.PricelistSummary{}, 0, nil
-	}
-
-	if page < 1 {
-		page = 1
-	}
-	if perPage < 1 {
-		perPage = 20
-	}
-	offset := (page - 1) * perPage
-	return s.repo.ListBySource(source, offset, perPage)
-}
-
-// ListActive returns active pricelists with pagination.
-func (s *Service) ListActive(page, perPage int) ([]models.PricelistSummary, int64, error) {
-	return s.ListActiveBySource(page, perPage, "")
-}
-
-// ListActiveBySource returns active pricelists with optional source filter.
-func (s *Service) ListActiveBySource(page, perPage int, source string) ([]models.PricelistSummary, int64, error) {
-	if s.repo == nil {
-		return []models.PricelistSummary{}, 0, nil
-	}
-	if page < 1 {
-		page = 1
-	}
-	if perPage < 1 {
-		perPage = 20
-	}
-	offset := (page - 1) * perPage
-	return s.repo.ListActiveBySource(source, offset, perPage)
-}
-
-// GetByID returns a pricelist by ID
-func (s *Service) GetByID(id uint) (*models.Pricelist, error) {
-	if s.repo == nil {
-		return nil, fmt.Errorf("offline mode: pricelist service not available")
-	}
-	return s.repo.GetByID(id)
-}
-
-// GetItems returns pricelist items with pagination
-func (s *Service) GetItems(pricelistID uint, page, perPage int, search string) ([]models.PricelistItem, int64, error) {
-	if s.repo == nil {
-		return []models.PricelistItem{}, 0, nil
-	}
-	if page < 1 {
-		page = 1
-	}
-	if perPage < 1 {
-		perPage = 50
-	}
-	offset := (page - 1) * perPage
-	return s.repo.GetItems(pricelistID, offset, perPage, search)
-}
-
-// Delete deletes a pricelist by ID
-func (s *Service) Delete(id uint) error {
-	if s.repo == nil {
-		return fmt.Errorf("offline mode: cannot delete pricelists")
-	}
-	return s.repo.Delete(id)
-}
-
-// SetActive toggles active state for a pricelist.
-func (s *Service) SetActive(id uint, isActive bool) error {
-	if s.repo == nil {
-		return fmt.Errorf("offline mode: cannot update pricelists")
-	}
-	return s.repo.SetActive(id, isActive)
-}
-
-// GetPriceForLot returns price by pricelist/lot.
-func (s *Service) GetPriceForLot(pricelistID uint, lotName string) (float64, error) {
-	if s.repo == nil {
-		return 0, fmt.Errorf("offline mode: pricelist service not available")
-	}
-	return s.repo.GetPriceForLot(pricelistID, lotName)
-}
-
-// CanWrite returns true if the user can create pricelists
-func (s *Service) CanWrite() bool {
-	if s.repo == nil {
-		return false
-	}
-	return s.repo.CanWrite()
-}
-
-// CanWriteDebug returns write permission status with debug info
-func (s *Service) CanWriteDebug() (bool, string) {
-	if s.repo == nil {
-		return false, "offline mode"
-	}
-	return s.repo.CanWriteDebug()
-}
-
-// GetLatestActive returns the most recent active pricelist
-func (s *Service) GetLatestActive() (*models.Pricelist, error) {
-	return s.GetLatestActiveBySource(string(models.PricelistSourceEstimate))
-}
-
-// GetLatestActiveBySource returns the latest active pricelist for a source.
-func (s *Service) GetLatestActiveBySource(source string) (*models.Pricelist, error) {
-	if s.repo == nil {
-		return nil, fmt.Errorf("offline mode: pricelist service not available")
-	}
-	return s.repo.GetLatestActiveBySource(source)
-}
-
-// CleanupExpired deletes expired and unused pricelists
-func (s *Service) CleanupExpired() (int, error) {
-	if s.repo == nil {
-		return 0, fmt.Errorf("offline mode: cleanup not available")
-	}
-
-	expired, err := s.repo.GetExpiredUnused()
-	if err != nil {
-		return 0, err
-	}
-
-	deleted := 0
-	for _, pl := range expired {
-		if err := s.repo.Delete(pl.ID); err != nil {
-			slog.Warn("failed to delete expired pricelist", "id", pl.ID, "error", err)
-			continue
-		}
-		deleted++
-	}
-
-	slog.Info("cleaned up expired pricelists", "deleted", deleted)
-	return deleted, nil
-}
@@ -1,121 +0,0 @@
-package pricing
-
-import (
-	"math"
-	"sort"
-	"time"
-
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-)
-
-// CalculateMedian returns the median of prices
-func CalculateMedian(prices []float64) float64 {
-	if len(prices) == 0 {
-		return 0
-	}
-
-	sorted := make([]float64, len(prices))
-	copy(sorted, prices)
-	sort.Float64s(sorted)
-
-	n := len(sorted)
-	if n%2 == 0 {
-		return (sorted[n/2-1] + sorted[n/2]) / 2
-	}
-	return sorted[n/2]
-}
-
-// CalculateAverage returns the arithmetic mean of prices
-func CalculateAverage(prices []float64) float64 {
-	if len(prices) == 0 {
-		return 0
-	}
-
-	var sum float64
-	for _, p := range prices {
-		sum += p
-	}
-	return sum / float64(len(prices))
-}
-
-// CalculateWeightedMedian calculates median with exponential decay weights
-// More recent prices have higher weight
-func CalculateWeightedMedian(points []repository.PricePoint, decayDays int) float64 {
-	if len(points) == 0 {
-		return 0
-	}
-
-	type weightedPrice struct {
-		price  float64
-		weight float64
-	}
-
-	now := time.Now()
-	weighted := make([]weightedPrice, len(points))
-	var totalWeight float64
-
-	for i, p := range points {
-		daysSince := now.Sub(p.Date).Hours() / 24
-		// weight = e^(-days / decay_days)
-		weight := math.Exp(-daysSince / float64(decayDays))
-		weighted[i] = weightedPrice{price: p.Price, weight: weight}
-		totalWeight += weight
-	}
-
-	// Sort by price
-	sort.Slice(weighted, func(i, j int) bool {
-		return weighted[i].price < weighted[j].price
-	})
-
-	// Find weighted median
-	targetWeight := totalWeight / 2
-	var cumulativeWeight float64
-
-	for _, wp := range weighted {
-		cumulativeWeight += wp.weight
-		if cumulativeWeight >= targetWeight {
-			return wp.price
-		}
-	}
-
-	return weighted[len(weighted)-1].price
-}
-
-// CalculatePercentile calculates the nth percentile of prices
-func CalculatePercentile(prices []float64, percentile float64) float64 {
-	if len(prices) == 0 {
-		return 0
-	}
-
-	sorted := make([]float64, len(prices))
-	copy(sorted, prices)
-	sort.Float64s(sorted)
-
-	index := (percentile / 100) * float64(len(sorted)-1)
-	lower := int(math.Floor(index))
-	upper := int(math.Ceil(index))
-
-	if lower == upper {
-		return sorted[lower]
-	}
-
-	fraction := index - float64(lower)
-	return sorted[lower]*(1-fraction) + sorted[upper]*fraction
-}
-
-// CalculateStdDev calculates standard deviation
-func CalculateStdDev(prices []float64) float64 {
-	if len(prices) < 2 {
-		return 0
-	}
-
-	mean := CalculateAverage(prices)
-	var sumSquares float64
-
-	for _, p := range prices {
-		diff := p - mean
-		sumSquares += diff * diff
-	}
-
-	return math.Sqrt(sumSquares / float64(len(prices)-1))
-}
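The weighted median deleted above gives each price point a weight of e^(-age/decayDays), sorts points by price, and returns the first price whose cumulative weight reaches half the total. A deterministic, self-contained sketch of the same algorithm (ages in days replace `time.Time` dates, and `PricePoint` stands in for `repository.PricePoint`):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// PricePoint stands in for repository.PricePoint: a price and its age in days.
type PricePoint struct {
	Price   float64
	AgeDays float64
}

// weightedMedian follows the deleted CalculateWeightedMedian: weight each
// point by e^(-age/decayDays), sort by price, and return the first price
// whose cumulative weight reaches half of the total weight.
func weightedMedian(points []PricePoint, decayDays float64) float64 {
	if len(points) == 0 {
		return 0
	}
	type wp struct{ price, weight float64 }
	weighted := make([]wp, len(points))
	var total float64
	for i, p := range points {
		w := math.Exp(-p.AgeDays / decayDays)
		weighted[i] = wp{p.Price, w}
		total += w
	}
	sort.Slice(weighted, func(i, j int) bool { return weighted[i].price < weighted[j].price })
	cum := 0.0
	for _, w := range weighted {
		cum += w.weight
		if cum >= total/2 {
			return w.price
		}
	}
	return weighted[len(weighted)-1].price
}

func main() {
	// One recent high price outweighs two much older low ones at 30-day decay.
	points := []PricePoint{
		{Price: 100, AgeDays: 1},
		{Price: 80, AgeDays: 90},
		{Price: 82, AgeDays: 120},
	}
	fmt.Println(weightedMedian(points, 30)) // → 100
}
```

Note how the plain median of {80, 82, 100} would be 82; the decay weighting shifts the answer toward the fresh quote.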
@@ -1,378 +0,0 @@
-package pricing
-
-import (
-	"strings"
-	"time"
-
-	"git.mchus.pro/mchus/quoteforge/internal/config"
-	"git.mchus.pro/mchus/quoteforge/internal/models"
-	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"gorm.io/gorm"
-)
-
-type Service struct {
-	componentRepo *repository.ComponentRepository
-	priceRepo     *repository.PriceRepository
-	config        config.PricingConfig
-	db            *gorm.DB
-}
-
-type RecalculateProgress struct {
-	Current int
-	Total   int
-	LotName string
-	Updated int
-	Errors  int
-}
-
-func NewService(
-	componentRepo *repository.ComponentRepository,
-	priceRepo *repository.PriceRepository,
-	cfg config.PricingConfig,
-) *Service {
-	var db *gorm.DB
-	if componentRepo != nil {
-		db = componentRepo.DB()
-	}
-
-	return &Service{
-		componentRepo: componentRepo,
-		priceRepo:     priceRepo,
-		config:        cfg,
-		db:            db,
-	}
-}
-
-// GetEffectivePrice returns the current effective price for a component
-// Priority: active override > calculated price > nil
-func (s *Service) GetEffectivePrice(lotName string) (*float64, error) {
-	// Check for active override first
-	override, err := s.priceRepo.GetPriceOverride(lotName)
-	if err == nil && override != nil {
-		return &override.Price, nil
-	}
-
-	// Get component metadata
-	component, err := s.componentRepo.GetByLotName(lotName)
-	if err != nil {
-		return nil, err
-	}
-
-	return component.CurrentPrice, nil
-}
-
-// CalculatePrice calculates price using the specified method
-func (s *Service) CalculatePrice(lotName string, method models.PriceMethod, periodDays int) (float64, error) {
-	if periodDays == 0 {
-		periodDays = s.config.DefaultPeriodDays
-	}
-
-	points, err := s.priceRepo.GetPriceHistory(lotName, periodDays)
-	if err != nil {
-		return 0, err
-	}
-
-	if len(points) == 0 {
-		return 0, nil
-	}
-
-	prices := make([]float64, len(points))
-	for i, p := range points {
-		prices[i] = p.Price
-	}
-
-	switch method {
-	case models.PriceMethodAverage:
-		return CalculateAverage(prices), nil
-	case models.PriceMethodWeightedMedian:
-		return CalculateWeightedMedian(points, periodDays), nil
-	case models.PriceMethodMedian:
-		fallthrough
-	default:
-		return CalculateMedian(prices), nil
-	}
-}
-
-// UpdateComponentPrice recalculates and updates the price for a component
-func (s *Service) UpdateComponentPrice(lotName string) error {
-	component, err := s.componentRepo.GetByLotName(lotName)
-	if err != nil {
-		return err
-	}
-
-	price, err := s.CalculatePrice(lotName, component.PriceMethod, component.PricePeriodDays)
-	if err != nil {
-		return err
-	}
-
-	now := time.Now()
-	if price > 0 {
-		component.CurrentPrice = &price
-		component.PriceUpdatedAt = &now
-	}
-
-	return s.componentRepo.Update(component)
-}
-
-// SetManualPrice sets a manual price override
-func (s *Service) SetManualPrice(lotName string, price float64, reason string, userID uint) error {
-	override := &models.PriceOverride{
-		LotName:   lotName,
-		Price:     price,
-		ValidFrom: time.Now(),
-		Reason:    reason,
-		CreatedBy: userID,
-	}
-	return s.priceRepo.CreatePriceOverride(override)
-}
-
-// UpdatePriceMethod changes the pricing method for a component
-func (s *Service) UpdatePriceMethod(lotName string, method models.PriceMethod, periodDays int) error {
-	component, err := s.componentRepo.GetByLotName(lotName)
-	if err != nil {
-		return err
-	}
-
-	component.PriceMethod = method
-	if periodDays > 0 {
-		component.PricePeriodDays = periodDays
-	}
-
-	if err := s.componentRepo.Update(component); err != nil {
-		return err
-	}
-
-	return s.UpdateComponentPrice(lotName)
-}
-
-// GetPriceStats returns statistics for a component's price history
-func (s *Service) GetPriceStats(lotName string, periodDays int) (*PriceStats, error) {
-	if periodDays == 0 {
-		periodDays = s.config.DefaultPeriodDays
-	}
-
-	points, err := s.priceRepo.GetPriceHistory(lotName, periodDays)
-	if err != nil {
-		return nil, err
-	}
-
-	if len(points) == 0 {
-		return &PriceStats{QuoteCount: 0}, nil
-	}
-
-	prices := make([]float64, len(points))
-	for i, p := range points {
-		prices[i] = p.Price
-	}
-
-	return &PriceStats{
-		QuoteCount:   len(points),
-		MinPrice:     CalculatePercentile(prices, 0),
-		MaxPrice:     CalculatePercentile(prices, 100),
-		MedianPrice:  CalculateMedian(prices),
-		AveragePrice: CalculateAverage(prices),
-		StdDeviation: CalculateStdDev(prices),
-		LatestPrice:  points[0].Price,
-		LatestDate:   points[0].Date,
-		OldestDate:   points[len(points)-1].Date,
-		Percentile25: CalculatePercentile(prices, 25),
-		Percentile75: CalculatePercentile(prices, 75),
-	}, nil
-}
-
-type PriceStats struct {
-	QuoteCount   int       `json:"quote_count"`
-	MinPrice     float64   `json:"min_price"`
-	MaxPrice     float64   `json:"max_price"`
-	MedianPrice  float64   `json:"median_price"`
-	AveragePrice float64   `json:"average_price"`
-	StdDeviation float64   `json:"std_deviation"`
-	LatestPrice  float64   `json:"latest_price"`
-	LatestDate   time.Time `json:"latest_date"`
-	OldestDate   time.Time `json:"oldest_date"`
-	Percentile25 float64   `json:"percentile_25"`
-	Percentile75 float64   `json:"percentile_75"`
-}
-
-// RecalculateAllPrices recalculates prices for all components
-func (s *Service) RecalculateAllPrices() (updated int, errors int) {
-	return s.RecalculateAllPricesWithProgress(nil)
-}
-
-// RecalculateAllPricesWithProgress recalculates prices and reports progress.
-func (s *Service) RecalculateAllPricesWithProgress(onProgress func(RecalculateProgress)) (updated int, errors int) {
-	if s.db == nil {
-		return 0, 0
-	}
-
-	// Logic mirrors "Обновить цены" in admin pricing.
-	var components []models.LotMetadata
-	if err := s.db.Find(&components).Error; err != nil {
-		return 0, len(components)
-	}
-	total := len(components)
-
-	var allLotNames []string
-	_ = s.db.Model(&models.LotMetadata{}).Pluck("lot_name", &allLotNames).Error
-
-	type lotDate struct {
-		Lot  string
-		Date time.Time
-	}
-	var latestDates []lotDate
-	_ = s.db.Raw(`SELECT lot, MAX(date) as date FROM lot_log GROUP BY lot`).Scan(&latestDates).Error
-	lotLatestDate := make(map[string]time.Time, len(latestDates))
-	for _, ld := range latestDates {
-		lotLatestDate[ld.Lot] = ld.Date
-	}
-
-	var skipped, manual, unchanged int
-	now := time.Now()
-	current := 0
-
-	for _, comp := range components {
-		current++
-		reportProgress := func() {
-			if onProgress != nil && (current%10 == 0 || current == total) {
-				onProgress(RecalculateProgress{
-					Current: current,
-					Total:   total,
-					LotName: comp.LotName,
-					Updated: updated,
-					Errors:  errors,
-				})
-			}
-		}
-
-		if comp.ManualPrice != nil && *comp.ManualPrice > 0 {
-			manual++
-			reportProgress()
-			continue
-		}
-
-		method := comp.PriceMethod
-		if method == "" {
-			method = models.PriceMethodMedian
-		}
-
-		var sourceLots []string
-		if comp.MetaPrices != "" {
-			sourceLots = expandMetaPricesWithCache(comp.MetaPrices, comp.LotName, allLotNames)
-		} else {
-			sourceLots = []string{comp.LotName}
-		}
-		if len(sourceLots) == 0 {
-			skipped++
-			reportProgress()
-			continue
-		}
-
-		if comp.PriceUpdatedAt != nil {
-			hasNewData := false
-			for _, lot := range sourceLots {
-				if latestDate, ok := lotLatestDate[lot]; ok && latestDate.After(*comp.PriceUpdatedAt) {
-					hasNewData = true
-					break
-				}
-			}
-			if !hasNewData {
-				unchanged++
-				reportProgress()
-				continue
-			}
-		}
-
-		var prices []float64
-		if comp.PricePeriodDays > 0 {
-			_ = s.db.Raw(
-				`SELECT price FROM lot_log WHERE lot IN ? AND date >= DATE_SUB(NOW(), INTERVAL ? DAY) ORDER BY price`,
-				sourceLots, comp.PricePeriodDays,
-			).Pluck("price", &prices).Error
-		} else {
-			_ = s.db.Raw(
-				`SELECT price FROM lot_log WHERE lot IN ? ORDER BY price`,
-				sourceLots,
-			).Pluck("price", &prices).Error
-		}
-
-		if len(prices) == 0 && comp.PricePeriodDays > 0 {
-			_ = s.db.Raw(`SELECT price FROM lot_log WHERE lot IN ? ORDER BY price`, sourceLots).Pluck("price", &prices).Error
-		}
-		if len(prices) == 0 {
-			skipped++
-			reportProgress()
-			continue
-		}
-
-		var basePrice float64
-		switch method {
-		case models.PriceMethodAverage:
-			basePrice = CalculateAverage(prices)
-		default:
-			basePrice = CalculateMedian(prices)
-		}
-		if basePrice <= 0 {
-			skipped++
-			reportProgress()
-			continue
-		}
-
-		finalPrice := basePrice
-		if comp.PriceCoefficient != 0 {
-			finalPrice = finalPrice * (1 + comp.PriceCoefficient/100)
-		}
-
-		if err := s.db.Model(&models.LotMetadata{}).
-			Where("lot_name = ?", comp.LotName).
-			Updates(map[string]interface{}{
-				"current_price":    finalPrice,
-				"price_updated_at": now,
-			}).Error; err != nil {
-			errors++
-		} else {
-			updated++
-		}
-
-		reportProgress()
-	}
-
-	if onProgress != nil && total == 0 {
-		onProgress(RecalculateProgress{
-			Current: 0,
-			Total:   0,
-			LotName: "",
-			Updated: updated,
-			Errors:  errors,
-		})
-	}
-
-	return updated, errors
-}
-
-func expandMetaPricesWithCache(metaPrices, excludeLot string, allLotNames []string) []string {
-	sources := strings.Split(metaPrices, ",")
-	var result []string
-	seen := make(map[string]bool)
-
-	for _, source := range sources {
-		source = strings.TrimSpace(source)
-		if source == "" || source == excludeLot {
-			continue
-		}
-
-		if strings.HasSuffix(source, "*") {
-			prefix := strings.TrimSuffix(source, "*")
-			for _, lot := range allLotNames {
-				if strings.HasPrefix(lot, prefix) && lot != excludeLot && !seen[lot] {
-					result = append(result, lot)
-					seen[lot] = true
-				}
-			}
-		} else if !seen[source] {
-			result = append(result, source)
-			seen[source] = true
-		}
-	}
-
-	return result
-}
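The `expandMetaPricesWithCache` helper deleted above turns a comma-separated "meta prices" list into concrete source lots: a trailing `*` expands to every known lot with that prefix, the component's own lot is excluded, and duplicates are dropped. A self-contained sketch of the same expansion logic (lot names below are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// expandMetaPrices mirrors the deleted expandMetaPricesWithCache: split the
// comma-separated list, expand "prefix*" entries against the known lot names,
// and skip the excluded lot plus anything already seen.
func expandMetaPrices(metaPrices, excludeLot string, allLots []string) []string {
	var result []string
	seen := make(map[string]bool)
	for _, source := range strings.Split(metaPrices, ",") {
		source = strings.TrimSpace(source)
		if source == "" || source == excludeLot {
			continue
		}
		if strings.HasSuffix(source, "*") {
			prefix := strings.TrimSuffix(source, "*")
			for _, lot := range allLots {
				if strings.HasPrefix(lot, prefix) && lot != excludeLot && !seen[lot] {
					result = append(result, lot)
					seen[lot] = true
				}
			}
		} else if !seen[source] {
			result = append(result, source)
			seen[source] = true
		}
	}
	return result
}

func main() {
	all := []string{"ssd-480", "ssd-960", "ssd-1920", "hdd-4t"}
	// "ssd-*" expands to every ssd lot except the component itself.
	fmt.Println(expandMetaPrices("ssd-*, hdd-4t", "ssd-480", all)) // → [ssd-960 ssd-1920 hdd-4t]
}
```

The expanded lots feed the `lot IN ?` history query above, after which the coefficient is applied as `finalPrice = basePrice * (1 + coefficient/100)`.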
@@ -16,8 +16,9 @@ import (
 )

 var (
 	ErrProjectNotFound   = errors.New("project not found")
 	ErrProjectForbidden  = errors.New("access to project forbidden")
+	ErrProjectCodeExists = errors.New("project code and variant already exist")
 )

@@ -29,12 +30,16 @@ func NewProjectService(localDB *localdb.LocalDB) *ProjectService {
 }

 type CreateProjectRequest struct {
-	Name       string `json:"name"`
+	Code       string  `json:"code"`
+	Variant    string  `json:"variant,omitempty"`
+	Name       *string `json:"name,omitempty"`
 	TrackerURL string `json:"tracker_url"`
 }

 type UpdateProjectRequest struct {
-	Name       string  `json:"name"`
+	Code       *string `json:"code,omitempty"`
+	Variant    *string `json:"variant,omitempty"`
+	Name       *string `json:"name,omitempty"`
 	TrackerURL *string `json:"tracker_url,omitempty"`
 }

@@ -45,17 +50,30 @@ type ProjectConfigurationsResult struct {
 }

 func (s *ProjectService) Create(ownerUsername string, req *CreateProjectRequest) (*models.Project, error) {
-	name := strings.TrimSpace(req.Name)
-	if name == "" {
-		return nil, fmt.Errorf("project name is required")
+	var namePtr *string
+	if req.Name != nil {
+		name := strings.TrimSpace(*req.Name)
+		if name != "" {
+			namePtr = &name
+		}
+	}
+	code := strings.TrimSpace(req.Code)
+	if code == "" {
+		return nil, fmt.Errorf("project code is required")
+	}
+	variant := strings.TrimSpace(req.Variant)
+	if err := s.ensureUniqueProjectCodeVariant("", code, variant); err != nil {
+		return nil, err
 	}

 	now := time.Now()
 	localProject := &localdb.LocalProject{
 		UUID:          uuid.NewString(),
 		OwnerUsername: ownerUsername,
-		Name:          name,
-		TrackerURL:    normalizeProjectTrackerURL(name, req.TrackerURL),
+		Code:          code,
+		Variant:       variant,
+		Name:          namePtr,
+		TrackerURL:    normalizeProjectTrackerURL(code, req.TrackerURL),
 		IsActive:      true,
 		IsSystem:      false,
 		CreatedAt:     now,
@@ -76,20 +94,33 @@ func (s *ProjectService) Update(projectUUID, ownerUsername string, req *UpdatePr
 	if err != nil {
 		return nil, ErrProjectNotFound
 	}
-	if localProject.OwnerUsername != ownerUsername {
-		return nil, ErrProjectForbidden
+	if req.Code != nil {
+		code := strings.TrimSpace(*req.Code)
+		if code == "" {
+			return nil, fmt.Errorf("project code is required")
+		}
+		localProject.Code = code
+	}
+	if req.Variant != nil {
+		localProject.Variant = strings.TrimSpace(*req.Variant)
+	}
+	if err := s.ensureUniqueProjectCodeVariant(projectUUID, localProject.Code, localProject.Variant); err != nil {
+		return nil, err
 	}

-	name := strings.TrimSpace(req.Name)
-	if name == "" {
-		return nil, fmt.Errorf("project name is required")
+	if req.Name != nil {
+		name := strings.TrimSpace(*req.Name)
+		if name == "" {
+			localProject.Name = nil
+		} else {
+			localProject.Name = &name
+		}
 	}

-	localProject.Name = name
 	if req.TrackerURL != nil {
-		localProject.TrackerURL = normalizeProjectTrackerURL(name, *req.TrackerURL)
+		localProject.TrackerURL = normalizeProjectTrackerURL(localProject.Code, *req.TrackerURL)
 	} else if strings.TrimSpace(localProject.TrackerURL) == "" {
-		localProject.TrackerURL = normalizeProjectTrackerURL(name, "")
+		localProject.TrackerURL = normalizeProjectTrackerURL(localProject.Code, "")
 	}
 	localProject.UpdatedAt = time.Now()
 	localProject.SyncStatus = "pending"
@@ -102,6 +133,38 @@ func (s *ProjectService) Update(projectUUID, ownerUsername string, req *UpdatePr
 	return localdb.LocalToProject(localProject), nil
 }

+func (s *ProjectService) ensureUniqueProjectCodeVariant(excludeUUID, code, variant string) error {
+	normalizedCode := normalizeProjectCode(code)
+	normalizedVariant := normalizeProjectVariant(variant)
+	if normalizedCode == "" {
+		return fmt.Errorf("project code is required")
+	}
+
+	projects, err := s.localDB.GetAllProjects(true)
+	if err != nil {
+		return err
+	}
+	for i := range projects {
+		project := projects[i]
+		if excludeUUID != "" && project.UUID == excludeUUID {
+			continue
+		}
+		if normalizeProjectCode(project.Code) == normalizedCode &&
+			normalizeProjectVariant(project.Variant) == normalizedVariant {
+			return ErrProjectCodeExists
+		}
+	}
+	return nil
+}
+
+func normalizeProjectCode(code string) string {
+	return strings.ToLower(strings.TrimSpace(code))
+}
+
+func normalizeProjectVariant(variant string) string {
+	return strings.ToLower(strings.TrimSpace(variant))
+}
+
 func (s *ProjectService) Archive(projectUUID, ownerUsername string) error {
 	return s.setProjectActive(projectUUID, ownerUsername, false)
 }
@@ -116,9 +179,6 @@ func (s *ProjectService) setProjectActive(projectUUID, ownerUsername string, isA
 	if err := tx.Where("uuid = ?", projectUUID).First(&project).Error; err != nil {
 		return ErrProjectNotFound
 	}
-	if project.OwnerUsername != ownerUsername {
-		return ErrProjectForbidden
-	}
 	if project.IsActive == isActive {
 		return nil
 	}
@@ -2,11 +2,13 @@ package services

 import (
 	"errors"
+	"fmt"
+	"sync"
+	"time"

 	"git.mchus.pro/mchus/quoteforge/internal/localdb"
 	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"git.mchus.pro/mchus/quoteforge/internal/repository"
-	"git.mchus.pro/mchus/quoteforge/internal/services/pricing"
 )

 var (
@@ -20,7 +22,14 @@ type QuoteService struct {
 	statsRepo     *repository.StatsRepository
 	pricelistRepo *repository.PricelistRepository
 	localDB       *localdb.LocalDB
-	pricingService *pricing.Service
+	pricingService priceResolver
+
+	cacheMu    sync.RWMutex
+	priceCache map[string]cachedLotPrice
+	cacheTTL   time.Duration
+}
+
+type priceResolver interface {
+	GetEffectivePrice(lotName string) (*float64, error)
 }

 func NewQuoteService(
@@ -28,7 +37,7 @@ func NewQuoteService(
 	statsRepo *repository.StatsRepository,
 	pricelistRepo *repository.PricelistRepository,
 	localDB *localdb.LocalDB,
-	pricingService *pricing.Service,
+	pricingService priceResolver,
 ) *QuoteService {
 	return &QuoteService{
 		componentRepo: componentRepo,
@@ -36,9 +45,16 @@ func NewQuoteService(
 		pricelistRepo: pricelistRepo,
 		localDB:       localDB,
 		pricingService: pricingService,
+		priceCache:     make(map[string]cachedLotPrice, 4096),
+		cacheTTL:       10 * time.Second,
 	}
 }

+type cachedLotPrice struct {
+	price     *float64
+	expiresAt time.Time
+}
+
 type QuoteItem struct {
 	LotName  string `json:"lot_name"`
 	Quantity int    `json:"quantity"`
@@ -62,6 +78,7 @@ type QuoteRequest struct {
 		LotName  string `json:"lot_name"`
 		Quantity int    `json:"quantity"`
 	} `json:"items"`
+	PricelistID *uint `json:"pricelist_id,omitempty"` // Optional: use specific pricelist for pricing
 }

 type PriceLevelsRequest struct {
@@ -70,6 +87,7 @@ type PriceLevelsRequest struct {
 		Quantity int    `json:"quantity"`
 	} `json:"items"`
 	PricelistIDs map[string]uint `json:"pricelist_ids,omitempty"`
+	NoCache      bool            `json:"no_cache,omitempty"`
 }

 type PriceLevelsItem struct {
@@ -96,8 +114,69 @@ func (s *QuoteService) ValidateAndCalculate(req *QuoteRequest) (*QuoteValidation
 	if len(req.Items) == 0 {
 		return nil, ErrEmptyQuote
 	}

+	// Strict local-first path: calculations use local SQLite snapshot regardless of online status.
+	if s.localDB != nil {
+		result := &QuoteValidationResult{
+			Valid:    true,
+			Items:    make([]QuoteItem, 0, len(req.Items)),
+			Errors:   make([]string, 0),
+			Warnings: make([]string, 0),
+		}
+
+		// Determine which pricelist to use for pricing
+		pricelistID := req.PricelistID
+		if pricelistID == nil || *pricelistID == 0 {
+			// By default, use latest estimate pricelist
+			latestPricelist, err := s.localDB.GetLatestLocalPricelistBySource("estimate")
+			if err == nil && latestPricelist != nil {
+				pricelistID = &latestPricelist.ServerID
+			}
+		}
+
+		var total float64
+		for _, reqItem := range req.Items {
+			localComp, err := s.localDB.GetLocalComponent(reqItem.LotName)
+			if err != nil {
+				result.Valid = false
+				result.Errors = append(result.Errors, "Component not found: "+reqItem.LotName)
+				continue
+			}
+
+			item := QuoteItem{
+				LotName:     reqItem.LotName,
+				Quantity:    reqItem.Quantity,
+				Description: localComp.LotDescription,
+				Category:    localComp.Category,
+				HasPrice:    false,
+				UnitPrice:   0,
+				TotalPrice:  0,
+			}
+
+			// Get price from pricelist_items
+			if pricelistID != nil {
+				price, found := s.lookupPriceByPricelistID(*pricelistID, reqItem.LotName)
+				if found && price > 0 {
+					item.UnitPrice = price
+					item.TotalPrice = price * float64(reqItem.Quantity)
+					item.HasPrice = true
+					total += item.TotalPrice
+				} else {
+					result.Warnings = append(result.Warnings, "No price available for: "+reqItem.LotName)
+				}
+			} else {
+				result.Warnings = append(result.Warnings, "No pricelist available for: "+reqItem.LotName)
+			}
+
+			result.Items = append(result.Items, item)
+		}
+
+		result.Total = total
+		return result, nil
+	}
+
 	if s.componentRepo == nil || s.pricingService == nil {
-		return nil, errors.New("offline mode: quote calculation not available")
+		return nil, errors.New("quote calculation not available")
 	}

 	result := &QuoteValidationResult{
@@ -170,11 +249,55 @@ func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLeve
 		return nil, ErrEmptyQuote
 	}

+	lotNames := make([]string, 0, len(req.Items))
+	seenLots := make(map[string]struct{}, len(req.Items))
+	for _, reqItem := range req.Items {
+		if _, ok := seenLots[reqItem.LotName]; ok {
+			continue
+		}
+		seenLots[reqItem.LotName] = struct{}{}
+		lotNames = append(lotNames, reqItem.LotName)
+	}
+
 	result := &PriceLevelsResult{
 		Items:                make([]PriceLevelsItem, 0, len(req.Items)),
 		ResolvedPricelistIDs: map[string]uint{},
 	}

+	type levelState struct {
+		id     uint
+		prices map[string]float64
+	}
+	levelBySource := map[models.PricelistSource]*levelState{
+		models.PricelistSourceEstimate:   {prices: map[string]float64{}},
+		models.PricelistSourceWarehouse:  {prices: map[string]float64{}},
+		models.PricelistSourceCompetitor: {prices: map[string]float64{}},
+	}
+
+	for source, st := range levelBySource {
+		sourceKey := string(source)
+		if req.PricelistIDs != nil {
+			if explicitID, ok := req.PricelistIDs[sourceKey]; ok && explicitID > 0 {
+				st.id = explicitID
+				result.ResolvedPricelistIDs[sourceKey] = explicitID
+			}
+		}
+		if st.id == 0 && s.pricelistRepo != nil {
+			latest, err := s.pricelistRepo.GetLatestActiveBySource(sourceKey)
+			if err == nil {
+				st.id = latest.ID
+				result.ResolvedPricelistIDs[sourceKey] = latest.ID
+			}
+		}
+		if st.id == 0 {
+			continue
+		}
+		prices, err := s.lookupPricesByPricelistID(st.id, lotNames, req.NoCache)
+		if err == nil {
+			st.prices = prices
+		}
+	}
+
 	for _, reqItem := range req.Items {
 		item := PriceLevelsItem{
 			LotName: reqItem.LotName,
@@ -182,22 +305,17 @@ func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLeve
 			PriceMissing: make([]string, 0, 3),
 		}

-		estimatePrice, estimateID := s.lookupLevelPrice(models.PricelistSourceEstimate, reqItem.LotName, req.PricelistIDs)
-		warehousePrice, warehouseID := s.lookupLevelPrice(models.PricelistSourceWarehouse, reqItem.LotName, req.PricelistIDs)
-		competitorPrice, competitorID := s.lookupLevelPrice(models.PricelistSourceCompetitor, reqItem.LotName, req.PricelistIDs)
-
-		item.EstimatePrice = estimatePrice
-		item.WarehousePrice = warehousePrice
-		item.CompetitorPrice = competitorPrice
-
-		if estimateID != 0 {
-			result.ResolvedPricelistIDs[string(models.PricelistSourceEstimate)] = estimateID
+		if p, ok := levelBySource[models.PricelistSourceEstimate].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.EstimatePrice = &price
 		}
-		if warehouseID != 0 {
-			result.ResolvedPricelistIDs[string(models.PricelistSourceWarehouse)] = warehouseID
+		if p, ok := levelBySource[models.PricelistSourceWarehouse].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.WarehousePrice = &price
 		}
-		if competitorID != 0 {
-			result.ResolvedPricelistIDs[string(models.PricelistSourceCompetitor)] = competitorID
+		if p, ok := levelBySource[models.PricelistSourceCompetitor].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.CompetitorPrice = &price
 		}

 		if item.EstimatePrice == nil {
@@ -220,6 +338,93 @@ func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLeve
 	return result, nil
 }

+func (s *QuoteService) lookupPricesByPricelistID(pricelistID uint, lotNames []string, noCache bool) (map[string]float64, error) {
+	result := make(map[string]float64, len(lotNames))
+	if pricelistID == 0 || len(lotNames) == 0 {
+		return result, nil
+	}
+
+	missing := make([]string, 0, len(lotNames))
+	if noCache {
+		missing = append(missing, lotNames...)
+	} else {
+		now := time.Now()
+		s.cacheMu.RLock()
+		for _, lotName := range lotNames {
+			if entry, ok := s.priceCache[s.cacheKey(pricelistID, lotName)]; ok && entry.expiresAt.After(now) {
+				if entry.price != nil && *entry.price > 0 {
+					result[lotName] = *entry.price
+				}
+				continue
+			}
+			missing = append(missing, lotName)
+		}
+		s.cacheMu.RUnlock()
+	}
+
+	if len(missing) == 0 {
+		return result, nil
+	}
+
+	loaded := make(map[string]float64, len(missing))
+	if s.pricelistRepo != nil {
+		prices, err := s.pricelistRepo.GetPricesForLots(pricelistID, missing)
+		if err == nil {
+			for lotName, price := range prices {
+				if price > 0 {
+					result[lotName] = price
+					loaded[lotName] = price
+				}
+			}
+			s.updateCache(pricelistID, missing, loaded)
+			return result, nil
+		}
+	}
+
+	// Fallback path (usually offline): local per-lot lookup.
+	if s.localDB != nil {
+		for _, lotName := range missing {
+			price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
+			if found && price > 0 {
+				result[lotName] = price
+				loaded[lotName] = price
+			}
+		}
+		s.updateCache(pricelistID, missing, loaded)
+		return result, nil
+	}
+
+	return result, fmt.Errorf("price lookup unavailable for pricelist %d", pricelistID)
+}
+
+func (s *QuoteService) updateCache(pricelistID uint, requested []string, loaded map[string]float64) {
+	if len(requested) == 0 {
+		return
+	}
+	expiresAt := time.Now().Add(s.cacheTTL)
+	s.cacheMu.Lock()
+	defer s.cacheMu.Unlock()

+	for _, lotName := range requested {
+		if price, ok := loaded[lotName]; ok && price > 0 {
+			priceCopy := price
+			s.priceCache[s.cacheKey(pricelistID, lotName)] = cachedLotPrice{
+				price:     &priceCopy,
+				expiresAt: expiresAt,
+			}
+			continue
+		}
+		s.priceCache[s.cacheKey(pricelistID, lotName)] = cachedLotPrice{
+			price:     nil,
+			expiresAt: expiresAt,
+		}
+	}
+}
+
+func (s *QuoteService) cacheKey(pricelistID uint, lotName string) string {
+	return fmt.Sprintf("%d|%s", pricelistID, lotName)
+}
+
 func calculateDelta(target, base *float64) (*float64, *float64) {
 	if target == nil || base == nil {
 		return nil, nil
410	internal/services/sync/readiness.go	Normal file
@@ -0,0 +1,410 @@
|
package sync
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bufio"
|
||||||
|
"crypto/sha256"
|
||||||
|
"encoding/hex"
|
||||||
|
"errors"
|
||||||
|
"fmt"
|
||||||
|
"log/slog"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/appmeta"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
||||||
|
"gorm.io/gorm"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
ReadinessReady = "ready"
|
||||||
|
ReadinessBlocked = "blocked"
|
||||||
|
ReadinessUnknown = "unknown"
|
||||||
|
)
|
||||||
|
|
||||||
|
var ErrSyncBlockedByReadiness = errors.New("sync blocked by readiness guard")
|
||||||
|
|
||||||
|
type SyncReadiness struct {
|
||||||
|
Status string `json:"status"`
|
||||||
|
Blocked bool `json:"blocked"`
|
||||||
|
ReasonCode string `json:"reason_code,omitempty"`
|
||||||
|
ReasonText string `json:"reason_text,omitempty"`
|
||||||
|
RequiredMinAppVersion *string `json:"required_min_app_version,omitempty"`
|
||||||
|
LastCheckedAt *time.Time `json:"last_checked_at,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
type SyncBlockedError struct {
|
||||||
|
Readiness SyncReadiness
|
||||||
|
}
|
||||||
|
|
||||||
|
func (e *SyncBlockedError) Error() string {
|
||||||
|
if e == nil {
|
||||||
|
return ErrSyncBlockedByReadiness.Error()
|
||||||
|
}
|
||||||
|
if strings.TrimSpace(e.Readiness.ReasonText) != "" {
|
||||||
|
return e.Readiness.ReasonText
|
||||||
|
}
|
||||||
|
return ErrSyncBlockedByReadiness.Error()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *Service) EnsureReadinessForSync() (*SyncReadiness, error) {
|
||||||
|
readiness, err := s.GetReadiness()
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
if readiness.Blocked {
|
||||||
|
return readiness, &SyncBlockedError{Readiness: *readiness}
|
||||||
|
}
|
||||||
|
return readiness, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *Service) GetReadiness() (*SyncReadiness, error) {
|
||||||
|
now := time.Now().UTC()
|
||||||
|
if !s.isOnline() {
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"OFFLINE_UNVERIFIED_SCHEMA",
|
||||||
|
"Синхронизация недоступна: нет соединения с сервером и нельзя проверить миграции локальной БД.",
|
||||||
|
nil,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
mariaDB, err := s.getDB()
|
||||||
|
if err != nil || mariaDB == nil {
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"OFFLINE_UNVERIFIED_SCHEMA",
|
||||||
|
"Синхронизация недоступна: нет соединения с сервером и нельзя проверить миграции локальной БД.",
|
||||||
|
nil,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
migrations, err := listActiveClientMigrations(mariaDB)
|
||||||
|
if err != nil {
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"REMOTE_MIGRATION_REGISTRY_UNAVAILABLE",
|
||||||
|
"Синхронизация заблокирована: не удалось проверить централизованные миграции локальной БД.",
|
||||||
|
nil,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
for i := range migrations {
|
||||||
|
m := migrations[i]
|
||||||
|
if strings.TrimSpace(m.MinAppVersion) != "" {
|
||||||
|
if compareVersions(appmeta.Version(), m.MinAppVersion) < 0 {
|
||||||
|
min := m.MinAppVersion
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"MIN_APP_VERSION_REQUIRED",
|
||||||
|
fmt.Sprintf("Требуется обновление приложения до версии %s для безопасной синхронизации.", m.MinAppVersion),
|
||||||
|
&min,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := s.applyMissingRemoteMigrations(migrations); err != nil {
|
||||||
|
if strings.Contains(strings.ToLower(err.Error()), "checksum") {
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"REMOTE_MIGRATION_CHECKSUM_MISMATCH",
|
||||||
|
"Синхронизация заблокирована: контрольная сумма миграции не совпадает.",
|
||||||
|
nil,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
return s.blockedReadiness(
|
||||||
|
now,
|
||||||
|
"LOCAL_MIGRATION_APPLY_FAILED",
|
||||||
|
"Синхронизация заблокирована: не удалось применить миграции локальной БД.",
|
||||||
|
nil,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := s.reportClientSchemaState(mariaDB, now); err != nil {
|
||||||
|
slog.Warn("failed to report client schema state", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
ready := &SyncReadiness{Status: ReadinessReady, Blocked: false, LastCheckedAt: &now}
|
||||||
|
if setErr := s.localDB.SetSyncGuardState(ReadinessReady, "", "", nil, &now); setErr != nil {
|
||||||
|
slog.Warn("failed to persist sync guard state", "error", setErr)
|
||||||
|
}
|
||||||
|
return ready, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *Service) blockedReadiness(now time.Time, code, text string, minVersion *string) (*SyncReadiness, error) {
|
||||||
|
readiness := &SyncReadiness{
|
||||||
|
Status: ReadinessBlocked,
|
||||||
|
Blocked: true,
|
||||||
|
ReasonCode: code,
|
||||||
|
ReasonText: text,
|
||||||
|
RequiredMinAppVersion: minVersion,
|
||||||
|
LastCheckedAt: &now,
|
||||||
|
}
|
||||||
|
if err := s.localDB.SetSyncGuardState(ReadinessBlocked, code, text, minVersion, &now); err != nil {
|
||||||
|
slog.Warn("failed to persist blocked sync guard state", "error", err)
|
||||||
|
}
|
||||||
|
return readiness, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *Service) isOnline() bool {
|
||||||
|
if s.directDB != nil {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
if s.connMgr == nil {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
return s.connMgr.IsOnline()
|
||||||
|
}
|
||||||
|
|
||||||
|
type clientLocalMigration struct {
|
||||||
|
ID string `gorm:"column:id"`
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
SQLText string `gorm:"column:sql_text"`
|
||||||
|
Checksum string `gorm:"column:checksum"`
|
||||||
|
MinAppVersion string `gorm:"column:min_app_version"`
|
||||||
|
OrderNo int `gorm:"column:order_no"`
|
||||||
|
CreatedAt time.Time `gorm:"column:created_at"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func listActiveClientMigrations(db *gorm.DB) ([]clientLocalMigration, error) {
|
||||||
|
if strings.EqualFold(db.Dialector.Name(), "sqlite") {
|
||||||
|
return []clientLocalMigration{}, nil
|
||||||
|
}
|
||||||
|
if err := ensureClientMigrationRegistryTable(db); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
rows := make([]clientLocalMigration, 0)
|
||||||
|
if err := db.Raw(`
|
||||||
|
SELECT id, name, sql_text, checksum, COALESCE(min_app_version, '') AS min_app_version, order_no, created_at
|
||||||
|
FROM qt_client_local_migrations
|
||||||
|
WHERE is_active = 1
|
||||||
|
ORDER BY order_no ASC, created_at ASC, id ASC
|
||||||
|
`).Scan(&rows).Error; err != nil {
|
||||||
|
return nil, fmt.Errorf("load client local migrations: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return rows, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func ensureClientMigrationRegistryTable(db *gorm.DB) error {
|
||||||
|
// Check if table exists instead of trying to create (avoids permission issues)
|
||||||
|
if !tableExists(db, "qt_client_local_migrations") {
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE IF NOT EXISTS qt_client_local_migrations (
|
||||||
|
id VARCHAR(128) NOT NULL,
|
||||||
|
name VARCHAR(255) NOT NULL,
|
||||||
|
sql_text LONGTEXT NOT NULL,
|
||||||
|
checksum VARCHAR(128) NOT NULL,
|
||||||
|
min_app_version VARCHAR(64) NULL,
|
||||||
|
order_no INT NOT NULL DEFAULT 0,
|
||||||
|
is_active TINYINT(1) NOT NULL DEFAULT 1,
|
||||||
|
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
|
||||||
|
PRIMARY KEY (id),
|
||||||
|
INDEX idx_qt_client_local_migrations_active_order (is_active, order_no, created_at)
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create qt_client_local_migrations table: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if !tableExists(db, "qt_client_schema_state") {
|
||||||
|
if err := db.Exec(`
|
||||||
|
CREATE TABLE IF NOT EXISTS qt_client_schema_state (
|
||||||
|
username VARCHAR(100) NOT NULL,
|
||||||
|
last_applied_migration_id VARCHAR(128) NULL,
|
||||||
|
app_version VARCHAR(64) NULL,
|
||||||
|
last_checked_at DATETIME NOT NULL,
|
||||||
|
updated_at DATETIME NOT NULL,
|
||||||
|
PRIMARY KEY (username),
|
||||||
|
INDEX idx_qt_client_schema_state_checked (last_checked_at)
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create qt_client_schema_state table: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func tableExists(db *gorm.DB, tableName string) bool {
|
||||||
|
var count int64
|
||||||
|
// For MariaDB/MySQL, check information_schema
|
||||||
|
if err := db.Raw(`
|
||||||
|
SELECT COUNT(*) FROM information_schema.TABLES
|
||||||
|
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ?
|
||||||
|
`, tableName).Scan(&count).Error; err != nil {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
return count > 0
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *Service) applyMissingRemoteMigrations(migrations []clientLocalMigration) error {
|
||||||
|
for i := range migrations {
|
||||||
|
m := migrations[i]
|
||||||
|
computedChecksum := digestSQL(m.SQLText)
|
||||||
|
checksum := strings.TrimSpace(m.Checksum)
|
||||||
|
if checksum == "" {
|
||||||
|
checksum = computedChecksum
|
||||||
|
} else if !strings.EqualFold(checksum, computedChecksum) {
|
||||||
|
return fmt.Errorf("checksum mismatch for migration %s", m.ID)
|
||||||
|
}
|
||||||
|
|
||||||
|
applied, err := s.localDB.GetRemoteMigrationApplied(m.ID)
|
||||||
|
if err == nil {
|
||||||
|
if strings.TrimSpace(applied.Checksum) != checksum {
|
||||||
|
return fmt.Errorf("checksum mismatch for migration %s", m.ID)
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if !errors.Is(err, gorm.ErrRecordNotFound) {
|
||||||
|
return fmt.Errorf("check local applied migration %s: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if strings.TrimSpace(m.SQLText) == "" {
|
||||||
|
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
|
||||||
|
return fmt.Errorf("mark empty migration %s as applied: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
statements := splitSQLStatementsLite(m.SQLText)
|
||||||
|
if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
|
||||||
|
for _, stmt := range statements {
|
||||||
|
if err := tx.Exec(stmt).Error; err != nil {
|
||||||
|
return fmt.Errorf("apply migration %s statement %q: %w", m.ID, stmt, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
|
||||||
|
return fmt.Errorf("record applied migration %s: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func splitSQLStatementsLite(script string) []string {
|
||||||
|
scanner := bufio.NewScanner(strings.NewReader(script))
|
||||||
|
scanner.Buffer(make([]byte, 1024), 1024*1024)
|
||||||
|
|
||||||
|
lines := make([]string, 0, 64)
|
||||||
|
for scanner.Scan() {
|
||||||
|
line := strings.TrimSpace(scanner.Text())
|
||||||
|
if line == "" || strings.HasPrefix(line, "--") {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
lines = append(lines, scanner.Text())
|
||||||
|
}
|
||||||
|
combined := strings.Join(lines, "\n")
|
||||||
|
raw := strings.Split(combined, ";")
|
||||||
|
stmts := make([]string, 0, len(raw))
|
||||||
|
for _, stmt := range raw {
|
||||||
|
trimmed := strings.TrimSpace(stmt)
|
||||||
|
if trimmed == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
stmts = append(stmts, trimmed)
|
||||||
|
}
|
||||||
|
return stmts
|
||||||
|
}

func digestSQL(sqlText string) string {
	hash := sha256.Sum256([]byte(sqlText))
	return hex.EncodeToString(hash[:])
}
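digestSQL gives each migration a stable checksum, so a previously applied migration whose SQL text later changes can be detected as drift. A minimal standalone version:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digestSQL matches the checksum helper above: a hex-encoded SHA-256 of the
// raw migration text. Any edit to an already-applied migration changes the
// digest, which the sync guard can then flag.
func digestSQL(sqlText string) string {
	hash := sha256.Sum256([]byte(sqlText))
	return hex.EncodeToString(hash[:])
}

func main() {
	// SHA-256 of the empty string, as a fixed reference value.
	fmt.Println(digestSQL(""))
	fmt.Println(digestSQL("ALTER TABLE t ADD COLUMN a TEXT"))
}
```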

func compareVersions(left, right string) int {
	leftParts := normalizeVersionParts(left)
	rightParts := normalizeVersionParts(right)
	maxLen := len(leftParts)
	if len(rightParts) > maxLen {
		maxLen = len(rightParts)
	}
	for i := 0; i < maxLen; i++ {
		lv := 0
		rv := 0
		if i < len(leftParts) {
			lv = leftParts[i]
		}
		if i < len(rightParts) {
			rv = rightParts[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time) error {
	if strings.EqualFold(mariaDB.Dialector.Name(), "sqlite") {
		return nil
	}
	username := strings.TrimSpace(s.localDB.GetDBUser())
	if username == "" {
		return nil
	}
	lastMigrationID := ""
	if id, err := s.localDB.GetLatestAppliedRemoteMigrationID(); err == nil {
		lastMigrationID = id
	}
	return mariaDB.Exec(`
		INSERT INTO qt_client_schema_state (username, last_applied_migration_id, app_version, last_checked_at, updated_at)
		VALUES (?, ?, ?, ?, ?)
		ON DUPLICATE KEY UPDATE
			last_applied_migration_id = VALUES(last_applied_migration_id),
			app_version = VALUES(app_version),
			last_checked_at = VALUES(last_checked_at),
			updated_at = VALUES(updated_at)
	`, username, lastMigrationID, appmeta.Version(), checkedAt, checkedAt).Error
}

func normalizeVersionParts(v string) []int {
	trimmed := strings.TrimSpace(v)
	trimmed = strings.TrimPrefix(trimmed, "v")
	chunks := strings.Split(trimmed, ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		if clean == "" {
			parts = append(parts, 0)
			continue
		}
		n := 0
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i]
				break
			}
		}
		if clean != "" {
			if parsed, err := strconv.Atoi(clean); err == nil {
				n = parsed
			}
		}
		parts = append(parts, n)
	}
	return parts
}
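compareVersions and normalizeVersionParts together implement a tolerant numeric version compare: a leading `v` is stripped, each dot-chunk keeps only its leading digits (so `10-rc1` counts as 10), and missing parts compare as zero. A condensed standalone sketch with the same semantics:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeVersionParts: strip a leading "v", split on ".", keep only the
// leading digits of each chunk ("0-rc1" -> 0); empty chunks become 0.
func normalizeVersionParts(v string) []int {
	chunks := strings.Split(strings.TrimPrefix(strings.TrimSpace(v), "v"), ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i]
				break
			}
		}
		n := 0
		if parsed, err := strconv.Atoi(clean); err == nil {
			n = parsed
		}
		parts = append(parts, n)
	}
	return parts
}

// compareVersions pads the shorter version with zeros, so "1.3" == "1.3.0".
func compareVersions(left, right string) int {
	l, r := normalizeVersionParts(left), normalizeVersionParts(right)
	n := len(l)
	if len(r) > n {
		n = len(r)
	}
	for i := 0; i < n; i++ {
		lv, rv := 0, 0
		if i < len(l) {
			lv = l[i]
		}
		if i < len(r) {
			rv = r[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.9", "1.10"))   // -1: numeric, not lexicographic
	fmt.Println(compareVersions("1.3", "1.3.0"))   // 0: missing parts count as zero
	fmt.Println(compareVersions("2.0.0-rc1", "2")) // 0: trailing "-rc1" is dropped
}
```

Numeric comparison matters for the readiness guard: a lexicographic compare would wrongly rank "1.9" above "1.10".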

func toReadinessFromState(state *localdb.LocalSyncGuardState) *SyncReadiness {
	if state == nil {
		return nil
	}
	blocked := state.Status == ReadinessBlocked
	return &SyncReadiness{
		Status:                state.Status,
		Blocked:               blocked,
		ReasonCode:            state.ReasonCode,
		ReasonText:            state.ReasonText,
		RequiredMinAppVersion: state.RequiredMinAppVersion,
		LastCheckedAt:         state.LastCheckedAt,
	}
}
@@ -200,6 +200,7 @@ func (s *Service) ImportProjectsToLocal() (*ProjectImportResult, error) {
 		}
 	}
 	existing.OwnerUsername = project.OwnerUsername
+	existing.Code = project.Code
 	existing.Name = project.Name
 	existing.TrackerURL = project.TrackerURL
 	existing.IsActive = project.IsActive
@@ -322,6 +323,9 @@ func (s *Service) NeedSync() (bool, error) {
 // SyncPricelists synchronizes all active pricelists from server to local SQLite
 func (s *Service) SyncPricelists() (int, error) {
 	slog.Info("starting pricelist sync")
+	if _, err := s.EnsureReadinessForSync(); err != nil {
+		return 0, err
+	}
+
 	// Get database connection
 	mariaDB, err := s.getDB()
@@ -337,19 +341,16 @@ func (s *Service) SyncPricelists() (int, error) {
 	if err != nil {
 		return 0, fmt.Errorf("getting active server pricelists: %w", err)
 	}
+	serverPricelistIDs := make([]uint, 0, len(serverPricelists))
+	for i := range serverPricelists {
+		serverPricelistIDs = append(serverPricelistIDs, serverPricelists[i].ID)
+	}
+
 	synced := 0
-	var latestEstimateLocalID uint
-	var latestEstimateCreatedAt time.Time
 	for _, pl := range serverPricelists {
 		// Check if pricelist already exists locally
 		existing, _ := s.localDB.GetLocalPricelistByServerID(pl.ID)
 		if existing != nil {
-			// Track latest estimate pricelist by created_at for component refresh.
-			if pl.Source == string(models.PricelistSourceEstimate) && (latestEstimateCreatedAt.IsZero() || pl.CreatedAt.After(latestEstimateCreatedAt)) {
-				latestEstimateCreatedAt = pl.CreatedAt
-				latestEstimateLocalID = existing.ID
-			}
 			continue
 		}
|
|||||||
slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
|
slog.Debug("synced pricelist with items", "version", pl.Version, "items", itemCount)
|
||||||
}
|
}
|
||||||
|
|
||||||
if pl.Source == string(models.PricelistSourceEstimate) && (latestEstimateCreatedAt.IsZero() || pl.CreatedAt.After(latestEstimateCreatedAt)) {
|
|
||||||
latestEstimateCreatedAt = pl.CreatedAt
|
|
||||||
latestEstimateLocalID = localPL.ID
|
|
||||||
}
|
|
||||||
synced++
|
synced++
|
||||||
}
|
}
|
||||||
|
|
||||||
// Update component prices from latest estimate pricelist only.
|
removed, err := s.localDB.DeleteUnusedLocalPricelistsMissingOnServer(serverPricelistIDs)
|
||||||
if latestEstimateLocalID > 0 {
|
if err != nil {
|
||||||
updated, err := s.localDB.UpdateComponentPricesFromPricelist(latestEstimateLocalID)
|
slog.Warn("failed to cleanup stale local pricelists", "error", err)
|
||||||
if err != nil {
|
} else if removed > 0 {
|
||||||
slog.Warn("failed to update component prices from pricelist", "error", err)
|
slog.Info("deleted stale local pricelists", "deleted", removed)
|
||||||
} else {
|
|
||||||
slog.Info("updated component prices from latest pricelist", "updated", updated)
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Backfill lot_category for used pricelists (older local caches may miss the column values).
|
||||||
|
s.backfillUsedPricelistItemCategories(pricelistRepo, serverPricelistIDs)
|
||||||
|
|
||||||
// Update last sync time
|
// Update last sync time
|
||||||
s.localDB.SetLastSyncTime(time.Now())
|
s.localDB.SetLastSyncTime(time.Now())
|
||||||
s.RecordSyncHeartbeat()
|
s.RecordSyncHeartbeat()
|
||||||
@@ -403,6 +400,83 @@ func (s *Service) SyncPricelists() (int, error) {
 	return synced, nil
 }
+
+func (s *Service) backfillUsedPricelistItemCategories(pricelistRepo *repository.PricelistRepository, activeServerPricelistIDs []uint) {
+	if s.localDB == nil || pricelistRepo == nil {
+		return
+	}
+
+	activeSet := make(map[uint]struct{}, len(activeServerPricelistIDs))
+	for _, id := range activeServerPricelistIDs {
+		activeSet[id] = struct{}{}
+	}
+
+	type row struct {
+		ID uint `gorm:"column:id"`
+	}
+	var usedRows []row
+	if err := s.localDB.DB().Raw(`
+		SELECT DISTINCT pricelist_id AS id
+		FROM local_configurations
+		WHERE is_active = 1 AND pricelist_id IS NOT NULL
+		UNION
+		SELECT DISTINCT warehouse_pricelist_id AS id
+		FROM local_configurations
+		WHERE is_active = 1 AND warehouse_pricelist_id IS NOT NULL
+		UNION
+		SELECT DISTINCT competitor_pricelist_id AS id
+		FROM local_configurations
+		WHERE is_active = 1 AND competitor_pricelist_id IS NOT NULL
+	`).Scan(&usedRows).Error; err != nil {
+		slog.Warn("pricelist category backfill: failed to list used pricelists", "error", err)
+		return
+	}
+
+	for _, r := range usedRows {
+		serverID := r.ID
+		if serverID == 0 {
+			continue
+		}
+		if _, ok := activeSet[serverID]; !ok {
+			// Not present on server (or not active) - cannot backfill from remote.
+			continue
+		}
+
+		localPL, err := s.localDB.GetLocalPricelistByServerID(serverID)
+		if err != nil || localPL == nil {
+			continue
+		}
+
+		if s.localDB.CountLocalPricelistItems(localPL.ID) == 0 {
+			continue
+		}
+
+		missing, err := s.localDB.CountLocalPricelistItemsWithEmptyCategory(localPL.ID)
+		if err != nil {
+			slog.Warn("pricelist category backfill: failed to check local items", "server_id", serverID, "error", err)
+			continue
+		}
+		if missing == 0 {
+			continue
+		}
+
+		serverItems, _, err := pricelistRepo.GetItems(serverID, 0, 10000, "")
+		if err != nil {
+			slog.Warn("pricelist category backfill: failed to load server items", "server_id", serverID, "error", err)
+			continue
+		}
+		localItems := make([]localdb.LocalPricelistItem, len(serverItems))
+		for i := range serverItems {
+			localItems[i] = *localdb.PricelistItemToLocal(&serverItems[i], localPL.ID)
+		}
+
+		if err := s.localDB.ReplaceLocalPricelistItems(localPL.ID, localItems); err != nil {
+			slog.Warn("pricelist category backfill: failed to replace local items", "server_id", serverID, "error", err)
+			continue
+		}
+		slog.Info("pricelist category backfill: refreshed local items", "server_id", serverID, "items", len(localItems))
+	}
+}
+
 // RecordSyncHeartbeat updates shared sync heartbeat for current DB user.
 // Only users with write rights are expected to be able to update this table.
 func (s *Service) RecordSyncHeartbeat() {
@@ -539,24 +613,34 @@ func (s *Service) listConnectedDBUsers(mariaDB *gorm.DB) (map[string]struct{}, e
 }
 
 func ensureUserSyncStatusTable(db *gorm.DB) error {
-	if err := db.Exec(`
-CREATE TABLE IF NOT EXISTS qt_pricelist_sync_status (
-	username VARCHAR(100) NOT NULL,
-	last_sync_at DATETIME NOT NULL,
-	updated_at DATETIME NOT NULL,
-	app_version VARCHAR(64) NULL,
-	PRIMARY KEY (username),
-	INDEX idx_qt_pricelist_sync_status_last_sync (last_sync_at)
-)
-`).Error; err != nil {
-		return err
+	// Check if table exists instead of trying to create (avoids permission issues)
+	if !tableExists(db, "qt_pricelist_sync_status") {
+		if err := db.Exec(`
+CREATE TABLE IF NOT EXISTS qt_pricelist_sync_status (
+	username VARCHAR(100) NOT NULL,
+	last_sync_at DATETIME NOT NULL,
+	updated_at DATETIME NOT NULL,
+	app_version VARCHAR(64) NULL,
+	PRIMARY KEY (username),
+	INDEX idx_qt_pricelist_sync_status_last_sync (last_sync_at)
+)
+`).Error; err != nil {
+			return fmt.Errorf("create qt_pricelist_sync_status table: %w", err)
+		}
 	}
 
 	// Backward compatibility for environments where table was created without app_version.
-	return db.Exec(`
-ALTER TABLE qt_pricelist_sync_status
-ADD COLUMN IF NOT EXISTS app_version VARCHAR(64) NULL
-`).Error
+	// Only try to add column if table exists.
+	if tableExists(db, "qt_pricelist_sync_status") {
+		if err := db.Exec(`
+ALTER TABLE qt_pricelist_sync_status
+ADD COLUMN IF NOT EXISTS app_version VARCHAR(64) NULL
+`).Error; err != nil {
+			// Log but don't fail if alter fails (column might already exist)
+			slog.Debug("failed to add app_version column", "error", err)
+		}
+	}
+	return nil
 }
 
 // SyncPricelistItems synchronizes items for a specific pricelist
@@ -592,11 +676,7 @@ func (s *Service) SyncPricelistItems(localPricelistID uint) (int, error) {
 	// Convert and save locally
 	localItems := make([]localdb.LocalPricelistItem, len(serverItems))
 	for i, item := range serverItems {
-		localItems[i] = localdb.LocalPricelistItem{
-			PricelistID: localPricelistID,
-			LotName:     item.LotName,
-			Price:       item.Price,
-		}
+		localItems[i] = *localdb.PricelistItemToLocal(&item, localPricelistID)
 	}
 
 	if err := s.localDB.SaveLocalPricelistItems(localItems); err != nil {
@@ -672,6 +752,10 @@ func (s *Service) SyncPricelistsIfNeeded() error {
 
 // PushPendingChanges pushes all pending changes to the server
 func (s *Service) PushPendingChanges() (int, error) {
+	if _, err := s.EnsureReadinessForSync(); err != nil {
+		return 0, err
+	}
+
 	removed, err := s.localDB.PurgeOrphanConfigurationPendingChanges()
 	if err != nil {
 		slog.Warn("failed to purge orphan configuration pending changes", "error", err)
@@ -765,6 +849,12 @@ func (s *Service) pushProjectChange(change *localdb.PendingChange) error {
 	projectRepo := repository.NewProjectRepository(mariaDB)
 	project := payload.Snapshot
 	project.UUID = payload.ProjectUUID
+	if strings.TrimSpace(project.Code) == "" {
+		project.Code = strings.TrimSpace(derefString(project.Name))
+		if project.Code == "" {
+			project.Code = project.UUID
+		}
+	}
 	if err := projectRepo.UpsertByUUID(&project); err != nil {
 		return fmt.Errorf("upsert project on server: %w", err)
@@ -785,6 +875,17 @@ func (s *Service) pushProjectChange(change *localdb.PendingChange) error {
 	return nil
 }
 
+func derefString(value *string) string {
+	if value == nil {
+		return ""
+	}
+	return *value
+}
+
+func ptrString(value string) *string {
+	return &value
+}
+
 func decodeProjectChangePayload(change *localdb.PendingChange) (ProjectChangePayload, error) {
 	var payload ProjectChangePayload
 	if err := json.Unmarshal([]byte(change.Payload), &payload); err == nil && payload.ProjectUUID != "" {
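pushProjectChange above defaults an empty project Code to the trimmed Name and finally to the UUID, using the nil-safe derefString helper. A small standalone sketch of that fallback chain (defaultProjectCode is an illustrative name; the real code inlines this logic):

```go
package main

import (
	"fmt"
	"strings"
)

// Nil-safe helpers mirroring derefString/ptrString above, useful when
// optional columns are modeled as *string (like the project Name field).
func derefString(value *string) string {
	if value == nil {
		return ""
	}
	return *value
}

func ptrString(value string) *string { return &value }

// defaultProjectCode reproduces the fallback used in pushProjectChange:
// an empty Code falls back to the (optional) Name, then to the UUID.
func defaultProjectCode(code string, name *string, uuid string) string {
	code = strings.TrimSpace(code)
	if code != "" {
		return code
	}
	if n := strings.TrimSpace(derefString(name)); n != "" {
		return n
	}
	return uuid
}

func main() {
	fmt.Println(defaultProjectCode("PRJ-1", nil, "uuid-1"))               // PRJ-1
	fmt.Println(defaultProjectCode("  ", ptrString("Project A"), "uuid")) // Project A
	fmt.Println(defaultProjectCode("", nil, "uuid-2"))                    // uuid-2
}
```

The fallback matters because the server now enforces a unique (code, variant) index: pushing a legacy project with an empty code would otherwise collide.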
@@ -1055,7 +1156,8 @@ func (s *Service) ensureConfigurationProject(mariaDB *gorm.DB, cfg *models.Confi
 	systemProject = &models.Project{
 		UUID:          uuid.NewString(),
 		OwnerUsername: "",
-		Name:          "Без проекта",
+		Code:          "Без проекта",
+		Name:          ptrString("Без проекта"),
 		IsActive:      true,
 		IsSystem:      true,
 	}
@@ -1219,6 +1321,21 @@ func (s *Service) loadCurrentConfigurationState(configurationUUID string) (model
 		}
 	}
 
+	if currentVersionNo == 0 {
+		if err := s.repairMissingConfigurationVersion(localCfg); err != nil {
+			return models.Configuration{}, "", 0, fmt.Errorf("repair missing configuration version: %w", err)
+		}
+		var latest localdb.LocalConfigurationVersion
+		err = s.localDB.DB().
+			Where("configuration_uuid = ?", configurationUUID).
+			Order("version_no DESC").
+			First(&latest).Error
+		if err == nil {
+			currentVersionNo = latest.VersionNo
+			currentVersionID = latest.ID
+		}
+	}
+
 	if currentVersionNo == 0 {
 		return models.Configuration{}, "", 0, fmt.Errorf("no local configuration version found for %s", configurationUUID)
 	}
@@ -1226,6 +1343,64 @@ func (s *Service) loadCurrentConfigurationState(configurationUUID string) (model
 	return cfg, currentVersionID, currentVersionNo, nil
 }
 
+func (s *Service) repairMissingConfigurationVersion(localCfg *localdb.LocalConfiguration) error {
+	if localCfg == nil {
+		return fmt.Errorf("local configuration is nil")
+	}
+
+	return s.localDB.DB().Transaction(func(tx *gorm.DB) error {
+		var cfg localdb.LocalConfiguration
+		if err := tx.Where("uuid = ?", localCfg.UUID).First(&cfg).Error; err != nil {
+			return fmt.Errorf("load local configuration: %w", err)
+		}
+
+		// If versions exist, just make sure current_version_id is set.
+		var latest localdb.LocalConfigurationVersion
+		if err := tx.Where("configuration_uuid = ?", cfg.UUID).
+			Order("version_no DESC").
+			First(&latest).Error; err == nil {
+			if cfg.CurrentVersionID == nil || *cfg.CurrentVersionID == "" {
+				if err := tx.Model(&localdb.LocalConfiguration{}).
+					Where("uuid = ?", cfg.UUID).
+					Update("current_version_id", latest.ID).Error; err != nil {
+					return fmt.Errorf("set current version id: %w", err)
+				}
+			}
+			return nil
+		} else if !errors.Is(err, gorm.ErrRecordNotFound) {
+			return fmt.Errorf("load latest version: %w", err)
+		}
+
+		snapshot, err := localdb.BuildConfigurationSnapshot(&cfg)
+		if err != nil {
+			return fmt.Errorf("build configuration snapshot: %w", err)
+		}
+
+		note := "Auto-repaired missing local version"
+		version := localdb.LocalConfigurationVersion{
+			ID:                uuid.NewString(),
+			ConfigurationUUID: cfg.UUID,
+			VersionNo:         1,
+			Data:              snapshot,
+			ChangeNote:        &note,
+			AppVersion:        appmeta.Version(),
+			CreatedAt:         time.Now(),
+		}
+
+		if err := tx.Create(&version).Error; err != nil {
+			return fmt.Errorf("create initial version: %w", err)
+		}
+		if err := tx.Model(&localdb.LocalConfiguration{}).
+			Where("uuid = ?", cfg.UUID).
+			Update("current_version_id", version.ID).Error; err != nil {
+			return fmt.Errorf("set current version id: %w", err)
+		}
+
+		slog.Warn("repaired missing local configuration version", "uuid", cfg.UUID, "version_no", version.VersionNo)
+		return nil
+	})
+}
+
 // NOTE: prepared for future conflict resolution:
 // when server starts storing version metadata, we can compare payload.CurrentVersionNo
 // against remote version and branch into custom strategies. For now use last-write-wins.
@@ -0,0 +1,107 @@
+package sync_test
+
+import (
+	"testing"
+	"time"
+
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
+)
+
+func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+
+	if err := serverDB.AutoMigrate(
+		&models.Pricelist{},
+		&models.PricelistItem{},
+		&models.Lot{},
+		&models.LotPartnumber{},
+		&models.StockLog{},
+	); err != nil {
+		t.Fatalf("migrate server tables: %v", err)
+	}
+
+	serverPL := models.Pricelist{
+		Source:       "estimate",
+		Version:      "2026-02-11-001",
+		Notification: "server",
+		CreatedBy:    "tester",
+		IsActive:     true,
+		CreatedAt:    time.Now().Add(-1 * time.Hour),
+	}
+	if err := serverDB.Create(&serverPL).Error; err != nil {
+		t.Fatalf("create server pricelist: %v", err)
+	}
+	if err := serverDB.Create(&models.PricelistItem{
+		PricelistID:  serverPL.ID,
+		LotName:      "CPU_A",
+		LotCategory:  "CPU",
+		Price:        10,
+		PriceMethod:  "",
+		MetaPrices:   "",
+		ManualPrice:  nil,
+		AvailableQty: nil,
+	}).Error; err != nil {
+		t.Fatalf("create server pricelist item: %v", err)
+	}
+
+	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
+		ServerID:  serverPL.ID,
+		Source:    serverPL.Source,
+		Version:   serverPL.Version,
+		Name:      serverPL.Notification,
+		CreatedAt: serverPL.CreatedAt,
+		SyncedAt:  time.Now(),
+		IsUsed:    false,
+	}); err != nil {
+		t.Fatalf("seed local pricelist: %v", err)
+	}
+	localPL, err := local.GetLocalPricelistByServerID(serverPL.ID)
+	if err != nil {
+		t.Fatalf("get local pricelist: %v", err)
+	}
+
+	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
+		{
+			PricelistID: localPL.ID,
+			LotName:     "CPU_A",
+			LotCategory: "",
+			Price:       10,
+		},
+	}); err != nil {
+		t.Fatalf("seed local pricelist items: %v", err)
+	}
+
+	if err := local.SaveConfiguration(&localdb.LocalConfiguration{
+		UUID:             "cfg-1",
+		OriginalUsername: "tester",
+		Name:             "cfg",
+		Items:            localdb.LocalConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 10}},
+		IsActive:         true,
+		PricelistID:      &serverPL.ID,
+		SyncStatus:       "synced",
+		CreatedAt:        time.Now().Add(-30 * time.Minute),
+		UpdatedAt:        time.Now().Add(-30 * time.Minute),
+	}); err != nil {
+		t.Fatalf("seed local configuration with pricelist ref: %v", err)
+	}
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	if _, err := svc.SyncPricelists(); err != nil {
+		t.Fatalf("sync pricelists: %v", err)
+	}
+
+	items, err := local.GetLocalPricelistItems(localPL.ID)
+	if err != nil {
+		t.Fatalf("load local items: %v", err)
+	}
+	if len(items) != 1 {
+		t.Fatalf("expected 1 local item, got %d", len(items))
+	}
+	if items[0].LotCategory != "CPU" {
+		t.Fatalf("expected lot_category backfilled to CPU, got %q", items[0].LotCategory)
+	}
+}
85 internal/services/sync/service_pricelist_cleanup_test.go Normal file
@@ -0,0 +1,85 @@
+package sync_test
+
+import (
+	"testing"
+	"time"
+
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
+)
+
+func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
+	local := newLocalDBForSyncTest(t)
+	serverDB := newServerDBForSyncTest(t)
+	if err := serverDB.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}); err != nil {
+		t.Fatalf("migrate server pricelist tables: %v", err)
+	}
+
+	serverPL := models.Pricelist{
+		Source:       "estimate",
+		Version:      "2026-01-01-001",
+		Notification: "server",
+		CreatedBy:    "tester",
+		IsActive:     true,
+		CreatedAt:    time.Now().Add(-1 * time.Hour),
+	}
+	if err := serverDB.Create(&serverPL).Error; err != nil {
+		t.Fatalf("create server pricelist: %v", err)
+	}
+	if err := serverDB.Create(&models.PricelistItem{PricelistID: serverPL.ID, LotName: "CPU_A", Price: 10}).Error; err != nil {
+		t.Fatalf("create server pricelist item: %v", err)
+	}
+
+	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
+		ServerID:  9991,
+		Source:    "estimate",
+		Version:   "old-unused",
+		Name:      "old-unused",
+		CreatedAt: time.Now().Add(-2 * time.Hour),
+		SyncedAt:  time.Now().Add(-2 * time.Hour),
+		IsUsed:    false,
+	}); err != nil {
+		t.Fatalf("seed local missing pricelist: %v", err)
+	}
+	missingUsed := &localdb.LocalPricelist{
+		ServerID:  9992,
+		Source:    "estimate",
+		Version:   "old-used",
+		Name:      "old-used",
+		CreatedAt: time.Now().Add(-2 * time.Hour),
+		SyncedAt:  time.Now().Add(-2 * time.Hour),
+		IsUsed:    false,
+	}
+	if err := local.SaveLocalPricelist(missingUsed); err != nil {
+		t.Fatalf("seed local referenced pricelist: %v", err)
+	}
+	if err := local.SaveConfiguration(&localdb.LocalConfiguration{
+		UUID:             "cfg-1",
+		OriginalUsername: "tester",
+		Name:             "cfg",
+		Items:            localdb.LocalConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1}},
+		IsActive:         true,
+		PricelistID:      &missingUsed.ServerID,
+		SyncStatus:       "synced",
+		CreatedAt:        time.Now().Add(-30 * time.Minute),
+		UpdatedAt:        time.Now().Add(-30 * time.Minute),
+	}); err != nil {
+		t.Fatalf("seed local configuration with pricelist ref: %v", err)
+	}
+
+	svc := syncsvc.NewServiceWithDB(serverDB, local)
+	if _, err := svc.SyncPricelists(); err != nil {
+		t.Fatalf("sync pricelists: %v", err)
+	}
+
+	if _, err := local.GetLocalPricelistByServerID(9991); err == nil {
+		t.Fatalf("expected unused missing local pricelist to be deleted")
+	}
+	if _, err := local.GetLocalPricelistByServerID(9992); err != nil {
+		t.Fatalf("expected local pricelist referenced by active config to stay: %v", err)
+	}
+	if _, err := local.GetLocalPricelistByServerID(serverPL.ID); err != nil {
+		t.Fatalf("expected server pricelist to be synced locally: %v", err)
+	}
+}
@@ -23,7 +23,7 @@ func TestPushPendingChangesProjectsBeforeConfigurations(t *testing.T) {
 	projectService := services.NewProjectService(local)
 	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
 
-	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: "Project A"})
+	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: ptrString("Project A"), Code: "PRJ-A"})
 	if err != nil {
 		t.Fatalf("create project: %v", err)
 	}
@@ -74,11 +74,11 @@ func TestPushPendingChangesProjectCreateThenUpdateBeforeFirstPush(t *testing.T)
 	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
 	pushService := syncsvc.NewServiceWithDB(serverDB, local)
 
-	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: "Project v1"})
+	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: ptrString("Project v1"), Code: "PRJ-V1"})
 	if err != nil {
 		t.Fatalf("create project: %v", err)
 	}
-	if _, err := projectService.Update(project.UUID, "tester", &services.UpdateProjectRequest{Name: "Project v2"}); err != nil {
+	if _, err := projectService.Update(project.UUID, "tester", &services.UpdateProjectRequest{Name: ptrString("Project v2")}); err != nil {
 		t.Fatalf("update project: %v", err)
 	}
|
|||||||
if err := serverDB.Where("uuid = ?", project.UUID).First(&serverProject).Error; err != nil {
|
if err := serverDB.Where("uuid = ?", project.UUID).First(&serverProject).Error; err != nil {
|
||||||
t.Fatalf("project not pushed to server: %v", err)
|
t.Fatalf("project not pushed to server: %v", err)
|
||||||
}
|
}
|
||||||
if serverProject.Name != "Project v2" {
|
if serverProject.Name == nil || *serverProject.Name != "Project v2" {
|
||||||
t.Fatalf("expected latest project name, got %q", serverProject.Name)
|
t.Fatalf("expected latest project name, got %v", serverProject.Name)
|
||||||
}
|
}
|
||||||
|
|
||||||
var serverCfg models.Configuration
|
var serverCfg models.Configuration
|
||||||
@@ -324,6 +324,8 @@ CREATE TABLE qt_projects (
|
|||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
uuid TEXT NOT NULL UNIQUE,
|
uuid TEXT NOT NULL UNIQUE,
|
||||||
owner_username TEXT NOT NULL,
|
owner_username TEXT NOT NULL,
|
||||||
|
code TEXT NOT NULL,
|
||||||
|
variant TEXT NOT NULL DEFAULT '',
|
||||||
name TEXT NOT NULL,
|
name TEXT NOT NULL,
|
||||||
tracker_url TEXT NULL,
|
tracker_url TEXT NULL,
|
||||||
is_active INTEGER NOT NULL DEFAULT 1,
|
is_active INTEGER NOT NULL DEFAULT 1,
|
||||||
@@ -333,6 +335,9 @@ CREATE TABLE qt_projects (
|
|||||||
);`).Error; err != nil {
|
);`).Error; err != nil {
|
||||||
t.Fatalf("create qt_projects: %v", err)
|
t.Fatalf("create qt_projects: %v", err)
|
||||||
}
|
}
|
||||||
|
if err := db.Exec(`CREATE UNIQUE INDEX idx_qt_projects_code_variant ON qt_projects(code, variant);`).Error; err != nil {
|
||||||
|
t.Fatalf("create qt_projects index: %v", err)
|
||||||
|
}
|
||||||
if err := db.Exec(`
|
if err := db.Exec(`
|
||||||
CREATE TABLE qt_configurations (
|
CREATE TABLE qt_configurations (
|
||||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||||
@@ -348,7 +353,14 @@ CREATE TABLE qt_configurations (
|
|||||||
notes TEXT NULL,
|
notes TEXT NULL,
|
||||||
is_template INTEGER NOT NULL DEFAULT 0,
|
is_template INTEGER NOT NULL DEFAULT 0,
|
||||||
server_count INTEGER NOT NULL DEFAULT 1,
|
server_count INTEGER NOT NULL DEFAULT 1,
|
||||||
|
server_model TEXT NULL,
|
||||||
|
support_code TEXT NULL,
|
||||||
|
article TEXT NULL,
|
||||||
pricelist_id INTEGER NULL,
|
pricelist_id INTEGER NULL,
|
||||||
|
warehouse_pricelist_id INTEGER NULL,
|
||||||
|
competitor_pricelist_id INTEGER NULL,
|
||||||
|
disable_price_refresh INTEGER NOT NULL DEFAULT 0,
|
||||||
|
only_in_stock INTEGER NOT NULL DEFAULT 0,
|
||||||
price_updated_at DATETIME NULL,
|
price_updated_at DATETIME NULL,
|
||||||
created_at DATETIME
|
created_at DATETIME
|
||||||
);`).Error; err != nil {
|
);`).Error; err != nil {
|
||||||
@@ -357,6 +369,10 @@ CREATE TABLE qt_configurations (
|
|||||||
return db
|
return db
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func ptrString(value string) *string {
|
||||||
|
return &value
|
||||||
|
}
|
||||||
|
|
||||||
func getCurrentVersionInfo(t *testing.T, local *localdb.LocalDB, configurationUUID string, currentVersionID *string) (int, string) {
|
func getCurrentVersionInfo(t *testing.T, local *localdb.LocalDB, configurationUUID string, currentVersionID *string) (int, string) {
|
||||||
t.Helper()
|
t.Helper()
|
||||||
if currentVersionID == nil || *currentVersionID == "" {
|
if currentVersionID == nil || *currentVersionID == "" {
|
||||||
|
|||||||
@@ -71,6 +71,15 @@ func (w *Worker) runSync() {
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if readiness, err := w.service.EnsureReadinessForSync(); err != nil {
|
||||||
|
w.logger.Warn("background sync: blocked by readiness guard",
|
||||||
|
"error", err,
|
||||||
|
"reason_code", readiness.ReasonCode,
|
||||||
|
"reason_text", readiness.ReasonText,
|
||||||
|
)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
// Push pending changes first
|
// Push pending changes first
|
||||||
pushed, err := w.service.PushPendingChanges()
|
pushed, err := w.service.PushPendingChanges()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
|||||||
413
man/backup.md
Normal file
@@ -0,0 +1,413 @@
# AI Implementation Guide: Go Scheduled Backup Rotation (ZIP)

This document is written **for an AI** to replicate the same backup approach in another Go project. It contains the exact requirements, design notes, and full module listings you can copy.

## Requirements (Behavioral)
- Run backups on a daily schedule at a configured local time (default `00:00`).
- At startup, if there is no backup for the current period, create it immediately.
- Backup content must include:
  - Local SQLite DB file (e.g., `qfs.db`).
  - SQLite sidecars (`-wal`, `-shm`) if present.
  - Runtime config file (e.g., `config.yaml`) if present.
- Backups must be ZIP archives named:
  - `qfs-backp-YYYY-MM-DD.zip`
- Retention policy:
  - 7 daily, 4 weekly, 12 monthly, 10 yearly archives.
- Keep backups in period-specific directories:
  - `<backup root>/daily`, `/weekly`, `/monthly`, `/yearly`.
- Prevent duplicate backups for the same period via a marker file.
- Log success with the archive path, and log errors on failure.

## Configuration & Env
- Config key: `backup.time` with format `HH:MM` in local time. Default: `00:00`.
- Env overrides:
  - `QFS_BACKUP_DIR` — backup root directory.
  - `QFS_BACKUP_DISABLE` — disable backups (`1/true/yes`).

## Integration Steps (Minimal)
1. Add `BackupConfig` to your config struct.
2. Add a scheduler goroutine that:
   - On startup: runs backup immediately if needed.
   - Then sleeps until next configured time and runs daily.
3. Add the backup module (below).
4. Wire logs for success/failure.

---

# Full Go Listings

## 1) Backup Module (Drop-in)
Create: `internal/appstate/backup.go`

```go
package appstate

import (
	"archive/zip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

type backupPeriod struct {
	name      string
	retention int
	key       func(time.Time) string
	date      func(time.Time) string
}

var backupPeriods = []backupPeriod{
	{
		name:      "daily",
		retention: 7,
		key: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "weekly",
		retention: 4,
		key: func(t time.Time) string {
			y, w := t.ISOWeek()
			return fmt.Sprintf("%04d-W%02d", y, w)
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "monthly",
		retention: 12,
		key: func(t time.Time) string {
			return t.Format("2006-01")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "yearly",
		retention: 10,
		key: func(t time.Time) string {
			return t.Format("2006")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
}

const (
	envBackupDisable = "QFS_BACKUP_DISABLE"
	envBackupDir     = "QFS_BACKUP_DIR"
)

var backupNow = time.Now

// EnsureRotatingLocalBackup creates or refreshes daily/weekly/monthly/yearly backups
// for the local database and config. It keeps a limited number per period.
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
	if isBackupDisabled() {
		return nil, nil
	}
	if dbPath == "" {
		return nil, nil
	}

	if _, err := os.Stat(dbPath); err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("stat db: %w", err)
	}

	root := resolveBackupRoot(dbPath)
	now := backupNow()

	created := make([]string, 0)
	for _, period := range backupPeriods {
		newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
		if err != nil {
			return created, err
		}
		if len(newFiles) > 0 {
			created = append(created, newFiles...)
		}
	}

	return created, nil
}

func resolveBackupRoot(dbPath string) string {
	if fromEnv := strings.TrimSpace(os.Getenv(envBackupDir)); fromEnv != "" {
		return filepath.Clean(fromEnv)
	}
	return filepath.Join(filepath.Dir(dbPath), "backups")
}

func isBackupDisabled() bool {
	val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
	return val == "1" || val == "true" || val == "yes"
}

func ensurePeriodBackup(root string, period backupPeriod, now time.Time, dbPath, configPath string) ([]string, error) {
	key := period.key(now)
	periodDir := filepath.Join(root, period.name)
	if err := os.MkdirAll(periodDir, 0755); err != nil {
		return nil, fmt.Errorf("create %s backup dir: %w", period.name, err)
	}

	if hasBackupForKey(periodDir, key) {
		return nil, nil
	}

	archiveName := fmt.Sprintf("qfs-backp-%s.zip", period.date(now))
	archivePath := filepath.Join(periodDir, archiveName)

	if err := createBackupArchive(archivePath, dbPath, configPath); err != nil {
		return nil, fmt.Errorf("create %s backup archive: %w", period.name, err)
	}

	if err := writePeriodMarker(periodDir, key); err != nil {
		return []string{archivePath}, err
	}

	if err := pruneOldBackups(periodDir, period.retention); err != nil {
		return []string{archivePath}, err
	}

	return []string{archivePath}, nil
}

func hasBackupForKey(periodDir, key string) bool {
	marker := periodMarker{Key: ""}
	data, err := os.ReadFile(periodMarkerPath(periodDir))
	if err != nil {
		return false
	}
	if err := json.Unmarshal(data, &marker); err != nil {
		return false
	}
	return marker.Key == key
}

type periodMarker struct {
	Key string `json:"key"`
}

func periodMarkerPath(periodDir string) string {
	return filepath.Join(periodDir, ".period.json")
}

func writePeriodMarker(periodDir, key string) error {
	data, err := json.MarshalIndent(periodMarker{Key: key}, "", " ")
	if err != nil {
		return err
	}
	return os.WriteFile(periodMarkerPath(periodDir), data, 0644)
}

func pruneOldBackups(periodDir string, keep int) error {
	entries, err := os.ReadDir(periodDir)
	if err != nil {
		return fmt.Errorf("read backups dir: %w", err)
	}

	files := make([]os.DirEntry, 0, len(entries))
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		if strings.HasSuffix(entry.Name(), ".zip") {
			files = append(files, entry)
		}
	}

	if len(files) <= keep {
		return nil
	}

	sort.Slice(files, func(i, j int) bool {
		infoI, errI := files[i].Info()
		infoJ, errJ := files[j].Info()
		if errI != nil || errJ != nil {
			return files[i].Name() < files[j].Name()
		}
		return infoI.ModTime().Before(infoJ.ModTime())
	})

	for i := 0; i < len(files)-keep; i++ {
		path := filepath.Join(periodDir, files[i].Name())
		if err := os.Remove(path); err != nil {
			return fmt.Errorf("remove old backup %s: %w", path, err)
		}
	}

	return nil
}

func createBackupArchive(destPath, dbPath, configPath string) error {
	file, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer file.Close()

	zipWriter := zip.NewWriter(file)
	if err := addZipFile(zipWriter, dbPath); err != nil {
		_ = zipWriter.Close()
		return err
	}
	_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
	_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

	if strings.TrimSpace(configPath) != "" {
		_ = addZipOptionalFile(zipWriter, configPath)
	}

	if err := zipWriter.Close(); err != nil {
		return err
	}
	return file.Sync()
}

func addZipOptionalFile(writer *zip.Writer, path string) error {
	if _, err := os.Stat(path); err != nil {
		return nil
	}
	return addZipFile(writer, path)
}

func addZipFile(writer *zip.Writer, path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = filepath.Base(path)
	header.Method = zip.Deflate

	out, err := writer.CreateHeader(header)
	if err != nil {
		return err
	}

	_, err = io.Copy(out, in)
	return err
}
```

---

## 2) Scheduler Hook (Main)
Add this to your `main.go` (or equivalent). This schedules daily backups and logs success.

```go
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string) {
	if cfg == nil {
		return
	}

	hour, minute, err := parseBackupTime(cfg.Backup.Time)
	if err != nil {
		slog.Warn("invalid backup time; using 00:00", "value", cfg.Backup.Time, "error", err)
		hour = 0
		minute = 0
	}

	// Startup check: if no backup exists for current periods, create now.
	if created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath); backupErr != nil {
		slog.Error("local backup failed", "error", backupErr)
	} else if len(created) > 0 {
		for _, path := range created {
			slog.Info("local backup completed", "archive", path)
		}
	}

	for {
		next := nextBackupTime(time.Now(), hour, minute)
		timer := time.NewTimer(time.Until(next))

		select {
		case <-ctx.Done():
			timer.Stop()
			return
		case <-timer.C:
			start := time.Now()
			created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath)
			duration := time.Since(start)
			if backupErr != nil {
				slog.Error("local backup failed", "error", backupErr, "duration", duration)
			} else {
				for _, path := range created {
					slog.Info("local backup completed", "archive", path, "duration", duration)
				}
			}
		}
	}
}

func parseBackupTime(value string) (int, int, error) {
	if strings.TrimSpace(value) == "" {
		return 0, 0, fmt.Errorf("empty backup time")
	}
	parsed, err := time.Parse("15:04", value)
	if err != nil {
		return 0, 0, err
	}
	return parsed.Hour(), parsed.Minute(), nil
}

func nextBackupTime(now time.Time, hour, minute int) time.Time {
	location := now.Location()
	target := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, location)
	if !now.Before(target) {
		target = target.Add(24 * time.Hour)
	}
	return target
}
```

---

## 3) Config Struct (Minimal)
Add to config:

```go
type BackupConfig struct {
	Time string `yaml:"time"`
}
```

Default:
```go
if c.Backup.Time == "" {
	c.Backup.Time = "00:00"
}
```

---

## Notes for Replication
- Keep `backup.time` in local time. Do **not** parse with timezone offsets unless required.
- The `.period.json` marker is what prevents duplicate backups within the same period.
- The archive file name only contains the date. Uniqueness is ensured by per-period directories and the period marker.
- If you change naming or retention, update both the file naming and prune logic together.
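Put together, a minimal runtime configuration for this feature could look like the YAML fragment below. Only the `backup.time` key is defined by this guide; the surrounding file layout is an assumption:

```yaml
# config.yaml (sketch) — backup.time is read as HH:MM in the server's local time zone.
backup:
  time: "02:30"
```

Environment variables still win at runtime: `QFS_BACKUP_DISABLE=1` skips backups entirely, and `QFS_BACKUP_DIR=/mnt/backups` relocates the backup root.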
38
memory.md
Normal file
@@ -0,0 +1,38 @@
# Changes summary (2026-02-11)

Implemented a strict `lot_category` flow using `pricelist_items.lot_category` only (no parsing from `lot_name`), plus local caching and backfill:

1. Local DB schema + migrations
   - Added `lot_category` column to `local_pricelist_items` via the `LocalPricelistItem` model.
   - Added local migration `2026_02_11_local_pricelist_item_category` to add the column if missing and create indexes:
     - `idx_local_pricelist_items_pricelist_lot (pricelist_id, lot_name)`
     - `idx_local_pricelist_items_lot_category (lot_category)`

2. Server model/repository
   - Added `LotCategory` field to `models.PricelistItem`.
   - `PricelistRepository.GetItems` now sets `Category` from `LotCategory` (no parsing from `lot_name`).

3. Sync + local DB helpers
   - `SyncPricelistItems` now saves `lot_category` into the local cache via `PricelistItemToLocal`.
   - Added `LocalDB.CountLocalPricelistItemsWithEmptyCategory` and `LocalDB.ReplaceLocalPricelistItems`.
   - Added `LocalDB.GetLocalLotCategoriesByServerPricelistID` for strict category lookup.
   - Added a `SyncPricelists` backfill step: for used active pricelists with empty categories, force-refresh items from the server.

4. API handler
   - `GET /api/pricelists/:id/items` returns `category` from `local_pricelist_items.lot_category` (no parsing from `lot_name`).

5. Article category foundation
   - New package `internal/article`:
     - `ResolveLotCategoriesStrict` pulls categories from local pricelist items and errors on a missing category.
     - `GroupForLotCategory` maps only allowed codes (CPU/MEM/GPU/M2/SSD/HDD/EDSFF/HHHL/NIC/HCA/DPU/PSU/PS) to article groups; excludes `SFP`.
     - Error type `MissingCategoryForLotError` with base `ErrMissingCategoryForLot`.

6. Tests
   - Added unit tests for the converters and the article category resolver.
   - Added a handler test to ensure `/api/pricelists/:id/items` returns `lot_category`.
   - Added a sync test for category backfill on used pricelist items.
   - `go test ./...` passed.

Additional fixes (2026-02-11):
- Fixed an article parsing bug: the CPU and GPU parsers were swapped in `internal/article/generator.go`. CPU now uses the last token from the CPU lot; GPU uses model+memory from `GPU_vendor_model_mem_iface`.
- Adjusted the configurator base tab layout to align labels on the same row (separate label row + input row grid).
14
migrations/014_add_stock_log.sql
Normal file
@@ -0,0 +1,14 @@
CREATE TABLE IF NOT EXISTS stock_log (
    stock_log_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    lot VARCHAR(255) NOT NULL,
    supplier VARCHAR(255) NULL,
    date DATE NOT NULL,
    price DECIMAL(12,2) NOT NULL,
    quality VARCHAR(255) NULL,
    comments TEXT NULL,
    vendor VARCHAR(255) NULL,
    qty DECIMAL(14,3) NULL,
    INDEX idx_stock_log_lot_date (lot, date),
    INDEX idx_stock_log_date (date),
    INDEX idx_stock_log_vendor (vendor)
);
7
migrations/015_add_lot_partnumbers.sql
Normal file
@@ -0,0 +1,7 @@
CREATE TABLE IF NOT EXISTS lot_partnumbers (
    partnumber VARCHAR(255) NOT NULL,
    lot_name VARCHAR(255) NOT NULL DEFAULT '',
    description VARCHAR(10000) NULL,
    PRIMARY KEY (partnumber, lot_name),
    INDEX idx_lot_partnumbers_lot_name (lot_name)
);
25
migrations/016_add_price_level_pricelist_bindings.sql
Normal file
@@ -0,0 +1,25 @@
-- Add per-source pricelist bindings for configurations
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS warehouse_pricelist_id BIGINT UNSIGNED NULL AFTER pricelist_id,
    ADD COLUMN IF NOT EXISTS competitor_pricelist_id BIGINT UNSIGNED NULL AFTER warehouse_pricelist_id,
    ADD COLUMN IF NOT EXISTS disable_price_refresh BOOLEAN NOT NULL DEFAULT FALSE AFTER competitor_pricelist_id;

ALTER TABLE qt_configurations
    ADD INDEX IF NOT EXISTS idx_qt_configurations_warehouse_pricelist_id (warehouse_pricelist_id),
    ADD INDEX IF NOT EXISTS idx_qt_configurations_competitor_pricelist_id (competitor_pricelist_id);

-- Optional FK bindings (safe to re-run thanks to IF NOT EXISTS on the columns/indexes).
-- If your MariaDB version does not support IF NOT EXISTS for FK names, duplicate-FK errors are ignored by the migration runner.
ALTER TABLE qt_configurations
    ADD CONSTRAINT fk_qt_configurations_warehouse_pricelist_id
    FOREIGN KEY (warehouse_pricelist_id)
    REFERENCES qt_pricelists(id)
    ON DELETE RESTRICT
    ON UPDATE CASCADE;

ALTER TABLE qt_configurations
    ADD CONSTRAINT fk_qt_configurations_competitor_pricelist_id
    FOREIGN KEY (competitor_pricelist_id)
    REFERENCES qt_pricelists(id)
    ON DELETE RESTRICT
    ON UPDATE CASCADE;
25
migrations/017_update_lot_partnumbers_for_placeholders.sql
Normal file
@@ -0,0 +1,25 @@
-- Allow placeholder mappings (partnumber without bound lot) and store import description.
ALTER TABLE lot_partnumbers
    ADD COLUMN IF NOT EXISTS description VARCHAR(10000) NULL AFTER lot_name;

ALTER TABLE lot_partnumbers
    MODIFY COLUMN lot_name VARCHAR(255) NOT NULL DEFAULT '';

-- Drop FK on lot_name if it exists to allow unresolved placeholders.
SET @lp_fk_name := (
    SELECT kcu.CONSTRAINT_NAME
    FROM information_schema.KEY_COLUMN_USAGE kcu
    WHERE kcu.TABLE_SCHEMA = DATABASE()
      AND kcu.TABLE_NAME = 'lot_partnumbers'
      AND kcu.COLUMN_NAME = 'lot_name'
      AND kcu.REFERENCED_TABLE_NAME IS NOT NULL
    LIMIT 1
);
SET @lp_drop_fk_sql := IF(
    @lp_fk_name IS NULL,
    'SELECT 1',
    CONCAT('ALTER TABLE lot_partnumbers DROP FOREIGN KEY `', @lp_fk_name, '`')
);
PREPARE lp_stmt FROM @lp_drop_fk_sql;
EXECUTE lp_stmt;
DEALLOCATE PREPARE lp_stmt;
10
migrations/018_add_stock_ignore_rules.sql
Normal file
@@ -0,0 +1,10 @@
CREATE TABLE IF NOT EXISTS stock_ignore_rules (
    id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    target VARCHAR(20) NOT NULL,
    match_type VARCHAR(20) NOT NULL,
    pattern VARCHAR(500) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (id),
    UNIQUE KEY uq_stock_ignore_rule (target, match_type, pattern),
    KEY idx_stock_ignore_target (target)
);
2
migrations/019_rename_stock_log_lot_to_partnumber.sql
Normal file
@@ -0,0 +1,2 @@
ALTER TABLE stock_log
    CHANGE COLUMN lot partnumber VARCHAR(255) NOT NULL;
3
migrations/020_add_only_in_stock_to_configurations.sql
Normal file
@@ -0,0 +1,3 @@
-- Add only_in_stock toggle to configuration settings persisted in MariaDB.
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS only_in_stock BOOLEAN NOT NULL DEFAULT FALSE AFTER disable_price_refresh;
19
migrations/021_add_pricelist_items_pricelist_lot_index.sql
Normal file
@@ -0,0 +1,19 @@
-- Ensure fast lookup for /api/quote/price-levels batched queries:
-- SELECT ... FROM qt_pricelist_items WHERE pricelist_id = ? AND lot_name IN (...)
SET @has_idx := (
    SELECT COUNT(1)
    FROM information_schema.statistics
    WHERE table_schema = DATABASE()
      AND table_name = 'qt_pricelist_items'
      AND index_name IN ('idx_qt_pricelist_items_pricelist_lot', 'idx_pricelist_lot')
);

SET @ddl := IF(
    @has_idx = 0,
    'ALTER TABLE qt_pricelist_items ADD INDEX idx_qt_pricelist_items_pricelist_lot (pricelist_id, lot_name)',
    'SELECT ''idx_qt_pricelist_items_pricelist_lot already exists, skip'''
);

PREPARE stmt FROM @ddl;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
2
migrations/022_add_article_to_configurations.sql
Normal file
@@ -0,0 +1,2 @@
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS article VARCHAR(80) NULL AFTER server_count;
2
migrations/023_add_server_model_to_configurations.sql
Normal file
@@ -0,0 +1,2 @@
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS server_model VARCHAR(100) NULL AFTER server_count;
2
migrations/024_add_support_code_to_configurations.sql
Normal file
@@ -0,0 +1,2 @@
ALTER TABLE qt_configurations
    ADD COLUMN IF NOT EXISTS support_code VARCHAR(20) NULL AFTER server_model;
38
migrations/025_add_project_code.sql
Normal file
@@ -0,0 +1,38 @@
-- Add project code and enforce uniqueness

ALTER TABLE qt_projects
    ADD COLUMN code VARCHAR(100) NULL AFTER owner_username;

-- Copy code from current project name (truncate to fit)
UPDATE qt_projects
SET code = LEFT(TRIM(COALESCE(name, '')), 100);

-- Fallback for any remaining blanks
UPDATE qt_projects
SET code = uuid
WHERE code IS NULL OR TRIM(code) = '';

-- Drop unique index if it already exists to allow de-duplication updates
DROP INDEX IF EXISTS idx_qt_projects_code ON qt_projects;

-- De-duplicate codes: OPS-1948-2, OPS-1948-3... (MariaDB without CTE)
UPDATE qt_projects p
JOIN (
    SELECT p1.id,
           p1.code AS base_code,
           (
               SELECT COUNT(*)
               FROM qt_projects p2
               WHERE p2.code = p1.code AND p2.id <= p1.id
           ) AS rn
    FROM qt_projects p1
) r ON r.id = p.id
SET p.code = CASE
    WHEN r.rn = 1 THEN r.base_code
    ELSE CONCAT(LEFT(r.base_code, 90), '-', r.rn)
END;

ALTER TABLE qt_projects
    MODIFY COLUMN code VARCHAR(100) NOT NULL;

CREATE UNIQUE INDEX idx_qt_projects_code ON qt_projects(code);
28
migrations/026_add_project_variant.sql
Normal file
@@ -0,0 +1,28 @@
-- Add project variant and reset codes from project names

ALTER TABLE qt_projects
    ADD COLUMN variant VARCHAR(100) NOT NULL DEFAULT '' AFTER code;

-- Drop legacy unique index on code to allow duplicate codes
DROP INDEX IF EXISTS idx_qt_projects_code ON qt_projects;
DROP INDEX IF EXISTS idx_qt_projects_code_variant ON qt_projects;

-- Reset code from name and clear variant
UPDATE qt_projects
SET code = LEFT(TRIM(COALESCE(name, '')), 100),
    variant = '';

-- De-duplicate by assigning variant numbers: -2, -3...
UPDATE qt_projects p
JOIN (
    SELECT p1.id,
           p1.code,
           (SELECT COUNT(*)
            FROM qt_projects p2
            WHERE p2.code = p1.code AND p2.id <= p1.id) AS rn
    FROM qt_projects p1
) r ON r.id = p.id
SET p.code = r.code,
    p.variant = CASE WHEN r.rn = 1 THEN '' ELSE CONCAT('-', r.rn) END;

CREATE UNIQUE INDEX idx_qt_projects_code_variant ON qt_projects(code, variant);
4
migrations/027_project_name_nullable.sql
Normal file
@@ -0,0 +1,4 @@
-- Allow NULL project names

ALTER TABLE qt_projects
    MODIFY COLUMN name VARCHAR(200) NULL;
315
pricelists_window.md
Normal file
@@ -0,0 +1,315 @@
# AI Prompt: Porting the Pricelist Pattern

Use this document as a prompt for an AI when porting the pricelist implementation to another project.

---

## Task

I have a working implementation of the "Pricelist" window in the QuoteForge project. Port this implementation to the [TARGET_PROJECT_NAME] project, preserving its structure, logic, and UI/UX.

## What to port

### Frontend - pricelist list (`/pricelists`)

**Source file:** QuoteForge/web/templates/pricelists.html

**Components:**
1. **Table** - the list of pricelists, with columns:
   - Version (monospace)
   - Type (estimate/warehouse/competitor)
   - Created date
   - Author (usually "sync")
   - Items (number of products)
   - Uses (usage count)
   - Status (green "Active" / gray "Inactive")
   - Actions (View; Delete when unused)

2. **Pagination** - page navigation with the active page highlighted

3. **Modal window** - "Create pricelist" (shown only with write permission)

**What to copy:**
- the HTML table structure from lines 10-30
- JavaScript functions:
  - `loadPricelists(page)` - load the list
  - `renderPricelists(items)` - render the table
  - `renderPagination(total, page, perPage)` - pagination
  - `checkPricelistWritePermission()` - permission check
  - modal functions: `openCreateModal()`, `closeCreateModal()`, `createPricelist()`
- Tailwind CSS classes (copy as-is)

**Where to use in the target project:**
- URL: `/pricelists` (or adapt to your routes)
- API: `GET /api/pricelists?page=1&per_page=20`
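A minimal sketch of how `renderPagination(total, page, perPage)` can pick which page buttons to draw; the 5-button window centered on the current page is an assumption, QuoteForge's exact rule may differ:

```javascript
// Compute the page numbers the pagination control should draw.
// Assumption: show at most 5 buttons centered on the current page.
function paginationPages(total, page, perPage, maxButtons = 5) {
  const pageCount = Math.max(1, Math.ceil(total / perPage));
  const start = Math.max(1, Math.min(page - Math.floor(maxButtons / 2),
                                     pageCount - maxButtons + 1));
  const end = Math.min(pageCount, start + maxButtons - 1);
  const pages = [];
  for (let p = start; p <= end; p++) pages.push(p);
  return pages;
}
```

The real function also renders the buttons and highlights the active page; only the windowing logic is shown here.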
---

### Frontend - pricelist details (`/pricelists/:id`)

**Source file:** QuoteForge/web/templates/pricelist_detail.html

**Components:**
1. **Breadcrumb** - a back button to the list

2. **Info panel** - pricelist summary:
   - Version (monospace)
   - Created date
   - Author
   - Items (count)
   - Uses (in how many configs)
   - Status (green/gray)
   - Expires (date or "-")

3. **Items table** - with search and pagination:
   - Article (monospace, lot_name)
   - Category (the first part of lot_name before "_")
   - Description (truncated to 60 characters with "...")
   - [CONDITIONAL] Available (qty) - warehouse source only
   - [CONDITIONAL] Partnumbers - warehouse source only
   - Price, $ (2 decimal places)
   - Settings (abbreviations: РУЧН (manual), Сред (avg), Взвеш.мед (weighted median), periods (1н, 1м, 3м, 1г), coefficient, META)

4. **Search** - 300 ms debounce, searches by lot_name

5. **Dynamic columns** - qty and partnumbers are shown or hidden depending on source (warehouse or not)
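The category and description rules listed above can be sketched as follows; this is a hedged sketch, QuoteForge's actual helpers may differ in detail:

```javascript
// Category = the part of lot_name before the first "_";
// descriptions are cut to 60 characters with a "..." suffix.
function categoryOf(lotName) {
  const i = lotName.indexOf('_');
  return i === -1 ? lotName : lotName.slice(0, i);
}

function truncateDescription(text, max = 60) {
  return text.length <= max ? text : text.slice(0, max) + '...';
}
```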
**What to copy:**
- the HTML structure from lines 4-78
- JavaScript functions:
  - `loadPricelistInfo()` - load the pricelist details
  - `loadItems(page)` - load the items
  - `renderItems(items)` - render the items table
  - `renderItemsPagination(total, page, perPage)` - items pagination
  - `isWarehouseSource()` - source check
  - `toggleWarehouseColumns()` - show/hide the conditional columns
  - `formatQty(qty)` - quantity formatting
  - `formatPriceSettings(item)` - settings string formatting
  - `escapeHtml(text)` - HTML escaping
  - the search debounce (lines 300-306)
- Tailwind CSS classes
- the conditional-column logic (lines 152-164)

**Where to use in the target project:**
- URL: `/pricelists/:id`
- API:
  - `GET /api/pricelists/:id`
  - `GET /api/pricelists/:id/items?page=1&per_page=50&search=...`
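The 300 ms search debounce mentioned above can be sketched like this; it is a generic helper, the original at lines 300-306 of pricelist_detail.html may be structured differently, and the wiring in the comment is hypothetical:

```javascript
// Delay calls to fn until delayMs has passed with no new call, so the
// search fires once per pause in typing instead of once per keystroke.
function debounce(fn, delayMs = 300) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical wiring:
// searchInput.addEventListener('input', debounce(() => loadItems(1), 300));
```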
---

### Backend - Handler

**Source file:** QuoteForge/internal/handlers/pricelist.go

**Methods to implement:**

1. **List** (lines 23-89)
   - Parameters: `page`, `per_page`, `source` (filter), `active_only`
   - Logic:
     - fetch all pricelists
     - filter by source (case-insensitive)
     - sort by CreatedAt DESC (newest first)
     - paginate
     - for each: count the items (CountLocalPricelistItems) and uses (IsUsed)
     - return JSON with fields: id, source, version, created_by, item_count, usage_count, is_active, created_at, synced_from

2. **Get** (lines 92-116)
   - Parameter: `id` (uint from the URL)
   - Logic:
     - fetch the pricelist by ID
     - return its details (id, source, version, item_count, is_active, created_at)
     - 404 if not found

3. **GetItems** (lines 119-181)
   - Parameters: `id` (URL), `page`, `per_page`, `search` (query)
   - Logic:
     - fetch the pricelist by ID
     - fetch its items
     - filter by lot_name LIKE search (when provided)
     - count the total
     - paginate
     - for each item: derive the category from lot_name (the first part before "_")
     - return JSON: source, items (id, lot_name, price, category, available_qty, partnumbers), total, page, per_page

4. **GetLotNames** (lines 183-211)
   - Parameter: `id` (URL)
   - Logic:
     - fetch all lot_names from this pricelist
     - sort them alphabetically
     - return JSON: lot_names (array of strings), total

5. **GetLatest** (lines 214-233)
   - Parameter: `source` (query, default "estimate")
   - Logic:
     - normalize source (case-insensitive)
     - fetch the newest pricelist for that source
     - return its details
     - 404 if not found
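The List pipeline (filter by source case-insensitively, sort by creation date descending, paginate) can be sketched like this. The real handler is Go; this JavaScript sketch only illustrates the data flow, reusing the JSON field names from the contract above:

```javascript
// Filter → sort (newest first) → paginate, as described for List.
function listPricelists(all, { page = 1, perPage = 20, source = '' } = {}) {
  let items = source
    ? all.filter(p => p.source.toLowerCase() === source.toLowerCase())
    : all.slice();
  items.sort((a, b) => new Date(b.created_at) - new Date(a.created_at));
  const total = items.length;
  const start = (page - 1) * perPage;
  return { total, page, per_page: perPage, items: items.slice(start, start + perPage) };
}
```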
**Route registration:**

```go
pricelists := api.Group("/pricelists")
{
    pricelists.GET("", handler.List)
    pricelists.GET("/latest", handler.GetLatest)
    pricelists.GET("/:id", handler.Get)
    pricelists.GET("/:id/items", handler.GetItems)
    pricelists.GET("/:id/lots", handler.GetLotNames)
}
```

---

## Adapting to another project

### What to change

1. **Data source**
   - QuoteForge uses a local DB (LocalPricelist, LocalPricelistItem)
   - in your project: replace these with your own structs/tables
   - the "pricelist" entity may go by another name

2. **API routes**
   - `/api/pricelists` → your path
   - `:id` may be a UUID instead of an int; adapt the parsing

3. **Field names**
   - if you have no `version` field, use the ID or the date
   - if there is no `source`, drop the filter
   - if there is no `IsUsed`, treat the usage count as always 0

4. **Data structures**
   - Pricelist must have: id, name/version, created_at, source, item_count
   - PricelistItem must have: id, lot_name, price, available_qty, partnumbers

5. **Conditional columns**
   - logic: if source == "warehouse", show qty and partnumbers
   - adapt to your own sources/types
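The conditional-column rule from point 5 can be sketched as follows; the DOM class toggling inside toggleWarehouseColumns is omitted, only the source check is shown:

```javascript
// qty/partnumbers columns are shown only for warehouse pricelists.
function isWarehouseSource(source) {
  return (source || '').toLowerCase() === 'warehouse';
}
```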
### What to copy as-is

- **HTML structure** - tables, modals, Tailwind classes
- **JavaScript logic** - all the load/render/pagination functions
- **CSS classes** - Tailwind behaves the same everywhere
- **Formatting functions** - formatPrice, formatQty, formatDate
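Sketches of the formatting helpers named above; the exact signatures in QuoteForge may differ, and the ru-RU date locale comes from the verification checklist at the end of this document:

```javascript
// Prices are rendered with 2 decimal places.
function formatPrice(value) {
  return Number(value).toFixed(2);
}

// Dates are rendered in the ru-RU locale.
function formatDate(iso) {
  return new Date(iso).toLocaleDateString('ru-RU');
}
```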
---

## Step-by-step instructions for the AI

1. **Read all three files:**
   - QuoteForge/web/templates/pricelists.html (list)
   - QuoteForge/web/templates/pricelist_detail.html (details)
   - QuoteForge/internal/handlers/pricelist.go (backend)

2. **Identify the data structures in the target project:**
   - which table stores the "pricelists"?
   - what fields does it have?
   - how are the items linked to it?

3. **Adapt the backend:**
   - copy the handler methods
   - replace the DB calls with calls to your storage layer
   - rename the fields in the JSON responses if needed
   - make sure the API returns the expected format

4. **Adapt the frontend - list:**
   - copy the HTML table
   - copy the load/render/pagination functions
   - replace the `/pricelists` routes with yours
   - replace the API endpoint with yours
   - verify that the list loads

5. **Adapt the frontend - details:**
   - copy the details HTML
   - copy the loadInfo/loadItems/render functions
   - replace the routes and endpoints
   - pay special attention to the conditional columns (toggleWarehouseColumns)
   - verify that search works

6. **Test:**
   - the list loads
   - pagination works
   - the details page opens
   - search works
   - conditional columns show and hide correctly
   - price and date formatting works

---

## Adaptation example

### Backend (before):
```go
func (h *PricelistHandler) List(c *gin.Context) {
    localPLs, err := h.localDB.GetLocalPricelists()
    // ...
}
```

### Backend (after):
```go
func (h *CatalogHandler) List(c *gin.Context) {
    catalogs, err := h.service.GetAllCatalogs(page, perPage)
    // ...
}
```

### Frontend (before):
```javascript
const resp = await fetch(`/api/pricelists?page=${page}&per_page=20`);
```

### Frontend (after):
```javascript
const resp = await fetch(`/api/catalogs?page=${page}&per_page=20`);
```

---

## Quality of the result

When you are done:
- ✅ The list and details look identical to QuoteForge
- ✅ All functions work (load, render, pagination, search, conditional columns)
- ✅ Errors are handled (404, empty list, network errors)
- ✅ Tables are styled identically with Tailwind classes
- ✅ Number/date formatting matches

---

## Questions for the AI

Before handing over this prompt, answer these questions:

1. **What data structures do you have for a "pricelist"?**
   - e.g. which fields, what the table is called

2. **Which API endpoints already exist?**
   - or do they need to be created from scratch?

3. **Is there already a distinction between sources (estimate/warehouse)?**
   - or is everything a single type?

4. **Do you need the ability to create pricelists?**
   - or is read-only viewing enough?

---

## Verification checklist

After porting, check:

- [ ] Backend: List returns the correct JSON
- [ ] Backend: Get returns the details
- [ ] Backend: GetItems returns items with search
- [ ] Frontend: the list loads at `/pricelists`
- [ ] Frontend: clicking a pricelist opens `/pricelists/:id`
- [ ] Frontend: the table on the detail page renders
- [ ] Frontend: search works with the debounce
- [ ] Frontend: pagination works
- [ ] Frontend: conditional columns show and hide
- [ ] Frontend: price formatting works (2 decimal places)
- [ ] Frontend: date formatting works (ru-RU)
- [ ] UI: looks identical to QuoteForge
releases/memory/v1.2.1.md (new file, 72 lines)
@@ -0,0 +1,72 @@
# v1.2.1 Release Notes

**Date:** 2026-02-09
**Changes since v1.2.0:** 2 commits

## Summary
Fixed configurator component substitution by updating it to work with the new pricelist-based pricing model. Addresses a regression from the v1.2.0 refactor that removed the `CurrentPrice` field from components.

## Commits

### 1. Refactor: Remove CurrentPrice from local_components (5984a57)
**Type:** Refactor
**Files Changed:** 11 files, +167 insertions, -194 deletions

#### Overview
Transitioned from component-based pricing to a pricelist-based pricing model:
- Removed `CurrentPrice` and `SyncedAt` from LocalComponent (metadata-only now)
- Added `WarehousePricelistID` and `CompetitorPricelistID` to LocalConfiguration
- Removed 2 unused methods: UpdateComponentPricesFromPricelist, EnsureComponentPricesFromPricelists

#### Key Changes
- **Data Model:**
  - LocalComponent: now stores only metadata (LotName, LotDescription, Category, Model)
  - LocalConfiguration: added warehouse and competitor pricelist references

- **Migrations:**
  - drop_component_unused_fields - removes the CurrentPrice and SyncedAt columns
  - add_warehouse_competitor_pricelists - adds the new pricelist fields

- **Quote Calculation:**
  - Updated to use pricelist_items instead of component.CurrentPrice
  - Added a PricelistID field to QuoteRequest
  - Maintains offline-first behavior

- **API:**
  - Removed CurrentPrice from ComponentView
  - The Components API no longer returns pricing

### 2. Fix: Load component prices via API (acf7c8a)
**Type:** Bug Fix
**Files Changed:** 1 file (web/templates/index.html), +66 insertions, -12 deletions

#### Problem
After the v1.2.0 refactor, the configurator's autocomplete was filtering out all components because it checked for the removed `current_price` field on component objects.

#### Solution
Implemented on-demand price loading via the API:
- Added an `ensurePricesLoaded()` function to fetch prices from `/api/quote/price-levels`
- Added a `componentPricesCache` to cache loaded prices in memory
- Updated all 3 autocomplete modes (single, multi, section) to load prices when the input is focused
- Changed the price validation from `c.current_price` to `hasComponentPrice(lot_name)`
- Updated cart item creation to use the cached API prices
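The solution above can be sketched like this; it is a hedged sketch — the response shape of `/api/quote/price-levels` is an assumption, and the real code in index.html may differ:

```javascript
// In-memory price cache, filled once when a search input is first focused.
const componentPricesCache = new Map();
let pricesLoaded = false;

async function ensurePricesLoaded() {
  if (pricesLoaded) return;
  const resp = await fetch('/api/quote/price-levels');
  const data = await resp.json();
  for (const item of data.items || []) { // response shape assumed
    componentPricesCache.set(item.lot_name, item.price);
  }
  pricesLoaded = true;
}

// Replaces the old `c.current_price` check in the autocomplete filter.
function hasComponentPrice(lotName) {
  return componentPricesCache.has(lotName);
}
```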
#### Impact
- Components without prices are still filtered out (as required)
- Price checks now use API data instead of the removed database field
- The frontend loads prices on-demand for better performance

## Testing Notes
- ✅ Configurator component substitution now works
- ✅ Prices load correctly from the pricelist
- ✅ Offline mode still supported (prices cached after the initial load)
- ✅ Multi-pricelist support functional (estimate/warehouse/competitor)

## Known Issues
None

## Migration Path
No database migration needed from v1.2.0 - migrations were applied in the v1.2.0 release.

## Breaking Changes
None for end users. Internal: `ComponentView` no longer includes `CurrentPrice` in API responses.
releases/memory/v1.2.2.md (new file, 59 lines)
@@ -0,0 +1,59 @@
# Release v1.2.2 (2026-02-09)

## Summary

Fixed a CSV export filename inconsistency where project names weren't being resolved correctly. Standardized the export format across both manual exports and project configuration exports to use `YYYY-MM-DD (project_name) config_name BOM.csv`.

## Commits

- `8f596ce` fix: standardize CSV export filename format to use project name

## Changes

### CSV Export Filename Standardization

**Problem:**
- ExportCSV and ExportConfigCSV had inconsistent filename formats
- Project names sometimes fell back to config names when not explicitly provided
- Export timestamps didn't reflect the actual price update time

**Solution:**
- Unified format: `YYYY-MM-DD (project_name) config_name BOM.csv`
- Both export paths now use PriceUpdatedAt if available, otherwise CreatedAt
- Project name resolved from ProjectUUID via ProjectService for both paths
- The frontend passes project_uuid context when exporting
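The unified filename rule can be sketched as follows; this is illustrative only — the real implementation lives in internal/handlers/export.go and is written in Go:

```javascript
// Build the export filename from a timestamp (PriceUpdatedAt or
// CreatedAt) plus the project and configuration names.
function exportFilename(date, projectName, configName) {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${day} (${projectName}) ${configName} BOM.csv`;
}
```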
**Technical Details:**

Backend:
- Added a `ProjectUUID` field to the `ExportRequest` struct in handlers/export.go
- Updated ExportCSV to look up the project name from ProjectUUID using ProjectService
- Ensured ExportConfigCSV gets the project name from the config's ProjectUUID
- Both use CreatedAt (for ExportCSV) or PriceUpdatedAt/CreatedAt (for ExportConfigCSV)

Frontend:
- Added `projectUUID` and `projectName` state variables in index.html
- Load and store projectUUID when a configuration is loaded
- Pass `project_uuid` in the JSON body for both export requests

## Files Modified

- `internal/handlers/export.go` - Project name resolution and ExportRequest update
- `internal/handlers/export_test.go` - Updated mock initialization with the projectService param
- `cmd/qfs/main.go` - Pass projectService to the ExportHandler constructor
- `web/templates/index.html` - Add projectUUID tracking and export payload updates

## Testing Notes

✅ All existing tests updated and passing
✅ Code builds without errors
✅ Export filename now includes the correct project name
✅ Works for both form-based and project-based exports

## Breaking Changes

None - the API response format is unchanged; only filename generation was updated.

## Known Issues

None identified.
releases/memory/v1.2.3.md (new file, 95 lines)
@@ -0,0 +1,95 @@
# Release v1.2.3 (2026-02-10)

## Summary

Unified the synchronization functionality with event-driven UI updates. Resolved user confusion about duplicate sync buttons by implementing a single sync source with automatic page refreshes.

## Changes

### Main Feature: Sync Event System

- **Added a `sync-completed` event** in base.html's `syncAction()` function
  - Dispatched after a successful `/api/sync/all` or `/api/sync/push`
  - Includes the endpoint and response data in the event detail
  - Enables pages to react automatically to sync completion
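The dispatch/listen flow above can be sketched as follows. In the app it goes through `window.dispatchEvent(new CustomEvent('sync-completed', ...))`; this sketch uses a tiny registry so it stays runnable outside a browser, and the page-side handler names are assumptions:

```javascript
// Minimal stand-in for the window event bus used by base.html.
const syncListeners = [];

function onSyncCompleted(handler) {
  syncListeners.push(handler);
}

// Called by syncAction() after a successful sync request.
function notifySyncCompleted(endpoint, data) {
  for (const handler of syncListeners) handler({ endpoint, data });
}

// A page (e.g. configs.html) reloads its list and resets pagination:
// onSyncCompleted(() => { currentPage = 1; loadConfigurations(); });
```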
### Configs Page (`configs.html`)

- **Removed the "Импорт с сервера" button** - duplicate functionality no longer needed
- **Updated the layout** - changed from a 2-column grid to a single-button layout
- **Removed the `importConfigsFromServer()` function** - the functionality is now handled by the navbar sync
- **Added a sync-completed event listener**:
  - automatically reloads the configurations list after sync
  - resets pagination to the first page
  - new configurations appear immediately without a manual refresh

### Projects Page (`projects.html`)

- **Wrapped initialization in DOMContentLoaded**:
  - moved `loadProjects()` and all event listeners inside the handler
  - ensures the DOM is fully loaded before accessing elements
- **Added a sync-completed event listener**:
  - automatically reloads the projects list after sync
  - new projects appear immediately without a manual refresh

### Pricelists Page (`pricelists.html`)

- **Added a sync-completed event listener** to the existing DOMContentLoaded handler:
  - automatically reloads pricelists when a sync completes
  - maintains the existing permissions and modal functionality

## Benefits

### User Experience
- ✅ A single "Синхронизация" button in the navbar - no confusion about sync sources
- ✅ Automatic list updates after sync - no need for a manual F5 refresh
- ✅ Consistent behavior across all pages (configs, projects, pricelists)
- ✅ Better feedback: toast notification + automatic UI refresh

### Architecture
- ✅ Event-driven loose coupling between the navbar and pages
- ✅ Easy to extend to other pages (just add an event listener)
- ✅ No backend changes needed
- ✅ Production-ready

## Breaking Changes

- The **`/api/configs/import` endpoint** still works, but its UI button was removed
  - users should use the navbar "Синхронизация" button instead
  - the backend API remains unchanged for backward compatibility

## Files Modified

1. `web/templates/base.html` - Added the sync-completed event dispatch
2. `web/templates/configs.html` - Event listener + removed duplicate UI
3. `web/templates/projects.html` - DOMContentLoaded wrapper + event listener
4. `web/templates/pricelists.html` - Event listener for auto-refresh

**Stats:** 4 files changed, 59 insertions(+), 65 deletions(-)

## Commits

- `99fd80b` - feat: unify sync functionality with event-driven UI updates

## Testing Checklist

- [x] Configs page: new configurations appear after navbar sync
- [x] Projects page: new projects appear after navbar sync
- [x] Pricelists page: pricelists refresh after navbar sync
- [x] Both `/api/sync/all` and `/api/sync/push` trigger updates
- [x] Toast notifications still show correctly
- [x] The sync status indicator updates
- [x] Error handling (423, network errors) still works
- [x] Mode switching (Active/Archive) works correctly
- [x] Backward compatibility maintained

## Known Issues

None - the implementation is production-ready

## Migration Notes

No migration needed. Changes are frontend-only and backward compatible:
- the old `/api/configs/import` endpoint is still functional
- no database schema changes
- no configuration changes needed
releases/memory/v1.3.0.md (new file, 68 lines)
@@ -0,0 +1,68 @@
# Release v1.3.0 (2026-02-11)

## Summary

Introduced article generation with pricelist categories, added local configuration storage, and expanded sync/export capabilities. Simplified the article generator's compression and loosened project update constraints.

## Changes

### Main Features: Articles + Pricelist Categories

- **Article generation pipeline**
  - new generator and tests under `internal/article/`
  - category support with test coverage
- **Pricelist category integration**
  - handler and repository updates
  - sync backfill test for category propagation

### Local Configuration Storage

- **Local DB support**
  - new localdb models, converters, snapshots, and migrations
  - local configuration service for cached configurations

### Export & UI

- **Export handler updates** for article data output
- **Configs and index templates** adjusted for the new article-related fields

### Behavior Changes

- **Cross-user project updates allowed**
  - removed the restriction in the project service
- **Article compression refinement**
  - generator logic simplified to reduce complexity

## Breaking Changes

None identified. Existing APIs remain intact.

## Files Modified

1. `internal/article/*` - Article generator + categories + tests
2. `internal/localdb/*` - Local DB models, migrations, snapshots
3. `internal/handlers/export.go` - Export updates
4. `internal/handlers/pricelist.go` - Category handling
5. `internal/services/sync/service.go` - Category backfill logic
6. `web/templates/configs.html` - Article field updates
7. `web/templates/index.html` - Article field updates

**Stats:** 33 files changed, 2059 insertions(+), 329 deletions(-)

## Commits

- `5edffe8` - Add article generation and pricelist categories
- `e355903` - Allow cross-user project updates
- `e58fd35` - Refine article compression and simplify generator

## Testing Checklist

- [ ] Tests not run (not requested)

## Migration Notes

- New migrations:
  - `022_add_article_to_configurations.sql`
  - `023_add_server_model_to_configurations.sql`
  - `024_add_support_code_to_configurations.sql`
- Ensure migrations are applied before running v1.3.0
@@ -1,51 +0,0 @@
# QuoteForge v1.0.3

Release date: 2026-02-06
Tag: `v1.0.3`
Change range: `v1.0.2..v1.0.3`

## What's new

- Added the `/projects` project management page with:
  - the project creation date and time;
  - sorting by name and creation date;
  - server-side pagination;
  - an author filter in the table header.
- Added a separate `Sync status` tab at the `Alerts / Components / Pricelists` level.
- The sync status tab shows:
  - the user;
  - the application version;
  - the status (`online` or the relative time of the last sync).

## Sync changes

- Implemented a user sync heartbeat in MariaDB: `qt_pricelist_sync_status`.
- Added the `GET /api/sync/users-status` API for the sync status UI.
- Online status is derived from the background sync interval: `5 minutes + 10%`.
- The heartbeat records the application version (`app_version`).
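The online-status rule above can be sketched as follows; it is a hedged sketch — the real check presumably runs server-side in Go against the heartbeat table:

```javascript
// A user counts as "online" when their last heartbeat falls within the
// background sync interval plus a 10% margin ("5 minutes + 10%").
function isOnline(lastSyncMs, nowMs, intervalMs = 5 * 60 * 1000) {
  return nowMs - lastSyncMs <= intervalMs * 1.1;
}
```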
## Important fixes

- Fixed recovery of a missing server configuration when pushing updates.
- Fixed password escaping in the MySQL DSN during setup.
- Improved the startup SQL migration logic for when migrations are unnecessary or permissions are missing.
- Updated pricelist recalculation to use the admin price-refresh logic.

## Migrations and compatibility

New SQL migrations:

- `migrations/010_add_pricelist_sync_status.sql`
- `migrations/011_add_app_version_to_pricelist_sync_status.sql`

The release is compatible with the previous `v1.0.x` line; the new sync table is created automatically.

## Commits in this release

- `b1b50ce` Add projects table controls and sync status tab with app version
- `6ab1e98` sync: recover missing server config during update push
- `a1d2192` Fix MySQL DSN escaping for setup passwords and clarify DB user setup
- `a90c07c` update stale files list
- `e9307c4` Apply remaining pricelist and local-first updates
- `1b48401` Use admin price-refresh logic for pricelist recalculation
- `4a86f7b` fix: skip startup sql migrations when not needed or no permissions
releases/v1.2.1/RELEASE_NOTES.md (new file, 89 lines)
@@ -0,0 +1,89 @@
# QuoteForge v1.2.1

**Release date:** 2026-02-09
**Tag:** `v1.2.1`
**GitHub:** https://git.mchus.pro/mchus/QuoteForge/releases/tag/v1.2.1

## Summary

A quick patch release fixing a configurator regression from the v1.2.0 refactor. After the `CurrentPrice` field was removed from components, the autocomplete stopped showing components. Prices are now loaded on demand via the API.

## What's fixed

### 🐛 Configurator Component Substitution (acf7c8a)
- **Problem:** after the v1.2.0 refactor, the autocomplete filtered out ALL components because it checked the removed `current_price` field
- **Solution:** load prices on demand via `/api/quote/price-levels`
  - Added `componentPricesCache` for in-memory price caching
  - `ensurePricesLoaded()` loads prices when the search field is focused
  - All 3 autocomplete modes (single, multi, section) updated
  - Components without prices are still filtered out (as required), but the check now uses API data
- **Affected files:** `web/templates/index.html` (+66 lines, -12 lines)

## History v1.2.0 → v1.2.1

Total commits: **2**

| Hash | Author | Message |
|------|--------|---------|
| `acf7c8a` | Claude | fix: load component prices via API instead of removed current_price field |
| `5984a57` | Claude | refactor: remove CurrentPrice from local_components and transition to pricelist-based pricing |

## Testing

✅ Configurator component substitution works
✅ Prices load correctly from the pricelist
✅ Offline mode supported (prices are cached after the first load)
✅ Multi-pricelist support functional (estimate/warehouse/competitor)

## Breaking Changes

No breaking changes for end users.

⚠️ **For developers:** the `ComponentView` API no longer returns `CurrentPrice`.

## Migration

No DB migration required — all migrations were applied in v1.2.0.

## Installation

### macOS

```bash
# Download and unpack
tar xzf qfs-v1.2.1-darwin-arm64.tar.gz  # for Apple Silicon
# or
tar xzf qfs-v1.2.1-darwin-amd64.tar.gz  # for Intel Macs

# Remove the Gatekeeper quarantine flag (if needed)
xattr -d com.apple.quarantine ./qfs

# Run
./qfs
```

### Linux

```bash
tar xzf qfs-v1.2.1-linux-amd64.tar.gz
./qfs
```

### Windows

```bash
# Unpack qfs-v1.2.1-windows-amd64.zip
# Run qfs.exe
```

## Known Issues

No known issues at release time.

## Support

For questions, contact [@mchus](https://git.mchus.pro/mchus)

---

*Shipped with ❤️ via Claude Code*
scripts/check-secrets.sh (new executable file, 56 lines)
@@ -0,0 +1,56 @@
#!/usr/bin/env bash
set -euo pipefail

if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "Not inside a git repository."
  exit 1
fi

if ! command -v rg >/dev/null 2>&1; then
  echo "ripgrep (rg) is required for secret scanning."
  exit 1
fi

staged_files=()
while IFS= read -r file; do
  staged_files+=("$file")
done < <(git diff --cached --name-only --diff-filter=ACMRTUXB)

if [ "${#staged_files[@]}" -eq 0 ]; then
  exit 0
fi

secret_pattern='AKIA[0-9A-Z]{16}|ASIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|github_pat_[A-Za-z0-9_]{20,}|xox[baprs]-[A-Za-z0-9-]{10,}|AIza[0-9A-Za-z_-]{35}|-----BEGIN (RSA|OPENSSH|EC|DSA|PRIVATE) KEY-----|(?i)(password|passwd|pwd|secret|token|api[_-]?key|jwt_secret)\s*[:=]\s*["'"'"'][^"'"'"'\s]{8,}["'"'"']'
allow_pattern='CHANGE_ME|REDACTED|PLACEHOLDER|EXAMPLE|example|<[^>]+>'

found=0

for file in "${staged_files[@]}"; do
  case "$file" in
    dist/*|*.png|*.jpg|*.jpeg|*.gif|*.webp|*.pdf|*.zip|*.gz|*.exe|*.dll|*.so|*.dylib)
      continue
      ;;
  esac

  if ! content="$(git show ":$file" 2>/dev/null)"; then
    continue
  fi

  hits="$(printf '%s' "$content" | rg -n --no-heading -e "$secret_pattern" || true)"
  if [ -n "$hits" ]; then
    filtered="$(printf '%s\n' "$hits" | rg -v -e "$allow_pattern" || true)"
    if [ -n "$filtered" ]; then
      echo "Potential secret found in staged file: $file"
      printf '%s\n' "$filtered"
      found=1
    fi
  fi
done

if [ "$found" -ne 0 ]; then
  echo
  echo "Commit blocked: remove or redact secrets before committing."
  exit 1
fi

exit 0
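The alternation in `secret_pattern` packs several token formats into one regex. A quick way to sanity-check it is to pipe sample strings through the pattern; the sketch below uses a simplified two-token subset with plain `grep -E`, since the full pattern relies on ripgrep's inline `(?i)` flag, and the `AKIAABCDEFGHIJKLMNOP` key is a made-up example:

```shell
# Simplified subset of the scanner's pattern (AWS access key + GitHub token).
# The full pattern needs ripgrep; grep -E cannot parse the inline (?i) flag.
pattern='AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}'

printf 'aws_key = AKIAABCDEFGHIJKLMNOP\n' | grep -Eq "$pattern" && echo "match"
printf 'aws_key = CHANGE_ME\n' | grep -Eq "$pattern" || echo "clean"
```

To activate the hook that runs this script (see `.githooks/pre-commit`), point git at the tracked hooks directory once per clone: `git config core.hooksPath .githooks`.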
@@ -25,6 +25,25 @@ echo ""
RELEASE_DIR="releases/${VERSION}"
mkdir -p "${RELEASE_DIR}"

# Create release notes template (always include macOS Gatekeeper note)
if [ ! -f "${RELEASE_DIR}/RELEASE_NOTES.md" ]; then
  cat > "${RELEASE_DIR}/RELEASE_NOTES.md" <<EOF
# QuoteForge ${VERSION}

Release date: $(date +%Y-%m-%d)
Tag: \`${VERSION}\`

## What's new

- TODO: describe the key changes in this release.

## Running on macOS

Remove the quarantine attribute from the terminal: \`xattr -d com.apple.quarantine /path/to/qfs-darwin-arm64\`
After that the binary runs without the Gatekeeper warning.
EOF
fi

# Build for all platforms
echo -e "${YELLOW}→ Building binaries...${NC}"
make build-all
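The `[ ! -f … ]` guard in the hunk above makes template generation idempotent: the file is written only on the first run, so hand-written notes survive re-running the release script. A minimal sketch of that behavior, using a temp directory in place of `releases/${VERSION}`:

```shell
# Demonstrates the write-once guard: a second run must not clobber edits.
notes_dir="$(mktemp -d)"

create_notes() {
  if [ ! -f "${notes_dir}/RELEASE_NOTES.md" ]; then
    printf '# QuoteForge demo\n' > "${notes_dir}/RELEASE_NOTES.md"
  fi
}

create_notes                              # first run: file is created
echo 'hand-written section' >> "${notes_dir}/RELEASE_NOTES.md"
create_notes                              # second run: guard skips the write
grep -q 'hand-written section' "${notes_dir}/RELEASE_NOTES.md" && echo "edits preserved"
```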
todo.md (new file, 78 lines)
@@ -0,0 +1,78 @@
# QuoteForge — Cleanup plan (removing admin pricing)

Goal: remove everything related to price administration, warehouse stock reports, and alerts.
Keep: the configurator, projects, read-only pricelist viewing, sync, offline-first.

---

## 1. Delete files

- [x] `internal/handlers/pricing.go` (40.6KB) — the entire admin pricing UI
- [x] `internal/services/pricing/` — the entire price calculation package
- [x] `internal/services/pricelist/` — the entire pricelist management package
- [x] `internal/services/stock_import.go` — warehouse stock report import
- [x] `internal/services/alerts/` — the entire alerts package
- [x] `internal/warehouse/` — warehouse-based price calculation algorithms
- [x] `web/templates/admin_pricing.html` (109KB) — the admin pricing page
- [x] `cmd/cron/` — cron jobs (cleanup-pricelists, update-prices, update-popularity)
- [x] `cmd/importer/` — the data import utility

## 2. Simplify `internal/handlers/pricelist.go` (read-only)

The read-only methods (List, Get, GetItems, GetLotNames, GetLatest) already work
through `h.localDB` (SQLite) alone, without `pricelist.Service`.

- [x] Remove the `service *pricelist.Service` field from the `PricelistHandler` struct
- [x] Change the constructor: `NewPricelistHandler(localDB *localdb.LocalDB)`
- [x] Delete the write methods: `Create()`, `CreateWithProgress()`, `Delete()`, `SetActive()`, `CanWrite()`
- [x] Delete the `refreshLocalPricelistCacheFromServer()` method (it depends on the service)
- [x] Remove the `pricelist` package import
- [x] Keep: `List()`, `Get()`, `GetItems()`, `GetLotNames()`, `GetLatest()`

## 3. Simplify `cmd/qfs/main.go`

- [x] Remove service construction: `pricingService`, `alertService`, `pricelistService`, `stockImportService`
- [x] Remove the handler: `pricingHandler`
- [x] Change `pricelistHandler` construction: `NewPricelistHandler(local)` (without the service)
- [x] Remove repositories: `priceRepo`, `alertRepo` (keep statsRepo — it is nil-safe)
- [x] Remove all `/api/admin/pricing/*` routes (lines ~1407-1430)
- [x] Of `/api/pricelists/*`, keep only the read-only routes:
  - `GET ""` (List), `GET "/latest"`, `GET "/:id"`, `GET "/:id/items"`, `GET "/:id/lots"`
- [x] Remove the write routes: `POST ""`, `POST "/create-with-progress"`, `PATCH "/:id/active"`, `DELETE "/:id"`, `GET "/can-write"`
- [x] Remove the `/admin/pricing` web page
- [x] Fix `/pricelists` — serve a page instead of redirecting to admin/pricing
- [x] In the `QuoteService` constructor: pass `nil` for `pricingService`
- [x] Remove imports: the `pricing`, `pricelist`, and `alerts` packages

## 4. Simplify `handlers/web.go`

- [x] Remove `admin_pricing.html` from `simplePages`
- [x] Remove the `AdminPricing()` method
- [x] Keep all other methods, including `Pricelists()` and `PricelistDetail()`

## 5. Simplify `base.html` (navigation)

- [x] Remove the "Price administrator" link
- [x] Add a "Pricelists" link (to `/pricelists`)
- [x] Keep: "My projects", "Pricelists", the sync indicator

## 6. Sync — keep in full

- Background worker: pull components + pricelists, push configurations
- All `/api/sync/*` endpoints remain
- This is the core of the offline-first architecture

## 7. Verification

- [x] `go build ./cmd/qfs` — compiles
- [x] `go vet ./...` — no errors
- [ ] Startup → `/configs` works
- [ ] `/pricelists` — the read-only list works
- [ ] `/pricelists/:id` — the detail view works
- [ ] Sync with the server works
- [ ] No links to admin pricing in the UI

## 8. Update CLAUDE.md

- [x] Remove the sections on admin pricing, stock import, alerts, and cron
- [x] Update the API endpoint list
- [x] Update the application description
Some files were not shown because too many files have changed in this diff.