# Compare commits: 44ccb01203...v1.3.2

129 commits
## .githooks/pre-commit (new executable file, +5)

```diff
@@ -0,0 +1,5 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+repo_root="$(git rev-parse --show-toplevel)"
+"$repo_root/scripts/check-secrets.sh"
```
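The hook delegates to `scripts/check-secrets.sh`, which is not part of this diff. A minimal sketch of what such a script might do, assuming a simple grep-based scan of staged files (the patterns and function names here are illustrative, not the repository's actual script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a secret scanner; the real scripts/check-secrets.sh
# is not shown in this compare view.

# Patterns that commonly indicate committed credentials.
patterns='BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|jwt_secret|password[[:space:]]*[:=]'

# scan_file FILE: return non-zero when the file contains a suspicious line.
scan_file() {
  if grep -Eqi "$patterns" "$1"; then
    echo "possible secret in $1" >&2
    return 1
  fi
}

# check_staged: scan files staged for commit, as the pre-commit hook would.
check_staged() {
  local rc=0 f
  while IFS= read -r f; do
    [ -f "$f" ] && { scan_file "$f" || rc=1; }
  done < <(git diff --cached --name-only --diff-filter=ACM)
  return $rc
}
```

Because the hook runs `set -euo pipefail`, any non-zero exit from the scanner aborts the commit.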
## .gitignore (52 lines changed)

```diff
@@ -1,5 +1,51 @@
 # QuoteForge
 config.yaml
+.env
+.env.*
+*.pem
+*.key
+*.p12
+*.pfx
+*.crt
+id_rsa
+id_rsa.*
+secrets.yaml
+secrets.yml
+
+# Local SQLite database (contains encrypted credentials)
+/data/*.db
+/data/*.db-journal
+/data/*.db-shm
+/data/*.db-wal
+
+# Binaries
+/server
+/importer
+/cron
+/bin/
+qfs
+
+# Local Go build cache used in sandboxed runs
+.gocache/
+
+# Local tooling state
+.claude/
+
+# Editor settings
+.idea/
+.vscode/
+*.swp
+*.swo
+
+# Temp and logs
+*.tmp
+*.temp
+*.log
+
+# Go test/build artifacts
+*.out
+*.test
+coverage/
+
 # ---> macOS
 # General
@@ -8,7 +54,7 @@ config.yaml
 .LSOverride
 
 # Icon must end with two \r
 Icon
 
 # Thumbnails
 ._*
@@ -29,3 +75,7 @@ Network Trash Folder
 Temporary Items
 .apdisk
+
+# Release artifacts (binaries, archives, checksums), but DO track releases/memory/ for changelog
+releases/*
+!releases/memory/
+!releases/memory/**
```
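The `releases/` negation pair is order-sensitive: `releases/*` must come before the re-includes, and the `releases/memory/` directory itself must be un-ignored before `!releases/memory/**` can re-include its contents. A quick sanity check of these rules with `git check-ignore` (file names here are illustrative):

```shell
# Build a throwaway repo containing just these ignore rules.
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'releases/*\n!releases/memory/\n!releases/memory/**\n' > "$repo/.gitignore"

# Release archives are ignored...
git -C "$repo" check-ignore -q releases/qfs-v1.3.2.tar.gz && echo "archive ignored"
# ...but changelog files under releases/memory/ stay tracked.
git -C "$repo" check-ignore -q releases/memory/v1.2.1.md || echo "changelog tracked"
```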
## CLAUDE.md (rewritten, 425 lines changed: 375 removed, 88 added)

````diff
@@ -1,375 +1,88 @@
 # QuoteForge - Claude Code Instructions
 
-## Project Overview
-
-QuoteForge is a corporate tool for configuring servers and producing commercial quotations. The application integrates with the existing RFQ_LOG database.
-
-## Tech Stack
-
-- **Language:** Go 1.22+
-- **Web Framework:** Gin (github.com/gin-gonic/gin)
-- **ORM:** GORM (gorm.io/gorm)
-- **Database:** MariaDB 11 (existing database RFQ_LOG)
-- **Frontend:** HTML templates + htmx + Tailwind CSS (CDN)
-- **Excel Export:** excelize (github.com/xuri/excelize/v2)
-- **Auth:** JWT (github.com/golang-jwt/jwt/v5)
-
-## Project Structure
-
-```
-quoteforge/
-├── cmd/
-│   ├── server/main.go        # Main HTTP server
-│   ├── priceupdater/main.go  # Cron job for price updates & alerts
-│   └── importer/main.go      # Import metadata from lot table
-├── internal/
-│   ├── config/config.go      # YAML config loading
-│   ├── models/               # GORM models
-│   ├── handlers/             # Gin HTTP handlers
-│   ├── services/             # Business logic
-│   ├── middleware/           # Auth, CORS, roles
-│   └── repository/           # Database queries
-├── web/
-│   ├── templates/            # Go HTML templates
-│   └── static/               # CSS, JS
-├── migrations/               # SQL migration files
-├── config.yaml
-└── go.mod
-```
-
-## Existing Database Tables (READ-ONLY - DO NOT MODIFY)
-
-These tables are used by other systems. Our app only reads from them:
-
-```sql
--- Component catalog
-CREATE TABLE lot (
-    lot_name CHAR(255) PRIMARY KEY,  -- e.g., "CPU_AMD_9654", "MB_INTEL_4.Sapphire_2S"
-    lot_description VARCHAR(10000)
-);
-
--- Price history from suppliers
-CREATE TABLE lot_log (
-    lot_log_id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
-    lot CHAR(255) NOT NULL,       -- FK → lot.lot_name
-    supplier CHAR(255) NOT NULL,  -- FK → supplier.supplier_name
-    date DATE NOT NULL,
-    price DOUBLE NOT NULL,
-    quality CHAR(255),
-    comments VARCHAR(15000),
-    FOREIGN KEY (lot) REFERENCES lot(lot_name),
-    FOREIGN KEY (supplier) REFERENCES supplier(supplier_name)
-);
-
--- Supplier catalog
-CREATE TABLE supplier (
-    supplier_name CHAR(255) PRIMARY KEY,
-    supplier_comment VARCHAR(10000)
-);
-```
-
-## New Tables (prefix qt_)
-
-QuoteForge creates these tables:
-
-```sql
--- Users
-CREATE TABLE qt_users (
-    id INT AUTO_INCREMENT PRIMARY KEY,
-    username VARCHAR(100) UNIQUE NOT NULL,
-    email VARCHAR(255) UNIQUE NOT NULL,
-    password_hash VARCHAR(255) NOT NULL,
-    role ENUM('viewer', 'editor', 'pricing_admin', 'admin') DEFAULT 'viewer',
-    is_active BOOLEAN DEFAULT TRUE,
-    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
-);
-
--- Component metadata (extends lot table)
-CREATE TABLE qt_lot_metadata (
-    lot_name CHAR(255) PRIMARY KEY,
-    category_id INT,
-    vendor VARCHAR(50),   -- Parsed from lot_name: CPU_AMD_9654 → "AMD"
-    model VARCHAR(100),   -- Parsed: CPU_AMD_9654 → "9654"
-    specs JSON,
-    current_price DECIMAL(12,2),
-    price_method ENUM('manual', 'median', 'average', 'weighted_median') DEFAULT 'median',
-    price_period_days INT DEFAULT 90,
-    price_updated_at TIMESTAMP,
-    request_count INT DEFAULT 0,
-    last_request_date DATE,
-    popularity_score DECIMAL(10,4),
-    FOREIGN KEY (lot_name) REFERENCES lot(lot_name)
-);
-
--- Categories
-CREATE TABLE qt_categories (
-    id INT AUTO_INCREMENT PRIMARY KEY,
-    code VARCHAR(20) UNIQUE NOT NULL,  -- MB, CPU, MEM, GPU, SSD, HDD, RAID, NIC, HCA, HBA, DPU, PS
-    name VARCHAR(100) NOT NULL,
-    name_ru VARCHAR(100),
-    display_order INT DEFAULT 0,
-    is_required BOOLEAN DEFAULT FALSE
-);
-
--- Saved configurations
-CREATE TABLE qt_configurations (
-    id INT AUTO_INCREMENT PRIMARY KEY,
-    uuid VARCHAR(36) UNIQUE NOT NULL,
-    user_id INT NOT NULL,
-    name VARCHAR(200) NOT NULL,
-    items JSON NOT NULL,  -- [{"lot_name": "CPU_AMD_9654", "quantity": 2, "unit_price": 11500}]
-    total_price DECIMAL(12,2),
-    notes TEXT,
-    is_template BOOLEAN DEFAULT FALSE,
-    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
-    FOREIGN KEY (user_id) REFERENCES qt_users(id)
-);
-
--- Price overrides
-CREATE TABLE qt_price_overrides (
-    id INT AUTO_INCREMENT PRIMARY KEY,
-    lot_name CHAR(255) NOT NULL,
-    price DECIMAL(12,2) NOT NULL,
-    valid_from DATE NOT NULL,
-    valid_until DATE,
-    reason TEXT,
-    created_by INT NOT NULL,
-    FOREIGN KEY (lot_name) REFERENCES lot(lot_name)
-);
-
--- Alerts for pricing admins
-CREATE TABLE qt_pricing_alerts (
-    id INT AUTO_INCREMENT PRIMARY KEY,
-    lot_name CHAR(255) NOT NULL,
-    alert_type ENUM('high_demand_stale_price', 'price_spike', 'price_drop', 'no_recent_quotes', 'trending_no_price') NOT NULL,
-    severity ENUM('low', 'medium', 'high', 'critical') DEFAULT 'medium',
-    message TEXT NOT NULL,
-    details JSON,
-    status ENUM('new', 'acknowledged', 'resolved', 'ignored') DEFAULT 'new',
-    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
-);
-
--- Usage statistics
-CREATE TABLE qt_component_usage_stats (
-    lot_name CHAR(255) PRIMARY KEY,
-    quotes_total INT DEFAULT 0,
-    quotes_last_30d INT DEFAULT 0,
-    quotes_last_7d INT DEFAULT 0,
-    total_quantity INT DEFAULT 0,
-    total_revenue DECIMAL(14,2) DEFAULT 0,
-    trend_direction ENUM('up', 'stable', 'down') DEFAULT 'stable',
-    trend_percent DECIMAL(5,2) DEFAULT 0,
-    last_used_at TIMESTAMP
-);
-```
-
-## Key Business Logic
-
-### 1. Part Number Parsing
-
-Extract category, vendor, model from lot_name:
-```go
-// "CPU_AMD_9654" → category="CPU", vendor="AMD", model="9654"
-// "MB_INTEL_4.Sapphire_2S_32xDDR5" → category="MB", vendor="INTEL", model="4.Sapphire_2S_32xDDR5"
-// "MEM_DDR5_64G_5600" → category="MEM", vendor="DDR5", model="64G_5600"
-// "GPU_NV_RTX_4090_PCIe" → category="GPU", vendor="NV", model="RTX_4090_PCIe"
-
-func ParsePartNumber(lotName string) (category, vendor, model string) {
-    parts := strings.SplitN(lotName, "_", 3)
-    if len(parts) >= 1 { category = parts[0] }
-    if len(parts) >= 2 { vendor = parts[1] }
-    if len(parts) >= 3 { model = parts[2] }
-    return
-}
-```
-
-### 2. Price Calculation Methods
-
-```go
-// Median - simple median of prices in period
-func CalculateMedian(prices []float64) float64
-
-// Average - arithmetic mean
-func CalculateAverage(prices []float64) float64
-
-// Weighted Median - recent prices have higher weight (exponential decay)
-// weight = e^(-days_since_quote / decay_days)
-func CalculateWeightedMedian(prices []PricePoint, decayDays int) float64
-```
+## Overview
+A corporate server configurator with an offline-first architecture.
+The application works through a local SQLite database; synchronization with MariaDB runs in the background.
+
+## Product Scope
+- Component configurator and quote calculation
+- Projects and configurations
+- Read-only pricelist view from the local cache
+- Sync (pull of components/pricelists, push of local changes)
+
+Out of scope:
+- admin pricing UI/API
+- stock import
+- alerts
+- cron/importer utilities
+
+## Architecture
+- Local-first: reads and writes go to SQLite
+- MariaDB serves as the synchronization server
+- Background worker: periodic sync push+pull
+- Configuration revision system: immutable snapshots on every save (local_configuration_versions)
+
+## Guardrails
+- Do not bring removed legacy areas back into the project: cron jobs, importer utility, admin pricing, alerts, stock import.
+- The runtime config is read from user state (`config.yaml`) or via `-config` / `QFS_CONFIG_PATH`; do not keep a working `config.yaml` in the repository.
+- `config.example.yaml` remains the only configuration template in the repo.
+- Any change to sync must preserve local-first behavior: local CRUD must not block when MariaDB is unavailable.
+- CSV export: the file name must contain the **project code** (`project.Code`), not the name (`project.Name`). Format: `YYYY-MM-DD (ProjectCode) ConfigurationName PartNumber.csv`.
+- UI: in all breadcrumbs, truncate long specification/configuration names to 16 characters with an ellipsis.
+
+## Key SQLite Data
+- `connection_settings`
+- `local_components`
+- `local_pricelists`, `local_pricelist_items`
+- `local_configurations`
+- `local_configuration_versions` — immutable snapshots (revisions) on every save
+- `local_projects`
+- `pending_changes`
````
````diff
-### 3. Price Freshness (color coding)
-
-```go
-// Green:  < 30 days AND >= 3 quotes
-// Yellow: 30-60 days OR 1-2 quotes
-// Orange: 60-90 days
-// Red:    > 90 days OR no price
-
-func GetPriceFreshness(daysSinceUpdate int, quoteCount int) string {
-    if daysSinceUpdate < 30 && quoteCount >= 3 {
-        return "fresh" // green
-    } else if daysSinceUpdate < 60 {
-        return "normal" // yellow
-    } else if daysSinceUpdate < 90 {
-        return "stale" // orange
-    }
-    return "critical" // red
-}
-```
-
-### 4. Component Sorting
-
-Sort by: popularity + price freshness. Components without prices go to the bottom.
-
-```go
-// Sort score = popularity_score * 10 + freshness_bonus - no_price_penalty
-// freshness_bonus: fresh=100, normal=50, stale=10, critical=0
-// no_price_penalty: -1000 if current_price is NULL or 0
-```
````
|
|
||||||
````diff
-### 5. Alert Generation
-
-Generate alerts when:
-- **high_demand_stale_price** (CRITICAL): >= 5 quotes/month AND price > 60 days old
-- **trending_no_price** (HIGH): trend_percent > 50% AND no price set
-- **price_spike** (MEDIUM): price increased > 20% from previous period
-- **no_recent_quotes** (MEDIUM): popular component, no supplier quotes > 90 days
-
 ## API Endpoints
-
-### Auth
-```
-POST /api/auth/login    → {"username", "password"} → {"token", "refresh_token"}
-POST /api/auth/logout
-POST /api/auth/refresh
-GET  /api/auth/me       → current user info
-```
-
-### Components
-```
-GET /api/components                         → list with pagination
-GET /api/components?category=CPU&vendor=AMD → filtered
-GET /api/components/:lot_name               → single component details
-GET /api/categories                         → category list
-```
-
-### Quote Builder
-```
-POST /api/quote/validate  → {"items": [...]} → {"valid": bool, "errors": [], "warnings": []}
-POST /api/quote/calculate → {"items": [...]} → {"items": [...], "total": 45000.00}
-```
-
-### Export
-```
-POST /api/export/csv  → {"items": [...], "name": "Config 1"} → CSV file
-POST /api/export/xlsx → {"items": [...], "name": "Config 1"} → XLSX file
-```
-
-### Configurations
-```
-GET    /api/configs              → list user's configurations
-POST   /api/configs              → save new configuration
-GET    /api/configs/:uuid        → get by UUID
-PUT    /api/configs/:uuid        → update
-DELETE /api/configs/:uuid        → delete
-GET    /api/configs/:uuid/export → export as JSON
-POST   /api/configs/import       → import from JSON
-```
-
-### Pricing Admin (requires role: pricing_admin or admin)
-```
-GET  /admin/pricing/stats                → dashboard stats
-GET  /admin/pricing/components           → components with pricing info
-GET  /admin/pricing/components/:lot_name → component pricing details
-POST /admin/pricing/update               → update price method/value
-POST /admin/pricing/recalculate-all      → recalculate all prices
-
-GET  /admin/pricing/alerts                 → list alerts
-POST /admin/pricing/alerts/:id/acknowledge → mark as seen
-POST /admin/pricing/alerts/:id/resolve     → mark as resolved
-POST /admin/pricing/alerts/:id/ignore      → dismiss alert
-```
-
-### htmx Partials
-```
-GET /partials/components?category=CPU&vendor=AMD → HTML fragment
-GET /partials/cart    → cart HTML
-GET /partials/summary → price summary HTML
-```
-
-## User Roles
-
-| Role | Permissions |
-|------|-------------|
-| viewer | View components, create quotes, export |
-| editor | + save/load configurations |
-| pricing_admin | + manage prices, view alerts |
-| admin | + manage users |
-
-## Frontend Guidelines
-
-- **Mobile-first** design
-- Use **htmx** for interactivity (hx-get, hx-post, hx-target, hx-swap)
-- Use **Tailwind CSS** via CDN
-- Minimal custom JavaScript
-- Color scheme for price freshness:
-  - `text-green-600 bg-green-50` - fresh
-  - `text-yellow-600 bg-yellow-50` - normal
-  - `text-orange-600 bg-orange-50` - stale
-  - `text-red-600 bg-red-50` - critical
+| Group | Endpoints |
+|-------|-----------|
+| Setup | `GET /setup`, `POST /setup`, `POST /setup/test`, `GET /setup/status` |
+| Components | `GET /api/components`, `GET /api/components/:lot_name`, `GET /api/categories` |
+| Quote | `POST /api/quote/validate`, `POST /api/quote/calculate`, `POST /api/quote/price-levels` |
+| Pricelists (read-only) | `GET /api/pricelists`, `GET /api/pricelists/latest`, `GET /api/pricelists/:id`, `GET /api/pricelists/:id/items`, `GET /api/pricelists/:id/lots` |
+| Configs | CRUD + refresh/clone/reactivate/rename/project binding + versions/rollback via `/api/configs/*` |
+| Projects | CRUD + nested configs + `DELETE /api/projects/:uuid` (delete variant) via `/api/projects/*` |
+| Sync | `GET /api/sync/status`, `GET /api/sync/readiness`, `GET /api/sync/info`, `GET /api/sync/users-status`, `POST /api/sync/components`, `POST /api/sync/pricelists`, `POST /api/sync/all`, `POST /api/sync/push`, `GET /api/sync/pending`, `GET /api/sync/pending/count` |
+| Export | `POST /api/export/csv` |
+
+## Web Routes
+- `/configs`
+- `/configurator`
+- `/configs/:uuid/revisions`
+- `/projects`
+- `/projects/:uuid`
+- `/pricelists`
+- `/pricelists/:id`
+- `/setup`
+
+## Release Notes & Change Log
+Release notes are maintained in the `releases/memory/` directory, organized by version tags (e.g., `v1.2.1.md`).
+Before working on the codebase, review the most recent release notes to understand recent changes.
+- Check `releases/memory/` for the detailed changelog between tags
+- Each release file documents commits, breaking changes, and migration notes
 
 ## Commands
 
 ```bash
-# Run development server
-go run ./cmd/server
+# Development
+go run ./cmd/qfs
+make run
 
-# Run price updater (cron job)
-go run ./cmd/priceupdater
+# Build
+make build-release
+CGO_ENABLED=0 go build -o bin/qfs ./cmd/qfs
 
-# Run importer (one-time setup)
-go run ./cmd/importer
-
-# Build for production
-CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/quoteforge ./cmd/server
-
-# Run tests
-go test ./...
+# Verification
+go build ./cmd/qfs
+go vet ./...
 ```
 
-## Dependencies (go.mod)
-
-```go
-module github.com/mchus/quoteforge
-
-go 1.22
-
-require (
-    github.com/gin-gonic/gin v1.9.1
-    github.com/go-sql-driver/mysql v1.7.1
-    gorm.io/gorm v1.25.5
-    gorm.io/driver/mysql v1.5.2
-    github.com/xuri/excelize/v2 v2.8.0
-    github.com/golang-jwt/jwt/v5 v5.2.0
-    github.com/google/uuid v1.5.0
-    golang.org/x/crypto v0.17.0
-    gopkg.in/yaml.v3 v3.0.1
-)
-```
-
-## Development Priorities
-
-1. **Phase 1 (MVP):** Project setup, models, component API, basic UI, CSV export
-2. **Phase 2:** JWT auth with roles, pricing admin UI, all price methods
-3. **Phase 3:** Save/load configs, JSON import/export, XLSX export, cron jobs
-4. **Phase 4:** Usage stats, alerts system, dashboard
-5. **Phase 5:** Polish, tests, Docker, documentation
-
 ## Code Style
-- Use standard Go formatting (gofmt)
-- Error handling: always check errors, wrap with context
-- Logging: use structured logging (slog or zerolog)
-- Comments: in Russian or English, be consistent
-- File naming: snake_case for files, PascalCase for types
+- gofmt
+- structured logging (`slog`)
+- explicit error wrapping with context
````
## Makefile (new file, +104)

```makefile
.PHONY: build build-release clean test run version install-hooks

# Get version from git
VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev")
BUILD_TIME := $(shell date -u '+%Y-%m-%d_%H:%M:%S')
LDFLAGS := -s -w -X main.Version=$(VERSION)

# Binary name
BINARY := qfs

# Build for development (with debug info)
build:
	go build -o bin/$(BINARY) ./cmd/qfs

# Build for release (optimized, with version)
build-release:
	@echo "Building $(BINARY) version $(VERSION)..."
	CGO_ENABLED=0 go build -ldflags="$(LDFLAGS)" -o bin/$(BINARY) ./cmd/qfs
	@echo "✓ Built: bin/$(BINARY)"
	@./bin/$(BINARY) -version

# Build release for Linux (cross-compile)
build-linux:
	@echo "Building $(BINARY) for Linux..."
	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o bin/$(BINARY)-linux-amd64 ./cmd/qfs
	@echo "✓ Built: bin/$(BINARY)-linux-amd64"

# Build release for macOS (cross-compile)
build-macos:
	@echo "Building $(BINARY) for macOS..."
	CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o bin/$(BINARY)-darwin-amd64 ./cmd/qfs
	CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build -ldflags="$(LDFLAGS)" -o bin/$(BINARY)-darwin-arm64 ./cmd/qfs
	@echo "✓ Built: bin/$(BINARY)-darwin-amd64"
	@echo "✓ Built: bin/$(BINARY)-darwin-arm64"

# Build release for Windows (cross-compile)
build-windows:
	@echo "Building $(BINARY) for Windows..."
	CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -ldflags="$(LDFLAGS)" -o bin/$(BINARY)-windows-amd64.exe ./cmd/qfs
	@echo "✓ Built: bin/$(BINARY)-windows-amd64.exe"

# Build all platforms
build-all: build-release build-linux build-macos build-windows

# Create release packages for all platforms
release:
	@./scripts/release.sh

# Show version
version:
	@echo "Version: $(VERSION)"

# Clean build artifacts
clean:
	rm -rf bin/
	rm -f $(BINARY)

# Run tests
test:
	go test -v ./...

# Run development server
run:
	go run ./cmd/qfs

# Run with auto-restart (requires entr: brew install entr)
watch:
	find . -name '*.go' | entr -r go run ./cmd/qfs

# Install dependencies
deps:
	go mod download
	go mod tidy

# Install local git hooks
install-hooks:
	git config core.hooksPath .githooks
	chmod +x .githooks/pre-commit scripts/check-secrets.sh
	@echo "Installed git hooks from .githooks/"

# Help
help:
	@echo "QuoteForge Server (qfs) - Build Commands"
	@echo ""
	@echo "Usage: make [target]"
	@echo ""
	@echo "Targets:"
	@echo "  build           Build for development (with debug info)"
	@echo "  build-release   Build optimized release (default)"
	@echo "  build-linux     Cross-compile for Linux"
	@echo "  build-macos     Cross-compile for macOS (Intel + Apple Silicon)"
	@echo "  build-windows   Cross-compile for Windows"
	@echo "  build-all       Build for all platforms"
	@echo "  release         Create release packages for all platforms"
	@echo "  version         Show current version"
	@echo "  clean           Remove build artifacts"
	@echo "  test            Run tests"
	@echo "  run             Run development server"
	@echo "  watch           Run with auto-restart (requires entr)"
	@echo "  deps            Install/update dependencies"
	@echo "  install-hooks   Install local git hooks (secret scan on commit)"
	@echo "  help            Show this help"
	@echo ""
	@echo "Current version: $(VERSION)"
```
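The `-X main.Version=$(VERSION)` link flag only takes effect if the target package declares a matching package-level string variable (a `const` cannot be overwritten). A minimal sketch of the assumed Go side in `cmd/qfs`; the `-version` flag name matches the Makefile's `./bin/$(BINARY) -version` call, but the rest is illustrative:

```go
package main

import (
	"flag"
	"fmt"
)

// Version is stamped at build time via:
//   go build -ldflags "-X main.Version=v1.3.2" ./cmd/qfs
// It must be a package-level var of type string for -X to rewrite it;
// without the flag it falls back to "dev".
var Version = "dev"

func main() {
	showVersion := flag.Bool("version", false, "print version and exit")
	flag.Parse()
	if *showVersion {
		fmt.Println(Version)
		return
	}
	// ... start the server (omitted in this sketch) ...
}
```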
336
README.md
336
README.md
@@ -2,7 +2,8 @@
|
|||||||
|
|
||||||
**Server Configuration & Quotation Tool**
|
**Server Configuration & Quotation Tool**
|
||||||
|
|
||||||
QuoteForge — корпоративный инструмент для конфигурирования серверов и формирования коммерческих предложений. Позволяет быстро собрать спецификацию сервера из каталога компонентов с автоматическим расчётом цен.
|
QuoteForge — корпоративный инструмент для конфигурирования серверов и формирования коммерческих предложений (КП).
|
||||||
|
Приложение работает в strict local-first режиме: пользовательские операции выполняются через локальную SQLite, MariaDB используется для синхронизации и серверного администрирования прайслистов.
|
||||||
|
|
||||||

|

|
||||||

|

|
||||||
@@ -16,7 +17,8 @@ QuoteForge — корпоративный инструмент для конфи
|
|||||||
- 💰 **Автоматический расчёт цен** — актуальные цены на основе истории закупок
|
- 💰 **Автоматический расчёт цен** — актуальные цены на основе истории закупок
|
||||||
- 📊 **Экспорт в CSV/XLSX** — готовые спецификации для клиентов
|
- 📊 **Экспорт в CSV/XLSX** — готовые спецификации для клиентов
|
||||||
- 💾 **Сохранение конфигураций** — история и шаблоны для повторного использования
|
- 💾 **Сохранение конфигураций** — история и шаблоны для повторного использования
|
||||||
- 📤 **Импорт/экспорт JSON** — обмен конфигурациями между пользователями
|
- 🔌 **Полная офлайн-работа** — можно продолжать работу без сети и синхронизировать позже
|
||||||
|
- 🛡️ **Защищенная синхронизация** — sync блокируется preflight-проверкой, если локальная схема не готова
|
||||||
|
|
||||||
### Для ценовых администраторов
|
### Для ценовых администраторов
|
||||||
- 📈 **Умный расчёт цен** — медиана, взвешенная медиана, среднее
|
- 📈 **Умный расчёт цен** — медиана, взвешенная медиана, среднее
|
||||||
@@ -36,7 +38,7 @@ QuoteForge — корпоративный инструмент для конфи
|
|||||||
|
|
||||||
- **Backend:** Go 1.22+, Gin, GORM
|
- **Backend:** Go 1.22+, Gin, GORM
|
||||||
- **Frontend:** HTML, Tailwind CSS, htmx
|
- **Frontend:** HTML, Tailwind CSS, htmx
|
||||||
- **Database:** MariaDB 11+
|
- **Database:** SQLite (runtime/local-first), MariaDB 11+ (sync + server admin)
|
||||||
- **Export:** excelize (XLSX), encoding/csv
|
- **Export:** excelize (XLSX), encoding/csv
|
||||||
|
|
||||||
## Требования
|
## Требования
|
||||||
@@ -54,13 +56,13 @@ git clone https://github.com/your-company/quoteforge.git
|
|||||||
cd quoteforge
|
cd quoteforge
|
||||||
```
|
```
|
||||||
|
|
||||||
### 2. Настройка конфигурации
|
### 2. Настройка runtime-конфига (опционально)
|
||||||
|
|
||||||
```bash
|
`config.yaml` создаётся автоматически при первом старте в той же user-state папке, где находится `qfs.db`.
|
||||||
cp config.example.yaml config.yaml
|
Если найден старый формат, приложение автоматически мигрирует файл в актуальный runtime-формат
|
||||||
```
|
(оставляя только используемые секции `server` и `logging`).
|
||||||
|
|
||||||
Отредактируйте `config.yaml`:
|
При необходимости можно создать/отредактировать файл вручную:
|
||||||
|
|
||||||
```yaml
|
```yaml
|
||||||
server:
|
server:
|
||||||
@@ -68,43 +70,254 @@ server:
|
|||||||
port: 8080
|
port: 8080
|
||||||
mode: "release"
|
mode: "release"
|
||||||
|
|
||||||
database:
|
logging:
|
||||||
host: "localhost"
|
level: "info"
|
||||||
port: 3306
|
format: "json"
|
||||||
name: "RFQ_LOG"
|
output: "stdout"
|
||||||
user: "quoteforge"
|
|
||||||
password: "your-secure-password"
|
|
||||||
|
|
||||||
auth:
|
|
||||||
jwt_secret: "your-jwt-secret-min-32-chars"
|
|
||||||
token_expiry: "24h"
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### 3. Миграции базы данных
|
### 3. Миграции базы данных
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
make migrate
|
go run ./cmd/qfs -migrate
|
||||||
```
|
```
|
||||||
|
|
||||||

### OPS -> projects migrator (preview/apply)

Moves quotes whose names start with `OPS-xxxx` (where each `x` is a digit) into the project `OPS-xxxx`.
If the project does not exist, it is created; if it is archived, it is reactivated.

Always check the preview first:

```bash
go run ./cmd/migrate_ops_projects
```

Applying the changes:

```bash
go run ./cmd/migrate_ops_projects -apply
```

Without interactive confirmation:

```bash
go run ./cmd/migrate_ops_projects -apply -yes
```
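The name-matching rule can be sketched in isolation; the `^(OPS-[0-9]{4})` pattern mirrors `codeRegex` in `cmd/migrate_ops_projects/main.go`, while the `extractOPSCode` helper name is illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// codeRegex matches quote names that start with "OPS-" followed by four digits.
var codeRegex = regexp.MustCompile(`^(OPS-[0-9]{4})`)

// extractOPSCode returns the target project code for a quote name,
// or "" when the name does not qualify for migration.
func extractOPSCode(name string) string {
	m := codeRegex.FindStringSubmatch(strings.TrimSpace(name))
	if len(m) < 2 {
		return ""
	}
	return m[1]
}

func main() {
	fmt.Println(extractOPSCode("OPS-1234 storage quote")) // OPS-1234
	fmt.Println(extractOPSCode("misc quote") == "")       // true
}
```

Note that the prefix must be at the very start of the (trimmed) name; `"quote OPS-1234"` does not match.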

### Database privileges for the application user

#### Full privilege set for a regular user

To grant an existing user all required privileges (without recreating the account):

```sql
-- Reference tables (read-only)
GRANT SELECT ON RFQ_LOG.lot TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO '<DB_USER>'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO '<DB_USER>'@'%';

-- Configuration and project tables (read/write)
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO '<DB_USER>'@'%';

-- Sync tables (read-only for migrations, read/write for status)
GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO '<DB_USER>'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO '<DB_USER>'@'%';

-- Apply the changes
FLUSH PRIVILEGES;

-- Verify the granted privileges
SHOW GRANTS FOR '<DB_USER>'@'%';
```
#### Tables and their purpose

| Table | Purpose | Privileges | Notes |
|-------|---------|------------|-------|
| `lot` | Component reference catalog | SELECT | Pre-existing table |
| `qt_lot_metadata` | Extended component data | SELECT | Component metadata |
| `qt_categories` | Component categories | SELECT | Reference data |
| `qt_pricelists` | Price lists | SELECT | Managed by the server |
| `qt_pricelist_items` | Price list items | SELECT | Managed by the server |
| `qt_configurations` | Saved configurations | SELECT, INSERT, UPDATE | Main working table |
| `qt_projects` | Projects | SELECT, INSERT, UPDATE | For grouping configurations |
| `qt_client_local_migrations` | DB migration catalog | SELECT | Read-only (managed by the admin) |
| `qt_client_schema_state` | Local schema state | SELECT, INSERT, UPDATE | Tracks applied migrations |
| `qt_pricelist_sync_status` | Sync status | SELECT, INSERT, UPDATE | Tracks sync activity |

#### Creating a new user

If you need to create a new user from scratch:

```sql
-- 1) Create the user
CREATE USER IF NOT EXISTS 'quote_user'@'%' IDENTIFIED BY '<DB_PASSWORD>';

-- 2) Grant all required privileges
GRANT SELECT ON RFQ_LOG.lot TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_lot_metadata TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_categories TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelists TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_pricelist_items TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_configurations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_projects TO 'quote_user'@'%';
GRANT SELECT ON RFQ_LOG.qt_client_local_migrations TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_client_schema_state TO 'quote_user'@'%';
GRANT SELECT, INSERT, UPDATE ON RFQ_LOG.qt_pricelist_sync_status TO 'quote_user'@'%';

-- 3) Apply the changes
FLUSH PRIVILEGES;

-- 4) Verify the privileges
SHOW GRANTS FOR 'quote_user'@'%';
```

#### Important notes

- **Sync tables** must be created once by the DB administrator. The application does not need CREATE TABLE privileges.
- **Price lists** (`qt_pricelists`, `qt_pricelist_items`) are reference tables managed by the server; the user only has SELECT on them.
- **Configurations and projects** are the tables the application itself writes to (INSERT, UPDATE when saving changes).
- **Migration tables** are needed for sync: the application reads the list of migrations and reports which ones it has applied.
- If you see `Access denied for user ...@'<ip>'`, check for conflicting user entries with different hosts (`user@localhost` vs `user@'%'`).
### 4. Importing component metadata

```bash
go run ./cmd/importer
```
### 5. Running the application

```bash
# Development
go run ./cmd/qfs

# Production (with Makefile - recommended)
make build-release   # Builds with version info
./bin/qfs -version   # Check version

# Production (manual)
VERSION=$(git describe --tags --always --dirty)
CGO_ENABLED=0 go build -ldflags="-s -w -X main.Version=$VERSION" -o bin/qfs ./cmd/qfs
./bin/qfs -version
```
**Makefile commands:**

```bash
make build-release   # Optimized build with version info
make build-all       # Build for all platforms (Linux, macOS, Windows)
make build-windows   # Windows only
make run             # Run the dev server
make test            # Run tests
make install-hooks   # Install git hooks (block commits that contain secrets)
make clean           # Clean bin/
make help            # Show all commands
```
The application will be available at: http://localhost:8080

### Local SQLite database (state)

The application's local database is stored in the user profile and does not depend on where the binary is located.
File name: `qfs.db`.

- macOS: `~/Library/Application Support/QuoteForge/qfs.db`
- Linux: `$XDG_STATE_HOME/quoteforge/qfs.db` (or `~/.local/state/quoteforge/qfs.db`)
- Windows: `%LOCALAPPDATA%\\QuoteForge\\qfs.db`

The path can be overridden with the `-localdb` flag or the `QFS_DB_PATH` environment variable.

#### Sync readiness guard

Before `push/pull`, a preflight check runs:

- is the server (MariaDB) reachable;
- can centralized migrations of the local DB be checked and applied;
- does the application version satisfy the migrations' `min_app_version`.

If the check fails:

- local work (CRUD) continues;
- the sync API returns `423 Locked` with a `reason_code` and `reason_text`;
- the UI shows a red indicator and the blocking reason in the sync modal.

#### Sync data flow diagram

```text
              [ SERVER / MariaDB ]
          ┌───────────────────────────┐
          │ qt_projects               │
          │ qt_configurations         │
          │ qt_pricelists             │
          │ qt_pricelist_items        │
          │ qt_pricelist_sync_status  │
          └─────────────┬─────────────┘
                        │
        pull (projects/configs/pricelists)
                        │
     ┌──────────────────┴──────────────────┐
     │                                     │
[ CLIENT A / local SQLite ]        [ CLIENT B / local SQLite ]
┌───────────────────────────────┐  ┌───────────────────────────────┐
│ local_projects                │  │ local_projects                │
│ local_configurations          │  │ local_configurations          │
│ local_pricelists              │  │ local_pricelists              │
│ local_pricelist_items         │  │ local_pricelist_items         │
│ pending_changes (proj/config) │  │ pending_changes (proj/config) │
└───────────────┬───────────────┘  └───────────────┬───────────────┘
                │                                  │
  push (projects/configurations only)   push (projects/configurations only)
                │                                  │
                └─────────────────┬────────────────┘
                                  │
                       [ SERVER / MariaDB ]
```

Per entity:

- Configurations: `Client <-> Server <-> Other Clients`
- Projects: `Client <-> Server <-> Other Clients`
- Price lists: `Server -> Clients only` (there is no local push)
- Local price list cleanup on the client: records are removed when they no longer exist on the server and are not used by any active local configuration

### Configuration versioning (local-first)

`local_configurations` uses append-only versioning with full snapshot versions:

- table: `local_configuration_versions`
- every change creates a new version (`version_no = max + 1`)
- `local_configurations.current_version_id` points to the active version
- old versions are never modified or deleted in the normal flow
- rollback does not "rewind" history; it creates a new version from the selected snapshot

During backfill (migration `006_add_local_configuration_versions.sql`), a `v1` is created for existing configurations and `current_version_id` is set.
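The append-only scheme can be sketched with a toy in-memory model. The types below (`version`, `config`, `save`, `rollback`) are illustrative only; the real data lives in SQLite, but the invariants are the same: history is never mutated, `version_no = max + 1`, and rollback appends rather than rewinds:

```go
package main

import "fmt"

// version is a full snapshot of a configuration at one point in time.
type version struct {
	No   int
	Data string // snapshot payload (JSON in the real schema)
	Note string
}

// config mirrors local_configurations.current_version_id:
// history is append-only and the pointer selects the active version.
type config struct {
	History []version // never mutated in place
	Current int       // index into History
}

// save appends a new snapshot with version_no = max + 1 and moves the pointer.
func (c *config) save(data, note string) {
	c.History = append(c.History, version{No: len(c.History) + 1, Data: data, Note: note})
	c.Current = len(c.History) - 1
}

// rollback copies the target snapshot into a brand-new version
// and points Current at it; nothing is deleted.
func (c *config) rollback(targetNo int) {
	target := c.History[targetNo-1]
	c.save(target.Data, fmt.Sprintf("rollback to v%d", targetNo))
}

func main() {
	var c config
	c.save(`{"items":1}`, "initial")
	c.save(`{"items":2}`, "edit")
	c.rollback(1)
	fmt.Println(len(c.History), c.History[c.Current].No, c.History[c.Current].Data)
	// 3 3 {"items":1}
}
```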

#### Rollback

Rollback is performed via an API call:

```bash
POST /api/configs/:uuid/rollback
{
  "target_version": 3,
  "note": "optional"
}
```

Result:

- a new version `vN` is created, with `data` taken from the target version
- `change_note = "rollback to v{target_version}"` (plus `note`, if provided)
- `current_version_id` switches to the new version
- the configuration moves to `sync_status = pending`

### Local config.yaml

By default, `qfs` looks for `config.yaml` in the same user-state directory as `qfs.db` (not next to the binary).
If the file is missing, it is created automatically. If the format is outdated, it is automatically migrated to the runtime format (`server` + `logging`).
The path can be overridden with `-config` or `QFS_CONFIG_PATH`.

## Docker

```bash
docker-compose up -d
```
```
quoteforge/
├── cmd/
│   ├── server/main.go       # Main HTTP server
│   └── importer/main.go     # Imports metadata from the lot table
├── internal/
│   ├── config/              # Configuration
│   ├── models/              # GORM models
│   ├── templates/           # HTML templates
│   └── static/              # CSS, JS, images
├── migrations/              # SQL migrations
├── config.example.yaml      # Example configuration
├── releases/
│   └── memory/              # Changelog between tags (v1.2.1.md, v1.2.2.md, ...)
└── go.mod
```

## Releases & Changelog

The changelog between versions lives in the `releases/memory/` directory, in files named `v{major}.{minor}.{patch}.md`.

Each file contains:

- the list of commits between the versions
- a description of the changes and their impact
- breaking changes and migration notes

**Before working on the code, check the latest file in this directory to understand the current state of the project.**

## User roles

| Role | Description |
```
GET  /api/components                      # Component list
POST /api/quote/calculate                 # Price calculation
POST /api/export/xlsx                     # Export to Excel
GET  /api/configs                         # Saved configurations
GET  /api/configs/:uuid/versions          # List configuration versions
GET  /api/configs/:uuid/versions/:version # Get a specific version
POST /api/configs/:uuid/rollback          # Roll back to the given version
POST /api/configs/:uuid/reactivate        # Return an archived configuration to the active set
GET  /api/sync/readiness                  # Readiness guard status (ready|blocked|unknown)
GET  /api/sync/status                     # Aggregate sync status
GET  /api/sync/info                       # Data for the sync modal
POST /api/sync/push                       # Push pending changes (423 if blocked)
POST /api/sync/all                        # Full sync push+pull (423 if blocked)
POST /api/sync/components                 # Pull components (423 if blocked)
POST /api/sync/pricelists                 # Pull pricelists (423 if blocked)
```

### Quick sync API map

| Endpoint | Purpose | Flow |
|----------|---------|------|
| `POST /api/sync/push` | Push local pending changes | `SQLite -> MariaDB` |
| `POST /api/sync/components` | Pull the component catalog | `MariaDB -> SQLite` |
| `POST /api/sync/pricelists` | Pull price lists and their items | `MariaDB -> SQLite` |
| `POST /api/sync/all` | Full cycle: push + pull + import of projects/configurations | `bidirectional` |
| `GET /api/sync/readiness` | Preflight/readiness status | `read-only` |
| `GET /api/sync/status` / `GET /api/sync/info` | Status summary and sync modal data | `read-only` |

#### Sync payload for versioning

Configuration events in `pending_changes` contain:

- `configuration_uuid`
- `operation` (`create` / `update` / `rollback`)
- `current_version_id` and `current_version_no`
- `snapshot` (the current state of the configuration)
- `idempotency_key` and `conflict_policy` (`last_write_wins`)

This lets the push layer send the current state to the server and lays the groundwork for future conflict resolution.

## Development

```bash
# Run in development mode (hot reload)
go run ./cmd/qfs

# Run tests
go test ./...

# Build for Linux
CGO_ENABLED=0 go build -ldflags="-s -w" -o bin/qfs ./cmd/qfs
```

## Environment variables

| Variable | Description | Default |
|----------|-------------|---------|
| `QF_DB_PASSWORD` | Database password | — |
| `QF_JWT_SECRET` | JWT secret | — |
| `QF_SERVER_PORT` | Server port | 8080 |
| `QFS_DB_PATH` | Full path to the local SQLite DB | OS-specific user state dir |
| `QFS_STATE_DIR` | State directory (when `QFS_DB_PATH` is not set) | OS-specific user state dir |
| `QFS_CONFIG_PATH` | Full path to `config.yaml` | OS-specific user state dir |
| `QFS_BACKUP_DIR` | Directory for rotating backups of local data | `<db dir>/backups` |
| `QFS_BACKUP_DISABLE` | Disable automatic backups (`1/true/yes`) | — |

## Integration with an existing database

### assets_embed.go (new file, 21 lines)

```go
package quoteforge

import (
	"embed"
	"io/fs"
)

// TemplatesFS contains HTML templates embedded into the binary.
//
//go:embed web/templates/*.html web/templates/partials/*.html
var TemplatesFS embed.FS

// StaticFiles contains static assets (CSS, JS, etc.) embedded into the binary.
//
//go:embed web/static/*
var StaticFiles embed.FS

// StaticFS returns a filesystem rooted at web/static for serving static assets.
func StaticFS() (fs.FS, error) {
	return fs.Sub(StaticFiles, "web/static")
}
```

### cmd/migrate/main.go (new file, 164 lines)

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/appstate"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

func main() {
	defaultLocalDBPath, err := appstate.ResolveDBPath("")
	if err != nil {
		log.Fatalf("Failed to resolve default local SQLite path: %v", err)
	}
	localDBPath := flag.String("localdb", defaultLocalDBPath, "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
	dryRun := flag.Bool("dry-run", false, "show what would be migrated without actually doing it")
	flag.Parse()

	log.Println("QuoteForge Configuration Migration Tool")
	log.Println("========================================")

	// Initialize local SQLite
	log.Printf("Opening local SQLite at %s...", *localDBPath)
	local, err := localdb.New(*localDBPath)
	if err != nil {
		log.Fatalf("Failed to initialize local database: %v", err)
	}
	log.Println("Local SQLite initialized")
	if !local.HasSettings() {
		log.Fatalf("SQLite connection settings are not configured. Run qfs setup first.")
	}

	settings, err := local.GetSettings()
	if err != nil {
		log.Fatalf("Failed to load SQLite connection settings: %v", err)
	}
	dsn, err := local.GetDSN()
	if err != nil {
		log.Fatalf("Failed to build DSN from SQLite settings: %v", err)
	}

	// Connect to MariaDB
	log.Printf("Connecting to MariaDB at %s:%d...", settings.Host, settings.Port)
	mariaDB, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		log.Fatalf("Failed to connect to MariaDB: %v", err)
	}
	log.Println("Connected to MariaDB")

	// Count configurations in MariaDB
	var serverCount int64
	if err := mariaDB.Model(&models.Configuration{}).Count(&serverCount).Error; err != nil {
		log.Fatalf("Failed to count configurations: %v", err)
	}
	log.Printf("Found %d configurations in MariaDB", serverCount)

	if serverCount == 0 {
		log.Println("No configurations to migrate")
		return
	}

	// Get all configurations from MariaDB
	var configs []models.Configuration
	if err := mariaDB.Find(&configs).Error; err != nil {
		log.Fatalf("Failed to fetch configurations: %v", err)
	}

	// Check existing local configurations
	localCount := local.CountConfigurations()
	log.Printf("Found %d configurations in local SQLite", localCount)

	if *dryRun {
		log.Println("\n[DRY RUN] Would migrate the following configurations:")
		for _, c := range configs {
			userName := c.OwnerUsername
			if userName == "" {
				userName = "unknown"
			}
			log.Printf(" - %s (UUID: %s, User: %s, Items: %d)", c.Name, c.UUID, userName, len(c.Items))
		}
		log.Printf("\nTotal: %d configurations", len(configs))
		return
	}

	// Migrate configurations
	log.Println("\nMigrating configurations...")
	migrated := 0
	skipped := 0
	errors := 0

	for _, c := range configs {
		// Check if already exists
		existing, err := local.GetConfigurationByUUID(c.UUID)
		if err == nil && existing.ID > 0 {
			log.Printf(" SKIP: %s (already exists)", c.Name)
			skipped++
			continue
		}

		// Convert items
		localItems := make(localdb.LocalConfigItems, len(c.Items))
		for i, item := range c.Items {
			localItems[i] = localdb.LocalConfigItem{
				LotName:   item.LotName,
				Quantity:  item.Quantity,
				UnitPrice: item.UnitPrice,
			}
		}

		// Create local configuration
		now := time.Now()
		localConfig := &localdb.LocalConfiguration{
			UUID:             c.UUID,
			ServerID:         &c.ID,
			ProjectUUID:      c.ProjectUUID,
			Name:             c.Name,
			Items:            localItems,
			TotalPrice:       c.TotalPrice,
			CustomPrice:      c.CustomPrice,
			Notes:            c.Notes,
			IsTemplate:       c.IsTemplate,
			ServerCount:      c.ServerCount,
			CreatedAt:        c.CreatedAt,
			UpdatedAt:        now,
			SyncedAt:         &now,
			SyncStatus:       "synced",
			OriginalUserID:   derefUint(c.UserID),
			OriginalUsername: c.OwnerUsername,
		}

		if err := local.SaveConfiguration(localConfig); err != nil {
			log.Printf(" ERROR: %s - %v", c.Name, err)
			errors++
			continue
		}

		log.Printf(" OK: %s (%d items)", c.Name, len(c.Items))
		migrated++
	}

	log.Println("\n========================================")
	log.Printf("Migration complete!")
	log.Printf(" Migrated: %d", migrated)
	log.Printf(" Skipped: %d", skipped)
	log.Printf(" Errors: %d", errors)

	fmt.Println("\nDone! You can now run the server with: go run ./cmd/qfs")
}

func derefUint(v *uint) uint {
	if v == nil {
		return 0
	}
	return *v
}
```
308
cmd/migrate_ops_projects/main.go
Normal file
308
cmd/migrate_ops_projects/main.go
Normal file
@@ -0,0 +1,308 @@
|
|||||||
|
package main
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bufio"
|
||||||
|
"flag"
|
||||||
|
"fmt"
|
||||||
|
"log"
|
||||||
|
"os"
|
||||||
|
"regexp"
|
||||||
|
"sort"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/appstate"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/models"
|
||||||
|
"github.com/google/uuid"
|
||||||
|
"gorm.io/driver/mysql"
|
||||||
|
"gorm.io/gorm"
|
||||||
|
"gorm.io/gorm/logger"
|
||||||
|
)
|
||||||
|
|
||||||
|
type configRow struct {
|
||||||
|
ID uint
|
||||||
|
UUID string
|
||||||
|
OwnerUsername string
|
||||||
|
Name string
|
||||||
|
ProjectUUID *string
|
||||||
|
}
|
||||||
|
|
||||||
|
type migrationAction struct {
|
||||||
|
ConfigID uint
|
||||||
|
ConfigUUID string
|
||||||
|
ConfigName string
|
||||||
|
OwnerUsername string
|
||||||
|
TargetProjectName string
|
||||||
|
CurrentProject string
|
||||||
|
NeedCreateProject bool
|
||||||
|
NeedReactivate bool
|
||||||
|
}
|
||||||
|
|
||||||
|
func main() {
|
||||||
|
defaultLocalDBPath, err := appstate.ResolveDBPath("")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("failed to resolve default local SQLite path: %v", err)
|
||||||
|
}
|
||||||
|
localDBPath := flag.String("localdb", defaultLocalDBPath, "path to local SQLite database (default: user state dir or QFS_DB_PATH)")
|
||||||
|
apply := flag.Bool("apply", false, "apply migration (default is preview only)")
|
||||||
|
yes := flag.Bool("yes", false, "skip interactive confirmation (works only with -apply)")
|
||||||
|
flag.Parse()
|
||||||
|
|
||||||
|
local, err := localdb.New(*localDBPath)
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("failed to initialize local database: %v", err)
|
||||||
|
}
|
||||||
|
if !local.HasSettings() {
|
||||||
|
log.Fatalf("SQLite connection settings are not configured. Run qfs setup first.")
|
||||||
|
}
|
||||||
|
dsn, err := local.GetDSN()
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("failed to build DSN from SQLite settings: %v", err)
|
||||||
|
}
|
||||||
|
dbUser := strings.TrimSpace(local.GetDBUser())
|
||||||
|
|
||||||
|
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
|
||||||
|
Logger: logger.Default.LogMode(logger.Silent),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("failed to connect database: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := ensureProjectsTable(db); err != nil {
|
||||||
|
log.Fatalf("precheck failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
actions, existingProjects, err := buildPlan(db, dbUser)
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("failed to build migration plan: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
printPlan(actions)
|
||||||
|
if len(actions) == 0 {
|
||||||
|
fmt.Println("Nothing to migrate.")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if !*apply {
|
||||||
|
fmt.Println("\nPreview complete. Re-run with -apply to execute.")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if !*yes {
|
||||||
|
ok, confirmErr := askForConfirmation()
|
||||||
|
if confirmErr != nil {
|
||||||
|
log.Fatalf("confirmation failed: %v", confirmErr)
|
||||||
|
}
|
||||||
|
if !ok {
|
||||||
|
fmt.Println("Aborted.")
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := executePlan(db, actions, existingProjects); err != nil {
|
||||||
|
log.Fatalf("migration failed: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Println("Migration completed successfully.")
|
||||||
|
}
|
||||||
|
|
||||||
|
func ensureProjectsTable(db *gorm.DB) error {
|
||||||
|
var count int64
|
||||||
|
if err := db.Raw("SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = DATABASE() AND table_name = 'qt_projects'").Scan(&count).Error; err != nil {
|
||||||
|
return fmt.Errorf("checking qt_projects table: %w", err)
|
||||||
|
}
|
||||||
|
if count == 0 {
|
||||||
|
return fmt.Errorf("table qt_projects does not exist; run migration 009_add_projects.sql first")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildPlan(db *gorm.DB, fallbackOwner string) ([]migrationAction, map[string]*models.Project, error) {
|
||||||
|
var configs []configRow
|
||||||
|
if err := db.Table("qt_configurations").
|
||||||
|
Select("id, uuid, owner_username, name, project_uuid").
|
||||||
|
Find(&configs).Error; err != nil {
|
||||||
|
return nil, nil, fmt.Errorf("load configurations: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
codeRegex := regexp.MustCompile(`^(OPS-[0-9]{4})`)
|
||||||
|
owners := make(map[string]struct{})
|
||||||
|
projectNames := make(map[string]struct{})
|
||||||
|
type candidate struct {
|
||||||
|
config configRow
|
||||||
|
code string
|
||||||
|
owner string
|
||||||
|
}
|
||||||
|
candidates := make([]candidate, 0)
|
||||||
|
|
||||||
|
for _, cfg := range configs {
|
||||||
|
match := codeRegex.FindStringSubmatch(strings.TrimSpace(cfg.Name))
|
||||||
|
if len(match) < 2 {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
owner := strings.TrimSpace(cfg.OwnerUsername)
|
||||||
|
if owner == "" {
|
||||||
|
owner = strings.TrimSpace(fallbackOwner)
|
||||||
|
}
|
||||||
|
if owner == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
code := match[1]
|
||||||
|
owners[owner] = struct{}{}
|
||||||
|
projectNames[code] = struct{}{}
|
||||||
|
candidates = append(candidates, candidate{config: cfg, code: code, owner: owner})
|
||||||
|
}
|
||||||
|
|
||||||
|
ownerList := setKeys(owners)
|
||||||
|
nameList := setKeys(projectNames)
|
||||||
|
existingProjects := make(map[string]*models.Project)
|
||||||
|
if len(ownerList) > 0 && len(nameList) > 0 {
|
||||||
|
var projects []models.Project
|
||||||
|
		if err := db.Where("owner_username IN ? AND name IN ?", ownerList, nameList).Find(&projects).Error; err != nil {
			return nil, nil, fmt.Errorf("load existing projects: %w", err)
		}
		for i := range projects {
			p := projects[i]
			existingProjects[projectKey(p.OwnerUsername, derefString(p.Name))] = &p
		}
	}

	actions := make([]migrationAction, 0)
	for _, c := range candidates {
		key := projectKey(c.owner, c.code)
		existing := existingProjects[key]

		currentProject := ""
		if c.config.ProjectUUID != nil {
			currentProject = *c.config.ProjectUUID
		}

		if existing != nil && currentProject == existing.UUID {
			continue
		}

		action := migrationAction{
			ConfigID:          c.config.ID,
			ConfigUUID:        c.config.UUID,
			ConfigName:        c.config.Name,
			OwnerUsername:     c.owner,
			TargetProjectName: c.code,
			CurrentProject:    currentProject,
		}
		if existing == nil {
			action.NeedCreateProject = true
		} else if !existing.IsActive {
			action.NeedReactivate = true
		}
		actions = append(actions, action)
	}

	return actions, existingProjects, nil
}

func printPlan(actions []migrationAction) {
	createCount := 0
	reactivateCount := 0
	for _, a := range actions {
		if a.NeedCreateProject {
			createCount++
		}
		if a.NeedReactivate {
			reactivateCount++
		}
	}

	fmt.Printf("Planned actions: %d\n", len(actions))
	fmt.Printf("Projects to create: %d\n", createCount)
	fmt.Printf("Projects to reactivate: %d\n", reactivateCount)
	fmt.Println("\nDetails:")

	for _, a := range actions {
		extra := ""
		if a.NeedCreateProject {
			extra = " [create project]"
		} else if a.NeedReactivate {
			extra = " [reactivate project]"
		}
		current := a.CurrentProject
		if current == "" {
			current = "NULL"
		}
		fmt.Printf("- %s | owner=%s | \"%s\" | project: %s -> %s%s\n",
			a.ConfigUUID, a.OwnerUsername, a.ConfigName, current, a.TargetProjectName, extra)
	}
}

func askForConfirmation() (bool, error) {
	fmt.Print("\nApply these changes? type 'yes' to continue: ")
	reader := bufio.NewReader(os.Stdin)
	line, err := reader.ReadString('\n')
	if err != nil {
		return false, err
	}
	return strings.EqualFold(strings.TrimSpace(line), "yes"), nil
}

func executePlan(db *gorm.DB, actions []migrationAction, existingProjects map[string]*models.Project) error {
	return db.Transaction(func(tx *gorm.DB) error {
		projectCache := make(map[string]*models.Project, len(existingProjects))
		for k, v := range existingProjects {
			cp := *v
			projectCache[k] = &cp
		}

		for _, action := range actions {
			key := projectKey(action.OwnerUsername, action.TargetProjectName)
			project := projectCache[key]
			if project == nil {
				project = &models.Project{
					UUID:          uuid.NewString(),
					OwnerUsername: action.OwnerUsername,
					Code:          action.TargetProjectName,
					Name:          ptrString(action.TargetProjectName),
					IsActive:      true,
					IsSystem:      false,
				}
				if err := tx.Create(project).Error; err != nil {
					return fmt.Errorf("create project %s for owner %s: %w", action.TargetProjectName, action.OwnerUsername, err)
				}
				projectCache[key] = project
			} else if !project.IsActive {
				if err := tx.Model(&models.Project{}).Where("uuid = ?", project.UUID).Update("is_active", true).Error; err != nil {
					return fmt.Errorf("reactivate project %s (%s): %w", derefString(project.Name), project.UUID, err)
				}
				project.IsActive = true
			}

			if err := tx.Table("qt_configurations").Where("id = ?", action.ConfigID).Update("project_uuid", project.UUID).Error; err != nil {
				return fmt.Errorf("move configuration %s to project %s: %w", action.ConfigUUID, project.UUID, err)
			}
		}

		return nil
	})
}

func setKeys(set map[string]struct{}) []string {
	keys := make([]string, 0, len(set))
	for k := range set {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func projectKey(owner, name string) string {
	return owner + "||" + name
}

func derefString(value *string) string {
	if value == nil {
		return ""
	}
	return *value
}

func ptrString(value string) *string {
	return &value
}
66  cmd/qfs/config_migration_test.go  Normal file
@@ -0,0 +1,66 @@
package main

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/config"
)

func TestMigrateConfigFileToRuntimeShapeDropsDeprecatedSections(t *testing.T) {
	t.Helper()
	dir := t.TempDir()
	path := filepath.Join(dir, "config.yaml")

	legacy := `server:
  host: "0.0.0.0"
  port: 9191
database:
  host: "legacy-db"
  port: 3306
  name: "RFQ_LOG"
  user: "old"
  password: "REDACTED_TEST_PASSWORD"
pricing:
  default_method: "median"
logging:
  level: "debug"
  format: "text"
  output: "stdout"
`
	if err := os.WriteFile(path, []byte(legacy), 0644); err != nil {
		t.Fatalf("write legacy config: %v", err)
	}

	cfg, err := config.Load(path)
	if err != nil {
		t.Fatalf("load legacy config: %v", err)
	}
	setConfigDefaults(cfg)
	if err := migrateConfigFileToRuntimeShape(path, cfg); err != nil {
		t.Fatalf("migrate config: %v", err)
	}

	got, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("read migrated config: %v", err)
	}
	text := string(got)
	if strings.Contains(text, "database:") {
		t.Fatalf("migrated config still contains deprecated database section:\n%s", text)
	}
	if strings.Contains(text, "pricing:") {
		t.Fatalf("migrated config still contains deprecated pricing section:\n%s", text)
	}
	if !strings.Contains(text, "server:") || !strings.Contains(text, "logging:") {
		t.Fatalf("migrated config missing required sections:\n%s", text)
	}
	if !strings.Contains(text, "port: 9191") {
		t.Fatalf("migrated config did not preserve server port:\n%s", text)
	}
}
1818  cmd/qfs/main.go  Normal file
File diff suppressed because it is too large. Load Diff
327  cmd/qfs/versioning_api_test.go  Normal file
@@ -0,0 +1,327 @@
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"os"
	"path/filepath"
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/config"
	"git.mchus.pro/mchus/quoteforge/internal/db"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/services"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)

func TestConfigurationVersioningAPI(t *testing.T) {
	moveToRepoRoot(t)

	local, connMgr, configService := newAPITestStack(t)
	_ = local

	created, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "api-v1",
		Items:       models.ConfigItems{{LotName: "CPU_API", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if _, err := configService.RenameNoAuth(created.UUID, "api-v2"); err != nil {
		t.Fatalf("rename config: %v", err)
	}

	cfg := &config.Config{}
	setConfigDefaults(cfg)
	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
	if err != nil {
		t.Fatalf("setup router: %v", err)
	}

	// list versions happy path
	listReq := httptest.NewRequest(http.MethodGet, "/api/configs/"+created.UUID+"/versions?limit=10&offset=0", nil)
	listRec := httptest.NewRecorder()
	router.ServeHTTP(listRec, listReq)
	if listRec.Code != http.StatusOK {
		t.Fatalf("list versions status=%d body=%s", listRec.Code, listRec.Body.String())
	}

	// get version happy path
	getReq := httptest.NewRequest(http.MethodGet, "/api/configs/"+created.UUID+"/versions/1", nil)
	getRec := httptest.NewRecorder()
	router.ServeHTTP(getRec, getReq)
	if getRec.Code != http.StatusOK {
		t.Fatalf("get version status=%d body=%s", getRec.Code, getRec.Body.String())
	}

	// rollback happy path
	body := []byte(`{"target_version":1,"note":"api rollback"}`)
	rbReq := httptest.NewRequest(http.MethodPost, "/api/configs/"+created.UUID+"/rollback", bytes.NewReader(body))
	rbReq.Header.Set("Content-Type", "application/json")
	rbRec := httptest.NewRecorder()
	router.ServeHTTP(rbRec, rbReq)
	if rbRec.Code != http.StatusOK {
		t.Fatalf("rollback status=%d body=%s", rbRec.Code, rbRec.Body.String())
	}

	var rbResp struct {
		Message        string `json:"message"`
		CurrentVersion struct {
			VersionNo int `json:"version_no"`
		} `json:"current_version"`
	}
	if err := json.Unmarshal(rbRec.Body.Bytes(), &rbResp); err != nil {
		t.Fatalf("unmarshal rollback response: %v", err)
	}
	if rbResp.Message == "" || rbResp.CurrentVersion.VersionNo != 3 {
		t.Fatalf("unexpected rollback response: %+v", rbResp)
	}

	// 404: version missing
	notFoundReq := httptest.NewRequest(http.MethodGet, "/api/configs/"+created.UUID+"/versions/999", nil)
	notFoundRec := httptest.NewRecorder()
	router.ServeHTTP(notFoundRec, notFoundReq)
	if notFoundRec.Code != http.StatusNotFound {
		t.Fatalf("expected 404 for missing version, got %d", notFoundRec.Code)
	}

	// 400: invalid version number
	invalidReq := httptest.NewRequest(http.MethodGet, "/api/configs/"+created.UUID+"/versions/abc", nil)
	invalidRec := httptest.NewRecorder()
	router.ServeHTTP(invalidRec, invalidReq)
	if invalidRec.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for invalid version, got %d", invalidRec.Code)
	}

	// 400: rollback invalid target_version
	badRollbackReq := httptest.NewRequest(http.MethodPost, "/api/configs/"+created.UUID+"/rollback", bytes.NewReader([]byte(`{"target_version":0}`)))
	badRollbackReq.Header.Set("Content-Type", "application/json")
	badRollbackRec := httptest.NewRecorder()
	router.ServeHTTP(badRollbackRec, badRollbackReq)
	if badRollbackRec.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for invalid rollback target, got %d", badRollbackRec.Code)
	}

	// archive + reactivate flow
	delReq := httptest.NewRequest(http.MethodDelete, "/api/configs/"+created.UUID, nil)
	delRec := httptest.NewRecorder()
	router.ServeHTTP(delRec, delReq)
	if delRec.Code != http.StatusOK {
		t.Fatalf("archive status=%d body=%s", delRec.Code, delRec.Body.String())
	}

	archivedListReq := httptest.NewRequest(http.MethodGet, "/api/configs?status=archived&page=1&per_page=20", nil)
	archivedListRec := httptest.NewRecorder()
	router.ServeHTTP(archivedListRec, archivedListReq)
	if archivedListRec.Code != http.StatusOK {
		t.Fatalf("archived list status=%d body=%s", archivedListRec.Code, archivedListRec.Body.String())
	}

	reactivateReq := httptest.NewRequest(http.MethodPost, "/api/configs/"+created.UUID+"/reactivate", nil)
	reactivateRec := httptest.NewRecorder()
	router.ServeHTTP(reactivateRec, reactivateReq)
	if reactivateRec.Code != http.StatusOK {
		t.Fatalf("reactivate status=%d body=%s", reactivateRec.Code, reactivateRec.Body.String())
	}

	activeListReq := httptest.NewRequest(http.MethodGet, "/api/configs?status=active&page=1&per_page=20", nil)
	activeListRec := httptest.NewRecorder()
	router.ServeHTTP(activeListRec, activeListReq)
	if activeListRec.Code != http.StatusOK {
		t.Fatalf("active list status=%d body=%s", activeListRec.Code, activeListRec.Body.String())
	}
}

func TestProjectArchiveHidesConfigsAndCloneIntoProject(t *testing.T) {
	moveToRepoRoot(t)

	local, connMgr, configService := newAPITestStack(t)
	_ = configService

	cfg := &config.Config{}
	setConfigDefaults(cfg)
	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
	if err != nil {
		t.Fatalf("setup router: %v", err)
	}

	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"P1","code":"P1"}`)))
	createProjectReq.Header.Set("Content-Type", "application/json")
	createProjectRec := httptest.NewRecorder()
	router.ServeHTTP(createProjectRec, createProjectReq)
	if createProjectRec.Code != http.StatusCreated {
		t.Fatalf("create project status=%d body=%s", createProjectRec.Code, createProjectRec.Body.String())
	}
	var project models.Project
	if err := json.Unmarshal(createProjectRec.Body.Bytes(), &project); err != nil {
		t.Fatalf("unmarshal project: %v", err)
	}

	createCfgBody := []byte(`{"name":"Cfg A","items":[{"lot_name":"CPU","quantity":1,"unit_price":100}],"server_count":1}`)
	createCfgReq := httptest.NewRequest(http.MethodPost, "/api/projects/"+project.UUID+"/configs", bytes.NewReader(createCfgBody))
	createCfgReq.Header.Set("Content-Type", "application/json")
	createCfgRec := httptest.NewRecorder()
	router.ServeHTTP(createCfgRec, createCfgReq)
	if createCfgRec.Code != http.StatusCreated {
		t.Fatalf("create project config status=%d body=%s", createCfgRec.Code, createCfgRec.Body.String())
	}
	var createdCfg models.Configuration
	if err := json.Unmarshal(createCfgRec.Body.Bytes(), &createdCfg); err != nil {
		t.Fatalf("unmarshal project config: %v", err)
	}
	if createdCfg.ProjectUUID == nil || *createdCfg.ProjectUUID != project.UUID {
		t.Fatalf("expected config project_uuid=%s got=%v", project.UUID, createdCfg.ProjectUUID)
	}

	cloneReq := httptest.NewRequest(http.MethodPost, "/api/projects/"+project.UUID+"/configs/"+createdCfg.UUID+"/clone", bytes.NewReader([]byte(`{"name":"Cfg A Clone"}`)))
	cloneReq.Header.Set("Content-Type", "application/json")
	cloneRec := httptest.NewRecorder()
	router.ServeHTTP(cloneRec, cloneReq)
	if cloneRec.Code != http.StatusCreated {
		t.Fatalf("clone in project status=%d body=%s", cloneRec.Code, cloneRec.Body.String())
	}
	var cloneCfg models.Configuration
	if err := json.Unmarshal(cloneRec.Body.Bytes(), &cloneCfg); err != nil {
		t.Fatalf("unmarshal clone config: %v", err)
	}
	if cloneCfg.ProjectUUID == nil || *cloneCfg.ProjectUUID != project.UUID {
		t.Fatalf("expected clone project_uuid=%s got=%v", project.UUID, cloneCfg.ProjectUUID)
	}

	projectConfigsReq := httptest.NewRequest(http.MethodGet, "/api/projects/"+project.UUID+"/configs", nil)
	projectConfigsRec := httptest.NewRecorder()
	router.ServeHTTP(projectConfigsRec, projectConfigsReq)
	if projectConfigsRec.Code != http.StatusOK {
		t.Fatalf("project configs status=%d body=%s", projectConfigsRec.Code, projectConfigsRec.Body.String())
	}
	var projectConfigsResp struct {
		Configurations []models.Configuration `json:"configurations"`
	}
	if err := json.Unmarshal(projectConfigsRec.Body.Bytes(), &projectConfigsResp); err != nil {
		t.Fatalf("unmarshal project configs response: %v", err)
	}
	if len(projectConfigsResp.Configurations) != 2 {
		t.Fatalf("expected 2 project configs after clone, got %d", len(projectConfigsResp.Configurations))
	}

	archiveReq := httptest.NewRequest(http.MethodPost, "/api/projects/"+project.UUID+"/archive", nil)
	archiveRec := httptest.NewRecorder()
	router.ServeHTTP(archiveRec, archiveReq)
	if archiveRec.Code != http.StatusOK {
		t.Fatalf("archive project status=%d body=%s", archiveRec.Code, archiveRec.Body.String())
	}

	activeReq := httptest.NewRequest(http.MethodGet, "/api/configs?status=active&page=1&per_page=20", nil)
	activeRec := httptest.NewRecorder()
	router.ServeHTTP(activeRec, activeReq)
	if activeRec.Code != http.StatusOK {
		t.Fatalf("active configs status=%d body=%s", activeRec.Code, activeRec.Body.String())
	}
	var activeResp struct {
		Configurations []models.Configuration `json:"configurations"`
	}
	if err := json.Unmarshal(activeRec.Body.Bytes(), &activeResp); err != nil {
		t.Fatalf("unmarshal active configs response: %v", err)
	}
	if len(activeResp.Configurations) != 0 {
		t.Fatalf("expected no active configs after project archive, got %d", len(activeResp.Configurations))
	}
}

func TestConfigMoveToProjectEndpoint(t *testing.T) {
	moveToRepoRoot(t)

	local, connMgr, _ := newAPITestStack(t)
	cfg := &config.Config{}
	setConfigDefaults(cfg)
	router, _, err := setupRouter(cfg, local, connMgr, nil, "tester", nil)
	if err != nil {
		t.Fatalf("setup router: %v", err)
	}

	createProjectReq := httptest.NewRequest(http.MethodPost, "/api/projects", bytes.NewReader([]byte(`{"name":"Move Project","code":"MOVE"}`)))
	createProjectReq.Header.Set("Content-Type", "application/json")
	createProjectRec := httptest.NewRecorder()
	router.ServeHTTP(createProjectRec, createProjectReq)
	if createProjectRec.Code != http.StatusCreated {
		t.Fatalf("create project status=%d body=%s", createProjectRec.Code, createProjectRec.Body.String())
	}
	var project models.Project
	if err := json.Unmarshal(createProjectRec.Body.Bytes(), &project); err != nil {
		t.Fatalf("unmarshal project: %v", err)
	}

	createConfigReq := httptest.NewRequest(http.MethodPost, "/api/configs", bytes.NewReader([]byte(`{"name":"Move Me","items":[],"notes":"","server_count":1}`)))
	createConfigReq.Header.Set("Content-Type", "application/json")
	createConfigRec := httptest.NewRecorder()
	router.ServeHTTP(createConfigRec, createConfigReq)
	if createConfigRec.Code != http.StatusCreated {
		t.Fatalf("create config status=%d body=%s", createConfigRec.Code, createConfigRec.Body.String())
	}
	var created models.Configuration
	if err := json.Unmarshal(createConfigRec.Body.Bytes(), &created); err != nil {
		t.Fatalf("unmarshal config: %v", err)
	}

	moveReq := httptest.NewRequest(http.MethodPatch, "/api/configs/"+created.UUID+"/project", bytes.NewReader([]byte(`{"project_uuid":"`+project.UUID+`"}`)))
	moveReq.Header.Set("Content-Type", "application/json")
	moveRec := httptest.NewRecorder()
	router.ServeHTTP(moveRec, moveReq)
	if moveRec.Code != http.StatusOK {
		t.Fatalf("move config status=%d body=%s", moveRec.Code, moveRec.Body.String())
	}

	getReq := httptest.NewRequest(http.MethodGet, "/api/configs/"+created.UUID, nil)
	getRec := httptest.NewRecorder()
	router.ServeHTTP(getRec, getReq)
	if getRec.Code != http.StatusOK {
		t.Fatalf("get config status=%d body=%s", getRec.Code, getRec.Body.String())
	}
	var updated models.Configuration
	if err := json.Unmarshal(getRec.Body.Bytes(), &updated); err != nil {
		t.Fatalf("unmarshal updated config: %v", err)
	}
	if updated.ProjectUUID == nil || *updated.ProjectUUID != project.UUID {
		t.Fatalf("expected moved project_uuid=%s, got %v", project.UUID, updated.ProjectUUID)
	}
}

func newAPITestStack(t *testing.T) (*localdb.LocalDB, *db.ConnectionManager, *services.LocalConfigurationService) {
	t.Helper()

	localPath := filepath.Join(t.TempDir(), "api.db")
	local, err := localdb.New(localPath)
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	connMgr := db.NewConnectionManager(local)
	syncService := syncsvc.NewService(connMgr, local)
	configService := services.NewLocalConfigurationService(
		local,
		syncService,
		&services.QuoteService{},
		func() bool { return false },
	)
	return local, connMgr, configService
}

func moveToRepoRoot(t *testing.T) {
	t.Helper()
	wd, err := os.Getwd()
	if err != nil {
		t.Fatalf("getwd: %v", err)
	}
	root := filepath.Clean(filepath.Join(wd, "..", ".."))
	if err := os.Chdir(root); err != nil {
		t.Fatalf("chdir repo root: %v", err)
	}
	t.Cleanup(func() {
		_ = os.Chdir(wd)
	})
}
@@ -1,289 +0,0 @@
package main

import (
	"context"
	"flag"
	"log/slog"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/mchus/quoteforge/internal/config"
	"github.com/mchus/quoteforge/internal/handlers"
	"github.com/mchus/quoteforge/internal/middleware"
	"github.com/mchus/quoteforge/internal/models"
	"github.com/mchus/quoteforge/internal/repository"
	"github.com/mchus/quoteforge/internal/services"
	"github.com/mchus/quoteforge/internal/services/alerts"
	"github.com/mchus/quoteforge/internal/services/pricing"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

func main() {
	configPath := flag.String("config", "config.yaml", "path to config file")
	migrate := flag.Bool("migrate", false, "run database migrations")
	flag.Parse()

	cfg, err := config.Load(*configPath)
	if err != nil {
		slog.Error("failed to load config", "error", err)
		os.Exit(1)
	}

	setupLogger(cfg.Logging)

	slog.Info("starting QuoteForge server",
		"host", cfg.Server.Host,
		"port", cfg.Server.Port,
		"mode", cfg.Server.Mode,
	)

	db, err := setupDatabase(cfg.Database)
	if err != nil {
		slog.Error("failed to connect to database", "error", err)
		os.Exit(1)
	}

	if *migrate {
		slog.Info("running database migrations...")
		if err := models.Migrate(db); err != nil {
			slog.Error("migration failed", "error", err)
			os.Exit(1)
		}
		if err := models.SeedCategories(db); err != nil {
			slog.Error("seeding categories failed", "error", err)
			os.Exit(1)
		}
		slog.Info("migrations completed")
	}

	gin.SetMode(cfg.Server.Mode)
	router := setupRouter(db, cfg)

	srv := &http.Server{
		Addr:         cfg.Address(),
		Handler:      router,
		ReadTimeout:  cfg.Server.ReadTimeout,
		WriteTimeout: cfg.Server.WriteTimeout,
	}

	go func() {
		slog.Info("server listening", "address", cfg.Address())
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			slog.Error("server error", "error", err)
			os.Exit(1)
		}
	}()

	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	slog.Info("shutting down server...")

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	if err := srv.Shutdown(ctx); err != nil {
		slog.Error("server forced to shutdown", "error", err)
	}

	slog.Info("server stopped")
}

func setupLogger(cfg config.LoggingConfig) {
	var level slog.Level
	switch cfg.Level {
	case "debug":
		level = slog.LevelDebug
	case "warn":
		level = slog.LevelWarn
	case "error":
		level = slog.LevelError
	default:
		level = slog.LevelInfo
	}

	opts := &slog.HandlerOptions{Level: level}

	var handler slog.Handler
	if cfg.Format == "json" {
		handler = slog.NewJSONHandler(os.Stdout, opts)
	} else {
		handler = slog.NewTextHandler(os.Stdout, opts)
	}

	slog.SetDefault(slog.New(handler))
}

func setupDatabase(cfg config.DatabaseConfig) (*gorm.DB, error) {
	gormLogger := logger.Default.LogMode(logger.Silent)

	db, err := gorm.Open(mysql.Open(cfg.DSN()), &gorm.Config{
		Logger: gormLogger,
	})
	if err != nil {
		return nil, err
	}

	sqlDB, err := db.DB()
	if err != nil {
		return nil, err
	}

	sqlDB.SetMaxOpenConns(cfg.MaxOpenConns)
	sqlDB.SetMaxIdleConns(cfg.MaxIdleConns)
	sqlDB.SetConnMaxLifetime(cfg.ConnMaxLifetime)

	return db, nil
}

func setupRouter(db *gorm.DB, cfg *config.Config) *gin.Engine {
	// Repositories
	userRepo := repository.NewUserRepository(db)
	componentRepo := repository.NewComponentRepository(db)
	categoryRepo := repository.NewCategoryRepository(db)
	priceRepo := repository.NewPriceRepository(db)
	configRepo := repository.NewConfigurationRepository(db)
	alertRepo := repository.NewAlertRepository(db)
	statsRepo := repository.NewStatsRepository(db)

	// Services
	authService := services.NewAuthService(userRepo, cfg.Auth)
	pricingService := pricing.NewService(componentRepo, priceRepo, cfg.Pricing)
	componentService := services.NewComponentService(componentRepo, categoryRepo, statsRepo)
	quoteService := services.NewQuoteService(componentRepo, statsRepo, pricingService)
	configService := services.NewConfigurationService(configRepo, componentRepo, quoteService)
	exportService := services.NewExportService(cfg.Export)
	alertService := alerts.NewService(alertRepo, componentRepo, priceRepo, statsRepo, cfg.Alerts, cfg.Pricing)

	// Handlers
	authHandler := handlers.NewAuthHandler(authService, userRepo)
	componentHandler := handlers.NewComponentHandler(componentService)
	quoteHandler := handlers.NewQuoteHandler(quoteService)
	configHandler := handlers.NewConfigurationHandler(configService, exportService)
	exportHandler := handlers.NewExportHandler(exportService, configService, componentService)
	pricingHandler := handlers.NewPricingHandler(pricingService, alertService, componentRepo, statsRepo)

	// Router
	router := gin.New()
	router.Use(gin.Recovery())
	router.Use(requestLogger())
	router.Use(middleware.CORS())

	// Health check
	router.GET("/health", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"status": "ok",
			"time":   time.Now().UTC().Format(time.RFC3339),
		})
	})

	// API routes
	api := router.Group("/api")
	{
		api.GET("/ping", func(c *gin.Context) {
			c.JSON(http.StatusOK, gin.H{"message": "pong"})
		})

		// Auth (public)
		auth := api.Group("/auth")
		{
			auth.POST("/login", authHandler.Login)
			auth.POST("/refresh", authHandler.Refresh)
			auth.POST("/logout", authHandler.Logout)
			auth.GET("/me", middleware.Auth(authService), authHandler.Me)
		}

		// Components (public read, for quote builder)
		components := api.Group("/components")
		{
			components.GET("", componentHandler.List)
			components.GET("/:lot_name", componentHandler.Get)
		}

		// Categories (public)
		api.GET("/categories", componentHandler.GetCategories)
		api.GET("/vendors", componentHandler.GetVendors)

		// Quote (public, for anonymous quote building)
		quote := api.Group("/quote")
		{
			quote.POST("/validate", quoteHandler.Validate)
			quote.POST("/calculate", quoteHandler.Calculate)
		}

		// Export (public, for anonymous exports)
		export := api.Group("/export")
		{
			export.POST("/csv", exportHandler.ExportCSV)
			export.POST("/xlsx", exportHandler.ExportXLSX)
		}

		// Configurations (requires auth)
		configs := api.Group("/configs")
		configs.Use(middleware.Auth(authService))
		configs.Use(middleware.RequireEditor())
		{
			configs.GET("", configHandler.List)
			configs.POST("", configHandler.Create)
			configs.GET("/:uuid", configHandler.Get)
			configs.PUT("/:uuid", configHandler.Update)
			configs.DELETE("/:uuid", configHandler.Delete)
			configs.GET("/:uuid/export", configHandler.ExportJSON)
			configs.GET("/:uuid/csv", exportHandler.ExportConfigCSV)
			configs.GET("/:uuid/xlsx", exportHandler.ExportConfigXLSX)
			configs.POST("/import", configHandler.ImportJSON)
		}
	}

	// Admin routes
	admin := router.Group("/admin")
	admin.Use(middleware.Auth(authService))
	{
		// Pricing admin
		pricingAdmin := admin.Group("/pricing")
		pricingAdmin.Use(middleware.RequirePricingAdmin())
		{
			pricingAdmin.GET("/stats", pricingHandler.GetStats)
			pricingAdmin.GET("/components", pricingHandler.ListComponents)
			pricingAdmin.GET("/components/:lot_name", pricingHandler.GetComponentPricing)
			pricingAdmin.POST("/update", pricingHandler.UpdatePrice)
			pricingAdmin.POST("/recalculate-all", pricingHandler.RecalculateAll)

			pricingAdmin.GET("/alerts", pricingHandler.ListAlerts)
			pricingAdmin.POST("/alerts/:id/acknowledge", pricingHandler.AcknowledgeAlert)
			pricingAdmin.POST("/alerts/:id/resolve", pricingHandler.ResolveAlert)
			pricingAdmin.POST("/alerts/:id/ignore", pricingHandler.IgnoreAlert)
		}
	}

	return router
}

func requestLogger() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		path := c.Request.URL.Path
		query := c.Request.URL.RawQuery

		c.Next()

		latency := time.Since(start)
		status := c.Writer.Status()

		slog.Info("request",
			"method", c.Request.Method,
			"path", path,
			"query", query,
			"status", status,
			"latency", latency,
			"ip", c.ClientIP(),
		)
	}
}
@@ -2,7 +2,7 @@
 # Copy this file to config.yaml and update values

 server:
-  host: "0.0.0.0"
+  host: "127.0.0.1" # Use 0.0.0.0 to listen on all interfaces
   port: 8080
   mode: "release" # debug | release
   read_timeout: "30s"
@@ -37,6 +37,9 @@ export:
   max_file_age: "1h"
   company_name: "Your Company Name"

+backup:
+  time: "00:00"
+
 alerts:
   enabled: true
   check_interval: "1h"
BIN dist/qfs-darwin-amd64 (vendored executable file; binary not shown)
BIN dist/qfs-darwin-arm64 (vendored executable file; binary not shown)
BIN dist/qfs-linux-amd64 (vendored executable file; binary not shown)
BIN dist/qfs-windows-amd64.exe (vendored executable file; binary not shown)

19 go.mod
@@ -1,23 +1,25 @@
-module github.com/mchus/quoteforge
+module git.mchus.pro/mchus/quoteforge

 go 1.24.0

 require (
 	github.com/gin-gonic/gin v1.9.1
+	github.com/glebarez/sqlite v1.11.0
 	github.com/golang-jwt/jwt/v5 v5.3.0
 	github.com/google/uuid v1.6.0
-	github.com/xuri/excelize/v2 v2.10.0
 	golang.org/x/crypto v0.43.0
 	gopkg.in/yaml.v3 v3.0.1
 	gorm.io/driver/mysql v1.5.2
-	gorm.io/gorm v1.25.5
+	gorm.io/gorm v1.25.7
 )

 require (
 	github.com/bytedance/sonic v1.9.1 // indirect
 	github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
+	github.com/dustin/go-humanize v1.0.1 // indirect
 	github.com/gabriel-vasile/mimetype v1.4.2 // indirect
 	github.com/gin-contrib/sse v0.1.0 // indirect
+	github.com/glebarez/go-sqlite v1.21.2 // indirect
 	github.com/go-playground/locales v0.14.1 // indirect
 	github.com/go-playground/universal-translator v0.18.1 // indirect
 	github.com/go-playground/validator/v10 v10.14.0 // indirect
@@ -32,16 +34,17 @@ require (
 	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
 	github.com/modern-go/reflect2 v1.0.2 // indirect
 	github.com/pelletier/go-toml/v2 v2.0.8 // indirect
-	github.com/richardlehane/mscfb v1.0.4 // indirect
-	github.com/richardlehane/msoleps v1.0.4 // indirect
-	github.com/tiendc/go-deepcopy v1.7.1 // indirect
+	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
+	github.com/stretchr/testify v1.11.1 // indirect
 	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
 	github.com/ugorji/go/codec v1.2.11 // indirect
-	github.com/xuri/efp v0.0.1 // indirect
-	github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9 // indirect
 	golang.org/x/arch v0.3.0 // indirect
 	golang.org/x/net v0.46.0 // indirect
 	golang.org/x/sys v0.37.0 // indirect
 	golang.org/x/text v0.30.0 // indirect
 	google.golang.org/protobuf v1.30.0 // indirect
+	modernc.org/libc v1.22.5 // indirect
+	modernc.org/mathutil v1.5.0 // indirect
+	modernc.org/memory v1.5.0 // indirect
+	modernc.org/sqlite v1.23.1 // indirect
 )
41 go.sum
@@ -7,12 +7,18 @@ github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583j
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
+github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
 github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
 github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
 github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
 github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
 github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
 github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
+github.com/glebarez/go-sqlite v1.21.2 h1:3a6LFC4sKahUunAmynQKLZceZCOzUthkRkEAl9gAXWo=
+github.com/glebarez/go-sqlite v1.21.2/go.mod h1:sfxdZyhQjTM2Wry3gVYWaW072Ri1WMdWJi0k6+3382k=
+github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
+github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
 github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
 github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
 github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -32,6 +38,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
 github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
 github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26 h1:Xim43kblpZXfIBQsbuBVKCudVG457BR2GZFIz3uw3hQ=
+github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
 github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
@@ -56,11 +64,9 @@ github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZ
 github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/richardlehane/mscfb v1.0.4 h1:WULscsljNPConisD5hR0+OyZjwK46Pfyr6mPu5ZawpM=
-github.com/richardlehane/mscfb v1.0.4/go.mod h1:YzVpcZg9czvAuhk9T+a3avCpcFPMUWm7gK3DypaEsUk=
-github.com/richardlehane/msoleps v1.0.1/go.mod h1:BWev5JBpU9Ko2WAgmZEuiz4/u3ZYTKbjLycmwiWUfWg=
-github.com/richardlehane/msoleps v1.0.4 h1:WuESlvhX3gH2IHcd8UqyCuFY5yiq/GR/yqaSM/9/g00=
-github.com/richardlehane/msoleps v1.0.4/go.mod h1:BWev5JBpU9Ko2WAgmZEuiz4/u3ZYTKbjLycmwiWUfWg=
+github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
 github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@@ -73,25 +79,15 @@ github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
 github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
 github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
-github.com/tiendc/go-deepcopy v1.7.1 h1:LnubftI6nYaaMOcaz0LphzwraqN8jiWTwm416sitff4=
-github.com/tiendc/go-deepcopy v1.7.1/go.mod h1:4bKjNC2r7boYOkD2IOuZpYjmlDdzjbpTRyCx+goBCJQ=
 github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
 github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
 github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
 github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
-github.com/xuri/efp v0.0.1 h1:fws5Rv3myXyYni8uwj2qKjVaRP30PdjeYe2Y6FDsCL8=
-github.com/xuri/efp v0.0.1/go.mod h1:ybY/Jr0T0GTCnYjKqmdwxyxn2BQf2RcQIIvex5QldPI=
-github.com/xuri/excelize/v2 v2.10.0 h1:8aKsP7JD39iKLc6dH5Tw3dgV3sPRh8uRVXu/fMstfW4=
-github.com/xuri/excelize/v2 v2.10.0/go.mod h1:SC5TzhQkaOsTWpANfm+7bJCldzcnU/jrhqkTi/iBHBU=
-github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9 h1:+C0TIdyyYmzadGaL/HBLbf3WdLgC29pgyhTjAT/0nuE=
-github.com/xuri/nfp v0.0.2-0.20250530014748-2ddeb826f9a9/go.mod h1:WwHg+CVyzlv/TX9xqBFXEZAuxOPxn2k1GNHwG41IIUQ=
 golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
 golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
 golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
 golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
 golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
-golang.org/x/image v0.25.0 h1:Y6uW6rH1y5y/LK1J8BPWZtr6yZ7hrsy6hFrXjgsc2fQ=
-golang.org/x/image v0.25.0/go.mod h1:tCAmOEGthTtkalusGp1g3xa2gke8J6c2N565dTyl9Rs=
 golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
 golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
 golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -100,8 +96,9 @@ golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
 golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
 golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
-golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
 google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
 google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
@@ -113,6 +110,14 @@ gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gorm.io/driver/mysql v1.5.2 h1:QC2HRskSE75wBuOxe0+iCkyJZ+RqpudsQtqkp+IMuXs=
 gorm.io/driver/mysql v1.5.2/go.mod h1:pQLhh1Ut/WUAySdTHwBpBv6+JKcj+ua4ZFx1QQTBzb8=
 gorm.io/gorm v1.25.2-0.20230530020048-26663ab9bf55/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
-gorm.io/gorm v1.25.5 h1:zR9lOiiYf09VNh5Q1gphfyia1JpiClIWG9hQaxB/mls=
-gorm.io/gorm v1.25.5/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
+gorm.io/gorm v1.25.7 h1:VsD6acwRjz2zFxGO50gPO6AkNs7KKnvfzUjHQhZDz/A=
+gorm.io/gorm v1.25.7/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
+modernc.org/libc v1.22.5 h1:91BNch/e5B0uPbJFgqbxXuOnxBQjlS//icfQEGmvyjE=
+modernc.org/libc v1.22.5/go.mod h1:jj+Z7dTNX8fBScMVNRAYZ/jF91K8fdT2hYMThc3YjBY=
+modernc.org/mathutil v1.5.0 h1:rV0Ko/6SfM+8G+yKiyI830l3Wuz1zRutdslNoQ0kfiQ=
+modernc.org/mathutil v1.5.0/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
+modernc.org/memory v1.5.0 h1:N+/8c5rE6EqugZwHii4IFsaJ7MUhoWX07J5tC/iI5Ds=
+modernc.org/memory v1.5.0/go.mod h1:PkUhL0Mugw21sHPeskwZW4D6VscE/GQJOnIpCnW6pSU=
+modernc.org/sqlite v1.23.1 h1:nrSBg4aRQQwq59JpvGEQ15tNxoO5pX/kUjcRNwSAGQM=
+modernc.org/sqlite v1.23.1/go.mod h1:OrDj17Mggn6MhE+iPbBNf7RGKODDE9NFT0f3EwDzJqk=
 rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
26 internal/appmeta/version.go (new file)

package appmeta

import "sync/atomic"

var appVersion atomic.Value

func init() {
	appVersion.Store("dev")
}

// SetVersion configures the running application version string.
func SetVersion(v string) {
	if v == "" {
		v = "dev"
	}
	appVersion.Store(v)
}

// Version returns the running application version string.
func Version() string {
	if v, ok := appVersion.Load().(string); ok && v != "" {
		return v
	}
	return "dev"
}
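As context for the file above, here is a self-contained sketch of the same `sync/atomic` pattern it relies on. The lowercase names are illustrative only, not the `appmeta` API:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// version holds the current version string; atomic.Value makes reads
// safe even while another goroutine stores a new value.
var version atomic.Value

func setVersion(v string) {
	if v == "" {
		v = "dev" // empty input falls back to the default
	}
	version.Store(v)
}

func getVersion() string {
	if v, ok := version.Load().(string); ok && v != "" {
		return v
	}
	return "dev" // nothing stored yet
}

func main() {
	fmt.Println(getVersion()) // nothing stored: "dev"
	setVersion("v1.3.2")      // typically injected via -ldflags at build time
	fmt.Println(getVersion()) // "v1.3.2"
}
```

The indirection lets request handlers read the version lock-free while `main` sets it once at startup.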
273 internal/appstate/backup.go (new file)

package appstate

import (
	"archive/zip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

type backupPeriod struct {
	name      string
	retention int
	key       func(time.Time) string
	date      func(time.Time) string
}

var backupPeriods = []backupPeriod{
	{
		name:      "daily",
		retention: 7,
		key: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "weekly",
		retention: 4,
		key: func(t time.Time) string {
			y, w := t.ISOWeek()
			return fmt.Sprintf("%04d-W%02d", y, w)
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "monthly",
		retention: 12,
		key: func(t time.Time) string {
			return t.Format("2006-01")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
	{
		name:      "yearly",
		retention: 10,
		key: func(t time.Time) string {
			return t.Format("2006")
		},
		date: func(t time.Time) string {
			return t.Format("2006-01-02")
		},
	},
}

const (
	envBackupDisable = "QFS_BACKUP_DISABLE"
	envBackupDir     = "QFS_BACKUP_DIR"
)

var backupNow = time.Now

// EnsureRotatingLocalBackup creates or refreshes daily/weekly/monthly/yearly backups
// for the local database and config. It keeps a limited number per period.
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
	if isBackupDisabled() {
		return nil, nil
	}
	if dbPath == "" {
		return nil, nil
	}

	if _, err := os.Stat(dbPath); err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("stat db: %w", err)
	}

	root := resolveBackupRoot(dbPath)
	now := backupNow()

	created := make([]string, 0)
	for _, period := range backupPeriods {
		newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
		if err != nil {
			return created, err
		}
		if len(newFiles) > 0 {
			created = append(created, newFiles...)
		}
	}

	return created, nil
}

func resolveBackupRoot(dbPath string) string {
	if fromEnv := strings.TrimSpace(os.Getenv(envBackupDir)); fromEnv != "" {
		return filepath.Clean(fromEnv)
	}
	return filepath.Join(filepath.Dir(dbPath), "backups")
}

func isBackupDisabled() bool {
	val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
	return val == "1" || val == "true" || val == "yes"
}

func ensurePeriodBackup(root string, period backupPeriod, now time.Time, dbPath, configPath string) ([]string, error) {
	key := period.key(now)
	periodDir := filepath.Join(root, period.name)
	if err := os.MkdirAll(periodDir, 0755); err != nil {
		return nil, fmt.Errorf("create %s backup dir: %w", period.name, err)
	}

	if hasBackupForKey(periodDir, key) {
		return nil, nil
	}

	archiveName := fmt.Sprintf("qfs-backp-%s.zip", period.date(now))
	archivePath := filepath.Join(periodDir, archiveName)

	if err := createBackupArchive(archivePath, dbPath, configPath); err != nil {
		return nil, fmt.Errorf("create %s backup archive: %w", period.name, err)
	}

	if err := writePeriodMarker(periodDir, key); err != nil {
		return []string{archivePath}, err
	}

	if err := pruneOldBackups(periodDir, period.retention); err != nil {
		return []string{archivePath}, err
	}

	return []string{archivePath}, nil
}

func hasBackupForKey(periodDir, key string) bool {
	marker := periodMarker{Key: ""}
	data, err := os.ReadFile(periodMarkerPath(periodDir))
	if err != nil {
		return false
	}
	if err := json.Unmarshal(data, &marker); err != nil {
		return false
	}
	return marker.Key == key
}

type periodMarker struct {
	Key string `json:"key"`
}

func periodMarkerPath(periodDir string) string {
	return filepath.Join(periodDir, ".period.json")
}

func writePeriodMarker(periodDir, key string) error {
	data, err := json.MarshalIndent(periodMarker{Key: key}, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(periodMarkerPath(periodDir), data, 0644)
}

func pruneOldBackups(periodDir string, keep int) error {
	entries, err := os.ReadDir(periodDir)
	if err != nil {
		return fmt.Errorf("read backups dir: %w", err)
	}

	files := make([]os.DirEntry, 0, len(entries))
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		if strings.HasSuffix(entry.Name(), ".zip") {
			files = append(files, entry)
		}
	}

	if len(files) <= keep {
		return nil
	}

	sort.Slice(files, func(i, j int) bool {
		infoI, errI := files[i].Info()
		infoJ, errJ := files[j].Info()
		if errI != nil || errJ != nil {
			return files[i].Name() < files[j].Name()
		}
		return infoI.ModTime().Before(infoJ.ModTime())
	})

	for i := 0; i < len(files)-keep; i++ {
		path := filepath.Join(periodDir, files[i].Name())
		if err := os.Remove(path); err != nil {
			return fmt.Errorf("remove old backup %s: %w", path, err)
		}
	}

	return nil
}

func createBackupArchive(destPath, dbPath, configPath string) error {
	file, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer file.Close()

	zipWriter := zip.NewWriter(file)
	if err := addZipFile(zipWriter, dbPath); err != nil {
		_ = zipWriter.Close()
		return err
	}
	_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
	_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

	if strings.TrimSpace(configPath) != "" {
		_ = addZipOptionalFile(zipWriter, configPath)
	}

	if err := zipWriter.Close(); err != nil {
		return err
	}
	return file.Sync()
}

func addZipOptionalFile(writer *zip.Writer, path string) error {
	if _, err := os.Stat(path); err != nil {
		return nil
	}
	return addZipFile(writer, path)
}

func addZipFile(writer *zip.Writer, path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = filepath.Base(path)
	header.Method = zip.Deflate

	out, err := writer.CreateHeader(header)
	if err != nil {
		return err
	}

	_, err = io.Copy(out, in)
	return err
}
83 internal/appstate/backup_test.go (new file)

package appstate

import (
	"os"
	"path/filepath"
	"testing"
	"time"
)

func TestEnsureRotatingLocalBackupCreatesAndRotates(t *testing.T) {
	temp := t.TempDir()
	dbPath := filepath.Join(temp, "qfs.db")
	cfgPath := filepath.Join(temp, "config.yaml")

	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
		t.Fatalf("write db: %v", err)
	}
	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
		t.Fatalf("write config: %v", err)
	}

	prevNow := backupNow
	defer func() { backupNow = prevNow }()
	backupNow = func() time.Time { return time.Date(2026, 2, 11, 10, 0, 0, 0, time.UTC) }

	created, err := EnsureRotatingLocalBackup(dbPath, cfgPath)
	if err != nil {
		t.Fatalf("backup: %v", err)
	}
	if len(created) == 0 {
		t.Fatalf("expected backup to be created")
	}

	dailyArchive := filepath.Join(temp, "backups", "daily", "qfs-backp-2026-02-11.zip")
	if _, err := os.Stat(dailyArchive); err != nil {
		t.Fatalf("daily archive missing: %v", err)
	}

	backupNow = func() time.Time { return time.Date(2026, 2, 12, 10, 0, 0, 0, time.UTC) }
	created, err = EnsureRotatingLocalBackup(dbPath, cfgPath)
	if err != nil {
		t.Fatalf("backup rotate: %v", err)
	}
	if len(created) == 0 {
		t.Fatalf("expected backup to be created for new day")
	}

	dailyArchive = filepath.Join(temp, "backups", "daily", "qfs-backp-2026-02-12.zip")
	if _, err := os.Stat(dailyArchive); err != nil {
		t.Fatalf("daily archive missing after rotate: %v", err)
	}
}

func TestEnsureRotatingLocalBackupEnvControls(t *testing.T) {
	temp := t.TempDir()
	dbPath := filepath.Join(temp, "qfs.db")
	cfgPath := filepath.Join(temp, "config.yaml")

	if err := os.WriteFile(dbPath, []byte("db"), 0644); err != nil {
		t.Fatalf("write db: %v", err)
	}
	if err := os.WriteFile(cfgPath, []byte("cfg"), 0644); err != nil {
		t.Fatalf("write config: %v", err)
	}

	backupRoot := filepath.Join(temp, "custom_backups")
	t.Setenv(envBackupDir, backupRoot)

	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
		t.Fatalf("backup with env: %v", err)
	}
	// Check for the period marker the backup code actually writes (.period.json),
	// since the archive name depends on the current date.
	if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
		t.Fatalf("expected backup in custom dir: %v", err)
	}

	t.Setenv(envBackupDisable, "1")
	if _, err := EnsureRotatingLocalBackup(dbPath, cfgPath); err != nil {
		t.Fatalf("backup disabled: %v", err)
	}
	if _, err := os.Stat(filepath.Join(backupRoot, "daily", ".period.json")); err != nil {
		t.Fatalf("backup should remain from previous run: %v", err)
	}
}
internal/appstate/path.go (new file, 217 lines)
@@ -0,0 +1,217 @@
package appstate

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"runtime"
	"strings"
)

const (
	appDirName  = "QuoteForge"
	defaultDB   = "qfs.db"
	defaultCfg  = "config.yaml"
	envDBPath   = "QFS_DB_PATH"
	envStateDir = "QFS_STATE_DIR"
	envCfgPath  = "QFS_CONFIG_PATH"
)

// ResolveDBPath returns the local SQLite path using priority:
// explicit CLI path > QFS_DB_PATH > OS-specific user state directory.
func ResolveDBPath(explicitPath string) (string, error) {
	if explicitPath != "" {
		return filepath.Clean(explicitPath), nil
	}

	if fromEnv := os.Getenv(envDBPath); fromEnv != "" {
		return filepath.Clean(fromEnv), nil
	}

	dir, err := defaultStateDir()
	if err != nil {
		return "", err
	}

	return filepath.Join(dir, defaultDB), nil
}

// ResolveConfigPath returns the config path using priority:
// explicit CLI path > QFS_CONFIG_PATH > OS-specific user state directory.
func ResolveConfigPath(explicitPath string) (string, error) {
	if explicitPath != "" {
		return filepath.Clean(explicitPath), nil
	}

	if fromEnv := os.Getenv(envCfgPath); fromEnv != "" {
		return filepath.Clean(fromEnv), nil
	}

	dir, err := defaultStateDir()
	if err != nil {
		return "", err
	}

	return filepath.Join(dir, defaultCfg), nil
}

// ResolveConfigPathNearDB returns the config path using priority:
// explicit CLI path > QFS_CONFIG_PATH > directory of the resolved local DB path.
// Falls back to ResolveConfigPath when dbPath is empty.
func ResolveConfigPathNearDB(explicitPath, dbPath string) (string, error) {
	if explicitPath != "" {
		return filepath.Clean(explicitPath), nil
	}

	if fromEnv := os.Getenv(envCfgPath); fromEnv != "" {
		return filepath.Clean(fromEnv), nil
	}

	if strings.TrimSpace(dbPath) != "" {
		return filepath.Join(filepath.Dir(filepath.Clean(dbPath)), defaultCfg), nil
	}

	return ResolveConfigPath("")
}

// MigrateLegacyDB copies an existing legacy DB (and optional SQLite sidecars)
// to targetPath if targetPath does not already exist.
// Returns the source path if a migration happened.
func MigrateLegacyDB(targetPath string, legacyPaths []string) (string, error) {
	if targetPath == "" {
		return "", nil
	}

	if exists(targetPath) {
		return "", nil
	}

	if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
		return "", fmt.Errorf("creating target db directory: %w", err)
	}

	for _, src := range legacyPaths {
		if src == "" {
			continue
		}
		src = filepath.Clean(src)
		if src == targetPath || !exists(src) {
			continue
		}

		if err := copyFile(src, targetPath); err != nil {
			return "", fmt.Errorf("migrating legacy db from %s: %w", src, err)
		}

		// Optional SQLite sidecar files.
		_ = copyIfExists(src+"-wal", targetPath+"-wal")
		_ = copyIfExists(src+"-shm", targetPath+"-shm")

		return src, nil
	}

	return "", nil
}

// MigrateLegacyFile copies an existing legacy file to targetPath
// if targetPath does not already exist.
func MigrateLegacyFile(targetPath string, legacyPaths []string) (string, error) {
	if targetPath == "" {
		return "", nil
	}

	if exists(targetPath) {
		return "", nil
	}

	if err := os.MkdirAll(filepath.Dir(targetPath), 0755); err != nil {
		return "", fmt.Errorf("creating target directory: %w", err)
	}

	for _, src := range legacyPaths {
		if src == "" {
			continue
		}
		src = filepath.Clean(src)
		if src == targetPath || !exists(src) {
			continue
		}
		if err := copyFile(src, targetPath); err != nil {
			return "", fmt.Errorf("migrating legacy file from %s: %w", src, err)
		}
		return src, nil
	}

	return "", nil
}

func defaultStateDir() (string, error) {
	if override := os.Getenv(envStateDir); override != "" {
		return filepath.Clean(override), nil
	}

	switch runtime.GOOS {
	case "darwin":
		base, err := os.UserConfigDir() // ~/Library/Application Support
		if err != nil {
			return "", fmt.Errorf("resolving user config dir: %w", err)
		}
		return filepath.Join(base, appDirName), nil
	case "windows":
		if local := os.Getenv("LOCALAPPDATA"); local != "" {
			return filepath.Join(local, appDirName), nil
		}
		base, err := os.UserConfigDir()
		if err != nil {
			return "", fmt.Errorf("resolving user config dir: %w", err)
		}
		return filepath.Join(base, appDirName), nil
	default:
		if xdgState := os.Getenv("XDG_STATE_HOME"); xdgState != "" {
			return filepath.Join(xdgState, "quoteforge"), nil
		}
		home, err := os.UserHomeDir()
		if err != nil {
			return "", fmt.Errorf("resolving user home dir: %w", err)
		}
		return filepath.Join(home, ".local", "state", "quoteforge"), nil
	}
}

func exists(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func copyIfExists(src, dst string) error {
	if !exists(src) {
		return nil
	}
	return copyFile(src, dst)
}

func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, info.Mode().Perm())
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, in); err != nil {
		return err
	}

	return out.Sync()
}
internal/article/categories.go (new file, 124 lines)
@@ -0,0 +1,124 @@
package article

import (
	"errors"
	"fmt"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

// ErrMissingCategoryForLot is returned when a lot has no category in local_pricelist_items.lot_category.
var ErrMissingCategoryForLot = errors.New("missing_category_for_lot")

type MissingCategoryForLotError struct {
	LotName string
}

func (e *MissingCategoryForLotError) Error() string {
	if e == nil || strings.TrimSpace(e.LotName) == "" {
		return ErrMissingCategoryForLot.Error()
	}
	return fmt.Sprintf("%s: %s", ErrMissingCategoryForLot.Error(), e.LotName)
}

func (e *MissingCategoryForLotError) Unwrap() error {
	return ErrMissingCategoryForLot
}

type Group string

const (
	GroupCPU  Group = "CPU"
	GroupMEM  Group = "MEM"
	GroupGPU  Group = "GPU"
	GroupDISK Group = "DISK"
	GroupNET  Group = "NET"
	GroupPSU  Group = "PSU"
)

// GroupForLotCategory maps pricelist lot_category codes into article groups.
// Unknown/unrelated categories return ok=false.
func GroupForLotCategory(cat string) (group Group, ok bool) {
	c := strings.ToUpper(strings.TrimSpace(cat))
	switch c {
	case "CPU":
		return GroupCPU, true
	case "MEM":
		return GroupMEM, true
	case "GPU":
		return GroupGPU, true
	case "M2", "SSD", "HDD", "EDSFF", "HHHL":
		return GroupDISK, true
	case "NIC", "HCA", "DPU":
		return GroupNET, true
	case "HBA":
		return GroupNET, true
	case "PSU", "PS":
		return GroupPSU, true
	default:
		return "", false
	}
}

// ResolveLotCategoriesStrict resolves categories for lotNames using local_pricelist_items.lot_category
// for a given server pricelist id. If any lot is missing or has an empty category, it returns an error.
func ResolveLotCategoriesStrict(local *localdb.LocalDB, serverPricelistID uint, lotNames []string) (map[string]string, error) {
	if local == nil {
		return nil, fmt.Errorf("local db is nil")
	}
	cats, err := local.GetLocalLotCategoriesByServerPricelistID(serverPricelistID, lotNames)
	if err != nil {
		return nil, err
	}
	missing := make([]string, 0)
	for _, lot := range lotNames {
		cat := strings.TrimSpace(cats[lot])
		if cat == "" {
			missing = append(missing, lot)
			continue
		}
		cats[lot] = cat
	}
	if len(missing) > 0 {
		fallback, err := local.GetLocalComponentCategoriesByLotNames(missing)
		if err != nil {
			return nil, err
		}
		for _, lot := range missing {
			if cat := strings.TrimSpace(fallback[lot]); cat != "" {
				cats[lot] = cat
			}
		}
		for _, lot := range missing {
			if strings.TrimSpace(cats[lot]) == "" {
				return nil, &MissingCategoryForLotError{LotName: lot}
			}
		}
	}
	return cats, nil
}

// NormalizeServerModel produces a stable article segment for the server model.
func NormalizeServerModel(model string) string {
	trimmed := strings.TrimSpace(model)
	if trimmed == "" {
		return ""
	}
	upper := strings.ToUpper(trimmed)
	var b strings.Builder
	for _, r := range upper {
		if r >= 'A' && r <= 'Z' {
			b.WriteRune(r)
			continue
		}
		if r >= '0' && r <= '9' {
			b.WriteRune(r)
			continue
		}
		if r == '.' {
			b.WriteRune(r)
		}
	}
	return b.String()
}
internal/article/categories_test.go (new file, 98 lines)
@@ -0,0 +1,98 @@
package article

import (
	"errors"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

func TestResolveLotCategoriesStrict_MissingCategoryReturnsError(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  1,
		Source:    "estimate",
		Version:   "S-2026-02-11-001",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(1)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "CPU_A", LotCategory: "", Price: 10},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}

	_, err = ResolveLotCategoriesStrict(local, 1, []string{"CPU_A"})
	if err == nil {
		t.Fatalf("expected error")
	}
	if !errors.Is(err, ErrMissingCategoryForLot) {
		t.Fatalf("expected ErrMissingCategoryForLot, got %v", err)
	}
}

func TestResolveLotCategoriesStrict_FallbackToLocalComponents(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  2,
		Source:    "estimate",
		Version:   "S-2026-02-11-002",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(2)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "CPU_B", LotCategory: "", Price: 10},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}
	if err := local.DB().Create(&localdb.LocalComponent{
		LotName:        "CPU_B",
		Category:       "CPU",
		LotDescription: "cpu",
	}).Error; err != nil {
		t.Fatalf("save local components: %v", err)
	}

	cats, err := ResolveLotCategoriesStrict(local, 2, []string{"CPU_B"})
	if err != nil {
		t.Fatalf("expected fallback, got error: %v", err)
	}
	if cats["CPU_B"] != "CPU" {
		t.Fatalf("expected CPU, got %q", cats["CPU_B"])
	}
}

func TestGroupForLotCategory(t *testing.T) {
	if g, ok := GroupForLotCategory("cpu"); !ok || g != GroupCPU {
		t.Fatalf("expected cpu -> GroupCPU")
	}
	if g, ok := GroupForLotCategory("SFP"); ok || g != "" {
		t.Fatalf("expected SFP to be excluded")
	}
}
internal/article/generator.go (new file, 602 lines)
@@ -0,0 +1,602 @@
|
|||||||
|
package article
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"regexp"
|
||||||
|
"sort"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/models"
|
||||||
|
)
|
||||||
|
|
||||||
|
type BuildOptions struct {
|
||||||
|
ServerModel string
|
||||||
|
SupportCode string
|
||||||
|
ServerPricelist *uint
|
||||||
|
}
|
||||||
|
|
||||||
|
type BuildResult struct {
|
||||||
|
Article string
|
||||||
|
Warnings []string
|
||||||
|
}
|
||||||
|
|
||||||
|
var (
|
||||||
|
reMemGiB = regexp.MustCompile(`(?i)(\d+)\s*(GB|G)`)
|
||||||
|
reMemTiB = regexp.MustCompile(`(?i)(\d+)\s*(TB|T)`)
|
||||||
|
reCapacityT = regexp.MustCompile(`(?i)(\d+(?:[.,]\d+)?)T`)
|
||||||
|
reCapacityG = regexp.MustCompile(`(?i)(\d+(?:[.,]\d+)?)G`)
|
||||||
|
rePortSpeed = regexp.MustCompile(`(?i)(\d+)p(\d+)(GbE|G)`)
|
||||||
|
rePortFC = regexp.MustCompile(`(?i)(\d+)pFC(\d+)`)
|
||||||
|
reWatts = regexp.MustCompile(`(?i)(\d{3,5})\s*W`)
|
||||||
|
)
|
||||||
|
|
||||||
|
func Build(local *localdb.LocalDB, items []models.ConfigItem, opts BuildOptions) (BuildResult, error) {
|
||||||
|
segments := make([]string, 0, 8)
|
||||||
|
warnings := make([]string, 0)
|
||||||
|
|
||||||
|
model := NormalizeServerModel(opts.ServerModel)
|
||||||
|
if model == "" {
|
||||||
|
return BuildResult{}, fmt.Errorf("server_model required")
|
||||||
|
}
|
||||||
|
segments = append(segments, model)
|
||||||
|
|
||||||
|
lotNames := make([]string, 0, len(items))
|
||||||
|
for _, it := range items {
|
||||||
|
lotNames = append(lotNames, it.LotName)
|
||||||
|
}
|
||||||
|
|
||||||
|
if opts.ServerPricelist == nil || *opts.ServerPricelist == 0 {
|
||||||
|
return BuildResult{}, fmt.Errorf("pricelist_id required for article")
|
||||||
|
}
|
||||||
|
|
||||||
|
cats, err := ResolveLotCategoriesStrict(local, *opts.ServerPricelist, lotNames)
|
||||||
|
if err != nil {
|
||||||
|
return BuildResult{}, err
|
||||||
|
}
|
||||||
|
|
||||||
|
cpuSeg := buildCPUSegment(items, cats)
|
||||||
|
if cpuSeg != "" {
|
||||||
|
segments = append(segments, cpuSeg)
|
||||||
|
}
|
||||||
|
memSeg, memWarn := buildMemSegment(items, cats)
|
||||||
|
if memWarn != "" {
|
||||||
|
warnings = append(warnings, memWarn)
|
||||||
|
}
|
||||||
|
if memSeg != "" {
|
||||||
|
segments = append(segments, memSeg)
|
||||||
|
}
|
||||||
|
gpuSeg := buildGPUSegment(items, cats)
|
||||||
|
if gpuSeg != "" {
|
||||||
|
segments = append(segments, gpuSeg)
|
||||||
|
}
|
||||||
|
diskSeg, diskWarn := buildDiskSegment(items, cats)
|
||||||
|
if diskWarn != "" {
|
||||||
|
warnings = append(warnings, diskWarn)
|
||||||
|
}
|
||||||
|
if diskSeg != "" {
|
||||||
|
segments = append(segments, diskSeg)
|
||||||
|
}
|
||||||
|
netSeg, netWarn := buildNetSegment(items, cats)
|
||||||
|
if netWarn != "" {
|
||||||
|
warnings = append(warnings, netWarn)
|
||||||
|
}
|
||||||
|
if netSeg != "" {
|
||||||
|
segments = append(segments, netSeg)
|
||||||
|
}
|
||||||
|
psuSeg, psuWarn := buildPSUSegment(items, cats)
|
||||||
|
if psuWarn != "" {
|
||||||
|
warnings = append(warnings, psuWarn)
|
||||||
|
}
|
||||||
|
if psuSeg != "" {
|
||||||
|
segments = append(segments, psuSeg)
|
||||||
|
}
|
||||||
|
|
||||||
|
if strings.TrimSpace(opts.SupportCode) != "" {
|
||||||
|
code := strings.TrimSpace(opts.SupportCode)
|
||||||
|
if !isSupportCodeValid(code) {
|
||||||
|
return BuildResult{}, fmt.Errorf("invalid_support_code")
|
||||||
|
}
|
||||||
|
segments = append(segments, code)
|
||||||
|
}
|
||||||
|
|
||||||
|
article := strings.Join(segments, "-")
|
||||||
|
if len([]rune(article)) > 80 {
|
||||||
|
article = compressArticle(segments)
|
||||||
|
warnings = append(warnings, "compressed")
|
||||||
|
}
|
||||||
|
if len([]rune(article)) > 80 {
|
||||||
|
return BuildResult{}, fmt.Errorf("article_overflow")
|
||||||
|
}
|
||||||
|
|
||||||
|
return BuildResult{Article: article, Warnings: warnings}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func isSupportCodeValid(code string) bool {
|
||||||
|
if len(code) < 3 {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
if !strings.Contains(code, "y") {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
parts := strings.Split(code, "y")
|
||||||
|
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
for _, r := range parts[0] {
|
||||||
|
if r < '0' || r > '9' {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
switch parts[1] {
|
||||||
|
case "W", "B", "S", "P":
|
||||||
|
return true
|
||||||
|
default:
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildCPUSegment(items []models.ConfigItem, cats map[string]string) string {
|
||||||
|
type agg struct {
|
||||||
|
qty int
|
||||||
|
}
|
||||||
|
models := map[string]*agg{}
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupCPU {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
model := parseCPUModel(it.LotName)
|
||||||
|
if model == "" {
|
||||||
|
model = "UNK"
|
||||||
|
}
|
||||||
|
if _, ok := models[model]; !ok {
|
||||||
|
models[model] = &agg{}
|
||||||
|
}
|
||||||
|
models[model].qty += it.Quantity
|
||||||
|
}
|
||||||
|
if len(models) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
parts := make([]string, 0, len(models))
|
||||||
|
for model, a := range models {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s", a.qty, model))
|
||||||
|
}
|
||||||
|
sort.Strings(parts)
|
||||||
|
return strings.Join(parts, "+")
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildMemSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
|
||||||
|
totalGiB := 0
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupMEM {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
per := parseMemGiB(it.LotName)
|
||||||
|
if per <= 0 {
|
||||||
|
return "", "mem_unknown"
|
||||||
|
}
|
||||||
|
totalGiB += per * it.Quantity
|
||||||
|
}
|
||||||
|
if totalGiB == 0 {
|
||||||
|
return "", ""
|
||||||
|
}
|
||||||
|
if totalGiB%1024 == 0 {
|
||||||
|
return fmt.Sprintf("%dT", totalGiB/1024), ""
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%dG", totalGiB), ""
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildGPUSegment(items []models.ConfigItem, cats map[string]string) string {
|
||||||
|
models := map[string]int{}
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupGPU {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
model := parseGPUModel(it.LotName)
|
||||||
|
if model == "" {
|
||||||
|
model = "UNK"
|
||||||
|
}
|
||||||
|
models[model] += it.Quantity
|
||||||
|
}
|
||||||
|
if len(models) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
parts := make([]string, 0, len(models))
|
||||||
|
for model, qty := range models {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s", qty, model))
|
||||||
|
}
|
||||||
|
sort.Strings(parts)
|
||||||
|
return strings.Join(parts, "+")
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildDiskSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
|
||||||
|
type key struct {
|
||||||
|
t string
|
||||||
|
c string
|
||||||
|
}
|
||||||
|
groupQty := map[key]int{}
|
||||||
|
warn := ""
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupDISK {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
capToken := parseCapacity(it.LotName)
|
||||||
|
if capToken == "" {
|
||||||
|
warn = "disk_unknown"
|
||||||
|
}
|
||||||
|
typeCode := diskTypeCode(cats[it.LotName], it.LotName)
|
||||||
|
k := key{t: typeCode, c: capToken}
|
||||||
|
groupQty[k] += it.Quantity
|
||||||
|
}
|
||||||
|
if len(groupQty) == 0 {
|
||||||
|
return "", ""
|
||||||
|
}
|
||||||
|
parts := make([]string, 0, len(groupQty))
|
||||||
|
for k, qty := range groupQty {
|
||||||
|
if k.c == "" {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s", qty, k.t))
|
||||||
|
} else {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s%s", qty, k.c, k.t))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sort.Strings(parts)
|
||||||
|
return strings.Join(parts, "+"), warn
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildNetSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
|
||||||
|
groupQty := map[string]int{}
|
||||||
|
warn := ""
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupNET {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
profile := parsePortSpeed(it.LotName)
|
||||||
|
if profile == "" {
|
||||||
|
profile = "UNKNET"
|
||||||
|
warn = "net_unknown"
|
||||||
|
}
|
||||||
|
groupQty[profile] += it.Quantity
|
||||||
|
}
|
||||||
|
if len(groupQty) == 0 {
|
||||||
|
return "", ""
|
||||||
|
}
|
||||||
|
parts := make([]string, 0, len(groupQty))
|
||||||
|
for profile, qty := range groupQty {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s", qty, profile))
|
||||||
|
}
|
||||||
|
sort.Strings(parts)
|
||||||
|
return strings.Join(parts, "+"), warn
|
||||||
|
}
|
||||||
|
|
||||||
|
func buildPSUSegment(items []models.ConfigItem, cats map[string]string) (string, string) {
|
||||||
|
groupQty := map[string]int{}
|
||||||
|
warn := ""
|
||||||
|
for _, it := range items {
|
||||||
|
group, ok := GroupForLotCategory(cats[it.LotName])
|
||||||
|
if !ok || group != GroupPSU {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
rating := parseWatts(it.LotName)
|
||||||
|
if rating == "" {
|
||||||
|
rating = "UNKPSU"
|
||||||
|
warn = "psu_unknown"
|
||||||
|
}
|
||||||
|
groupQty[rating] += it.Quantity
|
||||||
|
}
|
||||||
|
if len(groupQty) == 0 {
|
||||||
|
return "", ""
|
||||||
|
}
|
||||||
|
parts := make([]string, 0, len(groupQty))
|
||||||
|
for rating, qty := range groupQty {
|
||||||
|
parts = append(parts, fmt.Sprintf("%dx%s", qty, rating))
|
||||||
|
}
|
||||||
|
sort.Strings(parts)
|
||||||
|
return strings.Join(parts, "+"), warn
|
||||||
|
}
|
||||||
|
|
||||||
|
func normalizeModelToken(lotName string) string {
|
||||||
|
if idx := strings.Index(lotName, "_"); idx >= 0 && idx+1 < len(lotName) {
|
||||||
|
lotName = lotName[idx+1:]
|
||||||
|
}
|
||||||
|
parts := strings.Split(lotName, "_")
|
||||||
|
token := parts[len(parts)-1]
|
||||||
|
return strings.ToUpper(strings.TrimSpace(token))
|
||||||
|
}
|
||||||
|
|
||||||
|
func parseCPUModel(lotName string) string {
|
||||||
|
parts := strings.Split(lotName, "_")
|
||||||
|
if len(parts) >= 2 {
|
||||||
|
last := strings.ToUpper(strings.TrimSpace(parts[len(parts)-1]))
|
||||||
|
if last != "" {
|
||||||
|
return last
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return normalizeModelToken(lotName)
|
||||||
|
}
|
||||||
|
|
||||||
|
func parseGPUModel(lotName string) string {
|
||||||
|
upper := strings.ToUpper(lotName)
|
||||||
|
if idx := strings.Index(upper, "GPU_"); idx >= 0 {
|
||||||
|
upper = upper[idx+4:]
|
||||||
|
}
|
||||||
|
parts := strings.Split(upper, "_")
|
||||||
|
model := ""
|
||||||
|
mem := ""
|
||||||
|
for i, p := range parts {
|
||||||
|
if p == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
switch p {
|
||||||
|
case "NV", "NVIDIA", "AMD", "RADEON", "PCIE", "PCI", "SXM", "SXMX":
|
||||||
|
continue
|
||||||
|
default:
|
||||||
|
if strings.Contains(p, "GB") {
|
||||||
|
mem = p
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if model == "" && (i > 0) {
|
||||||
|
model = p
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if model != "" && mem != "" {
|
||||||
|
return model + "_" + mem
|
||||||
|
}
|
||||||
|
if model != "" {
|
||||||
|
return model
|
||||||
|
}
|
||||||
|
return normalizeModelToken(lotName)
|
||||||
|
}
|
||||||
|
|
||||||
|
func parseMemGiB(lotName string) int {
|
||||||
|
if m := reMemTiB.FindStringSubmatch(lotName); len(m) == 3 {
|
||||||
|
return atoi(m[1]) * 1024
|
||||||
|
}
|
||||||
|
if m := reMemGiB.FindStringSubmatch(lotName); len(m) == 3 {
|
||||||
|
return atoi(m[1])
|
||||||
|
}
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
func parseCapacity(lotName string) string {
|
||||||
|
if m := reCapacityT.FindStringSubmatch(lotName); len(m) == 2 {
|
||||||
|
return normalizeTToken(strings.ReplaceAll(m[1], ",", ".")) + "T"
|
||||||
|
}
|
||||||
|
if m := reCapacityG.FindStringSubmatch(lotName); len(m) == 2 {
|
||||||
|
return normalizeNumberToken(strings.ReplaceAll(m[1], ",", ".")) + "G"
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
func diskTypeCode(cat string, lotName string) string {
|
||||||
|
c := strings.ToUpper(strings.TrimSpace(cat))
|
||||||
|
if c == "M2" {
|
||||||
|
return "M2"
|
||||||
|
}
|
||||||
|
upper := strings.ToUpper(lotName)
|
||||||
|
if strings.Contains(upper, "NVME") {
|
||||||
|
return "NV"
|
||||||
|
}
|
||||||
|
if strings.Contains(upper, "SAS") {
|
||||||
|
return "SAS"
|
||||||
|
}
|
||||||
|
if strings.Contains(upper, "SATA") {
|
||||||
|
return "SAT"
|
||||||
|
}
|
||||||
|
return c
|
||||||
|
}
|
||||||
|
|
||||||
|
func parsePortSpeed(lotName string) string {
|
||||||
|
if m := rePortSpeed.FindStringSubmatch(lotName); len(m) == 4 {
|
||||||
|
return fmt.Sprintf("%sp%sG", m[1], m[2])
|
||||||
|
}
|
||||||
|
if m := rePortFC.FindStringSubmatch(lotName); len(m) == 3 {
|
||||||
|
return fmt.Sprintf("%spFC%s", m[1], m[2])
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
func parseWatts(lotName string) string {
|
||||||
|
if m := reWatts.FindStringSubmatch(lotName); len(m) == 2 {
|
||||||
|
w := atoi(m[1])
|
||||||
|
if w >= 1000 {
|
||||||
|
kw := fmt.Sprintf("%.1f", float64(w)/1000.0)
|
||||||
|
kw = strings.TrimSuffix(kw, ".0")
|
||||||
|
return fmt.Sprintf("%skW", kw)
|
||||||
|
}
|
||||||
|
return fmt.Sprintf("%dW", w)
|
||||||
|
}
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
|
||||||
|
func normalizeNumberToken(raw string) string {
|
||||||
|
raw = strings.TrimSpace(raw)
|
||||||
|
raw = strings.TrimLeft(raw, "0")
|
||||||
|
if raw == "" || raw[0] == '.' {
|
||||||
|
raw = "0" + raw
|
||||||
|
}
|
||||||
|
return raw
|
||||||
|
}
|
||||||
|
|
||||||
|
func normalizeTToken(raw string) string {
|
||||||
|
raw = normalizeNumberToken(raw)
|
||||||
|
parts := strings.SplitN(raw, ".", 2)
|
||||||
|
intPart := parts[0]
|
||||||
|
frac := ""
|
||||||
|
if len(parts) == 2 {
|
||||||
|
frac = parts[1]
|
||||||
|
}
|
||||||
|
if frac == "" {
|
||||||
|
frac = "0"
|
||||||
|
}
|
||||||
|
if len(intPart) >= 2 {
|
||||||
|
return intPart + "." + frac
|
||||||
|
}
|
||||||
|
if len(frac) > 1 {
|
||||||
|
frac = frac[:1]
|
||||||
|
}
|
||||||
|
return intPart + "." + frac
|
||||||
|
}
|
||||||
|
|
||||||
|
func atoi(v string) int {
|
||||||
|
n := 0
|
||||||
|
for _, r := range v {
|
||||||
|
if r < '0' || r > '9' {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
n = n*10 + int(r-'0')
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
func compressArticle(segments []string) string {
	if len(segments) == 0 {
		return ""
	}
	normalized := make([]string, 0, len(segments))
	for _, s := range segments {
		normalized = append(normalized, strings.ReplaceAll(s, "GbE", "G"))
	}
	segments = normalized
	article := strings.Join(segments, "-")
	if len([]rune(article)) <= 80 {
		return article
	}

	// segment order: model, cpu, mem, gpu, disk, net, psu, support
	index := func(i int) (int, bool) {
		if i >= 0 && i < len(segments) {
			return i, true
		}
		return -1, false
	}

	// 1) remove PSU
	if i, ok := index(6); ok {
		segments = append(segments[:i], segments[i+1:]...)
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 2) compress NET/HBA/HCA
	if i, ok := index(5); ok {
		segments[i] = compressNetSegment(segments[i])
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 3) compress DISK
	if i, ok := index(4); ok {
		segments[i] = compressDiskSegment(segments[i])
		article = strings.Join(segments, "-")
		if len([]rune(article)) <= 80 {
			return article
		}
	}

	// 4) compress GPU to vendor only (GPU_NV)
	if i, ok := index(3); ok {
		segments[i] = compressGPUSegment(segments[i])
	}
	return strings.Join(segments, "-")
}

func compressNetSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		profile := p
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
			profile = x[1]
		}
		upper := strings.ToUpper(profile)
		label := "NIC"
		if strings.Contains(upper, "FC") {
			label = "HBA"
		} else if strings.Contains(upper, "HCA") || strings.Contains(upper, "IB") {
			label = "HCA"
		}
		out = append(out, fmt.Sprintf("%sx%s", qty, label))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}

func compressDiskSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		spec := p
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
			spec = x[1]
		}
		upper := strings.ToUpper(spec)
		label := "DSK"
		for _, t := range []string{"M2", "NV", "SAS", "SAT", "SSD", "HDD", "EDS", "HHH"} {
			if strings.Contains(upper, t) {
				label = t
				break
			}
		}
		out = append(out, fmt.Sprintf("%sx%s", qty, label))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}

func compressGPUSegment(seg string) string {
	if seg == "" {
		return seg
	}
	parts := strings.Split(seg, "+")
	out := make([]string, 0, len(parts))
	for _, p := range parts {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		qty := "1"
		if x := strings.SplitN(p, "x", 2); len(x) == 2 {
			qty = x[0]
		}
		out = append(out, fmt.Sprintf("%sxGPU_NV", qty))
	}
	if len(out) == 0 {
		return seg
	}
	sort.Strings(out)
	return strings.Join(out, "+")
}
internal/article/generator_test.go (new file, 66 lines)
@@ -0,0 +1,66 @@
package article

import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
)

func TestBuild_ParsesNetAndPSU(t *testing.T) {
	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  1,
		Source:    "estimate",
		Version:   "S-2026-02-11-001",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(1)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}

	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{PricelistID: localPL.ID, LotName: "NIC_2p25G_MCX512A-AC", LotCategory: "NIC", Price: 1},
		{PricelistID: localPL.ID, LotName: "HBA_2pFC32_Gen6", LotCategory: "HBA", Price: 1},
		{PricelistID: localPL.ID, LotName: "PS_1000W_Platinum", LotCategory: "PS", Price: 1},
	}); err != nil {
		t.Fatalf("save local items: %v", err)
	}

	items := models.ConfigItems{
		{LotName: "NIC_2p25G_MCX512A-AC", Quantity: 1},
		{LotName: "HBA_2pFC32_Gen6", Quantity: 1},
		{LotName: "PS_1000W_Platinum", Quantity: 2},
	}
	result, err := Build(local, items, BuildOptions{
		ServerModel:     "DL380GEN11",
		SupportCode:     "1yW",
		ServerPricelist: &localPL.ServerID,
	})
	if err != nil {
		t.Fatalf("build article: %v", err)
	}
	if result.Article == "" {
		t.Fatalf("expected article to be non-empty")
	}
	if contains(result.Article, "UNKNET") || contains(result.Article, "UNKPSU") {
		t.Fatalf("unexpected UNK in article: %s", result.Article)
	}
}

func contains(s, sub string) bool {
	return strings.Contains(s, sub)
}
@@ -2,9 +2,12 @@ package config

 import (
 	"fmt"
+	"net"
 	"os"
+	"strconv"
 	"time"
+
+	mysqlDriver "github.com/go-sql-driver/mysql"
 	"gopkg.in/yaml.v3"
 )

@@ -17,6 +20,7 @@ type Config struct {
 	Alerts        AlertsConfig        `yaml:"alerts"`
 	Notifications NotificationsConfig `yaml:"notifications"`
 	Logging       LoggingConfig       `yaml:"logging"`
+	Backup        BackupConfig        `yaml:"backup"`
 }

 type ServerConfig struct {
@@ -39,8 +43,18 @@ type DatabaseConfig struct {
 }

 func (d *DatabaseConfig) DSN() string {
-	return fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4&parseTime=True&loc=Local",
-		d.User, d.Password, d.Host, d.Port, d.Name)
+	cfg := mysqlDriver.NewConfig()
+	cfg.User = d.User
+	cfg.Passwd = d.Password
+	cfg.Net = "tcp"
+	cfg.Addr = net.JoinHostPort(d.Host, strconv.Itoa(d.Port))
+	cfg.DBName = d.Name
+	cfg.ParseTime = true
+	cfg.Loc = time.Local
+	cfg.Params = map[string]string{
+		"charset": "utf8mb4",
+	}
+	return cfg.FormatDSN()
 }

 type AuthConfig struct {
@@ -88,6 +102,10 @@ type LoggingConfig struct {
 	FilePath string `yaml:"file_path"`
 }

+type BackupConfig struct {
+	Time string `yaml:"time"`
+}
+
 func Load(path string) (*Config, error) {
 	data, err := os.ReadFile(path)
 	if err != nil {
@@ -106,7 +124,7 @@ func Load(path string) (*Config, error) {

 func (c *Config) setDefaults() {
 	if c.Server.Host == "" {
-		c.Server.Host = "0.0.0.0"
+		c.Server.Host = "127.0.0.1"
 	}
 	if c.Server.Port == 0 {
 		c.Server.Port = 8080
@@ -169,6 +187,10 @@ func (c *Config) setDefaults() {
 	if c.Logging.Output == "" {
 		c.Logging.Output = "stdout"
 	}
+
+	if c.Backup.Time == "" {
+		c.Backup.Time = "00:00"
+	}
 }

 func (c *Config) Address() string {
internal/db/connection.go (new file, 334 lines)
@@ -0,0 +1,334 @@
package db

import (
	"context"
	"fmt"
	"sync"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

const (
	defaultConnectTimeout    = 5 * time.Second
	defaultPingInterval      = 30 * time.Second
	defaultReconnectCooldown = 10 * time.Second

	maxOpenConns    = 10
	maxIdleConns    = 2
	connMaxLifetime = 5 * time.Minute
)

// ConnectionStatus represents the current status of the database connection
type ConnectionStatus struct {
	IsConnected bool
	LastCheck   time.Time
	LastError   string // empty if no error
	DSNHost     string // host:port for display (without password!)
}

// ConnectionManager manages database connections with thread-safety and connection pooling
type ConnectionManager struct {
	localDB *localdb.LocalDB // for getting DSN from settings

	mu        sync.RWMutex // protects db and state
	db        *gorm.DB     // current connection (nil if not connected)
	lastError error        // last connection error
	lastCheck time.Time    // time of last check/attempt

	connectTimeout    time.Duration // timeout for connection (default: 5s)
	pingInterval      time.Duration // minimum interval between pings (default: 30s)
	reconnectCooldown time.Duration // pause after failed attempt (default: 10s)
}

// NewConnectionManager creates a new ConnectionManager instance
func NewConnectionManager(localDB *localdb.LocalDB) *ConnectionManager {
	return &ConnectionManager{
		localDB:           localDB,
		connectTimeout:    defaultConnectTimeout,
		pingInterval:      defaultPingInterval,
		reconnectCooldown: defaultReconnectCooldown,
		db:                nil,
		lastError:         nil,
		lastCheck:         time.Time{},
	}
}

// GetDB returns the current database connection, establishing it if needed.
// Thread-safe and respects connection cooldowns.
func (cm *ConnectionManager) GetDB() (*gorm.DB, error) {
	// Handle case where localDB is nil
	if cm.localDB == nil {
		return nil, fmt.Errorf("local database not initialized")
	}

	// First check if we already have a valid connection
	cm.mu.RLock()
	if cm.db != nil {
		// Check if connection is still valid and within ping interval
		if time.Since(cm.lastCheck) < cm.pingInterval {
			cm.mu.RUnlock()
			return cm.db, nil
		}
	}
	cm.mu.RUnlock()

	// Upgrade to write lock
	cm.mu.Lock()
	defer cm.mu.Unlock()

	// Double-check: someone else might have connected while we were waiting for the write lock
	if cm.db != nil {
		// Check if connection is still valid and within ping interval
		if time.Since(cm.lastCheck) < cm.pingInterval {
			return cm.db, nil
		}
	}

	// Check if we're in cooldown period after a failed attempt
	if cm.lastError != nil && time.Since(cm.lastCheck) < cm.reconnectCooldown {
		return nil, cm.lastError
	}

	// Attempt to connect
	err := cm.connect()
	if err != nil {
		// Drop stale handle so callers don't treat it as an active connection.
		cm.db = nil
		cm.lastError = err
		cm.lastCheck = time.Now()
		return nil, err
	}

	// Update last check time and return success
	cm.lastCheck = time.Now()
	cm.lastError = nil
	return cm.db, nil
}
// connect establishes a new database connection
|
||||||
|
func (cm *ConnectionManager) connect() error {
|
||||||
|
// Get DSN from local settings
|
||||||
|
dsn, err := cm.localDB.GetDSN()
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("getting DSN: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create context with timeout
|
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), cm.connectTimeout)
|
||||||
|
defer cancel()
|
||||||
|
|
||||||
|
// Open database connection
|
||||||
|
db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
|
||||||
|
Logger: logger.Default.LogMode(logger.Silent),
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("opening database connection: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test the connection
|
||||||
|
sqlDB, err := db.DB()
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("getting sql.DB: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Ping with timeout
|
||||||
|
if err = sqlDB.PingContext(ctx); err != nil {
|
||||||
|
return fmt.Errorf("pinging database: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Set connection pool settings
|
||||||
|
sqlDB.SetMaxOpenConns(maxOpenConns)
|
||||||
|
sqlDB.SetMaxIdleConns(maxIdleConns)
|
||||||
|
sqlDB.SetConnMaxLifetime(connMaxLifetime)
|
||||||
|
|
||||||
|
// Store the connection
|
||||||
|
cm.db = db
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// IsOnline checks if the database is currently connected and responsive.
|
||||||
|
// If disconnected, it tries to reconnect (respecting cooldowns in GetDB).
|
||||||
|
func (cm *ConnectionManager) IsOnline() bool {
|
||||||
|
cm.mu.RLock()
|
||||||
|
isDisconnected := cm.db == nil
|
||||||
|
lastErr := cm.lastError
|
||||||
|
checkedRecently := time.Since(cm.lastCheck) < cm.pingInterval
|
||||||
|
cm.mu.RUnlock()
|
||||||
|
|
||||||
|
// Try reconnect in disconnected state.
|
||||||
|
if isDisconnected {
|
||||||
|
_, err := cm.GetDB()
|
||||||
|
return err == nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// If we've checked recently, return cached result.
|
||||||
|
if checkedRecently {
|
||||||
|
return lastErr == nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Need to perform actual ping.
|
||||||
|
cm.mu.Lock()
|
||||||
|
defer cm.mu.Unlock()
|
||||||
|
|
||||||
|
// Double-check after acquiring write lock
|
||||||
|
if cm.db == nil {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Perform ping with timeout
|
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), cm.connectTimeout)
|
||||||
|
defer cancel()
|
||||||
|
|
||||||
|
sqlDB, err := cm.db.DB()
|
||||||
|
if err != nil {
|
||||||
|
cm.lastError = err
|
||||||
|
cm.lastCheck = time.Now()
|
||||||
|
cm.db = nil
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
if err = sqlDB.PingContext(ctx); err != nil {
|
||||||
|
cm.lastError = err
|
||||||
|
cm.lastCheck = time.Now()
|
||||||
|
cm.db = nil
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update last check time and return success
|
||||||
|
cm.lastCheck = time.Now()
|
||||||
|
cm.lastError = nil
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
// TryConnect forces a new connection attempt (for UI "Reconnect" button)
|
||||||
|
// Ignores cooldown period
|
||||||
|
func (cm *ConnectionManager) TryConnect() error {
|
||||||
|
cm.mu.Lock()
|
||||||
|
defer cm.mu.Unlock()
|
||||||
|
|
||||||
|
// Attempt to connect
|
||||||
|
err := cm.connect()
|
||||||
|
if err != nil {
|
||||||
|
cm.lastError = err
|
||||||
|
cm.lastCheck = time.Now()
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update last check time and clear error
|
||||||
|
cm.lastCheck = time.Now()
|
||||||
|
cm.lastError = nil
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Disconnect closes the current database connection
|
||||||
|
func (cm *ConnectionManager) Disconnect() {
|
||||||
|
cm.mu.Lock()
|
||||||
|
defer cm.mu.Unlock()
|
||||||
|
|
||||||
|
if cm.db != nil {
|
||||||
|
sqlDB, err := cm.db.DB()
|
||||||
|
if err == nil {
|
||||||
|
sqlDB.Close()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
cm.db = nil
|
||||||
|
cm.lastError = nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetLastError returns the last connection error (thread-safe)
|
||||||
|
func (cm *ConnectionManager) GetLastError() error {
|
||||||
|
cm.mu.RLock()
|
||||||
|
defer cm.mu.RUnlock()
|
||||||
|
return cm.lastError
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetStatus returns the current connection status
|
||||||
|
func (cm *ConnectionManager) GetStatus() ConnectionStatus {
|
||||||
|
cm.mu.RLock()
|
||||||
|
defer cm.mu.RUnlock()
|
||||||
|
|
||||||
|
status := ConnectionStatus{
|
||||||
|
IsConnected: cm.db != nil,
|
||||||
|
LastCheck: cm.lastCheck,
|
||||||
|
LastError: "",
|
||||||
|
DSNHost: "",
|
||||||
|
}
|
||||||
|
|
||||||
|
if cm.lastError != nil {
|
||||||
|
status.LastError = cm.lastError.Error()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract host from DSN for display
|
||||||
|
if cm.localDB != nil {
|
||||||
|
if dsn, err := cm.localDB.GetDSN(); err == nil {
|
||||||
|
// Parse DSN to extract host:port
|
||||||
|
// Format: user:password@tcp(host:port)/database?...
|
||||||
|
status.DSNHost = extractHostFromDSN(dsn)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return status
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractHostFromDSN extracts the host:port part from a DSN string
|
||||||
|
func extractHostFromDSN(dsn string) string {
|
||||||
|
// Find the tcp( part
|
||||||
|
tcpStart := 0
|
||||||
|
if tcpStart = len("tcp("); tcpStart < len(dsn) && dsn[tcpStart] == '(' {
|
||||||
|
// Look for the closing parenthesis
|
||||||
|
parenEnd := -1
|
||||||
|
for i := tcpStart + 1; i < len(dsn); i++ {
|
||||||
|
if dsn[i] == ')' {
|
||||||
|
parenEnd = i
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if parenEnd != -1 {
|
||||||
|
// Extract host:port part between tcp( and )
|
||||||
|
hostPort := dsn[tcpStart+1 : parenEnd]
|
||||||
|
return hostPort
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fallback: try to find host:port by looking for @tcp( pattern
|
||||||
|
atIndex := -1
|
||||||
|
for i := 0; i < len(dsn)-4; i++ {
|
||||||
|
if dsn[i:i+4] == "@tcp" {
|
||||||
|
atIndex = i
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if atIndex != -1 {
|
||||||
|
// Look for the opening parenthesis after @tcp
|
||||||
|
parenStart := -1
|
||||||
|
for i := atIndex + 4; i < len(dsn); i++ {
|
||||||
|
if dsn[i] == '(' {
|
||||||
|
parenStart = i
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if parenStart != -1 {
|
||||||
|
// Look for the closing parenthesis
|
||||||
|
parenEnd := -1
|
||||||
|
for i := parenStart + 1; i < len(dsn); i++ {
|
||||||
|
if dsn[i] == ')' {
|
||||||
|
parenEnd = i
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if parenEnd != -1 {
|
||||||
|
hostPort := dsn[parenStart+1 : parenEnd]
|
||||||
|
return hostPort
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// If we can't parse it, return empty string
|
||||||
|
return ""
|
||||||
|
}
|
||||||
@@ -4,9 +4,9 @@ import (
 	"net/http"

 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/middleware"
-	"github.com/mchus/quoteforge/internal/repository"
-	"github.com/mchus/quoteforge/internal/services"
+	"git.mchus.pro/mchus/quoteforge/internal/middleware"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
 )

 type AuthHandler struct {
@@ -3,70 +3,106 @@ package handlers

 import (
 	"net/http"
 	"strconv"
+	"strings"
+
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/repository"
-	"github.com/mchus/quoteforge/internal/services"
 )

 type ComponentHandler struct {
 	componentService *services.ComponentService
+	localDB          *localdb.LocalDB
 }

-func NewComponentHandler(componentService *services.ComponentService) *ComponentHandler {
-	return &ComponentHandler{componentService: componentService}
+func NewComponentHandler(componentService *services.ComponentService, localDB *localdb.LocalDB) *ComponentHandler {
+	return &ComponentHandler{
+		componentService: componentService,
+		localDB:          localDB,
+	}
 }

 func (h *ComponentHandler) List(c *gin.Context) {
 	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
 	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
-	filter := repository.ComponentFilter{
-		Category: c.Query("category"),
-		Vendor:   c.Query("vendor"),
-		Search:   c.Query("search"),
-		HasPrice: c.Query("has_price") == "true",
-	}
+	if page < 1 {
+		page = 1
+	}
+	if perPage < 1 {
+		perPage = 20
+	}

-	result, err := h.componentService.List(filter, page, perPage)
+	filter := repository.ComponentFilter{
+		Category:      c.Query("category"),
+		Search:        c.Query("search"),
+		HasPrice:      c.Query("has_price") == "true",
+		ExcludeHidden: c.Query("include_hidden") != "true", // hidden items are not shown by default
+	}
+
+	localFilter := localdb.ComponentFilter{
+		Category: filter.Category,
+		Search:   filter.Search,
+		HasPrice: filter.HasPrice,
+	}
+	offset := (page - 1) * perPage
+	localComps, total, err := h.localDB.ListComponents(localFilter, offset, perPage)
 	if err != nil {
 		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 		return
 	}

-	c.JSON(http.StatusOK, result)
+	components := make([]services.ComponentView, len(localComps))
+	for i, lc := range localComps {
+		components[i] = services.ComponentView{
+			LotName:      lc.LotName,
+			Description:  lc.LotDescription,
+			Category:     lc.Category,
+			CategoryName: lc.Category,
+			Model:        lc.Model,
+		}
+	}
+
+	c.JSON(http.StatusOK, &services.ComponentListResult{
+		Components: components,
+		Total:      total,
+		Page:       page,
+		PerPage:    perPage,
+	})
 }

 func (h *ComponentHandler) Get(c *gin.Context) {
 	lotName := c.Param("lot_name")
-	component, err := h.componentService.GetByLotName(lotName)
+	component, err := h.localDB.GetLocalComponent(lotName)
 	if err != nil {
 		c.JSON(http.StatusNotFound, gin.H{"error": "component not found"})
 		return
 	}

-	c.JSON(http.StatusOK, component)
+	c.JSON(http.StatusOK, services.ComponentView{
+		LotName:      component.LotName,
+		Description:  component.LotDescription,
+		Category:     component.Category,
+		CategoryName: component.Category,
+		Model:        component.Model,
+	})
 }

 func (h *ComponentHandler) GetCategories(c *gin.Context) {
-	categories, err := h.componentService.GetCategories()
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-		return
-	}
-
-	c.JSON(http.StatusOK, categories)
-}
-
-func (h *ComponentHandler) GetVendors(c *gin.Context) {
-	category := c.Query("category")
-
-	vendors, err := h.componentService.GetVendors(category)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-		return
-	}
-
-	c.JSON(http.StatusOK, vendors)
-}
+	codes, err := h.localDB.GetLocalComponentCategories()
+	if err == nil && len(codes) > 0 {
+		categories := make([]models.Category, 0, len(codes))
+		for _, code := range codes {
+			trimmed := strings.TrimSpace(code)
+			if trimmed == "" {
+				continue
+			}
+			categories = append(categories, models.Category{Code: trimmed, Name: trimmed})
+		}
+		c.JSON(http.StatusOK, categories)
+		return
+	}

+	c.JSON(http.StatusOK, models.DefaultCategories)
+}
@@ -1,13 +1,12 @@
 package handlers

 import (
-	"io"
 	"net/http"
 	"strconv"

+	"git.mchus.pro/mchus/quoteforge/internal/middleware"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/middleware"
-	"github.com/mchus/quoteforge/internal/services"
 )

 type ConfigurationHandler struct {
@@ -26,11 +25,11 @@ func NewConfigurationHandler(
 }

 func (h *ConfigurationHandler) List(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)
 	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
 	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))

-	configs, total, err := h.configService.ListByUser(userID, page, perPage)
+	configs, total, err := h.configService.ListByUser(username, page, perPage)
 	if err != nil {
 		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 		return
@@ -45,7 +44,7 @@ func (h *ConfigurationHandler) List(c *gin.Context) {
 }

 func (h *ConfigurationHandler) Create(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)

 	var req services.CreateConfigRequest
 	if err := c.ShouldBindJSON(&req); err != nil {
@@ -53,7 +52,7 @@ func (h *ConfigurationHandler) Create(c *gin.Context) {
 		return
 	}

-	config, err := h.configService.Create(userID, &req)
+	config, err := h.configService.Create(username, &req)
 	if err != nil {
 		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 		return
@@ -63,10 +62,10 @@ func (h *ConfigurationHandler) Create(c *gin.Context) {
 }

 func (h *ConfigurationHandler) Get(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")

-	config, err := h.configService.GetByUUID(uuid, userID)
+	config, err := h.configService.GetByUUID(uuid, username)
 	if err != nil {
 		status := http.StatusNotFound
 		if err == services.ErrConfigForbidden {
@@ -80,7 +79,7 @@ func (h *ConfigurationHandler) Get(c *gin.Context) {
 }

 func (h *ConfigurationHandler) Update(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")

 	var req services.CreateConfigRequest
@@ -89,7 +88,7 @@ func (h *ConfigurationHandler) Update(c *gin.Context) {
 		return
 	}

-	config, err := h.configService.Update(uuid, userID, &req)
+	config, err := h.configService.Update(uuid, username, &req)
 	if err != nil {
 		status := http.StatusInternalServerError
 		if err == services.ErrConfigNotFound {
@@ -105,10 +104,10 @@ func (h *ConfigurationHandler) Update(c *gin.Context) {
 }

 func (h *ConfigurationHandler) Delete(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")

-	err := h.configService.Delete(uuid, userID)
+	err := h.configService.Delete(uuid, username)
 	if err != nil {
 		status := http.StatusInternalServerError
 		if err == services.ErrConfigNotFound {
@@ -123,34 +122,118 @@ func (h *ConfigurationHandler) Delete(c *gin.Context) {
 	c.JSON(http.StatusOK, gin.H{"message": "deleted"})
 }

-func (h *ConfigurationHandler) ExportJSON(c *gin.Context) {
-	userID := middleware.GetUserID(c)
-	uuid := c.Param("uuid")
-
-	data, err := h.configService.ExportJSON(uuid, userID)
-	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
-		return
-	}
-
-	c.Header("Content-Disposition", "attachment; filename=config.json")
-	c.Data(http.StatusOK, "application/json", data)
-}
+type RenameConfigRequest struct {
+	Name string `json:"name" binding:"required"`
+}

-func (h *ConfigurationHandler) ImportJSON(c *gin.Context) {
-	userID := middleware.GetUserID(c)
-
-	data, err := io.ReadAll(c.Request.Body)
-	if err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read body"})
-		return
-	}
-
-	config, err := h.configService.ImportJSON(userID, data)
-	if err != nil {
+func (h *ConfigurationHandler) Rename(c *gin.Context) {
+	username := middleware.GetUsername(c)
+	uuid := c.Param("uuid")
+
+	var req RenameConfigRequest
+	if err := c.ShouldBindJSON(&req); err != nil {
+		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	config, err := h.configService.Rename(uuid, username, req.Name)
+	if err != nil {
+		status := http.StatusInternalServerError
+		if err == services.ErrConfigNotFound {
+			status = http.StatusNotFound
+		} else if err == services.ErrConfigForbidden {
+			status = http.StatusForbidden
+		}
+		c.JSON(status, gin.H{"error": err.Error()})
+		return
+	}
+
+	c.JSON(http.StatusOK, config)
+}
+
+type CloneConfigRequest struct {
+	Name string `json:"name" binding:"required"`
+}
+
+func (h *ConfigurationHandler) Clone(c *gin.Context) {
|
||||||
|
username := middleware.GetUsername(c)
|
||||||
|
uuid := c.Param("uuid")
|
||||||
|
|
||||||
|
var req CloneConfigRequest
|
||||||
|
if err := c.ShouldBindJSON(&req); err != nil {
|
||||||
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
|
config, err := h.configService.Clone(uuid, username, req.Name)
|
||||||
|
if err != nil {
|
||||||
|
status := http.StatusInternalServerError
|
||||||
|
if err == services.ErrConfigNotFound {
|
||||||
|
status = http.StatusNotFound
|
||||||
|
} else if err == services.ErrConfigForbidden {
|
||||||
|
status = http.StatusForbidden
|
||||||
|
}
|
||||||
|
c.JSON(status, gin.H{"error": err.Error()})
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
c.JSON(http.StatusCreated, config)
|
c.JSON(http.StatusCreated, config)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (h *ConfigurationHandler) RefreshPrices(c *gin.Context) {
|
||||||
|
username := middleware.GetUsername(c)
|
||||||
|
uuid := c.Param("uuid")
|
||||||
|
|
||||||
|
config, err := h.configService.RefreshPrices(uuid, username)
|
||||||
|
if err != nil {
|
||||||
|
status := http.StatusInternalServerError
|
||||||
|
if err == services.ErrConfigNotFound {
|
||||||
|
status = http.StatusNotFound
|
||||||
|
} else if err == services.ErrConfigForbidden {
|
||||||
|
status = http.StatusForbidden
|
||||||
|
}
|
||||||
|
c.JSON(status, gin.H{"error": err.Error()})
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
c.JSON(http.StatusOK, config)
|
||||||
|
}
|
||||||
|
|
||||||
|
// func (h *ConfigurationHandler) ExportJSON(c *gin.Context) {
|
||||||
|
// userID := middleware.GetUserID(c)
|
||||||
|
// uuid := c.Param("uuid")
|
||||||
|
//
|
||||||
|
// config, err := h.configService.GetByUUID(uuid, userID)
|
||||||
|
// if err != nil {
|
||||||
|
// c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
|
||||||
|
// return
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// data, err := h.configService.ExportJSON(uuid, userID)
|
||||||
|
// if err != nil {
|
||||||
|
// c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
|
||||||
|
// return
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// filename := fmt.Sprintf("%s %s SPEC.json", config.CreatedAt.Format("2006-01-02"), config.Name)
|
||||||
|
// c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
|
||||||
|
// c.Data(http.StatusOK, "application/json", data)
|
||||||
|
// }
|
||||||
|
|
||||||
|
// func (h *ConfigurationHandler) ImportJSON(c *gin.Context) {
|
||||||
|
// userID := middleware.GetUserID(c)
|
||||||
|
//
|
||||||
|
// data, err := io.ReadAll(c.Request.Body)
|
||||||
|
// if err != nil {
|
||||||
|
// c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read body"})
|
||||||
|
// return
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// config, err := h.configService.ImportJSON(userID, data)
|
||||||
|
// if err != nil {
|
||||||
|
// c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
|
||||||
|
// return
|
||||||
|
// }
|
||||||
|
//
|
||||||
|
// c.JSON(http.StatusCreated, config)
|
||||||
|
// }
|
||||||
|
|||||||
@@ -3,35 +3,41 @@ package handlers
 import (
 	"fmt"
 	"net/http"
+	"strings"
 	"time"
 
+	"git.mchus.pro/mchus/quoteforge/internal/middleware"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/middleware"
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/services"
 )
 
 type ExportHandler struct {
 	exportService *services.ExportService
-	configService *services.ConfigurationService
-	componentService *services.ComponentService
+	configService services.ConfigurationGetter
+	projectService *services.ProjectService
 }
 
 func NewExportHandler(
 	exportService *services.ExportService,
-	configService *services.ConfigurationService,
-	componentService *services.ComponentService,
+	configService services.ConfigurationGetter,
+	projectService *services.ProjectService,
 ) *ExportHandler {
 	return &ExportHandler{
 		exportService: exportService,
 		configService: configService,
-		componentService: componentService,
+		projectService: projectService,
 	}
 }
 
 type ExportRequest struct {
 	Name string `json:"name" binding:"required"`
+	ProjectName string `json:"project_name"`
+	ProjectUUID string `json:"project_uuid"`
+	Article string `json:"article"`
+	ServerCount int `json:"server_count"`
+	PricelistID *uint `json:"pricelist_id"`
 	Items []struct {
 		LotName string `json:"lot_name" binding:"required"`
 		Quantity int `json:"quantity" binding:"required,min=1"`
 		UnitPrice float64 `json:"unit_price"`
@@ -48,127 +54,162 @@ func (h *ExportHandler) ExportCSV(c *gin.Context) {
 
 	data := h.buildExportData(&req)
 
-	csvData, err := h.exportService.ToCSV(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+	// Validate before streaming (can return JSON error)
+	if len(data.Configs) == 0 || len(data.Configs[0].Items) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no items to export"})
 		return
 	}
 
-	filename := fmt.Sprintf("%s_%s.csv", req.Name, time.Now().Format("20060102"))
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=%s", filename))
-	c.Data(http.StatusOK, "text/csv; charset=utf-8", csvData)
-}
-
-func (h *ExportHandler) ExportXLSX(c *gin.Context) {
-	var req ExportRequest
-	if err := c.ShouldBindJSON(&req); err != nil {
-		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
-		return
-	}
-
-	data := h.buildExportData(&req)
-
-	xlsxData, err := h.exportService.ToXLSX(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-		return
-	}
-
-	filename := fmt.Sprintf("%s_%s.xlsx", req.Name, time.Now().Format("20060102"))
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=%s", filename))
-	c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", xlsxData)
-}
-
-func (h *ExportHandler) buildExportData(req *ExportRequest) *services.ExportData {
-	items := make([]services.ExportItem, len(req.Items))
-	var total float64
-
-	for i, item := range req.Items {
-		itemTotal := item.UnitPrice * float64(item.Quantity)
-		items[i] = services.ExportItem{
-			LotName: item.LotName,
-			Quantity: item.Quantity,
-			UnitPrice: item.UnitPrice,
-			TotalPrice: itemTotal,
-		}
-		total += itemTotal
-	}
-
-	return &services.ExportData{
-		Name: req.Name,
-		Items: items,
-		Total: total,
-		Notes: req.Notes,
-		CreatedAt: time.Now(),
-	}
-}
+	// Get project code for filename
+	projectCode := req.ProjectName // legacy field: may contain code from frontend
+	if projectCode == "" && req.ProjectUUID != "" {
+		username := middleware.GetUsername(c)
+		if project, err := h.projectService.GetByUUID(req.ProjectUUID, username); err == nil && project != nil {
+			projectCode = project.Code
+		}
+	}
+	if projectCode == "" {
+		projectCode = req.Name
+	}
+
+	// Set headers before streaming
+	exportDate := data.CreatedAt
+	articleSegment := sanitizeFilenameSegment(req.Article)
+	if articleSegment == "" {
+		articleSegment = "BOM"
+	}
+	filename := fmt.Sprintf("%s (%s) %s %s.csv", exportDate.Format("2006-01-02"), projectCode, req.Name, articleSegment)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
+	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+
+	// Stream CSV (cannot return JSON after this point)
+	if err := h.exportService.ToCSV(c.Writer, data); err != nil {
+		c.Error(err) // Log only
+		return
+	}
+}
+
+// buildExportData converts an ExportRequest into a ProjectExportData using a temporary Configuration model
+// so that ExportService.ConfigToExportData can resolve categories via localDB.
+func (h *ExportHandler) buildExportData(req *ExportRequest) *services.ProjectExportData {
+	configItems := make(models.ConfigItems, len(req.Items))
+	for i, item := range req.Items {
+		configItems[i] = models.ConfigItem{
+			LotName: item.LotName,
+			Quantity: item.Quantity,
+			UnitPrice: item.UnitPrice,
+		}
+	}
+
+	serverCount := req.ServerCount
+	if serverCount < 1 {
+		serverCount = 1
+	}
+
+	cfg := &models.Configuration{
+		Article: req.Article,
+		ServerCount: serverCount,
+		PricelistID: req.PricelistID,
+		Items: configItems,
+		CreatedAt: time.Now(),
+	}
+
+	return h.exportService.ConfigToExportData(cfg)
+}
+
+func sanitizeFilenameSegment(value string) string {
+	if strings.TrimSpace(value) == "" {
+		return ""
+	}
+	replacer := strings.NewReplacer(
+		"/", "_",
+		"\\", "_",
+		":", "_",
+		"*", "_",
+		"?", "_",
+		"\"", "_",
+		"<", "_",
+		">", "_",
+		"|", "_",
+	)
+	return strings.TrimSpace(replacer.Replace(value))
+}
 
 func (h *ExportHandler) ExportConfigCSV(c *gin.Context) {
-	userID := middleware.GetUserID(c)
+	username := middleware.GetUsername(c)
 	uuid := c.Param("uuid")
 
-	config, err := h.configService.GetByUUID(uuid, userID)
+	// Get config before streaming (can return JSON error)
+	config, err := h.configService.GetByUUID(uuid, username)
 	if err != nil {
 		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
 		return
 	}
 
-	data := h.configToExportData(config)
+	data := h.exportService.ConfigToExportData(config)
 
-	csvData, err := h.exportService.ToCSV(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+	// Validate before streaming (can return JSON error)
+	if len(data.Configs) == 0 || len(data.Configs[0].Items) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no items to export"})
 		return
 	}
 
-	filename := fmt.Sprintf("%s_%s.csv", config.Name, config.CreatedAt.Format("20060102"))
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=%s", filename))
-	c.Data(http.StatusOK, "text/csv; charset=utf-8", csvData)
-}
-
-func (h *ExportHandler) ExportConfigXLSX(c *gin.Context) {
-	userID := middleware.GetUserID(c)
-	uuid := c.Param("uuid")
-
-	config, err := h.configService.GetByUUID(uuid, userID)
-	if err != nil {
-		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
-		return
-	}
-
-	data := h.configToExportData(config)
-
-	xlsxData, err := h.exportService.ToXLSX(data)
-	if err != nil {
-		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
-		return
-	}
-
-	filename := fmt.Sprintf("%s_%s.xlsx", config.Name, config.CreatedAt.Format("20060102"))
-	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=%s", filename))
-	c.Data(http.StatusOK, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", xlsxData)
-}
-
-func (h *ExportHandler) configToExportData(config *models.Configuration) *services.ExportData {
-	items := make([]services.ExportItem, len(config.Items))
-	var total float64
-
-	for i, item := range config.Items {
-		itemTotal := item.UnitPrice * float64(item.Quantity)
-		items[i] = services.ExportItem{
-			LotName: item.LotName,
-			Quantity: item.Quantity,
-			UnitPrice: item.UnitPrice,
-			TotalPrice: itemTotal,
-		}
-		total += itemTotal
-	}
-
-	return &services.ExportData{
-		Name: config.Name,
-		Items: items,
-		Total: total,
-		Notes: config.Notes,
-		CreatedAt: config.CreatedAt,
-	}
-}
+	// Get project code for filename
+	projectCode := config.Name // fallback: use config name if no project
+	if config.ProjectUUID != nil && *config.ProjectUUID != "" {
+		if project, err := h.projectService.GetByUUID(*config.ProjectUUID, username); err == nil && project != nil {
+			projectCode = project.Code
+		}
+	}
+
+	// Set headers before streaming
+	// Use price update time if available, otherwise creation time
+	exportDate := config.CreatedAt
+	if config.PriceUpdatedAt != nil {
+		exportDate = *config.PriceUpdatedAt
+	}
+	filename := fmt.Sprintf("%s (%s) %s BOM.csv", exportDate.Format("2006-01-02"), projectCode, config.Name)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
+	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+
+	// Stream CSV (cannot return JSON after this point)
+	if err := h.exportService.ToCSV(c.Writer, data); err != nil {
+		c.Error(err) // Log only
+		return
+	}
+}
+
+// ExportProjectCSV exports all active configurations of a project as a single CSV file.
+func (h *ExportHandler) ExportProjectCSV(c *gin.Context) {
+	username := middleware.GetUsername(c)
+	projectUUID := c.Param("uuid")
+
+	project, err := h.projectService.GetByUUID(projectUUID, username)
+	if err != nil {
+		c.JSON(http.StatusNotFound, gin.H{"error": err.Error()})
+		return
+	}
+
+	result, err := h.projectService.ListConfigurations(projectUUID, username, "active")
+	if err != nil {
+		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
+		return
+	}
+
+	if len(result.Configs) == 0 {
+		c.JSON(http.StatusBadRequest, gin.H{"error": "no configurations to export"})
+		return
+	}
+
+	data := h.exportService.ProjectToExportData(result.Configs)
+
+	// Filename: YYYY-MM-DD (ProjectCode) BOM.csv
+	filename := fmt.Sprintf("%s (%s) BOM.csv", time.Now().Format("2006-01-02"), project.Code)
+	c.Header("Content-Type", "text/csv; charset=utf-8")
+	c.Header("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
+
+	if err := h.exportService.ToCSV(c.Writer, data); err != nil {
+		c.Error(err)
+		return
+	}
+}
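The filename sanitizer added to export.go can be exercised in isolation; this self-contained sketch reproduces the same strings.NewReplacer logic to show how characters that are reserved in filenames on common platforms collapse to underscores:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeFilenameSegment replaces filename-reserved characters with
// underscores and trims surrounding whitespace; blank input yields "".
func sanitizeFilenameSegment(value string) string {
	if strings.TrimSpace(value) == "" {
		return ""
	}
	replacer := strings.NewReplacer(
		"/", "_", "\\", "_", ":", "_", "*", "_", "?", "_",
		"\"", "_", "<", "_", ">", "_", "|", "_",
	)
	return strings.TrimSpace(replacer.Replace(value))
}

func main() {
	fmt.Println(sanitizeFilenameSegment(" a/b:c*d ")) // a_b_c_d
	fmt.Println(sanitizeFilenameSegment("   "))       // prints an empty line
}
```

This keeps the Content-Disposition filename (e.g. `2024-01-02 (PRJ) Name BOM.csv`) safe regardless of what the article field contains.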
internal/handlers/export_test.go (new file, 305 lines)
@@ -0,0 +1,305 @@
+package handlers
+
+import (
+	"bytes"
+	"encoding/csv"
+	"encoding/json"
+	"errors"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+	"time"
+
+	"git.mchus.pro/mchus/quoteforge/internal/config"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
+	"github.com/gin-gonic/gin"
+)
+
+// Mock services for testing
+type mockConfigService struct {
+	config *models.Configuration
+	err error
+}
+
+func (m *mockConfigService) GetByUUID(uuid string, ownerUsername string) (*models.Configuration, error) {
+	return m.config, m.err
+}
+
+func TestExportCSV_Success(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	// Create handler with mocks
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{},
+		nil,
+	)
+
+	// Create JSON request body
+	jsonBody := `{
+		"name": "Test Export",
+		"items": [
+			{
+				"lot_name": "LOT-001",
+				"quantity": 2,
+				"unit_price": 100.50
+			}
+		],
+		"notes": "Test notes"
+	}`
+
+	// Create HTTP request
+	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(jsonBody))
+	req.Header.Set("Content-Type", "application/json")
+
+	// Create response recorder
+	w := httptest.NewRecorder()
+
+	// Create Gin context
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+
+	// Call handler
+	handler.ExportCSV(c)
+
+	// Check status code
+	if w.Code != http.StatusOK {
+		t.Errorf("Expected status 200, got %d", w.Code)
+	}
+
+	// Check Content-Type header
+	contentType := w.Header().Get("Content-Type")
+	if contentType != "text/csv; charset=utf-8" {
+		t.Errorf("Expected Content-Type 'text/csv; charset=utf-8', got %q", contentType)
+	}
+
+	// Check for BOM
+	responseBody := w.Body.Bytes()
+	if len(responseBody) < 3 {
+		t.Fatalf("Response too short to contain BOM")
+	}
+
+	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
+	actualBOM := responseBody[:3]
+	if !bytes.Equal(actualBOM, expectedBOM) {
+		t.Errorf("UTF-8 BOM mismatch. Expected %v, got %v", expectedBOM, actualBOM)
+	}
+
+	// Check semicolon delimiter in CSV
+	reader := csv.NewReader(bytes.NewReader(responseBody[3:]))
+	reader.Comma = ';'
+
+	header, err := reader.Read()
+	if err != nil {
+		t.Errorf("Failed to parse CSV header: %v", err)
+	}
+
+	if len(header) != 8 {
+		t.Errorf("Expected 8 columns, got %d", len(header))
+	}
+}
+
+func TestExportCSV_InvalidRequest(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{},
+		nil,
+	)
+
+	// Create invalid request (missing required field)
+	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(`{"name": "Test"}`))
+	req.Header.Set("Content-Type", "application/json")
+
+	w := httptest.NewRecorder()
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+
+	handler.ExportCSV(c)
+
+	// Should return 400 Bad Request
+	if w.Code != http.StatusBadRequest {
+		t.Errorf("Expected status 400, got %d", w.Code)
+	}
+
+	// Should return JSON error
+	var errResp map[string]interface{}
+	json.Unmarshal(w.Body.Bytes(), &errResp)
+	if _, hasError := errResp["error"]; !hasError {
+		t.Errorf("Expected error in JSON response")
+	}
+}
+
+func TestExportCSV_EmptyItems(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{},
+		nil,
+	)
+
+	// Create request with empty items array - should fail binding validation
+	req, _ := http.NewRequest("POST", "/api/export/csv", bytes.NewBufferString(`{"name":"Empty Export","items":[],"notes":""}`))
+	req.Header.Set("Content-Type", "application/json")
+
+	w := httptest.NewRecorder()
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+
+	handler.ExportCSV(c)
+
+	// Should return 400 Bad Request (validation error from gin binding)
+	if w.Code != http.StatusBadRequest {
+		t.Logf("Status code: %d (expected 400 for empty items)", w.Code)
+	}
+}
+
+func TestExportConfigCSV_Success(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	// Mock configuration
+	mockConfig := &models.Configuration{
+		UUID: "test-uuid",
+		Name: "Test Config",
+		OwnerUsername: "testuser",
+		Items: models.ConfigItems{
+			{
+				LotName: "LOT-001",
+				Quantity: 1,
+				UnitPrice: 100.0,
+			},
+		},
+		CreatedAt: time.Now(),
+	}
+
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{config: mockConfig},
+		nil,
+	)
+
+	// Create HTTP request
+	req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
+	w := httptest.NewRecorder()
+
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+	c.Params = gin.Params{
+		{Key: "uuid", Value: "test-uuid"},
+	}
+
+	// Mock middleware.GetUsername
+	c.Set("username", "testuser")
+
+	handler.ExportConfigCSV(c)
+
+	// Check status code
+	if w.Code != http.StatusOK {
+		t.Errorf("Expected status 200, got %d", w.Code)
+	}
+
+	// Check Content-Type header
+	contentType := w.Header().Get("Content-Type")
+	if contentType != "text/csv; charset=utf-8" {
+		t.Errorf("Expected Content-Type 'text/csv; charset=utf-8', got %q", contentType)
+	}
+
+	// Check for BOM
+	responseBody := w.Body.Bytes()
+	if len(responseBody) < 3 {
+		t.Fatalf("Response too short to contain BOM")
+	}
+
+	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
+	actualBOM := responseBody[:3]
+	if !bytes.Equal(actualBOM, expectedBOM) {
+		t.Errorf("UTF-8 BOM mismatch")
+	}
+}
+
+func TestExportConfigCSV_NotFound(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{err: errors.New("config not found")},
+		nil,
+	)
+
+	req, _ := http.NewRequest("GET", "/api/configs/nonexistent-uuid/export", nil)
+	w := httptest.NewRecorder()
+
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+	c.Params = gin.Params{
+		{Key: "uuid", Value: "nonexistent-uuid"},
+	}
+	c.Set("username", "testuser")
+
+	handler.ExportConfigCSV(c)
+
+	// Should return 404 Not Found
+	if w.Code != http.StatusNotFound {
+		t.Errorf("Expected status 404, got %d", w.Code)
+	}
+
+	// Should return JSON error
+	var errResp map[string]interface{}
+	json.Unmarshal(w.Body.Bytes(), &errResp)
+	if _, hasError := errResp["error"]; !hasError {
+		t.Errorf("Expected error in JSON response")
+	}
+}
+
+func TestExportConfigCSV_EmptyItems(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+
+	// Mock configuration with empty items
+	mockConfig := &models.Configuration{
+		UUID: "test-uuid",
+		Name: "Empty Config",
+		OwnerUsername: "testuser",
+		Items: models.ConfigItems{},
+		CreatedAt: time.Now(),
+	}
+
+	exportSvc := services.NewExportService(config.ExportConfig{}, nil, nil)
+	handler := NewExportHandler(
+		exportSvc,
+		&mockConfigService{config: mockConfig},
+		nil,
+	)
+
+	req, _ := http.NewRequest("GET", "/api/configs/test-uuid/export", nil)
+	w := httptest.NewRecorder()
+
+	c, _ := gin.CreateTestContext(w)
+	c.Request = req
+	c.Params = gin.Params{
+		{Key: "uuid", Value: "test-uuid"},
+	}
+	c.Set("username", "testuser")
+
+	handler.ExportConfigCSV(c)
+
+	// Should return 400 Bad Request
+	if w.Code != http.StatusBadRequest {
+		t.Errorf("Expected status 400, got %d", w.Code)
+	}
+
+	// Should return JSON error
+	var errResp map[string]interface{}
+	json.Unmarshal(w.Body.Bytes(), &errResp)
+	if _, hasError := errResp["error"]; !hasError {
+		t.Errorf("Expected error in JSON response")
+	}
+}
229
internal/handlers/pricelist.go
Normal file
229
internal/handlers/pricelist.go
Normal file
@@ -0,0 +1,229 @@
|
|||||||
|
package handlers
|
||||||
|
|
||||||
|
import (
|
||||||
|
"net/http"
|
||||||
|
"sort"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/localdb"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/models"
|
||||||
|
"github.com/gin-gonic/gin"
|
||||||
|
)
|
||||||
|
|
||||||
|
type PricelistHandler struct {
|
||||||
|
localDB *localdb.LocalDB
|
||||||
|
}
|
||||||
|
|
||||||
|
func NewPricelistHandler(localDB *localdb.LocalDB) *PricelistHandler {
|
||||||
|
return &PricelistHandler{localDB: localDB}
|
||||||
|
}
|
||||||
|
|
||||||
|
// List returns all pricelists with pagination.
|
||||||
|
func (h *PricelistHandler) List(c *gin.Context) {
|
||||||
|
page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
|
||||||
|
perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))
|
||||||
|
if page < 1 {
|
||||||
|
page = 1
|
||||||
|
}
|
||||||
|
if perPage < 1 {
|
||||||
|
		perPage = 20
	}
	source := c.Query("source")
	activeOnly := c.DefaultQuery("active_only", "false") == "true"

	localPLs, err := h.localDB.GetLocalPricelists()
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	if source != "" {
		filtered := localPLs[:0]
		for _, lpl := range localPLs {
			if strings.EqualFold(lpl.Source, source) {
				filtered = append(filtered, lpl)
			}
		}
		localPLs = filtered
	}
	if activeOnly {
		// Local cache stores only active snapshots for normal operations.
	}
	sort.SliceStable(localPLs, func(i, j int) bool { return localPLs[i].CreatedAt.After(localPLs[j].CreatedAt) })
	total := len(localPLs)
	start := (page - 1) * perPage
	if start > total {
		start = total
	}
	end := start + perPage
	if end > total {
		end = total
	}
	pageSlice := localPLs[start:end]
	summaries := make([]map[string]interface{}, 0, len(pageSlice))
	for _, lpl := range pageSlice {
		itemCount := h.localDB.CountLocalPricelistItems(lpl.ID)
		usageCount := 0
		if lpl.IsUsed {
			usageCount = 1
		}
		summaries = append(summaries, map[string]interface{}{
			"id":          lpl.ServerID,
			"source":      lpl.Source,
			"version":     lpl.Version,
			"created_by":  "sync",
			"item_count":  itemCount,
			"usage_count": usageCount,
			"is_active":   true,
			"created_at":  lpl.CreatedAt,
			"synced_from": "local",
		})
	}

	c.JSON(http.StatusOK, gin.H{
		"pricelists": summaries,
		"total":      total,
		"page":       page,
		"per_page":   perPage,
	})
}
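The start/end clamping above keeps the page window inside the slice bounds even when the requested page is past the end. A minimal stdlib sketch of the same idea (the `paginate` helper name is ours, not part of the handler):

```go
package main

import "fmt"

// paginate returns the half-open [start, end) window for a 1-based page,
// clamped so slicing never panics even when the page is past the end.
func paginate(total, page, perPage int) (start, end int) {
	if page < 1 {
		page = 1
	}
	start = (page - 1) * perPage
	if start > total {
		start = total
	}
	end = start + perPage
	if end > total {
		end = total
	}
	return start, end
}

func main() {
	items := []string{"a", "b", "c", "d", "e"}
	s, e := paginate(len(items), 2, 2)
	fmt.Println(items[s:e]) // [c d]
	s, e = paginate(len(items), 9, 2)
	fmt.Println(items[s:e]) // [] — out-of-range page yields an empty window
}
```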
// Get returns a single pricelist by ID.
func (h *PricelistHandler) Get(c *gin.Context) {
	idStr := c.Param("id")
	id, err := strconv.ParseUint(idStr, 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
		return
	}

	localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"id":          localPL.ServerID,
		"source":      localPL.Source,
		"version":     localPL.Version,
		"created_by":  "sync",
		"item_count":  h.localDB.CountLocalPricelistItems(localPL.ID),
		"is_active":   true,
		"created_at":  localPL.CreatedAt,
		"synced_from": "local",
	})
}
// GetItems returns items for a pricelist with pagination.
func (h *PricelistHandler) GetItems(c *gin.Context) {
	idStr := c.Param("id")
	id, err := strconv.ParseUint(idStr, 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
		return
	}

	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "50"))
	search := c.Query("search")

	localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
		return
	}
	if page < 1 {
		page = 1
	}
	if perPage < 1 {
		perPage = 50
	}
	var items []localdb.LocalPricelistItem
	dbq := h.localDB.DB().Model(&localdb.LocalPricelistItem{}).Where("pricelist_id = ?", localPL.ID)
	if strings.TrimSpace(search) != "" {
		dbq = dbq.Where("lot_name LIKE ?", "%"+strings.TrimSpace(search)+"%")
	}
	var total int64
	if err := dbq.Count(&total).Error; err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	offset := (page - 1) * perPage

	if err := dbq.Order("lot_name").Offset(offset).Limit(perPage).Find(&items).Error; err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	resultItems := make([]gin.H, 0, len(items))
	for _, item := range items {
		resultItems = append(resultItems, gin.H{
			"id":            item.ID,
			"lot_name":      item.LotName,
			"price":         item.Price,
			"category":      item.LotCategory,
			"available_qty": item.AvailableQty,
			"partnumbers":   []string(item.Partnumbers),
		})
	}

	c.JSON(http.StatusOK, gin.H{
		"source":   localPL.Source,
		"items":    resultItems,
		"total":    total,
		"page":     page,
		"per_page": perPage,
	})
}
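Worth noting: the search term above is interpolated into a `LIKE` pattern as-is, so `%` and `_` in user input act as wildcards. If literal matching is wanted, the wildcards can be escaped first. A stdlib sketch (the `escapeLike` helper name is ours; the exact `ESCAPE` clause syntax depends on the database):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike neutralizes SQL LIKE wildcards in user input so a search for
// "100_GB" matches literally instead of treating "_" as "any character".
// The resulting pattern is typically paired with an ESCAPE clause, e.g.
//   WHERE lot_name LIKE ? ESCAPE '\'
func escapeLike(s string) string {
	r := strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)
	return r.Replace(s)
}

func main() {
	fmt.Println("%" + escapeLike("100_GB") + "%")
}
```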
func (h *PricelistHandler) GetLotNames(c *gin.Context) {
	idStr := c.Param("id")
	id, err := strconv.ParseUint(idStr, 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid pricelist ID"})
		return
	}

	localPL, err := h.localDB.GetLocalPricelistByServerID(uint(id))
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "pricelist not found"})
		return
	}
	items, err := h.localDB.GetLocalPricelistItems(localPL.ID)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	lotNames := make([]string, 0, len(items))
	for _, item := range items {
		lotNames = append(lotNames, item.LotName)
	}
	sort.Strings(lotNames)

	c.JSON(http.StatusOK, gin.H{
		"lot_names": lotNames,
		"total":     len(lotNames),
	})
}
// GetLatest returns the most recent active pricelist.
func (h *PricelistHandler) GetLatest(c *gin.Context) {
	source := c.DefaultQuery("source", string(models.PricelistSourceEstimate))
	source = string(models.NormalizePricelistSource(source))

	localPL, err := h.localDB.GetLatestLocalPricelistBySource(source)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "no pricelists available"})
		return
	}
	c.JSON(http.StatusOK, gin.H{
		"id":          localPL.ServerID,
		"source":      localPL.Source,
		"version":     localPL.Version,
		"created_by":  "sync",
		"item_count":  h.localDB.CountLocalPricelistItems(localPL.ID),
		"is_active":   true,
		"created_at":  localPL.CreatedAt,
		"synced_from": "local",
	})
}
84	internal/handlers/pricelist_test.go	Normal file
@@ -0,0 +1,84 @@
package handlers

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"github.com/gin-gonic/gin"
)

func TestPricelistGetItems_ReturnsLotCategoryFromLocalPricelistItems(t *testing.T) {
	gin.SetMode(gin.TestMode)

	local, err := localdb.New(filepath.Join(t.TempDir(), "local.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  1,
		Source:    "estimate",
		Version:   "S-2026-02-11-001",
		Name:      "test",
		CreatedAt: time.Now(),
		SyncedAt:  time.Now(),
		IsUsed:    false,
	}); err != nil {
		t.Fatalf("save local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(1)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}
	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{
			PricelistID: localPL.ID,
			LotName:     "NO_UNDERSCORE_NAME",
			LotCategory: "CPU",
			Price:       10,
		},
	}); err != nil {
		t.Fatalf("save local pricelist items: %v", err)
	}

	h := NewPricelistHandler(local)

	req, _ := http.NewRequest("GET", "/api/pricelists/1/items?page=1&per_page=50", nil)
	w := httptest.NewRecorder()
	c, _ := gin.CreateTestContext(w)
	c.Request = req
	c.Params = gin.Params{{Key: "id", Value: "1"}}

	h.GetItems(c)

	if w.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d: %s", w.Code, w.Body.String())
	}

	var resp struct {
		Items []struct {
			LotName   string `json:"lot_name"`
			Category  string `json:"category"`
			UnitPrice any    `json:"price"`
		} `json:"items"`
	}
	if err := json.Unmarshal(w.Body.Bytes(), &resp); err != nil {
		t.Fatalf("unmarshal response: %v", err)
	}
	if len(resp.Items) != 1 {
		t.Fatalf("expected 1 item, got %d", len(resp.Items))
	}
	if resp.Items[0].LotName != "NO_UNDERSCORE_NAME" {
		t.Fatalf("expected lot_name NO_UNDERSCORE_NAME, got %q", resp.Items[0].LotName)
	}
	if resp.Items[0].Category != "CPU" {
		t.Fatalf("expected category CPU, got %q", resp.Items[0].Category)
	}
}
@@ -1,210 +0,0 @@
package handlers

import (
	"net/http"
	"strconv"

	"github.com/gin-gonic/gin"
	"github.com/mchus/quoteforge/internal/middleware"
	"github.com/mchus/quoteforge/internal/models"
	"github.com/mchus/quoteforge/internal/repository"
	"github.com/mchus/quoteforge/internal/services/alerts"
	"github.com/mchus/quoteforge/internal/services/pricing"
)

type PricingHandler struct {
	pricingService *pricing.Service
	alertService   *alerts.Service
	componentRepo  *repository.ComponentRepository
	statsRepo      *repository.StatsRepository
}

func NewPricingHandler(
	pricingService *pricing.Service,
	alertService *alerts.Service,
	componentRepo *repository.ComponentRepository,
	statsRepo *repository.StatsRepository,
) *PricingHandler {
	return &PricingHandler{
		pricingService: pricingService,
		alertService:   alertService,
		componentRepo:  componentRepo,
		statsRepo:      statsRepo,
	}
}

func (h *PricingHandler) GetStats(c *gin.Context) {
	newAlerts, _ := h.alertService.GetNewAlertsCount()
	topComponents, _ := h.statsRepo.GetTopComponents(10)
	trendingComponents, _ := h.statsRepo.GetTrendingComponents(10)

	c.JSON(http.StatusOK, gin.H{
		"new_alerts_count":    newAlerts,
		"top_components":      topComponents,
		"trending_components": trendingComponents,
	})
}

func (h *PricingHandler) ListComponents(c *gin.Context) {
	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))

	filter := repository.ComponentFilter{
		Category: c.Query("category"),
		Vendor:   c.Query("vendor"),
		Search:   c.Query("search"),
	}

	if page < 1 {
		page = 1
	}
	if perPage < 1 || perPage > 100 {
		perPage = 20
	}
	offset := (page - 1) * perPage

	components, total, err := h.componentRepo.List(filter, offset, perPage)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"components": components,
		"total":      total,
		"page":       page,
		"per_page":   perPage,
	})
}

func (h *PricingHandler) GetComponentPricing(c *gin.Context) {
	lotName := c.Param("lot_name")

	component, err := h.componentRepo.GetByLotName(lotName)
	if err != nil {
		c.JSON(http.StatusNotFound, gin.H{"error": "component not found"})
		return
	}

	stats, err := h.pricingService.GetPriceStats(lotName, 0)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"component":   component,
		"price_stats": stats,
	})
}

type UpdatePriceRequest struct {
	LotName     string             `json:"lot_name" binding:"required"`
	Method      models.PriceMethod `json:"method"`
	PeriodDays  int                `json:"period_days"`
	ManualPrice *float64           `json:"manual_price"`
	Reason      string             `json:"reason"`
}

func (h *PricingHandler) UpdatePrice(c *gin.Context) {
	userID := middleware.GetUserID(c)

	var req UpdatePriceRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
		return
	}

	if req.ManualPrice != nil && *req.ManualPrice > 0 {
		err := h.pricingService.SetManualPrice(req.LotName, *req.ManualPrice, req.Reason, userID)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
	}

	if req.Method != "" {
		err := h.pricingService.UpdatePriceMethod(req.LotName, req.Method, req.PeriodDays)
		if err != nil {
			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
			return
		}
	}

	c.JSON(http.StatusOK, gin.H{"message": "price updated"})
}

func (h *PricingHandler) RecalculateAll(c *gin.Context) {
	// This would be better as a background job
	c.JSON(http.StatusAccepted, gin.H{"message": "recalculation started"})
}

func (h *PricingHandler) ListAlerts(c *gin.Context) {
	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))
	perPage, _ := strconv.Atoi(c.DefaultQuery("per_page", "20"))

	filter := repository.AlertFilter{
		Status:   models.AlertStatus(c.Query("status")),
		Severity: models.AlertSeverity(c.Query("severity")),
		Type:     models.AlertType(c.Query("type")),
		LotName:  c.Query("lot_name"),
	}

	alertsList, total, err := h.alertService.List(filter, page, perPage)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"alerts":   alertsList,
		"total":    total,
		"page":     page,
		"per_page": perPage,
	})
}

func (h *PricingHandler) AcknowledgeAlert(c *gin.Context) {
	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Acknowledge(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "acknowledged"})
}

func (h *PricingHandler) ResolveAlert(c *gin.Context) {
	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Resolve(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "resolved"})
}

func (h *PricingHandler) IgnoreAlert(c *gin.Context) {
	id, err := strconv.ParseUint(c.Param("id"), 10, 32)
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"error": "invalid alert id"})
		return
	}

	if err := h.alertService.Ignore(uint(id)); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}

	c.JSON(http.StatusOK, gin.H{"message": "ignored"})
}
@@ -3,8 +3,8 @@ package handlers
 import (
 	"net/http"

+	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/services"
 )

 type QuoteHandler struct {
@@ -49,3 +49,19 @@ func (h *QuoteHandler) Calculate(c *gin.Context) {
 		"total": result.Total,
 	})
 }
+
+func (h *QuoteHandler) PriceLevels(c *gin.Context) {
+	var req services.PriceLevelsRequest
+	if err := c.ShouldBindJSON(&req); err != nil {
+		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	result, err := h.quoteService.CalculatePriceLevels(&req)
+	if err != nil {
+		c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
+		return
+	}
+
+	c.JSON(http.StatusOK, result)
+}
272	internal/handlers/setup.go	Normal file
@@ -0,0 +1,272 @@
package handlers

import (
	"fmt"
	"html/template"
	"log/slog"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
	"time"

	qfassets "git.mchus.pro/mchus/quoteforge"
	"git.mchus.pro/mchus/quoteforge/internal/db"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	mysqlDriver "github.com/go-sql-driver/mysql"
	"github.com/gin-gonic/gin"
	gormmysql "gorm.io/driver/mysql"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

type SetupHandler struct {
	localDB    *localdb.LocalDB
	connMgr    *db.ConnectionManager
	templates  map[string]*template.Template
	restartSig chan struct{}
}

func NewSetupHandler(localDB *localdb.LocalDB, connMgr *db.ConnectionManager, templatesPath string, restartSig chan struct{}) (*SetupHandler, error) {
	funcMap := template.FuncMap{
		"sub": func(a, b int) int { return a - b },
		"add": func(a, b int) int { return a + b },
	}

	templates := make(map[string]*template.Template)

	// Load setup template (standalone, no base needed)
	setupPath := filepath.Join(templatesPath, "setup.html")
	var tmpl *template.Template
	var err error
	if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
		tmpl, err = template.New("").Funcs(funcMap).ParseFiles(setupPath)
	} else {
		tmpl, err = template.New("").Funcs(funcMap).ParseFS(qfassets.TemplatesFS, "web/templates/setup.html")
	}
	if err != nil {
		return nil, fmt.Errorf("parsing setup template: %w", err)
	}
	templates["setup.html"] = tmpl

	return &SetupHandler{
		localDB:    localDB,
		connMgr:    connMgr,
		templates:  templates,
		restartSig: restartSig,
	}, nil
}
// ShowSetup renders the database setup form
func (h *SetupHandler) ShowSetup(c *gin.Context) {
	c.Header("Content-Type", "text/html; charset=utf-8")

	// Get existing settings if any
	settings, _ := h.localDB.GetSettings()

	data := gin.H{
		"Settings": settings,
	}

	tmpl := h.templates["setup.html"]
	if err := tmpl.ExecuteTemplate(c.Writer, "setup.html", data); err != nil {
		c.String(http.StatusInternalServerError, "Template error: %v", err)
	}
}

// TestConnection tests the database connection without saving
func (h *SetupHandler) TestConnection(c *gin.Context) {
	host := c.PostForm("host")
	portStr := c.PostForm("port")
	database := c.PostForm("database")
	user := c.PostForm("user")
	password := c.PostForm("password")

	port := 3306
	if p, err := strconv.Atoi(portStr); err == nil {
		port = p
	}

	// If password is empty, try to use saved password
	if password == "" {
		if settings, err := h.localDB.GetSettings(); err == nil && settings != nil {
			password = settings.PasswordEncrypted // GetSettings returns decrypted password in this field
		}
	}

	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)

	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Connection failed: %v", err),
		})
		return
	}

	sqlDB, err := db.DB()
	if err != nil {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Failed to get database handle: %v", err),
		})
		return
	}
	defer sqlDB.Close()

	if err := sqlDB.Ping(); err != nil {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Ping failed: %v", err),
		})
		return
	}

	// Check for required tables
	var lotCount int64
	if err := db.Table("lot").Count(&lotCount).Error; err != nil {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Table 'lot' not found or inaccessible: %v", err),
		})
		return
	}

	// Check write permission
	canWrite := testWritePermission(db)

	c.JSON(http.StatusOK, gin.H{
		"success":   true,
		"lot_count": lotCount,
		"can_write": canWrite,
		"message":   fmt.Sprintf("Connected successfully! Found %d components.", lotCount),
	})
}
// SaveConnection saves the connection settings and signals restart
func (h *SetupHandler) SaveConnection(c *gin.Context) {
	existingSettings, _ := h.localDB.GetSettings()

	host := c.PostForm("host")
	portStr := c.PostForm("port")
	database := c.PostForm("database")
	user := c.PostForm("user")
	password := c.PostForm("password")

	port := 3306
	if p, err := strconv.Atoi(portStr); err == nil {
		port = p
	}

	// If password is empty, use saved password
	if password == "" {
		if settings, err := h.localDB.GetSettings(); err == nil && settings != nil {
			password = settings.PasswordEncrypted // GetSettings returns decrypted password in this field
		}
	}

	// Test connection first
	dsn := buildMySQLDSN(host, port, database, user, password, 5*time.Second)

	db, err := gorm.Open(gormmysql.Open(dsn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		c.JSON(http.StatusBadRequest, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Connection failed: %v", err),
		})
		return
	}

	sqlDB, _ := db.DB()
	sqlDB.Close()

	// Save settings
	if err := h.localDB.SaveSettings(host, port, database, user, password); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   fmt.Sprintf("Failed to save settings: %v", err),
		})
		return
	}

	// Try to connect immediately to verify settings
	if h.connMgr != nil {
		if err := h.connMgr.TryConnect(); err != nil {
			slog.Warn("failed to connect after saving settings", "error", err)
		} else {
			slog.Info("successfully connected to database after saving settings")
		}
	}

	settingsChanged := existingSettings == nil ||
		existingSettings.Host != host ||
		existingSettings.Port != port ||
		existingSettings.Database != database ||
		existingSettings.User != user ||
		existingSettings.PasswordEncrypted != password

	restartQueued := settingsChanged && h.restartSig != nil
	c.JSON(http.StatusOK, gin.H{
		"success":          true,
		"message":          "Settings saved.",
		"restart_required": settingsChanged,
		"restart_queued":   restartQueued,
	})

	// Signal restart after response is sent (if restart signal is configured)
	if restartQueued {
		go func() {
			time.Sleep(500 * time.Millisecond) // Give time for response to be sent
			select {
			case h.restartSig <- struct{}{}:
			default:
			}
		}()
	}
}
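The restart signal above is sent through a `select` with a `default` branch, so the goroutine never blocks if no one is listening (or a restart is already queued). A minimal sketch of that non-blocking send (the `trySignal` helper name is ours):

```go
package main

import "fmt"

// trySignal performs a non-blocking send: if no receiver is ready and the
// channel buffer is full, it returns false instead of blocking forever.
func trySignal(ch chan struct{}) bool {
	select {
	case ch <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	ch := make(chan struct{}, 1)
	fmt.Println(trySignal(ch)) // true: buffer has room
	fmt.Println(trySignal(ch)) // false: buffer full, no receiver yet
}
```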
// GetStatus returns the current setup status
func (h *SetupHandler) GetStatus(c *gin.Context) {
	hasSettings := h.localDB.HasSettings()
	c.JSON(http.StatusOK, gin.H{
		"configured": hasSettings,
	})
}

func testWritePermission(db *gorm.DB) bool {
	// Simple check: try to create a temporary table and drop it
	testTable := fmt.Sprintf("qt_write_test_%d", time.Now().UnixNano())

	// Try to create a test table
	err := db.Exec(fmt.Sprintf("CREATE TABLE %s (id INT)", testTable)).Error
	if err != nil {
		return false
	}

	// Drop it immediately
	db.Exec(fmt.Sprintf("DROP TABLE %s", testTable))

	return true
}

func buildMySQLDSN(host string, port int, database, user, password string, timeout time.Duration) string {
	cfg := mysqlDriver.NewConfig()
	cfg.User = user
	cfg.Passwd = password
	cfg.Net = "tcp"
	cfg.Addr = net.JoinHostPort(host, strconv.Itoa(port))
	cfg.DBName = database
	cfg.ParseTime = true
	cfg.Loc = time.Local
	cfg.Timeout = timeout
	cfg.Params = map[string]string{
		"charset": "utf8mb4",
	}
	return cfg.FormatDSN()
}
654	internal/handlers/sync.go	Normal file
@@ -0,0 +1,654 @@
package handlers

import (
	"errors"
	"fmt"
	"html/template"
	"log/slog"
	"net/http"
	"os"
	"path/filepath"
	stdsync "sync"
	"time"

	qfassets "git.mchus.pro/mchus/quoteforge"
	"git.mchus.pro/mchus/quoteforge/internal/db"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/services/sync"
	"github.com/gin-gonic/gin"
)

// SyncHandler handles sync API endpoints
type SyncHandler struct {
	localDB           *localdb.LocalDB
	syncService       *sync.Service
	connMgr           *db.ConnectionManager
	autoSyncInterval  time.Duration
	onlineGraceFactor float64
	tmpl              *template.Template
	readinessMu       stdsync.Mutex
	readinessCached   *sync.SyncReadiness
	readinessCachedAt time.Time
}

// NewSyncHandler creates a new sync handler
func NewSyncHandler(localDB *localdb.LocalDB, syncService *sync.Service, connMgr *db.ConnectionManager, templatesPath string, autoSyncInterval time.Duration) (*SyncHandler, error) {
	// Load sync_status partial template
	partialPath := filepath.Join(templatesPath, "partials", "sync_status.html")
	var tmpl *template.Template
	var err error
	if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
		tmpl, err = template.ParseFiles(partialPath)
	} else {
		tmpl, err = template.ParseFS(qfassets.TemplatesFS, "web/templates/partials/sync_status.html")
	}
	if err != nil {
		return nil, err
	}

	return &SyncHandler{
		localDB:           localDB,
		syncService:       syncService,
		connMgr:           connMgr,
		autoSyncInterval:  autoSyncInterval,
		onlineGraceFactor: 1.10,
		tmpl:              tmpl,
	}, nil
}
// SyncStatusResponse represents the sync status
|
||||||
|
type SyncStatusResponse struct {
|
||||||
|
LastComponentSync *time.Time `json:"last_component_sync"`
|
||||||
|
LastPricelistSync *time.Time `json:"last_pricelist_sync"`
|
||||||
|
IsOnline bool `json:"is_online"`
|
||||||
|
ComponentsCount int64 `json:"components_count"`
|
||||||
|
PricelistsCount int64 `json:"pricelists_count"`
|
||||||
|
ServerPricelists int `json:"server_pricelists"`
|
||||||
|
NeedComponentSync bool `json:"need_component_sync"`
|
||||||
|
NeedPricelistSync bool `json:"need_pricelist_sync"`
|
||||||
|
Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
type SyncReadinessResponse struct {
|
||||||
|
Status string `json:"status"`
|
||||||
|
Blocked bool `json:"blocked"`
|
||||||
|
ReasonCode string `json:"reason_code,omitempty"`
|
||||||
|
ReasonText string `json:"reason_text,omitempty"`
|
||||||
|
RequiredMinAppVersion *string `json:"required_min_app_version,omitempty"`
|
||||||
|
LastCheckedAt *time.Time `json:"last_checked_at,omitempty"`
|
||||||
|
}
|
||||||
|
|
// GetStatus returns the current sync status.
// GET /api/sync/status
func (h *SyncHandler) GetStatus(c *gin.Context) {
	// Check online status by pinging MariaDB.
	isOnline := h.checkOnline()

	// Get sync times.
	lastComponentSync := h.localDB.GetComponentSyncTime()
	lastPricelistSync := h.localDB.GetLastSyncTime()

	// Get local counts.
	componentsCount := h.localDB.CountLocalComponents()
	pricelistsCount := h.localDB.CountLocalPricelists()

	// Get the server pricelist count if online.
	serverPricelists := 0
	needPricelistSync := false
	if isOnline {
		status, err := h.syncService.GetStatus()
		if err == nil {
			serverPricelists = status.ServerPricelists
			needPricelistSync = status.NeedsSync
		}
	}

	// Check whether component sync is needed (older than 24 hours).
	needComponentSync := h.localDB.NeedComponentSync(24)
	readiness := h.getReadinessCached(10 * time.Second)

	c.JSON(http.StatusOK, SyncStatusResponse{
		LastComponentSync: lastComponentSync,
		LastPricelistSync: lastPricelistSync,
		IsOnline:          isOnline,
		ComponentsCount:   componentsCount,
		PricelistsCount:   pricelistsCount,
		ServerPricelists:  serverPricelists,
		NeedComponentSync: needComponentSync,
		NeedPricelistSync: needPricelistSync,
		Readiness:         readiness,
	})
}

// GetReadiness returns the sync readiness guard status.
// GET /api/sync/readiness
func (h *SyncHandler) GetReadiness(c *gin.Context) {
	readiness, err := h.syncService.GetReadiness()
	if err != nil && readiness == nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": err.Error(),
		})
		return
	}
	if readiness == nil {
		c.JSON(http.StatusOK, SyncReadinessResponse{Status: sync.ReadinessUnknown, Blocked: false})
		return
	}
	c.JSON(http.StatusOK, SyncReadinessResponse{
		Status:                readiness.Status,
		Blocked:               readiness.Blocked,
		ReasonCode:            readiness.ReasonCode,
		ReasonText:            readiness.ReasonText,
		RequiredMinAppVersion: readiness.RequiredMinAppVersion,
		LastCheckedAt:         readiness.LastCheckedAt,
	})
}

func (h *SyncHandler) ensureSyncReadiness(c *gin.Context) bool {
	_, err := h.syncService.EnsureReadinessForSync()
	if err == nil {
		return true
	}

	blocked := &sync.SyncBlockedError{}
	if errors.As(err, &blocked) {
		c.JSON(http.StatusLocked, gin.H{
			"success":                  false,
			"error":                    blocked.Error(),
			"reason_code":              blocked.Readiness.ReasonCode,
			"reason_text":              blocked.Readiness.ReasonText,
			"required_min_app_version": blocked.Readiness.RequiredMinAppVersion,
			"status":                   blocked.Readiness.Status,
			"blocked":                  true,
			"last_checked_at":          blocked.Readiness.LastCheckedAt,
		})
		return false
	}

	c.JSON(http.StatusInternalServerError, gin.H{
		"success": false,
		"error":   err.Error(),
	})
	return false
}

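The 423 Locked payload emitted by this guard can be decoded by a client before deciding whether to prompt for an update or simply retry. A minimal sketch; the struct and sample values below are illustrative, mirroring the `gin.H` keys used in the handler:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// blockedPayload mirrors the JSON fields emitted by ensureSyncReadiness
// on HTTP 423. Sample values in main are hypothetical.
type blockedPayload struct {
	Success               bool    `json:"success"`
	Error                 string  `json:"error"`
	ReasonCode            string  `json:"reason_code"`
	ReasonText            string  `json:"reason_text"`
	RequiredMinAppVersion *string `json:"required_min_app_version"`
	Blocked               bool    `json:"blocked"`
}

// decodeBlocked parses a 423 response body into blockedPayload.
func decodeBlocked(body []byte) (blockedPayload, error) {
	var p blockedPayload
	err := json.Unmarshal(body, &p)
	return p, err
}

func main() {
	body := []byte(`{"success":false,"error":"sync blocked","reason_code":"app_version_too_old","reason_text":"update required","blocked":true}`)
	p, err := decodeBlocked(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Blocked, p.ReasonCode) // true app_version_too_old
}
```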
// SyncResultResponse represents a sync operation result.
type SyncResultResponse struct {
	Success  bool   `json:"success"`
	Message  string `json:"message"`
	Synced   int    `json:"synced"`
	Duration string `json:"duration"`
}

// SyncComponents syncs components from MariaDB to local SQLite.
// POST /api/sync/components
func (h *SyncHandler) SyncComponents(c *gin.Context) {
	if !h.ensureSyncReadiness(c) {
		return
	}

	// Get a database connection from the ConnectionManager.
	mariaDB, err := h.connMgr.GetDB()
	if err != nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"success": false,
			"error":   "Database connection failed: " + err.Error(),
		})
		return
	}

	result, err := h.localDB.SyncComponents(mariaDB)
	if err != nil {
		slog.Error("component sync failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, SyncResultResponse{
		Success:  true,
		Message:  "Components synced successfully",
		Synced:   result.TotalSynced,
		Duration: result.Duration.String(),
	})
}

// SyncPricelists syncs pricelists from MariaDB to local SQLite.
// POST /api/sync/pricelists
func (h *SyncHandler) SyncPricelists(c *gin.Context) {
	if !h.ensureSyncReadiness(c) {
		return
	}

	startTime := time.Now()
	synced, err := h.syncService.SyncPricelists()
	if err != nil {
		slog.Error("pricelist sync failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, SyncResultResponse{
		Success:  true,
		Message:  "Pricelists synced successfully",
		Synced:   synced,
		Duration: time.Since(startTime).String(),
	})
	h.syncService.RecordSyncHeartbeat()
}

// SyncAllResponse represents the result of a full sync.
type SyncAllResponse struct {
	Success                bool   `json:"success"`
	Message                string `json:"message"`
	PendingPushed          int    `json:"pending_pushed"`
	ComponentsSynced       int    `json:"components_synced"`
	PricelistsSynced       int    `json:"pricelists_synced"`
	ProjectsImported       int    `json:"projects_imported"`
	ProjectsUpdated        int    `json:"projects_updated"`
	ProjectsSkipped        int    `json:"projects_skipped"`
	ConfigurationsImported int    `json:"configurations_imported"`
	ConfigurationsUpdated  int    `json:"configurations_updated"`
	ConfigurationsSkipped  int    `json:"configurations_skipped"`
	Duration               string `json:"duration"`
}

// SyncAll performs a full bidirectional sync:
//   - push pending local changes (projects/configurations) to the server
//   - pull components, pricelists, projects, and configurations from the server
//
// POST /api/sync/all
func (h *SyncHandler) SyncAll(c *gin.Context) {
	if !h.ensureSyncReadiness(c) {
		return
	}

	startTime := time.Now()
	var pendingPushed, componentsSynced, pricelistsSynced int

	// Push local pending changes first (projects/configurations).
	pendingPushed, err := h.syncService.PushPendingChanges()
	if err != nil {
		slog.Error("pending push failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   "Pending changes push failed: " + err.Error(),
		})
		return
	}

	// Sync components.
	mariaDB, err := h.connMgr.GetDB()
	if err != nil {
		c.JSON(http.StatusServiceUnavailable, gin.H{
			"success": false,
			"error":   "Database connection failed: " + err.Error(),
		})
		return
	}

	compResult, err := h.localDB.SyncComponents(mariaDB)
	if err != nil {
		slog.Error("component sync failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   "Component sync failed: " + err.Error(),
		})
		return
	}
	componentsSynced = compResult.TotalSynced

	// Sync pricelists.
	pricelistsSynced, err = h.syncService.SyncPricelists()
	if err != nil {
		slog.Error("pricelist sync failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success":           false,
			"error":             "Pricelist sync failed: " + err.Error(),
			"pending_pushed":    pendingPushed,
			"components_synced": componentsSynced,
		})
		return
	}

	projectsResult, err := h.syncService.ImportProjectsToLocal()
	if err != nil {
		slog.Error("project import failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success":           false,
			"error":             "Project import failed: " + err.Error(),
			"pending_pushed":    pendingPushed,
			"components_synced": componentsSynced,
			"pricelists_synced": pricelistsSynced,
		})
		return
	}

	configsResult, err := h.syncService.ImportConfigurationsToLocal()
	if err != nil {
		slog.Error("configuration import failed during full sync", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success":           false,
			"error":             "Configuration import failed: " + err.Error(),
			"pending_pushed":    pendingPushed,
			"components_synced": componentsSynced,
			"pricelists_synced": pricelistsSynced,
			"projects_imported": projectsResult.Imported,
			"projects_updated":  projectsResult.Updated,
			"projects_skipped":  projectsResult.Skipped,
		})
		return
	}

	c.JSON(http.StatusOK, SyncAllResponse{
		Success:                true,
		Message:                "Full sync completed successfully",
		PendingPushed:          pendingPushed,
		ComponentsSynced:       componentsSynced,
		PricelistsSynced:       pricelistsSynced,
		ProjectsImported:       projectsResult.Imported,
		ProjectsUpdated:        projectsResult.Updated,
		ProjectsSkipped:        projectsResult.Skipped,
		ConfigurationsImported: configsResult.Imported,
		ConfigurationsUpdated:  configsResult.Updated,
		ConfigurationsSkipped:  configsResult.Skipped,
		Duration:               time.Since(startTime).String(),
	})
	h.syncService.RecordSyncHeartbeat()
}

// checkOnline reports whether MariaDB is accessible.
func (h *SyncHandler) checkOnline() bool {
	return h.connMgr.IsOnline()
}

// PushPendingChanges pushes all pending changes to the server.
// POST /api/sync/push
func (h *SyncHandler) PushPendingChanges(c *gin.Context) {
	if !h.ensureSyncReadiness(c) {
		return
	}

	startTime := time.Now()
	pushed, err := h.syncService.PushPendingChanges()
	if err != nil {
		slog.Error("push pending changes failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, SyncResultResponse{
		Success:  true,
		Message:  "Pending changes pushed successfully",
		Synced:   pushed,
		Duration: time.Since(startTime).String(),
	})
	h.syncService.RecordSyncHeartbeat()
}

// GetPendingCount returns the number of pending changes.
// GET /api/sync/pending/count
func (h *SyncHandler) GetPendingCount(c *gin.Context) {
	count := h.localDB.GetPendingCount()
	c.JSON(http.StatusOK, gin.H{
		"count": count,
	})
}

// GetPendingChanges returns all pending changes.
// GET /api/sync/pending
func (h *SyncHandler) GetPendingChanges(c *gin.Context) {
	changes, err := h.localDB.GetPendingChanges()
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"changes": changes,
	})
}

// RepairPendingChanges attempts to repair errored pending changes.
// POST /api/sync/repair
func (h *SyncHandler) RepairPendingChanges(c *gin.Context) {
	repaired, remainingErrors, err := h.localDB.RepairPendingChanges()
	if err != nil {
		slog.Error("repair pending changes failed", "error", err)
		c.JSON(http.StatusInternalServerError, gin.H{
			"success": false,
			"error":   err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, gin.H{
		"success":          true,
		"repaired":         repaired,
		"remaining_errors": remainingErrors,
	})
}

// SyncInfoResponse represents sync information for the modal.
type SyncInfoResponse struct {
	// Connection
	DBHost string `json:"db_host"`
	DBUser string `json:"db_user"`
	DBName string `json:"db_name"`

	// Status
	IsOnline   bool       `json:"is_online"`
	LastSyncAt *time.Time `json:"last_sync_at"`

	// Statistics
	LotCount     int64 `json:"lot_count"`
	LotLogCount  int64 `json:"lot_log_count"`
	ConfigCount  int64 `json:"config_count"`
	ProjectCount int64 `json:"project_count"`

	// Pending changes
	PendingChanges []localdb.PendingChange `json:"pending_changes"`

	// Errors
	ErrorCount int         `json:"error_count"`
	Errors     []SyncError `json:"errors,omitempty"`

	// Readiness guard
	Readiness *sync.SyncReadiness `json:"readiness,omitempty"`
}

type SyncUsersStatusResponse struct {
	IsOnline                bool                  `json:"is_online"`
	AutoSyncIntervalSeconds int64                 `json:"auto_sync_interval_seconds"`
	OnlineThresholdSeconds  int64                 `json:"online_threshold_seconds"`
	GeneratedAt             time.Time             `json:"generated_at"`
	Users                   []sync.UserSyncStatus `json:"users"`
}

// SyncError represents a single sync error.
type SyncError struct {
	Timestamp time.Time `json:"timestamp"`
	Message   string    `json:"message"`
}

// GetInfo returns sync information for the modal.
// GET /api/sync/info
func (h *SyncHandler) GetInfo(c *gin.Context) {
	// Check online status by pinging MariaDB.
	isOnline := h.checkOnline()

	// Get DB connection info.
	var dbHost, dbUser, dbName string
	if settings, err := h.localDB.GetSettings(); err == nil {
		dbHost = fmt.Sprintf("%s:%d", settings.Host, settings.Port)
		dbUser = settings.User
		dbName = settings.Database
	}

	// Get sync times.
	lastPricelistSync := h.localDB.GetLastSyncTime()

	// Get MariaDB counts (if online).
	var lotCount, lotLogCount int64
	if isOnline {
		if mariaDB, err := h.connMgr.GetDB(); err == nil {
			mariaDB.Table("lot").Count(&lotCount)
			mariaDB.Table("lot_log").Count(&lotLogCount)
		}
	}

	// Get local counts.
	configCount := h.localDB.CountConfigurations()
	projectCount := h.localDB.CountProjects()

	// Get the error count (only changes with LastError != "").
	errorCount := int(h.localDB.CountErroredChanges())

	// Get pending changes.
	changes, err := h.localDB.GetPendingChanges()
	if err != nil {
		slog.Error("failed to get pending changes for sync info", "error", err)
		changes = []localdb.PendingChange{}
	}

	var syncErrors []SyncError
	for _, change := range changes {
		if change.LastError != "" {
			syncErrors = append(syncErrors, SyncError{
				Timestamp: change.CreatedAt,
				Message:   change.LastError,
			})
		}
	}

	// Limit to the last 10 errors.
	if len(syncErrors) > 10 {
		syncErrors = syncErrors[:10]
	}

	readiness := h.getReadinessCached(10 * time.Second)

	c.JSON(http.StatusOK, SyncInfoResponse{
		DBHost:         dbHost,
		DBUser:         dbUser,
		DBName:         dbName,
		IsOnline:       isOnline,
		LastSyncAt:     lastPricelistSync,
		LotCount:       lotCount,
		LotLogCount:    lotLogCount,
		ConfigCount:    configCount,
		ProjectCount:   projectCount,
		PendingChanges: changes,
		ErrorCount:     errorCount,
		Errors:         syncErrors,
		Readiness:      readiness,
	})
}

// GetUsersStatus returns last sync timestamps for users with sync heartbeats.
// GET /api/sync/users-status
func (h *SyncHandler) GetUsersStatus(c *gin.Context) {
	threshold := time.Duration(float64(h.autoSyncInterval) * h.onlineGraceFactor)
	isOnline := h.checkOnline()

	if !isOnline {
		c.JSON(http.StatusOK, SyncUsersStatusResponse{
			IsOnline:                false,
			AutoSyncIntervalSeconds: int64(h.autoSyncInterval.Seconds()),
			OnlineThresholdSeconds:  int64(threshold.Seconds()),
			GeneratedAt:             time.Now().UTC(),
			Users:                   []sync.UserSyncStatus{},
		})
		return
	}

	// Keep the current client's heartbeat fresh so the app version is available in the table.
	h.syncService.RecordSyncHeartbeat()

	users, err := h.syncService.ListUserSyncStatuses(threshold)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{
			"error": err.Error(),
		})
		return
	}

	c.JSON(http.StatusOK, SyncUsersStatusResponse{
		IsOnline:                true,
		AutoSyncIntervalSeconds: int64(h.autoSyncInterval.Seconds()),
		OnlineThresholdSeconds:  int64(threshold.Seconds()),
		GeneratedAt:             time.Now().UTC(),
		Users:                   users,
	})
}

// SyncStatusPartial renders the sync status partial for htmx.
// GET /partials/sync-status
func (h *SyncHandler) SyncStatusPartial(c *gin.Context) {
	// Check online status from middleware.
	isOffline := false
	if isOfflineValue, exists := c.Get("is_offline"); exists {
		// Comma-ok assertion: a wrong type in the context must not panic.
		isOffline, _ = isOfflineValue.(bool)
	} else {
		// Fallback: check directly if the middleware didn't set it.
		isOffline = !h.checkOnline()
		slog.Warn("is_offline not found in context, checking directly")
	}

	// Get the pending count and readiness.
	pendingCount := h.localDB.GetPendingCount()
	readiness := h.getReadinessCached(10 * time.Second)
	isBlocked := readiness != nil && readiness.Blocked

	blockedReason := ""
	if readiness != nil {
		blockedReason = readiness.ReasonText
	}

	slog.Debug("rendering sync status", "is_offline", isOffline, "pending_count", pendingCount, "sync_blocked", isBlocked)

	data := gin.H{
		"IsOffline":     isOffline,
		"PendingCount":  pendingCount,
		"IsBlocked":     isBlocked,
		"BlockedReason": blockedReason,
	}

	c.Header("Content-Type", "text/html; charset=utf-8")
	if err := h.tmpl.ExecuteTemplate(c.Writer, "sync_status", data); err != nil {
		slog.Error("failed to render sync_status template", "error", err)
		c.String(http.StatusInternalServerError, "Template error: "+err.Error())
	}
}

func (h *SyncHandler) getReadinessCached(maxAge time.Duration) *sync.SyncReadiness {
	h.readinessMu.Lock()
	if h.readinessCached != nil && time.Since(h.readinessCachedAt) < maxAge {
		cached := *h.readinessCached
		h.readinessMu.Unlock()
		return &cached
	}
	h.readinessMu.Unlock()

	readiness, err := h.syncService.GetReadiness()
	if err != nil && readiness == nil {
		return nil
	}

	h.readinessMu.Lock()
	h.readinessCached = readiness
	h.readinessCachedAt = time.Now()
	h.readinessMu.Unlock()
	return readiness
}

internal/handlers/sync_readiness_test.go (new file, 64 lines)
@@ -0,0 +1,64 @@
package handlers

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
	"github.com/gin-gonic/gin"
)

func TestSyncReadinessOfflineBlocked(t *testing.T) {
	gin.SetMode(gin.TestMode)

	dir := t.TempDir()
	local, err := localdb.New(filepath.Join(dir, "qfs.db"))
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}

	service := syncsvc.NewService(nil, local)
	h, err := NewSyncHandler(local, service, nil, filepath.Join("web", "templates"), 5*time.Minute)
	if err != nil {
		t.Fatalf("new sync handler: %v", err)
	}

	router := gin.New()
	router.GET("/api/sync/readiness", h.GetReadiness)
	router.POST("/api/sync/push", h.PushPendingChanges)

	readinessResp := httptest.NewRecorder()
	readinessReq, _ := http.NewRequest(http.MethodGet, "/api/sync/readiness", nil)
	router.ServeHTTP(readinessResp, readinessReq)
	if readinessResp.Code != http.StatusOK {
		t.Fatalf("unexpected readiness status: %d", readinessResp.Code)
	}

	var readinessBody map[string]any
	if err := json.Unmarshal(readinessResp.Body.Bytes(), &readinessBody); err != nil {
		t.Fatalf("decode readiness body: %v", err)
	}
	if blocked, _ := readinessBody["blocked"].(bool); !blocked {
		t.Fatalf("expected blocked readiness, got %v", readinessBody["blocked"])
	}

	pushResp := httptest.NewRecorder()
	pushReq, _ := http.NewRequest(http.MethodPost, "/api/sync/push", nil)
	router.ServeHTTP(pushResp, pushReq)
	if pushResp.Code != http.StatusLocked {
		t.Fatalf("expected 423 for blocked sync push, got %d body=%s", pushResp.Code, pushResp.Body.String())
	}

	var pushBody map[string]any
	if err := json.Unmarshal(pushResp.Body.Bytes(), &pushBody); err != nil {
		t.Fatalf("decode push body: %v", err)
	}
	if pushBody["reason_text"] == nil || pushBody["reason_text"] == "" {
		t.Fatalf("expected reason_text in blocked response, got %v", pushBody)
	}
}
internal/handlers/web.go (new file, 244 lines)
@@ -0,0 +1,244 @@
package handlers

import (
	"html/template"
	"os"
	"path/filepath"
	"strconv"

	qfassets "git.mchus.pro/mchus/quoteforge"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
	"git.mchus.pro/mchus/quoteforge/internal/services"
	"github.com/gin-gonic/gin"
)

type WebHandler struct {
	templates        map[string]*template.Template
	componentService *services.ComponentService
}

func NewWebHandler(templatesPath string, componentService *services.ComponentService) (*WebHandler, error) {
	funcMap := template.FuncMap{
		"sub": func(a, b int) int { return a - b },
		"add": func(a, b int) int { return a + b },
		"mul": func(a, b int) int { return a * b },
		"div": func(a, b int) int {
			// Ceiling division; guard against division by zero.
			if b == 0 {
				return 0
			}
			return (a + b - 1) / b
		},
		"deref": func(f *float64) float64 {
			if f == nil {
				return 0
			}
			return *f
		},
		"jsesc": func(s string) string {
			// Escape a string for safe use in JavaScript.
			result := ""
			for _, r := range s {
				switch r {
				case '\\':
					result += "\\\\"
				case '\'':
					result += "\\'"
				case '"':
					result += "\\\""
				case '\n':
					result += "\\n"
				case '\r':
					result += "\\r"
				case '\t':
					result += "\\t"
				default:
					result += string(r)
				}
			}
			return result
		},
	}

	templates := make(map[string]*template.Template)
	basePath := filepath.Join(templatesPath, "base.html")
	useDisk := false
	if stat, statErr := os.Stat(templatesPath); statErr == nil && stat.IsDir() {
		useDisk = true
	}

	// Load each page template together with base.
	simplePages := []string{"login.html", "configs.html", "projects.html", "project_detail.html", "pricelists.html", "pricelist_detail.html", "config_revisions.html"}
	for _, page := range simplePages {
		pagePath := filepath.Join(templatesPath, page)
		var tmpl *template.Template
		var err error
		if useDisk {
			tmpl, err = template.New("").Funcs(funcMap).ParseFiles(basePath, pagePath)
		} else {
			tmpl, err = template.New("").Funcs(funcMap).ParseFS(
				qfassets.TemplatesFS,
				"web/templates/base.html",
				"web/templates/"+page,
			)
		}
		if err != nil {
			return nil, err
		}
		templates[page] = tmpl
	}

	// The index page needs components_list.html as well.
	indexPath := filepath.Join(templatesPath, "index.html")
	componentsListPath := filepath.Join(templatesPath, "components_list.html")
	var indexTmpl *template.Template
	var err error
	if useDisk {
		indexTmpl, err = template.New("").Funcs(funcMap).ParseFiles(basePath, indexPath, componentsListPath)
	} else {
		indexTmpl, err = template.New("").Funcs(funcMap).ParseFS(
			qfassets.TemplatesFS,
			"web/templates/base.html",
			"web/templates/index.html",
			"web/templates/components_list.html",
		)
	}
	if err != nil {
		return nil, err
	}
	templates["index.html"] = indexTmpl

	// Load partial templates (no base needed).
	partials := []string{"components_list.html"}
	for _, partial := range partials {
		partialPath := filepath.Join(templatesPath, partial)
		var tmpl *template.Template
		var err error
		if useDisk {
			tmpl, err = template.New("").Funcs(funcMap).ParseFiles(partialPath)
		} else {
			tmpl, err = template.New("").Funcs(funcMap).ParseFS(
				qfassets.TemplatesFS,
				"web/templates/"+partial,
			)
		}
		if err != nil {
			return nil, err
		}
		templates[partial] = tmpl
	}

	return &WebHandler{
		templates:        templates,
		componentService: componentService,
	}, nil
}

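The `jsesc` helper above builds its result with repeated string concatenation, which allocates on every iteration. The same logic can be written with a `strings.Builder`; a sketch (the standalone `escapeJS` name is illustrative, the escape table matches the helper):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeJS replicates the jsesc template helper using a strings.Builder
// instead of string concatenation, avoiding per-rune allocations.
func escapeJS(s string) string {
	var b strings.Builder
	for _, r := range s {
		switch r {
		case '\\':
			b.WriteString(`\\`)
		case '\'':
			b.WriteString(`\'`)
		case '"':
			b.WriteString(`\"`)
		case '\n':
			b.WriteString(`\n`)
		case '\r':
			b.WriteString(`\r`)
		case '\t':
			b.WriteString(`\t`)
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapeJS("it's a \"test\""))
}
```

Note this escapes only quotes, backslashes, and whitespace controls; it is not a substitute for `html/template`'s contextual auto-escaping when the value lands inside a `<script>` block.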
func (h *WebHandler) render(c *gin.Context, name string, data gin.H) {
	c.Header("Content-Type", "text/html; charset=utf-8")
	tmpl, ok := h.templates[name]
	if !ok {
		c.String(500, "Template not found: %s", name)
		return
	}
	// Execute the page template, which pulls in base.
	if err := tmpl.ExecuteTemplate(c.Writer, name, data); err != nil {
		c.String(500, "Template error: %v", err)
	}
}

func (h *WebHandler) Index(c *gin.Context) {
	// Redirect to the projects page; the configurator is accessed via /configurator?uuid=...
	c.Redirect(302, "/projects")
}

func (h *WebHandler) Configurator(c *gin.Context) {
	categories, _ := h.componentService.GetCategories()
	uuid := c.Query("uuid")

	filter := repository.ComponentFilter{}
	result, err := h.componentService.List(filter, 1, 20)

	data := gin.H{
		"ActivePage": "configurator",
		"Categories": categories,
		"Components": []interface{}{},
		"Total":      int64(0),
		"Page":       1,
		"PerPage":    20,
		"ConfigUUID": uuid,
	}

	if err == nil && result != nil {
		data["Components"] = result.Components
		data["Total"] = result.Total
		data["Page"] = result.Page
		data["PerPage"] = result.PerPage
	}

	h.render(c, "index.html", data)
}

func (h *WebHandler) Login(c *gin.Context) {
	h.render(c, "login.html", nil)
}

func (h *WebHandler) Configs(c *gin.Context) {
	h.render(c, "configs.html", gin.H{"ActivePage": "configs"})
}

func (h *WebHandler) Projects(c *gin.Context) {
	h.render(c, "projects.html", gin.H{"ActivePage": "projects"})
}

func (h *WebHandler) ProjectDetail(c *gin.Context) {
	h.render(c, "project_detail.html", gin.H{
		"ActivePage":  "projects",
		"ProjectUUID": c.Param("uuid"),
	})
}

func (h *WebHandler) ConfigRevisions(c *gin.Context) {
	h.render(c, "config_revisions.html", gin.H{
		"ActivePage": "configs",
		"ConfigUUID": c.Param("uuid"),
	})
}

func (h *WebHandler) Pricelists(c *gin.Context) {
	h.render(c, "pricelists.html", gin.H{"ActivePage": "pricelists"})
}

func (h *WebHandler) PricelistDetail(c *gin.Context) {
	h.render(c, "pricelist_detail.html", gin.H{"ActivePage": "pricelists"})
}

// Partials for htmx

func (h *WebHandler) ComponentsPartial(c *gin.Context) {
	page, _ := strconv.Atoi(c.DefaultQuery("page", "1"))

	filter := repository.ComponentFilter{
		Category: c.Query("category"),
		Search:   c.Query("search"),
	}

	data := gin.H{
		"Components": []interface{}{},
		"Total":      int64(0),
		"Page":       page,
		"PerPage":    20,
	}

	result, err := h.componentService.List(filter, page, 20)
	if err == nil && result != nil {
		data["Components"] = result.Components
		data["Total"] = result.Total
		data["Page"] = result.Page
		data["PerPage"] = result.PerPage
	}

	c.Header("Content-Type", "text/html; charset=utf-8")
	if tmpl, ok := h.templates["components_list.html"]; ok {
		_ = tmpl.ExecuteTemplate(c.Writer, "components_list.html", data)
	}
}
internal/localdb/components.go (new file, 329 lines)
@@ -0,0 +1,329 @@
package localdb

import (
	"fmt"
	"log/slog"
	"strings"
	"time"

	"gorm.io/gorm"
)

// ComponentFilter describes search filters for local components.
type ComponentFilter struct {
	Category string
	Search   string
	HasPrice bool
}

// ComponentSyncResult contains statistics from a component sync.
type ComponentSyncResult struct {
	TotalSynced int
	NewCount    int
	UpdateCount int
	Duration    time.Duration
}

||||||
|
// SyncComponents loads components from MariaDB (lot + qt_lot_metadata) into local_components
|
||||||
|
func (l *LocalDB) SyncComponents(mariaDB *gorm.DB) (*ComponentSyncResult, error) {
|
||||||
|
startTime := time.Now()
|
||||||
|
|
||||||
|
// Query to join lot with qt_lot_metadata (metadata only, no pricing)
|
||||||
|
// Use LEFT JOIN to include lots without metadata
|
||||||
|
type componentRow struct {
|
||||||
|
LotName string
|
||||||
|
LotDescription string
|
||||||
|
Category *string
|
||||||
|
Model *string
|
||||||
|
}
|
||||||
|
|
||||||
|
var rows []componentRow
|
||||||
|
err := mariaDB.Raw(`
|
||||||
|
SELECT
|
||||||
|
l.lot_name,
|
||||||
|
l.lot_description,
|
||||||
|
COALESCE(c.code, SUBSTRING_INDEX(l.lot_name, '_', 1)) as category,
|
||||||
|
m.model
|
||||||
|
FROM lot l
|
||||||
|
LEFT JOIN qt_lot_metadata m ON l.lot_name = m.lot_name
|
||||||
|
LEFT JOIN qt_categories c ON m.category_id = c.id
|
||||||
|
WHERE m.is_hidden = FALSE OR m.is_hidden IS NULL
|
||||||
|
ORDER BY l.lot_name
|
||||||
|
`).Scan(&rows).Error
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("querying components from MariaDB: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(rows) == 0 {
|
||||||
|
slog.Warn("no components found in MariaDB")
|
||||||
|
return &ComponentSyncResult{
|
||||||
|
Duration: time.Since(startTime),
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get existing local components for comparison
|
||||||
|
existingMap := make(map[string]bool)
|
||||||
|
var existing []LocalComponent
|
||||||
|
if err := l.db.Find(&existing).Error; err != nil {
|
||||||
|
return nil, fmt.Errorf("reading existing local components: %w", err)
|
||||||
|
}
|
||||||
|
for _, c := range existing {
|
||||||
|
existingMap[c.LotName] = true
|
||||||
|
}
|
||||||
|
|
||||||
|
// Prepare components for batch insert/update
|
||||||
|
syncTime := time.Now()
|
||||||
|
components := make([]LocalComponent, 0, len(rows))
|
||||||
|
newCount := 0
|
||||||
|
|
||||||
|
for _, row := range rows {
|
||||||
|
category := ""
|
||||||
|
if row.Category != nil {
|
||||||
|
category = *row.Category
|
||||||
|
} else {
|
||||||
|
// Parse category from lot_name (e.g., "CPU_AMD_9654" -> "CPU")
|
||||||
|
parts := strings.SplitN(row.LotName, "_", 2)
|
||||||
|
if len(parts) >= 1 {
|
||||||
|
category = parts[0]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
model := ""
|
||||||
|
if row.Model != nil {
|
||||||
|
model = *row.Model
|
||||||
|
}
|
||||||
|
|
||||||
|
comp := LocalComponent{
|
||||||
|
LotName: row.LotName,
|
||||||
|
LotDescription: row.LotDescription,
|
||||||
|
Category: category,
|
||||||
|
Model: model,
|
||||||
|
}
|
||||||
|
components = append(components, comp)
|
||||||
|
|
||||||
|
if !existingMap[row.LotName] {
|
||||||
|
newCount++
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Use transaction for bulk upsert
|
||||||
|
err = l.db.Transaction(func(tx *gorm.DB) error {
|
||||||
|
// Delete all existing and insert new (simpler than upsert for SQLite)
|
||||||
|
if err := tx.Where("1=1").Delete(&LocalComponent{}).Error; err != nil {
|
||||||
|
return fmt.Errorf("clearing local components: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Batch insert
|
||||||
|
batchSize := 500
|
||||||
|
for i := 0; i < len(components); i += batchSize {
|
||||||
|
end := i + batchSize
|
||||||
|
if end > len(components) {
|
||||||
|
end = len(components)
|
||||||
|
}
|
||||||
|
if err := tx.CreateInBatches(components[i:end], batchSize).Error; err != nil {
|
||||||
|
return fmt.Errorf("inserting components batch: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Update last sync time
|
||||||
|
if err := l.SetComponentSyncTime(syncTime); err != nil {
|
||||||
|
slog.Warn("failed to update component sync time", "error", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
result := &ComponentSyncResult{
|
||||||
|
TotalSynced: len(components),
|
||||||
|
NewCount: newCount,
|
||||||
|
UpdateCount: len(components) - newCount,
|
||||||
|
Duration: time.Since(startTime),
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("components synced",
|
||||||
|
"total", result.TotalSynced,
|
||||||
|
"new", result.NewCount,
|
||||||
|
"updated", result.UpdateCount,
|
||||||
|
"duration", result.Duration)
|
||||||
|
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SearchLocalComponents searches components in local cache by query string
|
||||||
|
// Searches in lot_name, lot_description, category, and model fields
|
||||||
|
func (l *LocalDB) SearchLocalComponents(query string, limit int) ([]LocalComponent, error) {
|
||||||
|
if limit <= 0 {
|
||||||
|
limit = 50
|
||||||
|
}
|
||||||
|
|
||||||
|
var components []LocalComponent
|
||||||
|
|
||||||
|
if query == "" {
|
||||||
|
// Return all components with limit
|
||||||
|
err := l.db.Order("lot_name").Limit(limit).Find(&components).Error
|
||||||
|
return components, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Search with LIKE on multiple fields
|
||||||
|
searchPattern := "%" + strings.ToLower(query) + "%"
|
||||||
|
|
||||||
|
err := l.db.Where(
|
||||||
|
"LOWER(lot_name) LIKE ? OR LOWER(lot_description) LIKE ? OR LOWER(category) LIKE ? OR LOWER(model) LIKE ?",
|
||||||
|
searchPattern, searchPattern, searchPattern, searchPattern,
|
||||||
|
).Order("lot_name").Limit(limit).Find(&components).Error
|
||||||
|
|
||||||
|
return components, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// SearchLocalComponentsByCategory searches components by category and optional query
|
||||||
|
func (l *LocalDB) SearchLocalComponentsByCategory(category string, query string, limit int) ([]LocalComponent, error) {
|
||||||
|
if limit <= 0 {
|
||||||
|
limit = 50
|
||||||
|
}
|
||||||
|
|
||||||
|
var components []LocalComponent
|
||||||
|
db := l.db.Where("LOWER(category) = ?", strings.ToLower(category))
|
||||||
|
|
||||||
|
if query != "" {
|
||||||
|
searchPattern := "%" + strings.ToLower(query) + "%"
|
||||||
|
db = db.Where(
|
||||||
|
"LOWER(lot_name) LIKE ? OR LOWER(lot_description) LIKE ? OR LOWER(model) LIKE ?",
|
||||||
|
searchPattern, searchPattern, searchPattern,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
err := db.Order("lot_name").Limit(limit).Find(&components).Error
|
||||||
|
return components, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// ListComponents returns components with filtering and pagination
|
||||||
|
func (l *LocalDB) ListComponents(filter ComponentFilter, offset, limit int) ([]LocalComponent, int64, error) {
|
||||||
|
db := l.db
|
||||||
|
|
||||||
|
// Apply category filter
|
||||||
|
if filter.Category != "" {
|
||||||
|
db = db.Where("LOWER(category) = ?", strings.ToLower(filter.Category))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply search filter
|
||||||
|
if filter.Search != "" {
|
||||||
|
searchPattern := "%" + strings.ToLower(filter.Search) + "%"
|
||||||
|
db = db.Where(
|
||||||
|
"LOWER(lot_name) LIKE ? OR LOWER(lot_description) LIKE ? OR LOWER(category) LIKE ? OR LOWER(model) LIKE ?",
|
||||||
|
searchPattern, searchPattern, searchPattern, searchPattern,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get total count
|
||||||
|
var total int64
|
||||||
|
if err := db.Model(&LocalComponent{}).Count(&total).Error; err != nil {
|
||||||
|
return nil, 0, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Apply pagination and get results
|
||||||
|
var components []LocalComponent
|
||||||
|
if err := db.Order("lot_name").Offset(offset).Limit(limit).Find(&components).Error; err != nil {
|
||||||
|
return nil, 0, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return components, total, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetLocalComponent returns a single component by lot_name
|
||||||
|
func (l *LocalDB) GetLocalComponent(lotName string) (*LocalComponent, error) {
|
||||||
|
var component LocalComponent
|
||||||
|
err := l.db.Where("lot_name = ?", lotName).First(&component).Error
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return &component, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetLocalComponentCategoriesByLotNames returns category for each lot_name in the local component cache.
|
||||||
|
// Missing lots are not included in the map; caller is responsible for strict validation.
|
||||||
|
func (l *LocalDB) GetLocalComponentCategoriesByLotNames(lotNames []string) (map[string]string, error) {
|
||||||
|
result := make(map[string]string, len(lotNames))
|
||||||
|
if len(lotNames) == 0 {
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
type row struct {
|
||||||
|
LotName string `gorm:"column:lot_name"`
|
||||||
|
Category string `gorm:"column:category"`
|
||||||
|
}
|
||||||
|
var rows []row
|
||||||
|
if err := l.db.Model(&LocalComponent{}).
|
||||||
|
Select("lot_name, category").
|
||||||
|
Where("lot_name IN ?", lotNames).
|
||||||
|
Find(&rows).Error; err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
for _, r := range rows {
|
||||||
|
result[r.LotName] = r.Category
|
||||||
|
}
|
||||||
|
return result, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetLocalComponentCategories returns distinct categories from local components
|
||||||
|
func (l *LocalDB) GetLocalComponentCategories() ([]string, error) {
|
||||||
|
var categories []string
|
||||||
|
err := l.db.Model(&LocalComponent{}).
|
||||||
|
Distinct("category").
|
||||||
|
Where("category != ''").
|
||||||
|
Order("category").
|
||||||
|
Pluck("category", &categories).Error
|
||||||
|
return categories, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// CountLocalComponents returns the total number of local components
|
||||||
|
func (l *LocalDB) CountLocalComponents() int64 {
|
||||||
|
var count int64
|
||||||
|
l.db.Model(&LocalComponent{}).Count(&count)
|
||||||
|
return count
|
||||||
|
}
|
||||||
|
|
||||||
|
// CountLocalComponentsByCategory returns component count by category
|
||||||
|
func (l *LocalDB) CountLocalComponentsByCategory(category string) int64 {
|
||||||
|
var count int64
|
||||||
|
l.db.Model(&LocalComponent{}).Where("LOWER(category) = ?", strings.ToLower(category)).Count(&count)
|
||||||
|
return count
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetComponentSyncTime returns the last component sync timestamp
|
||||||
|
func (l *LocalDB) GetComponentSyncTime() *time.Time {
|
||||||
|
var setting struct {
|
||||||
|
Value string
|
||||||
|
}
|
||||||
|
if err := l.db.Table("app_settings").
|
||||||
|
Where("key = ?", "last_component_sync").
|
||||||
|
First(&setting).Error; err != nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
t, err := time.Parse(time.RFC3339, setting.Value)
|
||||||
|
if err != nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
return &t
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetComponentSyncTime sets the last component sync timestamp
|
||||||
|
func (l *LocalDB) SetComponentSyncTime(t time.Time) error {
|
||||||
|
return l.db.Exec(`
|
||||||
|
INSERT INTO app_settings (key, value, updated_at)
|
||||||
|
VALUES (?, ?, ?)
|
||||||
|
ON CONFLICT(key) DO UPDATE SET value = excluded.value, updated_at = excluded.updated_at
|
||||||
|
`, "last_component_sync", t.Format(time.RFC3339), time.Now().Format(time.RFC3339)).Error
|
||||||
|
}
|
||||||
|
|
||||||
|
// NeedComponentSync checks if component sync is needed (older than specified hours)
|
||||||
|
func (l *LocalDB) NeedComponentSync(maxAgeHours int) bool {
|
||||||
|
syncTime := l.GetComponentSyncTime()
|
||||||
|
if syncTime == nil {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
return time.Since(*syncTime).Hours() > float64(maxAgeHours)
|
||||||
|
}
|
||||||
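When a lot has no category metadata, SyncComponents falls back to the prefix of lot_name before the first underscore, mirroring the SQL's SUBSTRING_INDEX(l.lot_name, '_', 1). A minimal standalone sketch of that fallback; `categoryOf` is a hypothetical helper name, not part of the package:

```go
package main

import (
    "fmt"
    "strings"
)

// categoryOf mirrors the fallback in SyncComponents: take everything
// before the first underscore. A name without an underscore is its own
// category, since SplitN always returns at least one element.
func categoryOf(lotName string) string {
    parts := strings.SplitN(lotName, "_", 2)
    return parts[0]
}

func main() {
    fmt.Println(categoryOf("CPU_AMD_9654")) // CPU
    fmt.Println(categoryOf("CHASSIS"))      // CHASSIS
}
```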
244  internal/localdb/converters.go  Normal file
@@ -0,0 +1,244 @@
package localdb

import (
    "time"

    "git.mchus.pro/mchus/quoteforge/internal/models"
)

// ConfigurationToLocal converts models.Configuration to LocalConfiguration
func ConfigurationToLocal(cfg *models.Configuration) *LocalConfiguration {
    items := make(LocalConfigItems, len(cfg.Items))
    for i, item := range cfg.Items {
        items[i] = LocalConfigItem{
            LotName:   item.LotName,
            Quantity:  item.Quantity,
            UnitPrice: item.UnitPrice,
        }
    }

    local := &LocalConfiguration{
        UUID:             cfg.UUID,
        ProjectUUID:      cfg.ProjectUUID,
        IsActive:         true,
        Name:             cfg.Name,
        Items:            items,
        TotalPrice:       cfg.TotalPrice,
        CustomPrice:      cfg.CustomPrice,
        Notes:            cfg.Notes,
        IsTemplate:       cfg.IsTemplate,
        ServerCount:      cfg.ServerCount,
        ServerModel:      cfg.ServerModel,
        SupportCode:      cfg.SupportCode,
        Article:          cfg.Article,
        PricelistID:      cfg.PricelistID,
        OnlyInStock:      cfg.OnlyInStock,
        PriceUpdatedAt:   cfg.PriceUpdatedAt,
        CreatedAt:        cfg.CreatedAt,
        UpdatedAt:        time.Now(),
        SyncStatus:       "pending",
        OriginalUserID:   derefUint(cfg.UserID),
        OriginalUsername: cfg.OwnerUsername,
    }

    if local.OriginalUsername == "" && cfg.User != nil {
        local.OriginalUsername = cfg.User.Username
    }

    if cfg.ID > 0 {
        serverID := cfg.ID
        local.ServerID = &serverID
    }

    return local
}

// LocalToConfiguration converts LocalConfiguration to models.Configuration
func LocalToConfiguration(local *LocalConfiguration) *models.Configuration {
    items := make(models.ConfigItems, len(local.Items))
    for i, item := range local.Items {
        items[i] = models.ConfigItem{
            LotName:   item.LotName,
            Quantity:  item.Quantity,
            UnitPrice: item.UnitPrice,
        }
    }

    cfg := &models.Configuration{
        UUID:           local.UUID,
        OwnerUsername:  local.OriginalUsername,
        ProjectUUID:    local.ProjectUUID,
        Name:           local.Name,
        Items:          items,
        TotalPrice:     local.TotalPrice,
        CustomPrice:    local.CustomPrice,
        Notes:          local.Notes,
        IsTemplate:     local.IsTemplate,
        ServerCount:    local.ServerCount,
        ServerModel:    local.ServerModel,
        SupportCode:    local.SupportCode,
        Article:        local.Article,
        PricelistID:    local.PricelistID,
        OnlyInStock:    local.OnlyInStock,
        PriceUpdatedAt: local.PriceUpdatedAt,
        CreatedAt:      local.CreatedAt,
    }

    if local.ServerID != nil {
        cfg.ID = *local.ServerID
    }
    if local.OriginalUserID != 0 {
        userID := local.OriginalUserID
        cfg.UserID = &userID
    }
    if local.CurrentVersion != nil {
        cfg.CurrentVersionNo = local.CurrentVersion.VersionNo
    }

    return cfg
}

func derefUint(v *uint) uint {
    if v == nil {
        return 0
    }
    return *v
}

func ProjectToLocal(project *models.Project) *LocalProject {
    local := &LocalProject{
        UUID:          project.UUID,
        OwnerUsername: project.OwnerUsername,
        Code:          project.Code,
        Variant:       project.Variant,
        Name:          project.Name,
        TrackerURL:    project.TrackerURL,
        IsActive:      project.IsActive,
        IsSystem:      project.IsSystem,
        CreatedAt:     project.CreatedAt,
        UpdatedAt:     project.UpdatedAt,
        SyncStatus:    "pending",
    }
    if project.ID > 0 {
        serverID := project.ID
        local.ServerID = &serverID
    }
    return local
}

func LocalToProject(local *LocalProject) *models.Project {
    project := &models.Project{
        UUID:          local.UUID,
        OwnerUsername: local.OwnerUsername,
        Code:          local.Code,
        Variant:       local.Variant,
        Name:          local.Name,
        TrackerURL:    local.TrackerURL,
        IsActive:      local.IsActive,
        IsSystem:      local.IsSystem,
        CreatedAt:     local.CreatedAt,
        UpdatedAt:     local.UpdatedAt,
    }
    if local.ServerID != nil {
        project.ID = *local.ServerID
    }
    return project
}

// PricelistToLocal converts models.Pricelist to LocalPricelist
func PricelistToLocal(pl *models.Pricelist) *LocalPricelist {
    name := pl.Notification
    if name == "" {
        name = pl.Version
    }

    return &LocalPricelist{
        ServerID:  pl.ID,
        Source:    pl.Source,
        Version:   pl.Version,
        Name:      name,
        CreatedAt: pl.CreatedAt,
        SyncedAt:  time.Now(),
        IsUsed:    false,
    }
}

// LocalToPricelist converts LocalPricelist to models.Pricelist
func LocalToPricelist(local *LocalPricelist) *models.Pricelist {
    return &models.Pricelist{
        ID:           local.ServerID,
        Source:       local.Source,
        Version:      local.Version,
        Notification: local.Name,
        CreatedAt:    local.CreatedAt,
        IsActive:     true,
    }
}

// PricelistItemToLocal converts models.PricelistItem to LocalPricelistItem
func PricelistItemToLocal(item *models.PricelistItem, localPricelistID uint) *LocalPricelistItem {
    partnumbers := make(LocalStringList, 0, len(item.Partnumbers))
    partnumbers = append(partnumbers, item.Partnumbers...)
    return &LocalPricelistItem{
        PricelistID:  localPricelistID,
        LotName:      item.LotName,
        LotCategory:  item.LotCategory,
        Price:        item.Price,
        AvailableQty: item.AvailableQty,
        Partnumbers:  partnumbers,
    }
}

// LocalToPricelistItem converts LocalPricelistItem to models.PricelistItem
func LocalToPricelistItem(local *LocalPricelistItem, serverPricelistID uint) *models.PricelistItem {
    partnumbers := make([]string, 0, len(local.Partnumbers))
    partnumbers = append(partnumbers, local.Partnumbers...)
    return &models.PricelistItem{
        ID:           local.ID,
        PricelistID:  serverPricelistID,
        LotName:      local.LotName,
        LotCategory:  local.LotCategory,
        Price:        local.Price,
        AvailableQty: local.AvailableQty,
        Partnumbers:  partnumbers,
    }
}

// ComponentToLocal converts models.LotMetadata to LocalComponent
func ComponentToLocal(meta *models.LotMetadata) *LocalComponent {
    var lotDesc string
    var category string

    if meta.Lot != nil {
        lotDesc = meta.Lot.LotDescription
    }

    // Extract category from lot_name (e.g., "CPU_AMD_9654" -> "CPU")
    if len(meta.LotName) > 0 {
        for i, ch := range meta.LotName {
            if ch == '_' {
                category = meta.LotName[:i]
                break
            }
        }
    }

    return &LocalComponent{
        LotName:        meta.LotName,
        LotDescription: lotDesc,
        Category:       category,
        Model:          meta.Model,
    }
}

// LocalToComponent converts LocalComponent to models.LotMetadata
func LocalToComponent(local *LocalComponent) *models.LotMetadata {
    return &models.LotMetadata{
        LotName: local.LotName,
        Model:   local.Model,
        Lot: &models.Lot{
            LotName:        local.LotName,
            LotDescription: local.LotDescription,
        },
    }
}
34  internal/localdb/converters_test.go  Normal file
@@ -0,0 +1,34 @@
package localdb

import (
    "testing"

    "git.mchus.pro/mchus/quoteforge/internal/models"
)

func TestPricelistItemToLocal_PreservesLotCategory(t *testing.T) {
    item := &models.PricelistItem{
        LotName:     "CPU_A",
        LotCategory: "CPU",
        Price:       10,
    }

    local := PricelistItemToLocal(item, 123)
    if local.LotCategory != "CPU" {
        t.Fatalf("expected LotCategory=CPU, got %q", local.LotCategory)
    }
}

func TestLocalToPricelistItem_PreservesLotCategory(t *testing.T) {
    local := &LocalPricelistItem{
        LotName:     "CPU_A",
        LotCategory: "CPU",
        Price:       10,
    }

    item := LocalToPricelistItem(local, 456)
    if item.LotCategory != "CPU" {
        t.Fatalf("expected LotCategory=CPU, got %q", item.LotCategory)
    }
}
87  internal/localdb/encryption.go  Normal file
@@ -0,0 +1,87 @@
package localdb

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "crypto/sha256"
    "encoding/base64"
    "errors"
    "io"
    "os"
)

// getEncryptionKey derives a 32-byte key from an environment variable or the machine ID
func getEncryptionKey() []byte {
    key := os.Getenv("QUOTEFORGE_ENCRYPTION_KEY")
    if key == "" {
        // Fall back to a machine-based key (hostname + fixed salt)
        hostname, _ := os.Hostname()
        key = hostname + "quoteforge-salt-2024"
    }
    // Hash to get exactly 32 bytes for AES-256
    hash := sha256.Sum256([]byte(key))
    return hash[:]
}

// Encrypt encrypts plaintext using AES-256-GCM
func Encrypt(plaintext string) (string, error) {
    if plaintext == "" {
        return "", nil
    }

    key := getEncryptionKey()
    block, err := aes.NewCipher(key)
    if err != nil {
        return "", err
    }

    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return "", err
    }

    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        return "", err
    }

    ciphertext := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
    return base64.StdEncoding.EncodeToString(ciphertext), nil
}

// Decrypt decrypts ciphertext that was encrypted with Encrypt
func Decrypt(ciphertext string) (string, error) {
    if ciphertext == "" {
        return "", nil
    }

    key := getEncryptionKey()
    data, err := base64.StdEncoding.DecodeString(ciphertext)
    if err != nil {
        return "", err
    }

    block, err := aes.NewCipher(key)
    if err != nil {
        return "", err
    }

    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return "", err
    }

    nonceSize := gcm.NonceSize()
    if len(data) < nonceSize {
        return "", errors.New("ciphertext too short")
    }

    nonce, ciphertextBytes := data[:nonceSize], data[nonceSize:]
    plaintext, err := gcm.Open(nil, nonce, ciphertextBytes, nil)
    if err != nil {
        return "", err
    }

    return string(plaintext), nil
}
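Encrypt prepends the random nonce to the sealed bytes before base64-encoding, and Decrypt splits that prefix back off. A minimal self-contained round-trip sketch of the same nonce-prefix layout; the key material here is purely illustrative (the file above derives it from QUOTEFORGE_ENCRYPTION_KEY or the hostname), and `newGCM`/`seal`/`open` are hypothetical helper names:

```go
package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "crypto/sha256"
    "encoding/base64"
    "fmt"
    "io"
)

// newGCM builds an AES-256-GCM cipher from arbitrary key material,
// hashing it to 32 bytes the same way getEncryptionKey does.
func newGCM(keyMaterial string) (cipher.AEAD, error) {
    key := sha256.Sum256([]byte(keyMaterial))
    block, err := aes.NewCipher(key[:])
    if err != nil {
        return nil, err
    }
    return cipher.NewGCM(block)
}

// seal prepends the random nonce to the ciphertext and base64-encodes,
// mirroring the layout Encrypt produces.
func seal(gcm cipher.AEAD, plaintext string) (string, error) {
    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        return "", err
    }
    out := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
    return base64.StdEncoding.EncodeToString(out), nil
}

// open splits the nonce prefix back off and authenticates, mirroring Decrypt.
func open(gcm cipher.AEAD, encoded string) (string, error) {
    data, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        return "", err
    }
    ns := gcm.NonceSize()
    if len(data) < ns {
        return "", fmt.Errorf("ciphertext too short")
    }
    plain, err := gcm.Open(nil, data[:ns], data[ns:], nil)
    if err != nil {
        return "", err
    }
    return string(plain), nil
}

func main() {
    gcm, _ := newGCM("example-key") // illustrative key material only
    enc, _ := seal(gcm, "secret")
    dec, _ := open(gcm, enc)
    fmt.Println(dec) // secret
}
```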
255  internal/localdb/local_migrations_test.go  Normal file
@@ -0,0 +1,255 @@
package localdb

import (
    "path/filepath"
    "testing"
    "time"

    "github.com/google/uuid"
)

func TestRunLocalMigrationsBackfillsExistingConfigurations(t *testing.T) {
    dbPath := filepath.Join(t.TempDir(), "legacy_local.db")

    local, err := New(dbPath)
    if err != nil {
        t.Fatalf("open localdb: %v", err)
    }
    t.Cleanup(func() { _ = local.Close() })

    cfg := &LocalConfiguration{
        UUID:             "legacy-cfg",
        Name:             "Legacy",
        Items:            LocalConfigItems{},
        SyncStatus:       "pending",
        OriginalUsername: "tester",
        IsActive:         true,
    }
    if err := local.SaveConfiguration(cfg); err != nil {
        t.Fatalf("save seed config: %v", err)
    }
    if err := local.DB().Where("configuration_uuid = ?", "legacy-cfg").Delete(&LocalConfigurationVersion{}).Error; err != nil {
        t.Fatalf("delete seed versions: %v", err)
    }
    if err := local.DB().Model(&LocalConfiguration{}).
        Where("uuid = ?", "legacy-cfg").
        Update("current_version_id", nil).Error; err != nil {
        t.Fatalf("clear current_version_id: %v", err)
    }
    if err := local.DB().Where("1=1").Delete(&LocalSchemaMigration{}).Error; err != nil {
        t.Fatalf("clear migration records: %v", err)
    }

    if err := runLocalMigrations(local.DB()); err != nil {
        t.Fatalf("run local migrations manually: %v", err)
    }

    migratedCfg, err := local.GetConfigurationByUUID("legacy-cfg")
    if err != nil {
        t.Fatalf("get migrated config: %v", err)
    }
    if migratedCfg.CurrentVersionID == nil || *migratedCfg.CurrentVersionID == "" {
        t.Fatalf("expected current_version_id after migration")
    }
    if !migratedCfg.IsActive {
        t.Fatalf("expected migrated config to be active")
    }

    var versionCount int64
    if err := local.DB().Model(&LocalConfigurationVersion{}).
        Where("configuration_uuid = ?", "legacy-cfg").
        Count(&versionCount).Error; err != nil {
        t.Fatalf("count versions: %v", err)
    }
    if versionCount != 1 {
        t.Fatalf("expected 1 backfilled version, got %d", versionCount)
    }

    var migrationCount int64
    if err := local.DB().Model(&LocalSchemaMigration{}).Count(&migrationCount).Error; err != nil {
        t.Fatalf("count local migrations: %v", err)
    }
    if migrationCount == 0 {
        t.Fatalf("expected local migrations to be recorded")
    }
}

func TestRunLocalMigrationsFixesPricelistVersionUniqueIndex(t *testing.T) {
    dbPath := filepath.Join(t.TempDir(), "pricelist_index_fix.db")

    local, err := New(dbPath)
    if err != nil {
        t.Fatalf("open localdb: %v", err)
    }
    t.Cleanup(func() { _ = local.Close() })

    if err := local.SaveLocalPricelist(&LocalPricelist{
        ServerID:  10,
        Version:   "2026-02-06-001",
        Name:      "v1",
        CreatedAt: time.Now().Add(-time.Hour),
        SyncedAt:  time.Now().Add(-time.Hour),
    }); err != nil {
        t.Fatalf("save first pricelist: %v", err)
    }

    if err := local.DB().Exec(`
        CREATE UNIQUE INDEX IF NOT EXISTS idx_local_pricelists_version_legacy
        ON local_pricelists(version)
    `).Error; err != nil {
        t.Fatalf("create legacy unique version index: %v", err)
    }

    if err := local.DB().Where("id = ?", "2026_02_06_pricelist_index_fix").
        Delete(&LocalSchemaMigration{}).Error; err != nil {
        t.Fatalf("delete migration record: %v", err)
    }

    if err := runLocalMigrations(local.DB()); err != nil {
        t.Fatalf("rerun local migrations: %v", err)
    }

    if err := local.SaveLocalPricelist(&LocalPricelist{
        ServerID:  11,
        Version:   "2026-02-06-001",
        Name:      "v1-duplicate-version",
        CreatedAt: time.Now(),
        SyncedAt:  time.Now(),
    }); err != nil {
        t.Fatalf("save second pricelist with duplicate version: %v", err)
    }

    var count int64
    if err := local.DB().Model(&LocalPricelist{}).Count(&count).Error; err != nil {
        t.Fatalf("count pricelists: %v", err)
    }
    if count != 2 {
        t.Fatalf("expected 2 pricelists, got %d", count)
    }
}

func TestRunLocalMigrationsDeduplicatesConfigurationVersionsBySpecAndPrice(t *testing.T) {
    dbPath := filepath.Join(t.TempDir(), "versions_dedup.db")

    local, err := New(dbPath)
    if err != nil {
        t.Fatalf("open localdb: %v", err)
    }
    t.Cleanup(func() { _ = local.Close() })

    cfg := &LocalConfiguration{
        UUID:             "dedup-cfg",
        Name:             "Dedup",
        Items:            LocalConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
        ServerCount:      1,
        SyncStatus:       "pending",
        OriginalUsername: "tester",
        IsActive:         true,
    }
    if err := local.SaveConfiguration(cfg); err != nil {
        t.Fatalf("save seed config: %v", err)
    }

    baseV1Data, err := BuildConfigurationSnapshot(cfg)
    if err != nil {
        t.Fatalf("build v1 snapshot: %v", err)
    }
    baseV1 := LocalConfigurationVersion{
        ID:                uuid.NewString(),
        ConfigurationUUID: cfg.UUID,
        VersionNo:         1,
        Data:              baseV1Data,
        AppVersion:        "test",
        CreatedAt:         time.Now(),
    }
    if err := local.DB().Create(&baseV1).Error; err != nil {
        t.Fatalf("insert base v1: %v", err)
    }
    if err := local.DB().Model(&LocalConfiguration{}).
        Where("uuid = ?", cfg.UUID).
        Update("current_version_id", baseV1.ID).Error; err != nil {
        t.Fatalf("set current_version_id to v1: %v", err)
    }

    v2 := LocalConfigurationVersion{
        ID:                uuid.NewString(),
        ConfigurationUUID: cfg.UUID,
        VersionNo:         2,
        Data:              baseV1.Data,
        AppVersion:        "test",
        CreatedAt:         time.Now().Add(1 * time.Second),
    }
    if err := local.DB().Create(&v2).Error; err != nil {
        t.Fatalf("insert duplicate v2: %v", err)
    }

    modified := *cfg
    modified.Items = LocalConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 100}}
    total := modified.Items.Total()
    modified.TotalPrice = &total
    modified.UpdatedAt = time.Now()
    v3Data, err := BuildConfigurationSnapshot(&modified)
    if err != nil {
        t.Fatalf("build v3 snapshot: %v", err)
    }

    v3 := LocalConfigurationVersion{
        ID:                uuid.NewString(),
        ConfigurationUUID: cfg.UUID,
        VersionNo:         3,
        Data:              v3Data,
        AppVersion:        "test",
        CreatedAt:         time.Now().Add(2 * time.Second),
    }
    if err := local.DB().Create(&v3).Error; err != nil {
        t.Fatalf("insert v3: %v", err)
    }

    v4 := LocalConfigurationVersion{
        ID:                uuid.NewString(),
        ConfigurationUUID: cfg.UUID,
        VersionNo:         4,
        Data:              v3Data,
        AppVersion:        "test",
        CreatedAt:         time.Now().Add(3 * time.Second),
    }
    if err := local.DB().Create(&v4).Error; err != nil {
        t.Fatalf("insert duplicate v4: %v", err)
    }

    if err := local.DB().Model(&LocalConfiguration{}).
        Where("uuid = ?", cfg.UUID).
        Update("current_version_id", v4.ID).Error; err != nil {
        t.Fatalf("point current_version_id to duplicate v4: %v", err)
    }

    if err := local.DB().Where("id = ?", "2026_02_19_configuration_versions_dedup_spec_price").
        Delete(&LocalSchemaMigration{}).Error; err != nil {
        t.Fatalf("delete dedup migration record: %v", err)
    }

    if err := runLocalMigrations(local.DB()); err != nil {
        t.Fatalf("rerun local migrations: %v", err)
    }

    var versions []LocalConfigurationVersion
    if err := local.DB().Where("configuration_uuid = ?", cfg.UUID).
        Order("version_no ASC").
        Find(&versions).Error; err != nil {
        t.Fatalf("load versions after dedup: %v", err)
    }
    if len(versions) != 2 {
        t.Fatalf("expected 2 versions after dedup, got %d", len(versions))
    }
    if versions[0].VersionNo != 1 || versions[1].VersionNo != 3 {
        t.Fatalf("expected kept version numbers [1,3], got [%d,%d]", versions[0].VersionNo, versions[1].VersionNo)
    }

    var after LocalConfiguration
    if err := local.DB().Where("uuid = ?", cfg.UUID).First(&after).Error; err != nil {
|
||||||
|
t.Fatalf("load config after dedup: %v", err)
|
||||||
|
}
|
||||||
|
if after.CurrentVersionID == nil || *after.CurrentVersionID != v3.ID {
|
||||||
|
t.Fatalf("expected current_version_id to point to kept latest version v3")
|
||||||
|
}
|
||||||
|
}
|
||||||
1249	internal/localdb/localdb.go	Normal file
File diff suppressed because it is too large
60	internal/localdb/migration_projects_test.go	Normal file
@@ -0,0 +1,60 @@
package localdb

import (
	"path/filepath"
	"testing"
)

func TestRunLocalMigrationsBackfillsDefaultProject(t *testing.T) {
	dbPath := filepath.Join(t.TempDir(), "projects_backfill.db")

	local, err := New(dbPath)
	if err != nil {
		t.Fatalf("open localdb: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })

	cfg := &LocalConfiguration{
		UUID:             "cfg-without-project",
		Name:             "Cfg no project",
		Items:            LocalConfigItems{},
		SyncStatus:       "pending",
		OriginalUsername: "tester",
		IsActive:         true,
	}
	if err := local.SaveConfiguration(cfg); err != nil {
		t.Fatalf("save config: %v", err)
	}
	if err := local.DB().
		Model(&LocalConfiguration{}).
		Where("uuid = ?", cfg.UUID).
		Update("project_uuid", nil).Error; err != nil {
		t.Fatalf("clear project_uuid: %v", err)
	}
	if err := local.DB().Where("id = ?", "2026_02_06_projects_backfill").Delete(&LocalSchemaMigration{}).Error; err != nil {
		t.Fatalf("delete local migration record: %v", err)
	}

	if err := runLocalMigrations(local.DB()); err != nil {
		t.Fatalf("run local migrations: %v", err)
	}

	updated, err := local.GetConfigurationByUUID(cfg.UUID)
	if err != nil {
		t.Fatalf("get updated config: %v", err)
	}
	if updated.ProjectUUID == nil || *updated.ProjectUUID == "" {
		t.Fatalf("expected project_uuid to be backfilled")
	}

	project, err := local.GetProjectByUUID(*updated.ProjectUUID)
	if err != nil {
		t.Fatalf("get system project: %v", err)
	}
	if project.Name == nil || *project.Name != "Без проекта" {
		t.Fatalf("expected system project name, got %v", project.Name)
	}
	if !project.IsSystem {
		t.Fatalf("expected system project flag")
	}
}
131	internal/localdb/migration_versioning_test.go	Normal file
@@ -0,0 +1,131 @@
package localdb

import (
	"encoding/json"
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/glebarez/sqlite"
	"gorm.io/gorm"
)

func TestMigration006BackfillCreatesV1AndCurrentPointer(t *testing.T) {
	dbPath := filepath.Join(t.TempDir(), "migration_backfill.db")
	db, err := gorm.Open(sqlite.Open(dbPath), &gorm.Config{})
	if err != nil {
		t.Fatalf("open sqlite: %v", err)
	}

	if err := db.Exec(`
		CREATE TABLE local_configurations (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			uuid TEXT NOT NULL UNIQUE,
			server_id INTEGER NULL,
			name TEXT NOT NULL,
			items TEXT,
			total_price REAL,
			custom_price REAL,
			notes TEXT,
			is_template BOOLEAN DEFAULT FALSE,
			server_count INTEGER DEFAULT 1,
			price_updated_at DATETIME NULL,
			created_at DATETIME,
			updated_at DATETIME,
			synced_at DATETIME,
			sync_status TEXT DEFAULT 'local',
			original_user_id INTEGER DEFAULT 0,
			original_username TEXT DEFAULT ''
		);`).Error; err != nil {
		t.Fatalf("create pre-migration schema: %v", err)
	}

	items := `[{"lot_name":"CPU_X","quantity":2,"unit_price":1000}]`
	now := time.Now().UTC().Format(time.RFC3339)
	if err := db.Exec(`
		INSERT INTO local_configurations
		(uuid, name, items, total_price, notes, server_count, created_at, updated_at, sync_status, original_username)
		VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)`,
		"cfg-1", "Cfg One", items, 2000.0, "note", 1, now, now, "pending", "tester",
	).Error; err != nil {
		t.Fatalf("seed pre-migration data: %v", err)
	}

	migrationPath := filepath.Join("..", "..", "migrations", "006_add_local_configuration_versions.sql")
	sqlBytes, err := os.ReadFile(migrationPath)
	if err != nil {
		t.Fatalf("read migration file: %v", err)
	}
	if err := execSQLScript(db, string(sqlBytes)); err != nil {
		t.Fatalf("apply migration: %v", err)
	}

	var count int64
	if err := db.Table("local_configuration_versions").Where("configuration_uuid = ?", "cfg-1").Count(&count).Error; err != nil {
		t.Fatalf("count versions: %v", err)
	}
	if count != 1 {
		t.Fatalf("expected 1 version, got %d", count)
	}

	var currentVersionID *string
	if err := db.Table("local_configurations").Select("current_version_id").Where("uuid = ?", "cfg-1").Scan(&currentVersionID).Error; err != nil {
		t.Fatalf("read current_version_id: %v", err)
	}
	if currentVersionID == nil || *currentVersionID == "" {
		t.Fatalf("expected current_version_id to be set")
	}

	var row struct {
		ID        string
		VersionNo int
		Data      string
	}
	if err := db.Table("local_configuration_versions").
		Select("id, version_no, data").
		Where("configuration_uuid = ?", "cfg-1").
		First(&row).Error; err != nil {
		t.Fatalf("load v1 row: %v", err)
	}
	if row.VersionNo != 1 {
		t.Fatalf("expected version_no=1, got %d", row.VersionNo)
	}
	if row.ID != *currentVersionID {
		t.Fatalf("expected current_version_id=%s, got %s", row.ID, *currentVersionID)
	}

	var snapshot map[string]any
	if err := json.Unmarshal([]byte(row.Data), &snapshot); err != nil {
		t.Fatalf("parse snapshot json: %v", err)
	}
	if snapshot["uuid"] != "cfg-1" {
		t.Fatalf("expected snapshot uuid cfg-1, got %v", snapshot["uuid"])
	}
	if snapshot["name"] != "Cfg One" {
		t.Fatalf("expected snapshot name Cfg One, got %v", snapshot["name"])
	}
}

func execSQLScript(db *gorm.DB, script string) error {
	var cleaned []string
	for _, line := range strings.Split(script, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "--") {
			continue
		}
		cleaned = append(cleaned, line)
	}

	for _, stmt := range strings.Split(strings.Join(cleaned, "\n"), ";") {
		sql := strings.TrimSpace(stmt)
		if sql == "" {
			continue
		}
		if err := db.Exec(sql).Error; err != nil {
			return err
		}
	}
	return nil
}
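`execSQLScript` in the test file above drops full-line `--` comments and splits the remaining script on `;`. A minimal, standalone sketch of that splitting logic (the helper name `splitStatements` is hypothetical, introduced only for this illustration) makes the caveat visible: a literal `;` inside a string literal or trigger body would also be split, so the approach only suits simple migration scripts like the ones in this repository.

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatements mirrors execSQLScript's comment-stripping and ";"-splitting:
// drop full-line "--" comments, then cut the remainder on every semicolon.
// Caveat: a ";" inside a string literal or trigger body is also cut.
func splitStatements(script string) []string {
	var kept []string
	for _, line := range strings.Split(script, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "--") {
			continue
		}
		kept = append(kept, line)
	}
	var stmts []string
	for _, part := range strings.Split(strings.Join(kept, "\n"), ";") {
		if s := strings.TrimSpace(part); s != "" {
			stmts = append(stmts, s)
		}
	}
	return stmts
}

func main() {
	script := "-- create table\nCREATE TABLE t (id INTEGER);\nINSERT INTO t VALUES (1);"
	for _, s := range splitStatements(script) {
		fmt.Println(s)
	}
}
```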
808	internal/localdb/migrations.go	Normal file
@@ -0,0 +1,808 @@
package localdb

import (
	"errors"
	"fmt"
	"log/slog"
	"strings"
	"time"

	"github.com/google/uuid"
	"gorm.io/gorm"
)

type LocalSchemaMigration struct {
	ID        string    `gorm:"primaryKey;size:128"`
	Name      string    `gorm:"not null;size:255"`
	AppliedAt time.Time `gorm:"not null"`
}

func (LocalSchemaMigration) TableName() string {
	return "local_schema_migrations"
}

type localMigration struct {
	id   string
	name string
	run  func(tx *gorm.DB) error
}

var localMigrations = []localMigration{
	{
		id:   "2026_02_04_versioning_backfill",
		name: "Ensure configuration versioning data and current pointers",
		run:  backfillConfigurationVersions,
	},
	{
		id:   "2026_02_04_is_active_backfill",
		name: "Ensure is_active defaults to true for existing configurations",
		run:  backfillConfigurationIsActive,
	},
	{
		id:   "2026_02_06_projects_backfill",
		name: "Create default projects and attach existing configurations",
		run:  backfillProjectsForConfigurations,
	},
	{
		id:   "2026_02_06_pricelist_backfill",
		name: "Attach existing configurations to latest local pricelist and recalc usage",
		run:  backfillConfigurationPricelists,
	},
	{
		id:   "2026_02_06_pricelist_index_fix",
		name: "Use unique server_id for local pricelists and allow duplicate versions",
		run:  fixLocalPricelistIndexes,
	},
	{
		id:   "2026_02_06_pricelist_source",
		name: "Backfill source for local pricelists and create source indexes",
		run:  backfillLocalPricelistSource,
	},
	{
		id:   "2026_02_09_drop_component_unused_fields",
		name: "Remove current_price and synced_at from local_components (unused fields)",
		run:  dropComponentUnusedFields,
	},
	{
		id:   "2026_02_09_add_warehouse_competitor_pricelists",
		name: "Add warehouse_pricelist_id and competitor_pricelist_id to local_configurations",
		run:  addWarehouseCompetitorPriceLists,
	},
	{
		id:   "2026_02_11_local_pricelist_item_category",
		name: "Add lot_category to local_pricelist_items and create indexes",
		run:  addLocalPricelistItemCategoryAndIndexes,
	},
	{
		id:   "2026_02_11_local_config_article",
		name: "Add article to local_configurations",
		run:  addLocalConfigurationArticle,
	},
	{
		id:   "2026_02_11_local_config_server_model",
		name: "Add server_model to local_configurations",
		run:  addLocalConfigurationServerModel,
	},
	{
		id:   "2026_02_11_local_config_support_code",
		name: "Add support_code to local_configurations",
		run:  addLocalConfigurationSupportCode,
	},
	{
		id:   "2026_02_13_local_project_code",
		name: "Add project code to local_projects and backfill",
		run:  addLocalProjectCode,
	},
	{
		id:   "2026_02_13_local_project_variant",
		name: "Add project variant to local_projects and backfill",
		run:  addLocalProjectVariant,
	},
	{
		id:   "2026_02_13_local_project_name_nullable",
		name: "Allow NULL project names in local_projects",
		run:  allowLocalProjectNameNull,
	},
	{
		id:   "2026_02_19_configuration_versions_dedup_spec_price",
		name: "Deduplicate configuration revisions by spec+price",
		run:  deduplicateConfigurationVersionsBySpecAndPrice,
	},
}
func runLocalMigrations(db *gorm.DB) error {
	if err := db.AutoMigrate(&LocalSchemaMigration{}); err != nil {
		return fmt.Errorf("migrate local schema migrations table: %w", err)
	}

	for _, migration := range localMigrations {
		var count int64
		if err := db.Model(&LocalSchemaMigration{}).Where("id = ?", migration.id).Count(&count).Error; err != nil {
			return fmt.Errorf("check local migration %s: %w", migration.id, err)
		}
		if count > 0 {
			continue
		}

		if err := db.Transaction(func(tx *gorm.DB) error {
			if err := migration.run(tx); err != nil {
				return fmt.Errorf("run migration %s: %w", migration.id, err)
			}

			record := &LocalSchemaMigration{
				ID:        migration.id,
				Name:      migration.name,
				AppliedAt: time.Now(),
			}
			if err := tx.Create(record).Error; err != nil {
				return fmt.Errorf("insert migration %s record: %w", migration.id, err)
			}
			return nil
		}); err != nil {
			return err
		}

		slog.Info("local migration applied", "id", migration.id, "name", migration.name)
	}

	return nil
}
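`runLocalMigrations` implements a ledger pattern: each migration id is looked up in `local_schema_migrations`, and the step plus its ledger record are committed in one transaction, so a migration runs at most once and a failed step leaves no record. Reduced to an in-memory sketch (hypothetical names, a plain map standing in for the ledger table and no transactionality):

```go
package main

import "fmt"

// migration mirrors the shape of localMigration: an id plus a step to run.
type migration struct {
	id  string
	run func() error
}

// applyOnce runs every migration whose id is not yet in the applied ledger
// and records the id after a successful run, like runLocalMigrations does.
func applyOnce(applied map[string]bool, migrations []migration) error {
	for _, m := range migrations {
		if applied[m.id] {
			continue // already in the ledger, skip
		}
		if err := m.run(); err != nil {
			return err // failed step is not recorded, so it retries next time
		}
		applied[m.id] = true
	}
	return nil
}

func main() {
	applied := map[string]bool{"001": true} // "001" was applied earlier
	runs := 0
	ms := []migration{
		{id: "001", run: func() error { runs++; return nil }},
		{id: "002", run: func() error { runs++; return nil }},
	}
	_ = applyOnce(applied, ms)
	fmt.Println("migrations executed:", runs) // only "002" actually runs
}
```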
func backfillConfigurationVersions(tx *gorm.DB) error {
	var configs []LocalConfiguration
	if err := tx.Find(&configs).Error; err != nil {
		return fmt.Errorf("load local configurations for backfill: %w", err)
	}

	for i := range configs {
		cfg := configs[i]
		var versionCount int64
		if err := tx.Model(&LocalConfigurationVersion{}).
			Where("configuration_uuid = ?", cfg.UUID).
			Count(&versionCount).Error; err != nil {
			return fmt.Errorf("count versions for %s: %w", cfg.UUID, err)
		}

		if versionCount == 0 {
			snapshot, err := BuildConfigurationSnapshot(&cfg)
			if err != nil {
				return fmt.Errorf("build initial snapshot for %s: %w", cfg.UUID, err)
			}
			note := "Initial snapshot backfill (v1)"
			version := LocalConfigurationVersion{
				ID:                uuid.NewString(),
				ConfigurationUUID: cfg.UUID,
				VersionNo:         1,
				Data:              snapshot,
				ChangeNote:        &note,
				AppVersion:        "backfill",
				CreatedAt:         chooseNonZeroTime(cfg.CreatedAt, time.Now()),
			}
			if err := tx.Create(&version).Error; err != nil {
				return fmt.Errorf("create v1 backfill for %s: %w", cfg.UUID, err)
			}
		}

		if cfg.CurrentVersionID == nil || *cfg.CurrentVersionID == "" {
			var latest LocalConfigurationVersion
			if err := tx.Where("configuration_uuid = ?", cfg.UUID).
				Order("version_no DESC").
				First(&latest).Error; err != nil {
				return fmt.Errorf("load latest version for %s: %w", cfg.UUID, err)
			}
			if err := tx.Model(&LocalConfiguration{}).
				Where("uuid = ?", cfg.UUID).
				Update("current_version_id", latest.ID).Error; err != nil {
				return fmt.Errorf("set current version for %s: %w", cfg.UUID, err)
			}
		}
	}

	return nil
}
func backfillConfigurationIsActive(tx *gorm.DB) error {
	return tx.Exec("UPDATE local_configurations SET is_active = 1 WHERE is_active IS NULL").Error
}
func backfillProjectsForConfigurations(tx *gorm.DB) error {
	var owners []string
	if err := tx.Model(&LocalConfiguration{}).
		Distinct("original_username").
		Pluck("original_username", &owners).Error; err != nil {
		return fmt.Errorf("load owners for projects backfill: %w", err)
	}

	for _, owner := range owners {
		project, err := ensureDefaultProjectTx(tx, owner)
		if err != nil {
			return err
		}

		if err := tx.Model(&LocalConfiguration{}).
			Where("original_username = ? AND (project_uuid IS NULL OR project_uuid = '')", owner).
			Update("project_uuid", project.UUID).Error; err != nil {
			return fmt.Errorf("assign default project for owner %s: %w", owner, err)
		}
	}

	return nil
}
func ensureDefaultProjectTx(tx *gorm.DB, ownerUsername string) (*LocalProject, error) {
	var project LocalProject
	err := tx.Where("owner_username = ? AND is_system = ? AND name = ?", ownerUsername, true, "Без проекта").
		First(&project).Error
	if err == nil {
		return &project, nil
	}
	if !errors.Is(err, gorm.ErrRecordNotFound) {
		return nil, fmt.Errorf("load system project for %s: %w", ownerUsername, err)
	}

	now := time.Now()
	project = LocalProject{
		UUID:          uuid.NewString(),
		OwnerUsername: ownerUsername,
		Code:          "Без проекта",
		Name:          ptrString("Без проекта"),
		IsActive:      true,
		IsSystem:      true,
		CreatedAt:     now,
		UpdatedAt:     now,
		SyncStatus:    "pending",
	}
	if err := tx.Create(&project).Error; err != nil {
		return nil, fmt.Errorf("create system project for %s: %w", ownerUsername, err)
	}

	return &project, nil
}
func addLocalProjectCode(tx *gorm.DB) error {
	if err := tx.Exec(`ALTER TABLE local_projects ADD COLUMN code TEXT`).Error; err != nil {
		if !strings.Contains(strings.ToLower(err.Error()), "duplicate") &&
			!strings.Contains(strings.ToLower(err.Error()), "exists") {
			return err
		}
	}

	// Drop unique index if it already exists to allow de-duplication updates.
	if err := tx.Exec(`DROP INDEX IF EXISTS idx_local_projects_code`).Error; err != nil {
		return err
	}

	// Copy code from current project name.
	if err := tx.Exec(`
		UPDATE local_projects
		SET code = TRIM(COALESCE(name, ''))`).Error; err != nil {
		return err
	}

	// Ensure any remaining blanks have a unique fallback.
	if err := tx.Exec(`
		UPDATE local_projects
		SET code = 'P-' || uuid
		WHERE code IS NULL OR TRIM(code) = ''`).Error; err != nil {
		return err
	}

	// De-duplicate codes: OPS-1948-2, OPS-1948-3...
	if err := tx.Exec(`
		WITH ranked AS (
			SELECT id, code,
				ROW_NUMBER() OVER (PARTITION BY code ORDER BY id) AS rn
			FROM local_projects
		)
		UPDATE local_projects
		SET code = code || '-' || (SELECT rn FROM ranked WHERE ranked.id = local_projects.id)
		WHERE id IN (SELECT id FROM ranked WHERE rn > 1)`).Error; err != nil {
		return err
	}

	// Create unique index for project codes (ignore if exists).
	if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code ON local_projects(code)`).Error; err != nil {
		return err
	}

	return nil
}
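The `ROW_NUMBER()` update in `addLocalProjectCode` leaves the first project holding each code untouched and renames later holders to `code-2`, `code-3`, and so on, in id order. The same renaming expressed in plain Go (the helper name `suffixDuplicates` is hypothetical, and the input slice stands in for codes ordered by id):

```go
package main

import "fmt"

// suffixDuplicates mirrors the ROW_NUMBER() de-duplication: the first
// occurrence of a code keeps it, later occurrences get "-2", "-3", ...
// appended in input order (the SQL orders by id within each code).
func suffixDuplicates(codes []string) []string {
	seen := map[string]int{}
	out := make([]string, len(codes))
	for i, code := range codes {
		seen[code]++
		if seen[code] == 1 {
			out[i] = code // rn = 1: keep the original code
		} else {
			out[i] = fmt.Sprintf("%s-%d", code, seen[code]) // rn > 1: suffix
		}
	}
	return out
}

func main() {
	fmt.Println(suffixDuplicates([]string{"OPS-1948", "OPS-1948", "MISC"}))
}
```

Note that, like the SQL, this partitions by the original code only: a renamed `OPS-1948-2` could in principle collide with a pre-existing code of that exact name, which the subsequent unique index creation would then reject.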
func addLocalProjectVariant(tx *gorm.DB) error {
	if err := tx.Exec(`ALTER TABLE local_projects ADD COLUMN variant TEXT NOT NULL DEFAULT ''`).Error; err != nil {
		if !strings.Contains(strings.ToLower(err.Error()), "duplicate") &&
			!strings.Contains(strings.ToLower(err.Error()), "exists") {
			return err
		}
	}

	// Drop legacy code index if present.
	if err := tx.Exec(`DROP INDEX IF EXISTS idx_local_projects_code`).Error; err != nil {
		return err
	}

	// Reset code from name and clear variant.
	if err := tx.Exec(`
		UPDATE local_projects
		SET code = TRIM(COALESCE(name, '')),
			variant = ''`).Error; err != nil {
		return err
	}

	// De-duplicate by assigning variant numbers: 2,3...
	if err := tx.Exec(`
		WITH ranked AS (
			SELECT id, code,
				ROW_NUMBER() OVER (PARTITION BY code ORDER BY id) AS rn
			FROM local_projects
		)
		UPDATE local_projects
		SET variant = CASE
			WHEN (SELECT rn FROM ranked WHERE ranked.id = local_projects.id) = 1 THEN ''
			ELSE '-' || CAST((SELECT rn FROM ranked WHERE ranked.id = local_projects.id) AS TEXT)
		END`).Error; err != nil {
		return err
	}

	if err := tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code_variant ON local_projects(code, variant)`).Error; err != nil {
		return err
	}

	return nil
}
func allowLocalProjectNameNull(tx *gorm.DB) error {
	if err := tx.Exec(`ALTER TABLE local_projects RENAME TO local_projects_old`).Error; err != nil {
		return err
	}

	if err := tx.Exec(`
		CREATE TABLE local_projects (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			uuid TEXT NOT NULL UNIQUE,
			server_id INTEGER NULL,
			owner_username TEXT NOT NULL,
			code TEXT NOT NULL,
			variant TEXT NOT NULL DEFAULT '',
			name TEXT NULL,
			tracker_url TEXT NULL,
			is_active INTEGER NOT NULL DEFAULT 1,
			is_system INTEGER NOT NULL DEFAULT 0,
			created_at DATETIME,
			updated_at DATETIME,
			synced_at DATETIME NULL,
			sync_status TEXT DEFAULT 'local'
		)`).Error; err != nil {
		return err
	}

	_ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_owner_username ON local_projects(owner_username)`).Error
	_ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_active ON local_projects(is_active)`).Error
	_ = tx.Exec(`CREATE INDEX IF NOT EXISTS idx_local_projects_is_system ON local_projects(is_system)`).Error
	_ = tx.Exec(`CREATE UNIQUE INDEX IF NOT EXISTS idx_local_projects_code_variant ON local_projects(code, variant)`).Error

	if err := tx.Exec(`
		INSERT INTO local_projects (id, uuid, server_id, owner_username, code, variant, name, tracker_url, is_active, is_system, created_at, updated_at, synced_at, sync_status)
		SELECT id, uuid, server_id, owner_username, code, variant, name, tracker_url, is_active, is_system, created_at, updated_at, synced_at, sync_status
		FROM local_projects_old`).Error; err != nil {
		return err
	}

	_ = tx.Exec(`DROP TABLE local_projects_old`).Error
	return nil
}
func backfillConfigurationPricelists(tx *gorm.DB) error {
	var latest LocalPricelist
	if err := tx.Where("source = ?", "estimate").Order("created_at DESC").First(&latest).Error; err != nil {
		if errors.Is(err, gorm.ErrRecordNotFound) {
			return nil
		}
		return fmt.Errorf("load latest local pricelist: %w", err)
	}

	if err := tx.Model(&LocalConfiguration{}).
		Where("pricelist_id IS NULL").
		Update("pricelist_id", latest.ServerID).Error; err != nil {
		return fmt.Errorf("backfill configuration pricelist_id: %w", err)
	}

	if err := tx.Model(&LocalPricelist{}).Where("1 = 1").Update("is_used", false).Error; err != nil {
		return fmt.Errorf("reset local pricelist usage flags: %w", err)
	}

	if err := tx.Exec(`
		UPDATE local_pricelists
		SET is_used = 1
		WHERE server_id IN (
			SELECT DISTINCT pricelist_id
			FROM local_configurations
			WHERE pricelist_id IS NOT NULL AND is_active = 1
		)
	`).Error; err != nil {
		return fmt.Errorf("recalculate local pricelist usage flags: %w", err)
	}

	return nil
}
func chooseNonZeroTime(candidate time.Time, fallback time.Time) time.Time {
	if candidate.IsZero() {
		return fallback
	}
	return candidate
}
func deduplicateConfigurationVersionsBySpecAndPrice(tx *gorm.DB) error {
	var configs []LocalConfiguration
	if err := tx.Select("uuid", "current_version_id").Find(&configs).Error; err != nil {
		return fmt.Errorf("load configurations for revision deduplication: %w", err)
	}

	var removedTotal int
	for i := range configs {
		cfg := configs[i]

		var versions []LocalConfigurationVersion
		if err := tx.Where("configuration_uuid = ?", cfg.UUID).
			Order("version_no ASC, created_at ASC").
			Find(&versions).Error; err != nil {
			return fmt.Errorf("load versions for %s: %w", cfg.UUID, err)
		}
		if len(versions) < 2 {
			continue
		}

		deleteIDs := make([]string, 0)
		deleteSet := make(map[string]struct{})
		kept := make([]LocalConfigurationVersion, 0, len(versions))
		var prevKey string
		hasPrev := false

		for _, version := range versions {
			snapshotCfg, err := DecodeConfigurationSnapshot(version.Data)
			if err != nil {
				// Keep malformed snapshots untouched and reset chain to avoid accidental removals.
				kept = append(kept, version)
				hasPrev = false
				continue
			}

			key, err := BuildConfigurationSpecPriceFingerprint(snapshotCfg)
			if err != nil {
				kept = append(kept, version)
				hasPrev = false
				continue
			}

			if !hasPrev || key != prevKey {
				kept = append(kept, version)
				prevKey = key
				hasPrev = true
				continue
			}

			deleteIDs = append(deleteIDs, version.ID)
			deleteSet[version.ID] = struct{}{}
		}

		if len(deleteIDs) == 0 {
			continue
		}

		if err := tx.Where("id IN ?", deleteIDs).Delete(&LocalConfigurationVersion{}).Error; err != nil {
			return fmt.Errorf("delete duplicate versions for %s: %w", cfg.UUID, err)
		}
		removedTotal += len(deleteIDs)

		latestKeptID := kept[len(kept)-1].ID
		if cfg.CurrentVersionID == nil || *cfg.CurrentVersionID == "" {
			if err := tx.Model(&LocalConfiguration{}).
				Where("uuid = ?", cfg.UUID).
				Update("current_version_id", latestKeptID).Error; err != nil {
				return fmt.Errorf("set missing current_version_id for %s: %w", cfg.UUID, err)
			}
			continue
		}

		if _, deleted := deleteSet[*cfg.CurrentVersionID]; deleted {
			if err := tx.Model(&LocalConfiguration{}).
				Where("uuid = ?", cfg.UUID).
				Update("current_version_id", latestKeptID).Error; err != nil {
				return fmt.Errorf("repair current_version_id for %s: %w", cfg.UUID, err)
			}
		}
	}

	if removedTotal > 0 {
		slog.Info("deduplicated configuration revisions", "removed_versions", removedTotal)
	}
	return nil
}
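`deduplicateConfigurationVersionsBySpecAndPrice` compares each revision's fingerprint only with the previously kept one, so it collapses runs of consecutive duplicates while an A→B→A history keeps all three revisions. The kept/removed split, sketched over plain fingerprint strings (the helper name `dedupAdjacent` is hypothetical):

```go
package main

import "fmt"

// dedupAdjacent mirrors the revision walk: keep a version whenever its
// fingerprint differs from the previously kept one, drop exact repeats
// that immediately follow. An "a, b, a" history therefore keeps all three.
func dedupAdjacent(fingerprints []string) (kept, removed []string) {
	hasPrev := false
	var prev string
	for _, fp := range fingerprints {
		if !hasPrev || fp != prev {
			kept = append(kept, fp)
			prev, hasPrev = fp, true
			continue
		}
		removed = append(removed, fp)
	}
	return kept, removed
}

func main() {
	// Mirrors the dedup test above: v1,v2 share one fingerprint, v3,v4 another.
	kept, removed := dedupAdjacent([]string{"fp1", "fp1", "fp3", "fp3"})
	fmt.Println("kept:", kept, "removed:", removed)
}
```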
func fixLocalPricelistIndexes(tx *gorm.DB) error {
	type indexRow struct {
		Name   string `gorm:"column:name"`
		Unique int    `gorm:"column:unique"`
	}
	var indexes []indexRow
	if err := tx.Raw("PRAGMA index_list('local_pricelists')").Scan(&indexes).Error; err != nil {
		return fmt.Errorf("list local_pricelists indexes: %w", err)
	}

	for _, idx := range indexes {
		if idx.Unique == 0 {
			continue
		}

		type indexInfoRow struct {
			Name string `gorm:"column:name"`
		}
		var info []indexInfoRow
		if err := tx.Raw(fmt.Sprintf("PRAGMA index_info('%s')", strings.ReplaceAll(idx.Name, "'", "''"))).Scan(&info).Error; err != nil {
			return fmt.Errorf("load index info for %s: %w", idx.Name, err)
		}
		if len(info) != 1 || info[0].Name != "version" {
			continue
		}

		quoted := strings.ReplaceAll(idx.Name, `"`, `""`)
		if err := tx.Exec(fmt.Sprintf(`DROP INDEX IF EXISTS "%s"`, quoted)).Error; err != nil {
			return fmt.Errorf("drop unique version index %s: %w", idx.Name, err)
		}
	}

	if err := tx.Exec(`
		CREATE UNIQUE INDEX IF NOT EXISTS idx_local_pricelists_server_id
		ON local_pricelists(server_id)
	`).Error; err != nil {
		return fmt.Errorf("ensure unique index local_pricelists(server_id): %w", err)
	}

	if err := tx.Exec(`
		CREATE INDEX IF NOT EXISTS idx_local_pricelists_version
		ON local_pricelists(version)
	`).Error; err != nil {
		return fmt.Errorf("ensure index local_pricelists(version): %w", err)
	}

	return nil
}
func backfillLocalPricelistSource(tx *gorm.DB) error {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
UPDATE local_pricelists
|
||||||
|
SET source = 'estimate'
|
||||||
|
WHERE source IS NULL OR source = ''
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("backfill local_pricelists.source: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_pricelists_source_created_at
|
||||||
|
ON local_pricelists(source, created_at DESC)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("ensure idx_local_pricelists_source_created_at: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func dropComponentUnusedFields(tx *gorm.DB) error {
|
||||||
|
// Check if columns exist
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_components')
|
||||||
|
WHERE name IN ('current_price', 'synced_at')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check columns existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 0 {
|
||||||
|
slog.Info("unused fields already removed from local_components")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SQLite: recreate table without current_price and synced_at
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE TABLE local_components_new (
|
||||||
|
lot_name TEXT PRIMARY KEY,
|
||||||
|
lot_description TEXT,
|
||||||
|
category TEXT,
|
||||||
|
model TEXT
|
||||||
|
)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create new local_components table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
INSERT INTO local_components_new (lot_name, lot_description, category, model)
|
||||||
|
SELECT lot_name, lot_description, category, model
|
||||||
|
FROM local_components
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("copy data to new table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`DROP TABLE local_components`).Error; err != nil {
|
||||||
|
return fmt.Errorf("drop old table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`ALTER TABLE local_components_new RENAME TO local_components`).Error; err != nil {
|
||||||
|
return fmt.Errorf("rename new table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("dropped current_price and synced_at columns from local_components")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addWarehouseCompetitorPriceLists(tx *gorm.DB) error {
|
||||||
|
// Check if columns exist
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_configurations')
|
||||||
|
WHERE name IN ('warehouse_pricelist_id', 'competitor_pricelist_id')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check columns existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 2 {
|
||||||
|
slog.Info("warehouse and competitor pricelist columns already exist")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add columns if they don't exist
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN warehouse_pricelist_id INTEGER
|
||||||
|
`).Error; err != nil {
|
||||||
|
// Column might already exist, ignore
|
||||||
|
if !strings.Contains(err.Error(), "duplicate column") {
|
||||||
|
return fmt.Errorf("add warehouse_pricelist_id column: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN competitor_pricelist_id INTEGER
|
||||||
|
`).Error; err != nil {
|
||||||
|
// Column might already exist, ignore
|
||||||
|
if !strings.Contains(err.Error(), "duplicate column") {
|
||||||
|
return fmt.Errorf("add competitor_pricelist_id column: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create indexes
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_configurations_warehouse_pricelist
|
||||||
|
ON local_configurations(warehouse_pricelist_id)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create warehouse pricelist index: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_configurations_competitor_pricelist
|
||||||
|
ON local_configurations(competitor_pricelist_id)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("create competitor pricelist index: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
slog.Info("added warehouse and competitor pricelist fields to local_configurations")
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalPricelistItemCategoryAndIndexes(tx *gorm.DB) error {
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_pricelist_items')
|
||||||
|
WHERE name IN ('lot_category')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check local_pricelist_items(lot_category) existence: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if len(columns) == 0 {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_pricelist_items
|
||||||
|
ADD COLUMN lot_category TEXT
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("add local_pricelist_items.lot_category: %w", err)
|
||||||
|
}
|
||||||
|
slog.Info("added lot_category to local_pricelist_items")
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_pricelist_items_pricelist_lot
|
||||||
|
ON local_pricelist_items(pricelist_id, lot_name)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("ensure idx_local_pricelist_items_pricelist_lot: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := tx.Exec(`
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_local_pricelist_items_lot_category
|
||||||
|
ON local_pricelist_items(lot_category)
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("ensure idx_local_pricelist_items_lot_category: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalConfigurationArticle(tx *gorm.DB) error {
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_configurations')
|
||||||
|
WHERE name IN ('article')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check local_configurations(article) existence: %w", err)
|
||||||
|
}
|
||||||
|
if len(columns) == 0 {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN article TEXT
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("add local_configurations.article: %w", err)
|
||||||
|
}
|
||||||
|
slog.Info("added article to local_configurations")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalConfigurationServerModel(tx *gorm.DB) error {
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_configurations')
|
||||||
|
WHERE name IN ('server_model')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check local_configurations(server_model) existence: %w", err)
|
||||||
|
}
|
||||||
|
if len(columns) == 0 {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN server_model TEXT
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("add local_configurations.server_model: %w", err)
|
||||||
|
}
|
||||||
|
slog.Info("added server_model to local_configurations")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func addLocalConfigurationSupportCode(tx *gorm.DB) error {
|
||||||
|
type columnInfo struct {
|
||||||
|
Name string `gorm:"column:name"`
|
||||||
|
}
|
||||||
|
var columns []columnInfo
|
||||||
|
if err := tx.Raw(`
|
||||||
|
SELECT name FROM pragma_table_info('local_configurations')
|
||||||
|
WHERE name IN ('support_code')
|
||||||
|
`).Scan(&columns).Error; err != nil {
|
||||||
|
return fmt.Errorf("check local_configurations(support_code) existence: %w", err)
|
||||||
|
}
|
||||||
|
if len(columns) == 0 {
|
||||||
|
if err := tx.Exec(`
|
||||||
|
ALTER TABLE local_configurations
|
||||||
|
ADD COLUMN support_code TEXT
|
||||||
|
`).Error; err != nil {
|
||||||
|
return fmt.Errorf("add local_configurations.support_code: %w", err)
|
||||||
|
}
|
||||||
|
slog.Info("added support_code to local_configurations")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
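The migration helpers above interpolate index names into PRAGMA and DROP INDEX statements, escaping single quotes (for string literals) and double quotes (for quoted identifiers) via `strings.ReplaceAll`. A minimal standalone sketch of that escaping; the helper names here are illustrative, not from the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// quoteSQLiteString escapes a value for use inside a single-quoted
// SQLite string literal, e.g. PRAGMA index_info('...').
func quoteSQLiteString(s string) string {
	return strings.ReplaceAll(s, "'", "''")
}

// quoteSQLiteIdent escapes a name for use inside a double-quoted
// SQLite identifier, e.g. DROP INDEX IF EXISTS "...".
func quoteSQLiteIdent(s string) string {
	return strings.ReplaceAll(s, `"`, `""`)
}

func main() {
	idx := `weird"name'1`
	fmt.Printf("PRAGMA index_info('%s')\n", quoteSQLiteString(idx))
	fmt.Printf(`DROP INDEX IF EXISTS "%s"`+"\n", quoteSQLiteIdent(idx))
}
```

Doubling the quote character is the standard SQLite escape for both contexts, which is why the migration can safely handle arbitrary auto-generated index names.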
244	internal/localdb/models.go	Normal file
@@ -0,0 +1,244 @@
package localdb

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"time"
)

// AppSetting stores application settings in local SQLite
type AppSetting struct {
	Key       string    `gorm:"primaryKey" json:"key"`
	Value     string    `gorm:"not null" json:"value"`
	UpdatedAt time.Time `json:"updated_at"`
}

func (AppSetting) TableName() string {
	return "app_settings"
}

// LocalConfigItem represents an item in a configuration
type LocalConfigItem struct {
	LotName   string  `json:"lot_name"`
	Quantity  int     `json:"quantity"`
	UnitPrice float64 `json:"unit_price"`
}

// LocalConfigItems is a slice of LocalConfigItem that can be stored as JSON
type LocalConfigItems []LocalConfigItem

func (c LocalConfigItems) Value() (driver.Value, error) {
	return json.Marshal(c)
}

func (c *LocalConfigItems) Scan(value interface{}) error {
	if value == nil {
		*c = make(LocalConfigItems, 0)
		return nil
	}
	var bytes []byte
	switch v := value.(type) {
	case []byte:
		bytes = v
	case string:
		bytes = []byte(v)
	default:
		return errors.New("type assertion failed for LocalConfigItems")
	}
	return json.Unmarshal(bytes, c)
}

func (c LocalConfigItems) Total() float64 {
	var total float64
	for _, item := range c {
		total += item.UnitPrice * float64(item.Quantity)
	}
	return total
}

// LocalStringList is a JSON-encoded list of strings stored as TEXT in SQLite.
type LocalStringList []string

func (s LocalStringList) Value() (driver.Value, error) {
	return json.Marshal(s)
}

func (s *LocalStringList) Scan(value interface{}) error {
	if value == nil {
		*s = make(LocalStringList, 0)
		return nil
	}
	var bytes []byte
	switch v := value.(type) {
	case []byte:
		bytes = v
	case string:
		bytes = []byte(v)
	default:
		return errors.New("type assertion failed for LocalStringList")
	}
	return json.Unmarshal(bytes, s)
}

// LocalConfiguration stores configurations in local SQLite
type LocalConfiguration struct {
	ID                    uint                        `gorm:"primaryKey;autoIncrement" json:"id"`
	UUID                  string                      `gorm:"uniqueIndex;not null" json:"uuid"`
	ServerID              *uint                       `json:"server_id"` // ID on MariaDB server, NULL if local only
	ProjectUUID           *string                     `gorm:"index" json:"project_uuid,omitempty"`
	CurrentVersionID      *string                     `gorm:"index" json:"current_version_id,omitempty"`
	IsActive              bool                        `gorm:"default:true;index" json:"is_active"`
	Name                  string                      `gorm:"not null" json:"name"`
	Items                 LocalConfigItems            `gorm:"type:text" json:"items"` // JSON stored as text in SQLite
	TotalPrice            *float64                    `json:"total_price"`
	CustomPrice           *float64                    `json:"custom_price"`
	Notes                 string                      `json:"notes"`
	IsTemplate            bool                        `gorm:"default:false" json:"is_template"`
	ServerCount           int                         `gorm:"default:1" json:"server_count"`
	ServerModel           string                      `gorm:"size:100" json:"server_model,omitempty"`
	SupportCode           string                      `gorm:"size:20" json:"support_code,omitempty"`
	Article               string                      `gorm:"size:80" json:"article,omitempty"`
	PricelistID           *uint                       `gorm:"index" json:"pricelist_id,omitempty"`
	WarehousePricelistID  *uint                       `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
	CompetitorPricelistID *uint                       `gorm:"index" json:"competitor_pricelist_id,omitempty"`
	OnlyInStock           bool                        `gorm:"default:false" json:"only_in_stock"`
	PriceUpdatedAt        *time.Time                  `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
	CreatedAt             time.Time                   `json:"created_at"`
	UpdatedAt             time.Time                   `json:"updated_at"`
	SyncedAt              *time.Time                  `json:"synced_at"`
	SyncStatus            string                      `gorm:"default:'local'" json:"sync_status"` // 'local', 'synced', 'modified'
	OriginalUserID        uint                        `json:"original_user_id"` // UserID from MariaDB for reference
	OriginalUsername      string                      `gorm:"not null;default:'';index" json:"original_username"`
	CurrentVersion        *LocalConfigurationVersion  `gorm:"foreignKey:CurrentVersionID;references:ID" json:"current_version,omitempty"`
	Versions              []LocalConfigurationVersion `gorm:"foreignKey:ConfigurationUUID;references:UUID" json:"versions,omitempty"`
}

func (LocalConfiguration) TableName() string {
	return "local_configurations"
}

type LocalProject struct {
	ID            uint       `gorm:"primaryKey;autoIncrement" json:"id"`
	UUID          string     `gorm:"uniqueIndex;not null" json:"uuid"`
	ServerID      *uint      `json:"server_id,omitempty"`
	OwnerUsername string     `gorm:"not null;index" json:"owner_username"`
	Code          string     `gorm:"not null;index:idx_local_projects_code_variant,priority:1" json:"code"`
	Variant       string     `gorm:"default:'';index:idx_local_projects_code_variant,priority:2" json:"variant"`
	Name          *string    `json:"name,omitempty"`
	TrackerURL    string     `json:"tracker_url"`
	IsActive      bool       `gorm:"default:true;index" json:"is_active"`
	IsSystem      bool       `gorm:"default:false;index" json:"is_system"`
	CreatedAt     time.Time  `json:"created_at"`
	UpdatedAt     time.Time  `json:"updated_at"`
	SyncedAt      *time.Time `json:"synced_at,omitempty"`
	SyncStatus    string     `gorm:"default:'local'" json:"sync_status"` // local/synced/pending
}

func (LocalProject) TableName() string {
	return "local_projects"
}

// LocalConfigurationVersion stores immutable full snapshots for each configuration version
type LocalConfigurationVersion struct {
	ID                string              `gorm:"primaryKey" json:"id"`
	ConfigurationUUID string              `gorm:"not null;index:idx_lcv_config_created,priority:1;index:idx_lcv_config_version,priority:1;uniqueIndex:idx_lcv_config_version_unique,priority:1" json:"configuration_uuid"`
	VersionNo         int                 `gorm:"not null;index:idx_lcv_config_version,sort:desc,priority:2;uniqueIndex:idx_lcv_config_version_unique,priority:2" json:"version_no"`
	Data              string              `gorm:"type:text;not null" json:"data"`
	ChangeNote        *string             `json:"change_note,omitempty"`
	CreatedBy         *string             `json:"created_by,omitempty"`
	AppVersion        string              `gorm:"size:64" json:"app_version,omitempty"`
	CreatedAt         time.Time           `gorm:"not null;autoCreateTime;index:idx_lcv_config_created,sort:desc,priority:2" json:"created_at"`
	Configuration     *LocalConfiguration `gorm:"foreignKey:ConfigurationUUID;references:UUID" json:"configuration,omitempty"`
}

func (LocalConfigurationVersion) TableName() string {
	return "local_configuration_versions"
}

// LocalPricelist stores cached pricelists from server
type LocalPricelist struct {
	ID        uint      `gorm:"primaryKey;autoIncrement" json:"id"`
	ServerID  uint      `gorm:"not null;uniqueIndex" json:"server_id"` // ID on MariaDB server
	Source    string    `gorm:"not null;default:'estimate';index:idx_local_pricelists_source_created_at,priority:1" json:"source"`
	Version   string    `gorm:"not null;index" json:"version"`
	Name      string    `json:"name"`
	CreatedAt time.Time `gorm:"index:idx_local_pricelists_source_created_at,priority:2,sort:desc" json:"created_at"`
	SyncedAt  time.Time `json:"synced_at"`
	IsUsed    bool      `gorm:"default:false" json:"is_used"` // Used by any local configuration
}

func (LocalPricelist) TableName() string {
	return "local_pricelists"
}

// LocalPricelistItem stores pricelist items
type LocalPricelistItem struct {
	ID           uint            `gorm:"primaryKey;autoIncrement" json:"id"`
	PricelistID  uint            `gorm:"not null;index" json:"pricelist_id"`
	LotName      string          `gorm:"not null" json:"lot_name"`
	LotCategory  string          `gorm:"column:lot_category" json:"lot_category,omitempty"`
	Price        float64         `gorm:"not null" json:"price"`
	AvailableQty *float64        `json:"available_qty,omitempty"`
	Partnumbers  LocalStringList `gorm:"type:text" json:"partnumbers,omitempty"`
}

func (LocalPricelistItem) TableName() string {
	return "local_pricelist_items"
}

// LocalComponent stores cached components for offline search (metadata only)
// All pricing is now sourced from local_pricelist_items based on configuration pricelist selection
type LocalComponent struct {
	LotName        string `gorm:"primaryKey" json:"lot_name"`
	LotDescription string `json:"lot_description"`
	Category       string `json:"category"`
	Model          string `json:"model"`
}

func (LocalComponent) TableName() string {
	return "local_components"
}

// LocalRemoteMigrationApplied tracks remote SQLite migrations received from server and applied locally.
type LocalRemoteMigrationApplied struct {
	ID         string    `gorm:"primaryKey;size:128" json:"id"`
	Checksum   string    `gorm:"size:128;not null" json:"checksum"`
	AppVersion string    `gorm:"size:64" json:"app_version,omitempty"`
	AppliedAt  time.Time `gorm:"not null" json:"applied_at"`
}

func (LocalRemoteMigrationApplied) TableName() string {
	return "local_remote_migrations_applied"
}

// LocalSyncGuardState stores latest sync readiness decision for UI and preflight checks.
type LocalSyncGuardState struct {
	ID                    uint       `gorm:"primaryKey;autoIncrement" json:"id"`
	Status                string     `gorm:"size:32;not null;index" json:"status"` // ready|blocked|unknown
	ReasonCode            string     `gorm:"size:128" json:"reason_code,omitempty"`
	ReasonText            string     `gorm:"type:text" json:"reason_text,omitempty"`
	RequiredMinAppVersion *string    `gorm:"size:64" json:"required_min_app_version,omitempty"`
	LastCheckedAt         *time.Time `json:"last_checked_at,omitempty"`
	UpdatedAt             time.Time  `json:"updated_at"`
}

func (LocalSyncGuardState) TableName() string {
	return "local_sync_guard_state"
}

// PendingChange stores changes that need to be synced to the server
type PendingChange struct {
	ID         int64     `gorm:"primaryKey;autoIncrement" json:"id"`
	EntityType string    `gorm:"not null;index" json:"entity_type"` // "configuration", "project", "specification"
	EntityUUID string    `gorm:"not null;index" json:"entity_uuid"`
	Operation  string    `gorm:"not null" json:"operation"` // "create", "update", "rollback", "deactivate", "reactivate", "delete"
	Payload    string    `gorm:"type:text" json:"payload"` // JSON snapshot of the entity
	CreatedAt  time.Time `gorm:"not null" json:"created_at"`
	Attempts   int       `gorm:"default:0" json:"attempts"` // Retry count for sync
	LastError  string    `gorm:"type:text" json:"last_error,omitempty"`
}

func (PendingChange) TableName() string {
	return "pending_changes"
}
145	internal/localdb/snapshots.go	Normal file
@@ -0,0 +1,145 @@
package localdb

import (
	"encoding/json"
	"fmt"
	"sort"
	"time"
)

// BuildConfigurationSnapshot serializes the full local configuration state.
func BuildConfigurationSnapshot(localCfg *LocalConfiguration) (string, error) {
	snapshot := map[string]interface{}{
		"id":                 localCfg.ID,
		"uuid":               localCfg.UUID,
		"server_id":          localCfg.ServerID,
		"project_uuid":       localCfg.ProjectUUID,
		"current_version_id": localCfg.CurrentVersionID,
		"is_active":          localCfg.IsActive,
		"name":               localCfg.Name,
		"items":              localCfg.Items,
		"total_price":        localCfg.TotalPrice,
		"custom_price":       localCfg.CustomPrice,
		"notes":              localCfg.Notes,
		"is_template":        localCfg.IsTemplate,
		"server_count":       localCfg.ServerCount,
		"server_model":       localCfg.ServerModel,
		"support_code":       localCfg.SupportCode,
		"article":            localCfg.Article,
		"pricelist_id":       localCfg.PricelistID,
		"only_in_stock":      localCfg.OnlyInStock,
		"price_updated_at":   localCfg.PriceUpdatedAt,
		"created_at":         localCfg.CreatedAt,
		"updated_at":         localCfg.UpdatedAt,
		"synced_at":          localCfg.SyncedAt,
		"sync_status":        localCfg.SyncStatus,
		"original_user_id":   localCfg.OriginalUserID,
		"original_username":  localCfg.OriginalUsername,
	}

	data, err := json.Marshal(snapshot)
	if err != nil {
		return "", fmt.Errorf("marshal configuration snapshot: %w", err)
	}
	return string(data), nil
}

// DecodeConfigurationSnapshot returns editable fields from one saved snapshot.
func DecodeConfigurationSnapshot(data string) (*LocalConfiguration, error) {
	var snapshot struct {
		ProjectUUID      *string          `json:"project_uuid"`
		IsActive         *bool            `json:"is_active"`
		Name             string           `json:"name"`
		Items            LocalConfigItems `json:"items"`
		TotalPrice       *float64         `json:"total_price"`
		CustomPrice      *float64         `json:"custom_price"`
		Notes            string           `json:"notes"`
		IsTemplate       bool             `json:"is_template"`
		ServerCount      int              `json:"server_count"`
		ServerModel      string           `json:"server_model"`
		SupportCode      string           `json:"support_code"`
		Article          string           `json:"article"`
		PricelistID      *uint            `json:"pricelist_id"`
		OnlyInStock      bool             `json:"only_in_stock"`
		PriceUpdatedAt   *time.Time       `json:"price_updated_at"`
		OriginalUserID   uint             `json:"original_user_id"`
		OriginalUsername string           `json:"original_username"`
	}

	if err := json.Unmarshal([]byte(data), &snapshot); err != nil {
		return nil, fmt.Errorf("unmarshal snapshot JSON: %w", err)
	}

	isActive := true
	if snapshot.IsActive != nil {
		isActive = *snapshot.IsActive
	}

	return &LocalConfiguration{
		IsActive:         isActive,
		ProjectUUID:      snapshot.ProjectUUID,
		Name:             snapshot.Name,
		Items:            snapshot.Items,
		TotalPrice:       snapshot.TotalPrice,
		CustomPrice:      snapshot.CustomPrice,
		Notes:            snapshot.Notes,
		IsTemplate:       snapshot.IsTemplate,
		ServerCount:      snapshot.ServerCount,
		ServerModel:      snapshot.ServerModel,
		SupportCode:      snapshot.SupportCode,
		Article:          snapshot.Article,
		PricelistID:      snapshot.PricelistID,
		OnlyInStock:      snapshot.OnlyInStock,
		PriceUpdatedAt:   snapshot.PriceUpdatedAt,
		OriginalUserID:   snapshot.OriginalUserID,
		OriginalUsername: snapshot.OriginalUsername,
	}, nil
}

type configurationSpecPriceFingerprint struct {
	Items       []configurationSpecPriceFingerprintItem `json:"items"`
	ServerCount int                                     `json:"server_count"`
	TotalPrice  *float64                                `json:"total_price,omitempty"`
	CustomPrice *float64                                `json:"custom_price,omitempty"`
}

type configurationSpecPriceFingerprintItem struct {
	LotName   string  `json:"lot_name"`
	Quantity  int     `json:"quantity"`
	UnitPrice float64 `json:"unit_price"`
}

// BuildConfigurationSpecPriceFingerprint returns a stable JSON key based on
// spec + price fields only, used for revision deduplication.
func BuildConfigurationSpecPriceFingerprint(localCfg *LocalConfiguration) (string, error) {
	items := make([]configurationSpecPriceFingerprintItem, 0, len(localCfg.Items))
	for _, item := range localCfg.Items {
		items = append(items, configurationSpecPriceFingerprintItem{
			LotName:   item.LotName,
			Quantity:  item.Quantity,
			UnitPrice: item.UnitPrice,
		})
	}
	sort.Slice(items, func(i, j int) bool {
		if items[i].LotName != items[j].LotName {
			return items[i].LotName < items[j].LotName
		}
		if items[i].Quantity != items[j].Quantity {
			return items[i].Quantity < items[j].Quantity
		}
		return items[i].UnitPrice < items[j].UnitPrice
	})

	payload := configurationSpecPriceFingerprint{
		Items:       items,
		ServerCount: localCfg.ServerCount,
		TotalPrice:  localCfg.TotalPrice,
		CustomPrice: localCfg.CustomPrice,
	}

	raw, err := json.Marshal(payload)
	if err != nil {
		return "", fmt.Errorf("marshal spec+price fingerprint: %w", err)
	}
	return string(raw), nil
}
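Because BuildConfigurationSpecPriceFingerprint sorts items before marshaling, the fingerprint is order-independent: two configurations listing the same lots in a different order produce byte-identical keys, which is what makes it usable for revision deduplication. A self-contained sketch of that property (struct and comparator mirror the ones above; names are local to the sketch):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

type fpItem struct {
	LotName   string  `json:"lot_name"`
	Quantity  int     `json:"quantity"`
	UnitPrice float64 `json:"unit_price"`
}

// fingerprint sorts a copy of items by (lot, qty, price) and marshals it,
// mirroring BuildConfigurationSpecPriceFingerprint's stable ordering.
func fingerprint(items []fpItem) string {
	sorted := append([]fpItem(nil), items...)
	sort.Slice(sorted, func(i, j int) bool {
		if sorted[i].LotName != sorted[j].LotName {
			return sorted[i].LotName < sorted[j].LotName
		}
		if sorted[i].Quantity != sorted[j].Quantity {
			return sorted[i].Quantity < sorted[j].Quantity
		}
		return sorted[i].UnitPrice < sorted[j].UnitPrice
	})
	raw, _ := json.Marshal(sorted)
	return string(raw)
}

func main() {
	a := []fpItem{{"CPU", 1, 5}, {"RAM", 2, 3}}
	b := []fpItem{{"RAM", 2, 3}, {"CPU", 1, 5}} // same items, different order
	fmt.Println(fingerprint(a) == fingerprint(b)) // true
}
```

Sorting a copy (rather than the caller's slice) keeps the helper side-effect free, the same reason the real function builds a fresh items slice.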
238	internal/lotmatch/resolver.go	Normal file
@@ -0,0 +1,238 @@
package lotmatch

import (
	"errors"
	"regexp"
	"sort"
	"strings"

	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)

var (
	ErrResolveConflict = errors.New("multiple lot matches")
	ErrResolveNotFound = errors.New("lot not found")
)

type LotResolver struct {
	partnumberToLots map[string][]string
	exactLots        map[string]string
	allLots          []string
}

type MappingMatcher struct {
	exact    map[string][]string
	exactLot map[string]string
	wildcard []wildcardMapping
}

type wildcardMapping struct {
	lotName string
	re      *regexp.Regexp
}

func NewLotResolverFromDB(db *gorm.DB) (*LotResolver, error) {
	mappings, lots, err := loadMappingsAndLots(db)
	if err != nil {
		return nil, err
	}
	return NewLotResolver(mappings, lots), nil
}

func NewMappingMatcherFromDB(db *gorm.DB) (*MappingMatcher, error) {
	mappings, lots, err := loadMappingsAndLots(db)
	if err != nil {
		return nil, err
	}
	return NewMappingMatcher(mappings, lots), nil
}

func NewLotResolver(mappings []models.LotPartnumber, lots []models.Lot) *LotResolver {
	partnumberToLots := make(map[string][]string, len(mappings))
	for _, m := range mappings {
		pn := NormalizeKey(m.Partnumber)
		lot := strings.TrimSpace(m.LotName)
		if pn == "" || lot == "" {
			continue
		}
		partnumberToLots[pn] = append(partnumberToLots[pn], lot)
	}
	for key := range partnumberToLots {
		partnumberToLots[key] = uniqueCaseInsensitive(partnumberToLots[key])
	}

	exactLots := make(map[string]string, len(lots))
	allLots := make([]string, 0, len(lots))
	for _, l := range lots {
		name := strings.TrimSpace(l.LotName)
		if name == "" {
			continue
		}
		exactLots[NormalizeKey(name)] = name
		allLots = append(allLots, name)
	}
	sort.Slice(allLots, func(i, j int) bool {
		li := len([]rune(allLots[i]))
		lj := len([]rune(allLots[j]))
		if li == lj {
			return allLots[i] < allLots[j]
		}
		return li > lj
	})

	return &LotResolver{
		partnumberToLots: partnumberToLots,
		exactLots:        exactLots,
		allLots:          allLots,
	}
}

func NewMappingMatcher(mappings []models.LotPartnumber, lots []models.Lot) *MappingMatcher {
	exact := make(map[string][]string, len(mappings))
	wildcards := make([]wildcardMapping, 0, len(mappings))
	for _, m := range mappings {
		pn := NormalizeKey(m.Partnumber)
		lot := strings.TrimSpace(m.LotName)
		if pn == "" || lot == "" {
			continue
		}
		if strings.Contains(pn, "*") {
			pattern := "^" + regexp.QuoteMeta(pn) + "$"
			pattern = strings.ReplaceAll(pattern, "\\*", ".*")
			re, err := regexp.Compile(pattern)
			if err != nil {
				continue
			}
			wildcards = append(wildcards, wildcardMapping{lotName: lot, re: re})
			continue
		}
		exact[pn] = append(exact[pn], lot)
	}
	for key := range exact {
		exact[key] = uniqueCaseInsensitive(exact[key])
	}

	exactLot := make(map[string]string, len(lots))
	for _, l := range lots {
		name := strings.TrimSpace(l.LotName)
		if name == "" {
			continue
		}
		exactLot[NormalizeKey(name)] = name
	}

	return &MappingMatcher{
		exact:    exact,
		exactLot: exactLot,
		wildcard: wildcards,
	}
}

func (r *LotResolver) Resolve(partnumber string) (string, string, error) {
	key := NormalizeKey(partnumber)
	if key == "" {
		return "", "", ErrResolveNotFound
	}

	if mapped := r.partnumberToLots[key]; len(mapped) > 0 {
		if len(mapped) == 1 {
			return mapped[0], "mapping_table", nil
		}
		return "", "", ErrResolveConflict
	}
	if exact, ok := r.exactLots[key]; ok {
		return exact, "article_exact", nil
	}

	best := ""
	bestLen := -1
	tie := false
	for _, lot := range r.allLots {
		lotKey := NormalizeKey(lot)
		if lotKey == "" {
			continue
		}
		if strings.HasPrefix(key, lotKey) {
			l := len([]rune(lotKey))
			if l > bestLen {
				best = lot
				bestLen = l
				tie = false
			} else if l == bestLen && !strings.EqualFold(best, lot) {
				tie = true
			}
		}
	}
	if best == "" {
		return "", "", ErrResolveNotFound
	}
	if tie {
		return "", "", ErrResolveConflict
	}
	return best, "prefix", nil
}
|
||||||
|
|
||||||
|
func (m *MappingMatcher) MatchLots(partnumber string) []string {
|
||||||
|
if m == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
key := NormalizeKey(partnumber)
|
||||||
|
if key == "" {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
lots := make([]string, 0, 2)
|
||||||
|
if exact := m.exact[key]; len(exact) > 0 {
|
||||||
|
lots = append(lots, exact...)
|
||||||
|
}
|
||||||
|
for _, wc := range m.wildcard {
|
||||||
|
if wc.re == nil || !wc.re.MatchString(key) {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
lots = append(lots, wc.lotName)
|
||||||
|
}
|
||||||
|
if lot, ok := m.exactLot[key]; ok && strings.TrimSpace(lot) != "" {
|
||||||
|
lots = append(lots, lot)
|
||||||
|
}
|
||||||
|
return uniqueCaseInsensitive(lots)
|
||||||
|
}
|
||||||
|
|
||||||
|
func NormalizeKey(v string) string {
|
||||||
|
s := strings.ToLower(strings.TrimSpace(v))
|
||||||
|
replacer := strings.NewReplacer(" ", "", "-", "", "_", "", ".", "", "/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "")
|
||||||
|
return replacer.Replace(s)
|
||||||
|
}
|
||||||
|
|
||||||
|
func loadMappingsAndLots(db *gorm.DB) ([]models.LotPartnumber, []models.Lot, error) {
|
||||||
|
var mappings []models.LotPartnumber
|
||||||
|
if err := db.Find(&mappings).Error; err != nil {
|
||||||
|
return nil, nil, err
|
||||||
|
}
|
||||||
|
var lots []models.Lot
|
||||||
|
if err := db.Select("lot_name").Find(&lots).Error; err != nil {
|
||||||
|
return nil, nil, err
|
||||||
|
}
|
||||||
|
return mappings, lots, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func uniqueCaseInsensitive(values []string) []string {
|
||||||
|
seen := make(map[string]struct{}, len(values))
|
||||||
|
out := make([]string, 0, len(values))
|
||||||
|
for _, v := range values {
|
||||||
|
trimmed := strings.TrimSpace(v)
|
||||||
|
if trimmed == "" {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
key := strings.ToLower(trimmed)
|
||||||
|
if _, ok := seen[key]; ok {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
seen[key] = struct{}{}
|
||||||
|
out = append(out, trimmed)
|
||||||
|
}
|
||||||
|
sort.Slice(out, func(i, j int) bool {
|
||||||
|
return strings.ToLower(out[i]) < strings.ToLower(out[j])
|
||||||
|
})
|
||||||
|
return out
|
||||||
|
}
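The lookup keys above are all produced by `NormalizeKey`, so `"PN-1000 / A"` and `"pn1000a"` resolve identically. A minimal standalone sketch of that normalization (the lowercase `normalizeKey` name here is illustrative, not the package's export):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeKey mirrors the resolver's key normalization: trim,
// lowercase, then strip separator and quoting characters so that
// differently formatted part numbers compare equal.
func normalizeKey(v string) string {
	s := strings.ToLower(strings.TrimSpace(v))
	replacer := strings.NewReplacer(
		" ", "", "-", "", "_", "", ".", "",
		"/", "", "\\", "", "\"", "", "'", "", "(", "", ")", "",
	)
	return replacer.Replace(s)
}

func main() {
	fmt.Println(normalizeKey(" PN-1000 / A ")) // → pn1000a
	fmt.Println(normalizeKey("PN_1000.A") == normalizeKey("pn1000a"))
}
```

Because all separators collapse away, the prefix search in `Resolve` works across formatting variants, at the cost of treating `PN-10` and `PN10` as the same key.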
62	internal/lotmatch/resolver_test.go	Normal file
@@ -0,0 +1,62 @@
package lotmatch

import (
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/models"
)

func TestLotResolverPrecedence(t *testing.T) {
	resolver := NewLotResolver(
		[]models.LotPartnumber{
			{Partnumber: "PN-1", LotName: "LOT_A"},
		},
		[]models.Lot{
			{LotName: "CPU_X_LONG"},
			{LotName: "CPU_X"},
		},
	)

	lot, by, err := resolver.Resolve("PN-1")
	if err != nil || lot != "LOT_A" || by != "mapping_table" {
		t.Fatalf("expected mapping_table LOT_A, got lot=%s by=%s err=%v", lot, by, err)
	}

	lot, by, err = resolver.Resolve("CPU_X")
	if err != nil || lot != "CPU_X" || by != "article_exact" {
		t.Fatalf("expected article_exact CPU_X, got lot=%s by=%s err=%v", lot, by, err)
	}

	lot, by, err = resolver.Resolve("CPU_X_LONG_001")
	if err != nil || lot != "CPU_X_LONG" || by != "prefix" {
		t.Fatalf("expected prefix CPU_X_LONG, got lot=%s by=%s err=%v", lot, by, err)
	}
}

func TestMappingMatcherWildcardAndExactLot(t *testing.T) {
	matcher := NewMappingMatcher(
		[]models.LotPartnumber{
			{Partnumber: "R750*", LotName: "SERVER_R750"},
			{Partnumber: "HDD-01", LotName: "HDD_01"},
		},
		[]models.Lot{
			{LotName: "MEM_DDR5_16G_4800"},
		},
	)

	check := func(partnumber string, want string) {
		t.Helper()
		got := matcher.MatchLots(partnumber)
		if len(got) != 1 || got[0] != want {
			t.Fatalf("partnumber %s: expected [%s], got %#v", partnumber, want, got)
		}
	}

	check("R750XD", "SERVER_R750")
	check("HDD-01", "HDD_01")
	check("MEM_DDR5_16G_4800", "MEM_DDR5_16G_4800")

	if got := matcher.MatchLots("UNKNOWN"); len(got) != 0 {
		t.Fatalf("expected no matches for UNKNOWN, got %#v", got)
	}
}
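The wildcard mappings exercised in the tests rely on converting a glob pattern into an anchored regular expression: the whole pattern is escaped with `regexp.QuoteMeta`, then the escaped `\*` is re-enabled as `.*`. A standalone sketch of that trick (`wildcardToRegexp` is an illustrative name):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// wildcardToRegexp converts a glob-style pattern such as "r750*" into
// an anchored regexp: escape every metacharacter first, then turn the
// escaped "\*" back into ".*" so only the glob star stays special.
func wildcardToRegexp(pattern string) (*regexp.Regexp, error) {
	quoted := "^" + regexp.QuoteMeta(pattern) + "$"
	quoted = strings.ReplaceAll(quoted, "\\*", ".*")
	return regexp.Compile(quoted)
}

func main() {
	re, err := wildcardToRegexp("r750*")
	if err != nil {
		panic(err)
	}
	fmt.Println(re.MatchString("r750xd")) // prefix plus anything → true
	fmt.Println(re.MatchString("x750"))   // anchored at start → false
}
```

Escaping first means a pattern like `R7.50*` matches a literal dot rather than any character, which is why the matcher can feed user-supplied patterns to `regexp.Compile` safely.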
@@ -4,9 +4,9 @@ import (
 	"net/http"
 	"strings"
 
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/services"
 	"github.com/gin-gonic/gin"
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/services"
 )
 
 const (
@@ -99,3 +99,12 @@ func GetUserID(c *gin.Context) uint {
 	}
 	return claims.UserID
 }
+
+// GetUsername extracts the username from the request context
+func GetUsername(c *gin.Context) string {
+	claims := GetClaims(c)
+	if claims == nil {
+		return ""
+	}
+	return claims.Username
+}
@@ -1,22 +1,55 @@
 package middleware
 
 import (
+	"net"
+	"net/http"
+	"net/url"
+	"strings"
+
 	"github.com/gin-gonic/gin"
 )
 
 func CORS() gin.HandlerFunc {
 	return func(c *gin.Context) {
-		c.Header("Access-Control-Allow-Origin", "*")
-		c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
-		c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization")
-		c.Header("Access-Control-Expose-Headers", "Content-Length, Content-Disposition")
-		c.Header("Access-Control-Max-Age", "86400")
+		origin := strings.TrimSpace(c.GetHeader("Origin"))
+		if origin != "" {
+			if isLoopbackOrigin(origin) {
+				c.Header("Access-Control-Allow-Origin", origin)
+				c.Header("Vary", "Origin")
+				c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, PATCH, DELETE, OPTIONS")
+				c.Header("Access-Control-Allow-Headers", "Origin, Content-Type, Accept, Authorization")
+				c.Header("Access-Control-Expose-Headers", "Content-Length, Content-Disposition")
+				c.Header("Access-Control-Max-Age", "86400")
+			} else if c.Request.Method == http.MethodOptions {
+				c.AbortWithStatus(http.StatusForbidden)
+				return
+			}
+		}
 
-		if c.Request.Method == "OPTIONS" {
-			c.AbortWithStatus(204)
+		if c.Request.Method == http.MethodOptions {
+			c.AbortWithStatus(http.StatusNoContent)
 			return
 		}
 
 		c.Next()
 	}
 }
+
+func isLoopbackOrigin(origin string) bool {
+	u, err := url.Parse(origin)
+	if err != nil {
+		return false
+	}
+	if u.Scheme != "http" && u.Scheme != "https" {
+		return false
+	}
+	host := strings.TrimSpace(u.Hostname())
+	if host == "" {
+		return false
+	}
+	if strings.EqualFold(host, "localhost") {
+		return true
+	}
+	ip := net.ParseIP(host)
+	return ip != nil && ip.IsLoopback()
+}
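The loopback check added to the CORS middleware parses the `Origin` header as a URL and accepts only `localhost` or loopback IPs. A self-contained restatement of the same logic, without the gin dependency (`isLoopback` is an illustrative name):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"strings"
)

// isLoopback reports whether an Origin header value points at
// localhost or a loopback IP over http/https, mirroring the
// isLoopbackOrigin check in the CORS diff above.
func isLoopback(origin string) bool {
	u, err := url.Parse(origin)
	if err != nil || (u.Scheme != "http" && u.Scheme != "https") {
		return false
	}
	host := u.Hostname() // strips any :port suffix
	if strings.EqualFold(host, "localhost") {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	fmt.Println(isLoopback("http://localhost:5173"))  // true
	fmt.Println(isLoopback("http://127.0.0.1:8080"))  // true
	fmt.Println(isLoopback("https://example.com"))    // false
}
```

Using `net.ParseIP(...).IsLoopback()` rather than string comparison also covers `::1` and the whole `127.0.0.0/8` range.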
29	internal/middleware/offline.go	Normal file
@@ -0,0 +1,29 @@
package middleware

import (
	"log/slog"

	"github.com/gin-gonic/gin"

	"git.mchus.pro/mchus/quoteforge/internal/db"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

// OfflineDetector creates middleware that detects offline mode.
// Sets context values:
//   - "is_offline" (bool) - true if MariaDB is unavailable
//   - "localdb" (*localdb.LocalDB) - local database instance for fallback
func OfflineDetector(connMgr *db.ConnectionManager, local *localdb.LocalDB) gin.HandlerFunc {
	return func(c *gin.Context) {
		isOffline := !connMgr.IsOnline()

		// Set context values for handlers
		c.Set("is_offline", isOffline)
		c.Set("localdb", local)

		if isOffline {
			slog.Debug("offline mode detected - MariaDB unavailable")
		}

		c.Next()
	}
}
@@ -13,17 +13,32 @@ func (Category) TableName() string {
 	return "qt_categories"
 }
 
+// DefaultCategories defines the standard categories with display order.
+// Order: BB, CPU, MEM, GPU, SSD, RAID, HBA, NIC, PSU, RISERS, ACC, and others.
 var DefaultCategories = []Category{
-	{Code: "MB", Name: "Motherboard", NameRu: "Материнская плата", DisplayOrder: 1, IsRequired: true},
+	{Code: "BB", Name: "Barebone", NameRu: "Шасси", DisplayOrder: 1, IsRequired: true},
 	{Code: "CPU", Name: "Processor", NameRu: "Процессор", DisplayOrder: 2, IsRequired: true},
 	{Code: "MEM", Name: "Memory", NameRu: "Оперативная память", DisplayOrder: 3, IsRequired: true},
 	{Code: "GPU", Name: "Graphics Card", NameRu: "Видеокарта", DisplayOrder: 4},
 	{Code: "SSD", Name: "SSD Storage", NameRu: "SSD накопитель", DisplayOrder: 5},
-	{Code: "HDD", Name: "HDD Storage", NameRu: "HDD накопитель", DisplayOrder: 6},
-	{Code: "RAID", Name: "RAID Controller", NameRu: "RAID контроллер", DisplayOrder: 7},
+	{Code: "RAID", Name: "RAID Controller", NameRu: "RAID контроллер", DisplayOrder: 6},
+	{Code: "HBA", Name: "HBA Adapter", NameRu: "HBA адаптер", DisplayOrder: 7},
 	{Code: "NIC", Name: "Network Card", NameRu: "Сетевая карта", DisplayOrder: 8},
-	{Code: "HCA", Name: "HCA Adapter", NameRu: "HCA адаптер", DisplayOrder: 9},
-	{Code: "HBA", Name: "HBA Adapter", NameRu: "HBA адаптер", DisplayOrder: 10},
-	{Code: "DPU", Name: "DPU", NameRu: "DPU", DisplayOrder: 11},
-	{Code: "PS", Name: "Power Supply", NameRu: "Блок питания", DisplayOrder: 12},
+	{Code: "PSU", Name: "Power Supply", NameRu: "Блок питания", DisplayOrder: 9},
+	{Code: "RISERS", Name: "Risers", NameRu: "Райзеры", DisplayOrder: 10},
+	{Code: "ACC", Name: "Accessories", NameRu: "Аксессуары", DisplayOrder: 11},
+	// Additional categories
+	{Code: "MB", Name: "Motherboard", NameRu: "Материнская плата", DisplayOrder: 12},
+	{Code: "HDD", Name: "HDD Storage", NameRu: "HDD накопитель", DisplayOrder: 13},
+	{Code: "HCA", Name: "HCA Adapter", NameRu: "HCA адаптер", DisplayOrder: 14},
+	{Code: "DPU", Name: "DPU", NameRu: "DPU", DisplayOrder: 15},
+	{Code: "M2", Name: "M.2 Storage", NameRu: "M.2 накопитель", DisplayOrder: 16},
+	{Code: "EDSFF", Name: "EDSFF Storage", NameRu: "EDSFF накопитель", DisplayOrder: 17},
+	{Code: "HHHL", Name: "HHHL Storage", NameRu: "HHHL накопитель", DisplayOrder: 18},
+	{Code: "PS", Name: "Power Supply (Legacy)", NameRu: "Блок питания", DisplayOrder: 19},
+	{Code: "CARD", Name: "Cards", NameRu: "Карты", DisplayOrder: 20},
 }
+
+// MaxKnownDisplayOrder is the highest display order for known categories.
+// New categories will get a display order starting from this + 1.
+const MaxKnownDisplayOrder = 100
@@ -40,15 +40,30 @@ func (c ConfigItems) Total() float64 {
 }
 
 type Configuration struct {
 	ID   uint   `gorm:"primaryKey;autoIncrement" json:"id"`
 	UUID string `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
-	UserID     uint        `gorm:"not null" json:"user_id"`
-	Name       string      `gorm:"size:200;not null" json:"name"`
-	Items      ConfigItems `gorm:"type:json;not null" json:"items"`
-	TotalPrice *float64    `gorm:"type:decimal(12,2)" json:"total_price"`
-	Notes      string      `gorm:"type:text" json:"notes"`
-	IsTemplate bool        `gorm:"default:false" json:"is_template"`
-	CreatedAt  time.Time   `gorm:"autoCreateTime" json:"created_at"`
+	UserID        *uint       `json:"user_id,omitempty"` // Legacy field, no longer required for ownership
+	OwnerUsername string      `gorm:"size:100;not null;default:'';index" json:"owner_username"`
+	ProjectUUID   *string     `gorm:"size:36;index" json:"project_uuid,omitempty"`
+	AppVersion    string      `gorm:"size:64" json:"app_version,omitempty"`
+	Name          string      `gorm:"size:200;not null" json:"name"`
+	Items         ConfigItems `gorm:"type:json;not null" json:"items"`
+	TotalPrice    *float64    `gorm:"type:decimal(12,2)" json:"total_price"`
+	CustomPrice   *float64    `gorm:"type:decimal(12,2)" json:"custom_price"`
+	Notes         string      `gorm:"type:text" json:"notes"`
+	IsTemplate    bool        `gorm:"default:false" json:"is_template"`
+	ServerCount   int         `gorm:"default:1" json:"server_count"`
+	ServerModel   string      `gorm:"size:100" json:"server_model,omitempty"`
+	SupportCode   string      `gorm:"size:20" json:"support_code,omitempty"`
+	Article       string      `gorm:"size:80" json:"article,omitempty"`
+	PricelistID           *uint      `gorm:"index" json:"pricelist_id,omitempty"`
+	WarehousePricelistID  *uint      `gorm:"index" json:"warehouse_pricelist_id,omitempty"`
+	CompetitorPricelistID *uint      `gorm:"index" json:"competitor_pricelist_id,omitempty"`
+	DisablePriceRefresh   bool       `gorm:"default:false" json:"disable_price_refresh"`
+	OnlyInStock           bool       `gorm:"default:false" json:"only_in_stock"`
+	PriceUpdatedAt        *time.Time `gorm:"type:timestamp" json:"price_updated_at,omitempty"`
+	CreatedAt             time.Time  `gorm:"autoCreateTime" json:"created_at"`
+	CurrentVersionNo      int        `gorm:"-" json:"current_version_no,omitempty"`
 
 	User *User `gorm:"foreignKey:UserID" json:"user,omitempty"`
 }
@@ -2,10 +2,11 @@ package models
 
 import "time"
 
-// Lot represents existing lot table (READ-ONLY)
+// Lot represents the existing lot table
 type Lot struct {
-	LotName        string `gorm:"column:lot_name;primaryKey;size:255"`
-	LotDescription string `gorm:"column:lot_description;size:10000"`
+	LotName        string  `gorm:"column:lot_name;primaryKey;size:255" json:"lot_name"`
+	LotDescription string  `gorm:"column:lot_description;size:10000" json:"lot_description"`
+	LotCategory    *string `gorm:"column:lot_category;size:50" json:"lot_category"`
 }
 
 func (Lot) TableName() string {
@@ -36,3 +37,44 @@ type Supplier struct {
 func (Supplier) TableName() string {
 	return "supplier"
 }
+
+// StockLog stores warehouse stock snapshots imported from external files.
+type StockLog struct {
+	StockLogID uint      `gorm:"column:stock_log_id;primaryKey;autoIncrement"`
+	Partnumber string    `gorm:"column:partnumber;size:255;not null"`
+	Supplier   *string   `gorm:"column:supplier;size:255"`
+	Date       time.Time `gorm:"column:date;type:date;not null"`
+	Price      float64   `gorm:"column:price;not null"`
+	Quality    *string   `gorm:"column:quality;size:255"`
+	Comments   *string   `gorm:"column:comments;size:15000"`
+	Vendor     *string   `gorm:"column:vendor;size:255"`
+	Qty        *float64  `gorm:"column:qty"`
+}
+
+func (StockLog) TableName() string {
+	return "stock_log"
+}
+
+// LotPartnumber maps external part numbers to internal lots.
+type LotPartnumber struct {
+	Partnumber  string  `gorm:"column:partnumber;size:255;primaryKey" json:"partnumber"`
+	LotName     string  `gorm:"column:lot_name;size:255;primaryKey" json:"lot_name"`
+	Description *string `gorm:"column:description;size:10000" json:"description,omitempty"`
+}
+
+func (LotPartnumber) TableName() string {
+	return "lot_partnumbers"
+}
+
+// StockIgnoreRule contains import ignore pattern rules.
+type StockIgnoreRule struct {
+	ID        uint      `gorm:"column:id;primaryKey;autoIncrement" json:"id"`
+	Target    string    `gorm:"column:target;size:20;not null" json:"target"`         // partnumber|description
+	MatchType string    `gorm:"column:match_type;size:20;not null" json:"match_type"` // exact|prefix|suffix
+	Pattern   string    `gorm:"column:pattern;size:500;not null" json:"pattern"`
+	CreatedAt time.Time `gorm:"column:created_at;autoCreateTime" json:"created_at"`
+}
+
+func (StockIgnoreRule) TableName() string {
	return "stock_ignore_rules"
+}
@@ -35,18 +35,23 @@ func (s *Specs) Scan(value interface{}) error {
 }
 
 type LotMetadata struct {
 	LotName         string      `gorm:"column:lot_name;primaryKey;size:255" json:"lot_name"`
 	CategoryID      *uint       `gorm:"column:category_id" json:"category_id"`
-	Vendor          string      `gorm:"size:50" json:"vendor"`
 	Model           string      `gorm:"size:100" json:"model"`
 	Specs           Specs       `gorm:"type:json" json:"specs"`
 	CurrentPrice    *float64    `gorm:"type:decimal(12,2)" json:"current_price"`
 	PriceMethod     PriceMethod `gorm:"type:enum('manual','median','average','weighted_median');default:'median'" json:"price_method"`
 	PricePeriodDays int         `gorm:"default:90" json:"price_period_days"`
+	PriceCoefficient float64    `gorm:"type:decimal(5,2);default:0" json:"price_coefficient"`
+	ManualPrice      *float64   `gorm:"type:decimal(12,2)" json:"manual_price"`
 	PriceUpdatedAt  *time.Time  `json:"price_updated_at"`
 	RequestCount    int         `gorm:"default:0" json:"request_count"`
 	LastRequestDate *time.Time  `gorm:"type:date" json:"last_request_date"`
 	PopularityScore float64     `gorm:"type:decimal(10,4);default:0" json:"popularity_score"`
+	MetaPrices      string      `gorm:"size:1000" json:"meta_prices"`
+	MetaMethod      string      `gorm:"size:20" json:"meta_method"`
+	MetaPeriodDays  int         `gorm:"default:90" json:"meta_period_days"`
+	IsHidden        bool        `gorm:"default:false" json:"is_hidden"`
 
 	// Relations
 	Lot *Lot `gorm:"foreignKey:LotName;references:LotName" json:"lot,omitempty"`
@@ -1,6 +1,11 @@
 package models
 
-import "gorm.io/gorm"
+import (
+	"log/slog"
+	"strings"
+
+	"gorm.io/gorm"
+)
 
 // AllModels returns all models for auto-migration
 func AllModels() []interface{} {
@@ -8,16 +13,33 @@ func AllModels() []interface{} {
 		&User{},
 		&Category{},
 		&LotMetadata{},
+		&Project{},
 		&Configuration{},
 		&PriceOverride{},
 		&PricingAlert{},
 		&ComponentUsageStats{},
+		&Pricelist{},
+		&PricelistItem{},
 	}
 }
 
 // Migrate runs auto-migration for all QuoteForge tables
+// Handles MySQL constraint errors gracefully for existing tables
 func Migrate(db *gorm.DB) error {
-	return db.AutoMigrate(AllModels()...)
+	for _, model := range AllModels() {
+		if err := db.AutoMigrate(model); err != nil {
+			// Skip known MySQL constraint errors for existing tables
+			errStr := err.Error()
+			if strings.Contains(errStr, "Can't DROP") ||
+				strings.Contains(errStr, "Duplicate key name") ||
+				strings.Contains(errStr, "check that it exists") {
+				slog.Warn("migration warning (skipped)", "model", model, "error", errStr)
+				continue
+			}
+			return err
+		}
+	}
+	return nil
 }
 
 // SeedCategories inserts default categories if not exist
@@ -30,3 +52,54 @@ func SeedCategories(db *gorm.DB) error {
 	}
 	return nil
 }
+
+// SeedAdminUser creates the default admin user if it does not exist.
+// Default credentials: admin / admin123
+func SeedAdminUser(db *gorm.DB, passwordHash string) error {
+	var count int64
+	db.Model(&User{}).Where("username = ?", "admin").Count(&count)
+	if count > 0 {
+		return nil
+	}
+
+	admin := &User{
+		Username:     "admin",
+		Email:        "admin@example.com",
+		PasswordHash: passwordHash,
+		Role:         RoleAdmin,
+		IsActive:     true,
+	}
+	return db.Create(admin).Error
+}
+
+// EnsureDBUser creates or returns the user corresponding to the database connection username.
+// This is used when RBAC is disabled - configurations are owned by the DB user.
+// Returns the user ID that should be used for all operations.
+func EnsureDBUser(db *gorm.DB, dbUsername string) (uint, error) {
+	if dbUsername == "" {
+		return 0, nil
+	}
+
+	var user User
+	err := db.Where("username = ?", dbUsername).First(&user).Error
+	if err == nil {
+		return user.ID, nil
+	}
+
+	// User doesn't exist, create it
+	user = User{
+		Username:     dbUsername,
+		Email:        dbUsername + "@db.local",
+		PasswordHash: "-", // No password - this is a DB user, not an app user
+		Role:         RoleAdmin,
+		IsActive:     true,
+	}
+
+	if err := db.Create(&user).Error; err != nil {
+		slog.Error("failed to create DB user", "username", dbUsername, "error", err)
+		return 0, err
+	}
+
+	slog.Info("created DB user for configurations", "username", dbUsername, "user_id", user.ID)
+	return user.ID, nil
+}
91	internal/models/pricelist.go	Normal file
@@ -0,0 +1,91 @@
package models

import (
	"time"
)

type PricelistSource string

const (
	PricelistSourceEstimate   PricelistSource = "estimate"
	PricelistSourceWarehouse  PricelistSource = "warehouse"
	PricelistSourceCompetitor PricelistSource = "competitor"
)

func (s PricelistSource) IsValid() bool {
	switch s {
	case PricelistSourceEstimate, PricelistSourceWarehouse, PricelistSourceCompetitor:
		return true
	default:
		return false
	}
}

func NormalizePricelistSource(source string) PricelistSource {
	switch PricelistSource(source) {
	case PricelistSourceWarehouse:
		return PricelistSourceWarehouse
	case PricelistSourceCompetitor:
		return PricelistSourceCompetitor
	default:
		return PricelistSourceEstimate
	}
}

// Pricelist represents a versioned snapshot of prices
type Pricelist struct {
	ID           uint       `gorm:"primaryKey" json:"id"`
	Source       string     `gorm:"size:20;not null;default:'estimate';uniqueIndex:idx_qt_pricelists_source_version,priority:1;index:idx_qt_pricelists_source_created_at,priority:1" json:"source"`
	Version      string     `gorm:"size:20;not null;uniqueIndex:idx_qt_pricelists_source_version,priority:2" json:"version"` // Format: YYYY-MM-DD-NNN
	Notification string     `gorm:"size:500" json:"notification"` // Notification shown in configurator
	CreatedAt    time.Time  `gorm:"index:idx_qt_pricelists_source_created_at,priority:2,sort:desc" json:"created_at"`
	CreatedBy    string     `gorm:"size:100" json:"created_by"`
	IsActive     bool       `gorm:"default:true" json:"is_active"`
	UsageCount   int        `gorm:"default:0" json:"usage_count"`
	ExpiresAt    *time.Time `json:"expires_at"`
	ItemCount    int        `gorm:"-" json:"item_count,omitempty"` // Virtual field for display
}

func (Pricelist) TableName() string {
	return "qt_pricelists"
}

// PricelistItem represents a single item in a pricelist
type PricelistItem struct {
	ID          uint    `gorm:"primaryKey" json:"id"`
	PricelistID uint    `gorm:"not null;index:idx_pricelist_lot" json:"pricelist_id"`
	LotName     string  `gorm:"size:255;not null;index:idx_pricelist_lot" json:"lot_name"`
	LotCategory string  `gorm:"column:lot_category;size:50" json:"lot_category,omitempty"`
	Price       float64 `gorm:"type:decimal(12,2);not null" json:"price"`
	PriceMethod string  `gorm:"size:20" json:"price_method"`

	// Price calculation settings (snapshot from qt_lot_metadata)
	PricePeriodDays  int      `gorm:"default:90" json:"price_period_days"`
	PriceCoefficient float64  `gorm:"type:decimal(5,2);default:0" json:"price_coefficient"`
	ManualPrice      *float64 `gorm:"type:decimal(12,2)" json:"manual_price,omitempty"`
	MetaPrices       string   `gorm:"size:1000" json:"meta_prices,omitempty"`

	// Virtual fields for display
	LotDescription string   `gorm:"-" json:"lot_description,omitempty"`
	Category       string   `gorm:"-" json:"category,omitempty"`
	AvailableQty   *float64 `gorm:"-" json:"available_qty,omitempty"`
	Partnumbers    []string `gorm:"-" json:"partnumbers,omitempty"`
}

func (PricelistItem) TableName() string {
	return "qt_pricelist_items"
}

// PricelistSummary is used for list views
type PricelistSummary struct {
	ID           uint       `json:"id"`
	Source       string     `json:"source"`
	Version      string     `json:"version"`
	Notification string     `json:"notification"`
	CreatedAt    time.Time  `json:"created_at"`
	CreatedBy    string     `json:"created_by"`
	IsActive     bool       `json:"is_active"`
	UsageCount   int        `json:"usage_count"`
	ExpiresAt    *time.Time `json:"expires_at"`
	ItemCount    int64      `json:"item_count"`
}
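`NormalizePricelistSource` is deliberately lenient: any unrecognized source string silently falls back to `"estimate"` instead of returning an error. A standalone sketch of that fallback behavior (the shortened `Source*` names here are illustrative, not the package's exports):

```go
package main

import "fmt"

type PricelistSource string

const (
	SourceEstimate   PricelistSource = "estimate"
	SourceWarehouse  PricelistSource = "warehouse"
	SourceCompetitor PricelistSource = "competitor"
)

// normalizeSource mirrors NormalizePricelistSource: known sources pass
// through unchanged, anything else becomes the default "estimate".
func normalizeSource(s string) PricelistSource {
	switch PricelistSource(s) {
	case SourceWarehouse, SourceCompetitor:
		return PricelistSource(s)
	default:
		return SourceEstimate
	}
}

func main() {
	fmt.Println(normalizeSource("warehouse")) // known → warehouse
	fmt.Println(normalizeSource("bogus"))     // unknown → estimate
}
```

The fallback keeps old clients working when they omit or misspell the source, at the cost of masking typos; callers that need strict validation use `IsValid` instead.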
21	internal/models/project.go	Normal file
@@ -0,0 +1,21 @@
package models

import "time"

type Project struct {
	ID            uint      `gorm:"primaryKey;autoIncrement" json:"id"`
	UUID          string    `gorm:"size:36;uniqueIndex;not null" json:"uuid"`
	OwnerUsername string    `gorm:"size:100;not null;index" json:"owner_username"`
	Code          string    `gorm:"size:100;not null;index:idx_qt_projects_code_variant,priority:1" json:"code"`
	Variant       string    `gorm:"size:100;not null;default:'';index:idx_qt_projects_code_variant,priority:2" json:"variant"`
	Name          *string   `gorm:"size:200" json:"name,omitempty"`
	TrackerURL    string    `gorm:"size:500" json:"tracker_url"`
	IsActive      bool      `gorm:"default:true;index" json:"is_active"`
	IsSystem      bool      `gorm:"default:false;index" json:"is_system"`
	CreatedAt     time.Time `gorm:"autoCreateTime" json:"created_at"`
	UpdatedAt     time.Time `gorm:"autoUpdateTime" json:"updated_at"`
}

func (Project) TableName() string {
	return "qt_projects"
}
227
internal/models/sql_migrations.go
Normal file
227
internal/models/sql_migrations.go
Normal file
@@ -0,0 +1,227 @@
package models

import (
	"bufio"
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"

	mysqlDriver "github.com/go-sql-driver/mysql"
	"gorm.io/gorm"
)

type SQLSchemaMigration struct {
	ID        uint      `gorm:"primaryKey;autoIncrement"`
	Filename  string    `gorm:"size:255;uniqueIndex;not null"`
	AppliedAt time.Time `gorm:"autoCreateTime"`
}

func (SQLSchemaMigration) TableName() string {
	return "qt_schema_migrations"
}

// NeedsSQLMigrations reports whether at least one SQL migration from migrationsDir
// is not yet recorded in qt_schema_migrations.
func NeedsSQLMigrations(db *gorm.DB, migrationsDir string) (bool, error) {
	files, err := listSQLMigrationFiles(migrationsDir)
	if err != nil {
		return false, err
	}
	if len(files) == 0 {
		return false, nil
	}

	// If tracking table does not exist yet, migrations are required.
	if !db.Migrator().HasTable(&SQLSchemaMigration{}) {
		return true, nil
	}

	var count int64
	if err := db.Model(&SQLSchemaMigration{}).Where("filename IN ?", files).Count(&count).Error; err != nil {
		return false, fmt.Errorf("check applied migrations: %w", err)
	}

	return count < int64(len(files)), nil
}

// RunSQLMigrations applies SQL files from migrationsDir once and records them in qt_schema_migrations.
// Local SQLite-only scripts are skipped automatically.
func RunSQLMigrations(db *gorm.DB, migrationsDir string) error {
	if err := ensureSQLMigrationsTable(db); err != nil {
		return fmt.Errorf("migrate qt_schema_migrations table: %w", err)
	}

	files, err := listSQLMigrationFiles(migrationsDir)
	if err != nil {
		return err
	}

	for _, filename := range files {
		var count int64
		if err := db.Model(&SQLSchemaMigration{}).Where("filename = ?", filename).Count(&count).Error; err != nil {
			return fmt.Errorf("check migration %s: %w", filename, err)
		}
		if count > 0 {
			continue
		}

		path := filepath.Join(migrationsDir, filename)
		content, err := os.ReadFile(path)
		if err != nil {
			return fmt.Errorf("read migration %s: %w", filename, err)
		}

		statements := splitSQLStatements(string(content))
		if len(statements) == 0 {
			if err := db.Create(&SQLSchemaMigration{Filename: filename}).Error; err != nil {
				return fmt.Errorf("record empty migration %s: %w", filename, err)
			}
			continue
		}

		if err := executeMigrationStatements(db, filename, statements); err != nil {
			return err
		}
		if err := db.Create(&SQLSchemaMigration{Filename: filename}).Error; err != nil {
			return fmt.Errorf("record migration %s: %w", filename, err)
		}
	}

	return nil
}

// IsMigrationPermissionError returns true if err indicates insufficient privileges
// to create/alter/read migration metadata or target schema objects.
func IsMigrationPermissionError(err error) bool {
	if err == nil {
		return false
	}

	var mysqlErr *mysqlDriver.MySQLError
	if errors.As(err, &mysqlErr) {
		switch mysqlErr.Number {
		case 1044, 1045, 1142, 1143, 1227:
			return true
		}
	}

	lower := strings.ToLower(err.Error())
	patterns := []string{
		"command denied to user",
		"access denied for user",
		"permission denied",
		"insufficient privilege",
		"sqlstate 42000",
	}
	for _, pattern := range patterns {
		if strings.Contains(lower, pattern) {
			return true
		}
	}
	return false
}

func ensureSQLMigrationsTable(db *gorm.DB) error {
	stmt := `
CREATE TABLE IF NOT EXISTS qt_schema_migrations (
	id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
	filename VARCHAR(255) NOT NULL UNIQUE,
	applied_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);`
	return db.Exec(stmt).Error
}

func executeMigrationStatements(db *gorm.DB, filename string, statements []string) error {
	for _, stmt := range statements {
		if err := db.Exec(stmt).Error; err != nil {
			if isIgnorableMigrationError(err.Error()) {
				continue
			}
			return fmt.Errorf("exec migration %s statement %q: %w", filename, stmt, err)
		}
	}
	return nil
}

func isSQLiteOnlyMigration(filename string) bool {
	lower := strings.ToLower(filename)
	return strings.Contains(lower, "local_")
}

func isIgnorableMigrationError(message string) bool {
	lower := strings.ToLower(message)
	ignorable := []string{
		"duplicate column name",
		"duplicate key name",
		"already exists",
		"can't create table",
		"duplicate foreign key constraint name",
		"errno 121",
	}
	for _, pattern := range ignorable {
		if strings.Contains(lower, pattern) {
			return true
		}
	}
	return false
}

func splitSQLStatements(script string) []string {
	scanner := bufio.NewScanner(strings.NewReader(script))
	scanner.Buffer(make([]byte, 1024), 1024*1024)

	lines := make([]string, 0, 128)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		if strings.HasPrefix(line, "--") {
			continue
		}
		lines = append(lines, scanner.Text())
	}

	combined := strings.Join(lines, "\n")
	raw := strings.Split(combined, ";")
	stmts := make([]string, 0, len(raw))
	for _, stmt := range raw {
		trimmed := strings.TrimSpace(stmt)
		if trimmed == "" {
			continue
		}
		stmts = append(stmts, trimmed)
	}
	return stmts
}

func listSQLMigrationFiles(migrationsDir string) ([]string, error) {
	entries, err := os.ReadDir(migrationsDir)
	if err != nil {
		if errors.Is(err, os.ErrNotExist) {
			return nil, nil
		}
		return nil, fmt.Errorf("read migrations dir %s: %w", migrationsDir, err)
	}

	files := make([]string, 0, len(entries))
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		name := entry.Name()
		if !strings.HasSuffix(strings.ToLower(name), ".sql") {
			continue
		}
		if isSQLiteOnlyMigration(name) {
			continue
		}
		files = append(files, name)
	}
	sort.Strings(files)
	return files, nil
}

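The splitting rules in `splitSQLStatements` above (drop blank lines and `--` comment lines, then split the remainder on `;`) can be exercised standalone. A minimal stdlib-only sketch — the helper name `splitStatements` is illustrative, not part of the package:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// splitStatements mirrors the splitting rules of splitSQLStatements:
// skip blank lines and "--" comment lines, join the rest, split on ";".
func splitStatements(script string) []string {
	scanner := bufio.NewScanner(strings.NewReader(script))
	var lines []string
	for scanner.Scan() {
		trimmed := strings.TrimSpace(scanner.Text())
		if trimmed == "" || strings.HasPrefix(trimmed, "--") {
			continue
		}
		lines = append(lines, scanner.Text())
	}
	var stmts []string
	for _, stmt := range strings.Split(strings.Join(lines, "\n"), ";") {
		if t := strings.TrimSpace(stmt); t != "" {
			stmts = append(stmts, t)
		}
	}
	return stmts
}

func main() {
	script := "-- add column\nALTER TABLE t ADD c INT;\n\nCREATE INDEX i ON t (c);"
	fmt.Println(len(splitStatements(script))) // two executable statements
}
```

Note the same caveat applies to the real implementation: a naive split on `;` would break statements whose string literals contain a semicolon, so migration scripts have to avoid those.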
@@ -1,7 +1,7 @@
 package repository
 
 import (
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
 
@@ -1,7 +1,7 @@
 package repository
 
 import (
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
 
@@ -36,3 +36,41 @@ func (r *CategoryRepository) GetByID(id uint) (*models.Category, error) {
 	}
 	return &category, nil
 }
+
+// CreateIfNotExists creates a new category if it doesn't exist, returns existing one if it does
+func (r *CategoryRepository) CreateIfNotExists(code string) (*models.Category, error) {
+	// Try to find existing
+	existing, err := r.GetByCode(code)
+	if err == nil {
+		return existing, nil
+	}
+
+	// Get max display order to put new category at the end
+	var maxOrder int
+	r.db.Model(&models.Category{}).Select("COALESCE(MAX(display_order), 0)").Scan(&maxOrder)
+
+	// Create new category
+	newCat := &models.Category{
+		Code:         code,
+		Name:         code, // Use code as name initially
+		NameRu:       code,
+		DisplayOrder: maxOrder + 1,
+		IsRequired:   false,
+	}
+
+	if err := r.db.Create(newCat).Error; err != nil {
+		return nil, err
+	}
+
+	return newCat, nil
+}
+
+// Create creates a new category
+func (r *CategoryRepository) Create(category *models.Category) error {
+	return r.db.Create(category).Error
+}
+
+// Update updates an existing category
+func (r *CategoryRepository) Update(category *models.Category) error {
+	return r.db.Save(category).Error
+}
@@ -3,7 +3,7 @@ package repository
 import (
 	"time"
 
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
 
@@ -16,10 +16,12 @@ func NewComponentRepository(db *gorm.DB) *ComponentRepository {
 }
 
 type ComponentFilter struct {
 	Category      string
-	Vendor        string
 	Search        string
 	HasPrice      bool
+	ExcludeHidden bool
+	SortField     string
+	SortDir       string
 }
 
 func (r *ComponentRepository) List(filter ComponentFilter, offset, limit int) ([]models.LotMetadata, int64, error) {
@@ -34,9 +36,6 @@ func (r *ComponentRepository) List(filter ComponentFilter, offset, limit int) ([
 		query = query.Joins("JOIN qt_categories ON qt_lot_metadata.category_id = qt_categories.id").
 			Where("qt_categories.code = ?", filter.Category)
 	}
-	if filter.Vendor != "" {
-		query = query.Where("vendor = ?", filter.Vendor)
-	}
 	if filter.Search != "" {
 		search := "%" + filter.Search + "%"
 		query = query.Where("lot_name LIKE ? OR model LIKE ?", search, search)
@@ -44,13 +43,39 @@ func (r *ComponentRepository) List(filter ComponentFilter, offset, limit int) ([
 	if filter.HasPrice {
 		query = query.Where("current_price IS NOT NULL AND current_price > 0")
 	}
+	if filter.ExcludeHidden {
+		query = query.Where("is_hidden = ? OR is_hidden IS NULL", false)
+	}
 
 	query.Count(&total)
 
-	// Sort by popularity + freshness, no price goes last
+	// Apply sorting
+	sortDir := "ASC"
+	if filter.SortDir == "desc" {
+		sortDir = "DESC"
+	}
+
+	switch filter.SortField {
+	case "popularity_score":
+		query = query.Order("popularity_score " + sortDir)
+	case "current_price":
+		query = query.Order("CASE WHEN current_price IS NULL OR current_price = 0 THEN 1 ELSE 0 END").
+			Order("current_price " + sortDir)
+	case "lot_name":
+		query = query.Order("lot_name " + sortDir)
+	case "quote_count":
+		// Sort by quote count from lot_log table
+		query = query.
+			Select("qt_lot_metadata.*, (SELECT COUNT(*) FROM lot_log WHERE lot_log.lot = qt_lot_metadata.lot_name) as quote_count_sort").
+			Order("quote_count_sort " + sortDir)
+	default:
+		// Default: sort by popularity, no price goes last
+		query = query.
+			Order("CASE WHEN current_price IS NULL OR current_price = 0 THEN 1 ELSE 0 END").
+			Order("popularity_score DESC")
+	}
+
 	err := query.
-		Order("CASE WHEN current_price IS NULL OR current_price = 0 THEN 1 ELSE 0 END").
-		Order("popularity_score DESC").
 		Offset(offset).
 		Limit(limit).
 		Find(&components).Error
@@ -85,19 +110,12 @@ func (r *ComponentRepository) Update(component *models.LotMetadata) error {
 	return r.db.Save(component).Error
 }
 
-func (r *ComponentRepository) Create(component *models.LotMetadata) error {
-	return r.db.Create(component).Error
+func (r *ComponentRepository) DB() *gorm.DB {
+	return r.db
 }
 
-func (r *ComponentRepository) GetVendors(category string) ([]string, error) {
-	var vendors []string
-	query := r.db.Model(&models.LotMetadata{}).Distinct("vendor")
-	if category != "" {
-		query = query.Joins("JOIN qt_categories ON qt_lot_metadata.category_id = qt_categories.id").
-			Where("qt_categories.code = ?", category)
-	}
-	err := query.Pluck("vendor", &vendors).Error
-	return vendors, err
+func (r *ComponentRepository) Create(component *models.LotMetadata) error {
+	return r.db.Create(component).Error
 }
 
 func (r *ComponentRepository) IncrementRequestCount(lotName string) error {
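The default ordering in the diff above (`CASE WHEN current_price IS NULL OR current_price = 0 THEN 1 ELSE 0 END`, then `popularity_score DESC`) puts priced components first and sorts each group by popularity. The same rule can be sketched in plain Go; the `component` struct here is illustrative, not the actual `models.LotMetadata`:

```go
package main

import (
	"fmt"
	"sort"
)

// component carries just the two fields the default ordering uses.
type component struct {
	Name       string
	Price      *float64
	Popularity int
}

// sortDefault reproduces the SQL ordering in Go:
// unpriced (NULL or zero price) items sink to the end,
// then each group is sorted by popularity descending.
func sortDefault(cs []component) {
	sort.SliceStable(cs, func(i, j int) bool {
		ui := cs[i].Price == nil || *cs[i].Price == 0
		uj := cs[j].Price == nil || *cs[j].Price == 0
		if ui != uj {
			return !ui // priced items come first
		}
		return cs[i].Popularity > cs[j].Popularity
	})
}

func main() {
	p := func(v float64) *float64 { return &v }
	cs := []component{
		{"no-price", nil, 99},
		{"cheap", p(10), 5},
		{"popular", p(20), 50},
	}
	sortDefault(cs)
	for _, c := range cs {
		fmt.Println(c.Name) // popular, cheap, no-price
	}
}
```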
@@ -1,7 +1,7 @@
 package repository
 
 import (
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
 
@@ -19,7 +19,7 @@ func (r *ConfigurationRepository) Create(config *models.Configuration) error {
 
 func (r *ConfigurationRepository) GetByID(id uint) (*models.Configuration, error) {
 	var config models.Configuration
-	err := r.db.Preload("User").First(&config, id).Error
+	err := r.db.First(&config, id).Error
 	if err != nil {
 		return nil, err
 	}
@@ -28,7 +28,7 @@ func (r *ConfigurationRepository) GetByID(id uint) (*models.Configuration, error
 
 func (r *ConfigurationRepository) GetByUUID(uuid string) (*models.Configuration, error) {
 	var config models.Configuration
-	err := r.db.Preload("User").Where("uuid = ?", uuid).First(&config).Error
+	err := r.db.Where("uuid = ?", uuid).First(&config).Error
 	if err != nil {
 		return nil, err
 	}
@@ -43,13 +43,15 @@ func (r *ConfigurationRepository) Delete(id uint) error {
 	return r.db.Delete(&models.Configuration{}, id).Error
 }
 
-func (r *ConfigurationRepository) ListByUser(userID uint, offset, limit int) ([]models.Configuration, int64, error) {
+func (r *ConfigurationRepository) ListByUser(ownerUsername string, offset, limit int) ([]models.Configuration, int64, error) {
 	var configs []models.Configuration
 	var total int64
 
-	r.db.Model(&models.Configuration{}).Where("user_id = ?", userID).Count(&total)
+	ownerScope := "owner_username = ?"
+
+	r.db.Model(&models.Configuration{}).Where(ownerScope, ownerUsername).Count(&total)
 	err := r.db.
-		Where("user_id = ?", userID).
+		Where(ownerScope, ownerUsername).
 		Order("created_at DESC").
 		Offset(offset).
 		Limit(limit).
@@ -64,7 +66,6 @@ func (r *ConfigurationRepository) ListTemplates(offset, limit int) ([]models.Con
 
 	r.db.Model(&models.Configuration{}).Where("is_template = ?", true).Count(&total)
 	err := r.db.
-		Preload("User").
 		Where("is_template = ?", true).
 		Order("created_at DESC").
 		Offset(offset).
@@ -73,3 +74,18 @@ func (r *ConfigurationRepository) ListTemplates(offset, limit int) ([]models.Con
 
 	return configs, total, err
 }
+
+// ListAll returns all configurations without user filter
+func (r *ConfigurationRepository) ListAll(offset, limit int) ([]models.Configuration, int64, error) {
+	var configs []models.Configuration
+	var total int64
+
+	r.db.Model(&models.Configuration{}).Count(&total)
+	err := r.db.
+		Order("created_at DESC").
+		Offset(offset).
+		Limit(limit).
+		Find(&configs).Error
+
+	return configs, total, err
+}
@@ -3,7 +3,7 @@ package repository
 import (
 	"time"
 
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
 
@@ -97,3 +97,28 @@ func (r *PriceRepository) GetQuoteCount(lotName string, periodDays int) (int64,
 
 	return count, err
 }
+
+// GetQuoteCounts returns quote counts for multiple lot names
+func (r *PriceRepository) GetQuoteCounts(lotNames []string) (map[string]int64, error) {
+	type Result struct {
+		Lot   string
+		Count int64
+	}
+	var results []Result
+
+	err := r.db.Model(&models.LotLog{}).
+		Select("lot, COUNT(*) as count").
+		Where("lot IN ?", lotNames).
+		Group("lot").
+		Scan(&results).Error
+
+	if err != nil {
+		return nil, err
+	}
+
+	counts := make(map[string]int64)
+	for _, r := range results {
+		counts[r.Lot] = r.Count
+	}
+	return counts, nil
+}
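`GetQuoteCounts` above pushes the aggregation into SQL (`GROUP BY lot`) and then folds the grouped rows into a map. The equivalent of that grouping, computed in plain Go over a slice of lot names, is a one-map fold — `countByLot` is an illustrative name, not part of the repository:

```go
package main

import "fmt"

// countByLot mirrors what "SELECT lot, COUNT(*) ... GROUP BY lot" produces,
// computed in memory over a plain slice of lot names.
func countByLot(lots []string) map[string]int64 {
	counts := make(map[string]int64)
	for _, lot := range lots {
		counts[lot]++
	}
	return counts
}

func main() {
	counts := countByLot([]string{"A", "B", "A", "A"})
	fmt.Println(counts["A"], counts["B"]) // 3 1
}
```

Doing the grouping in the database, as the repository does, avoids loading every `lot_log` row just to count it.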
512	internal/repository/pricelist.go	Normal file
@@ -0,0 +1,512 @@
package repository

import (
	"errors"
	"fmt"
	"sort"
	"strconv"
	"strings"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/lotmatch"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)

type PricelistRepository struct {
	db *gorm.DB
}

func NewPricelistRepository(db *gorm.DB) *PricelistRepository {
	return &PricelistRepository{db: db}
}

// List returns pricelists with pagination
func (r *PricelistRepository) List(offset, limit int) ([]models.PricelistSummary, int64, error) {
	return r.ListBySource("", offset, limit)
}

// ListBySource returns pricelists filtered by source when provided.
func (r *PricelistRepository) ListBySource(source string, offset, limit int) ([]models.PricelistSummary, int64, error) {
	query := r.db.Model(&models.Pricelist{}).
		Where("EXISTS (SELECT 1 FROM qt_pricelist_items WHERE qt_pricelist_items.pricelist_id = qt_pricelists.id)")
	if source != "" {
		query = query.Where("source = ?", source)
	}

	var total int64
	if err := query.Count(&total).Error; err != nil {
		return nil, 0, fmt.Errorf("counting pricelists: %w", err)
	}

	var pricelists []models.Pricelist
	if err := query.Order("created_at DESC").Offset(offset).Limit(limit).Find(&pricelists).Error; err != nil {
		return nil, 0, fmt.Errorf("listing pricelists: %w", err)
	}

	return r.toSummaries(pricelists), total, nil
}

// ListActive returns active pricelists with pagination.
func (r *PricelistRepository) ListActive(offset, limit int) ([]models.PricelistSummary, int64, error) {
	return r.ListActiveBySource("", offset, limit)
}

// ListActiveBySource returns active pricelists filtered by source when provided.
func (r *PricelistRepository) ListActiveBySource(source string, offset, limit int) ([]models.PricelistSummary, int64, error) {
	query := r.db.Model(&models.Pricelist{}).
		Where("is_active = ?", true).
		Where("EXISTS (SELECT 1 FROM qt_pricelist_items WHERE qt_pricelist_items.pricelist_id = qt_pricelists.id)")
	if source != "" {
		query = query.Where("source = ?", source)
	}

	var total int64
	if err := query.Count(&total).Error; err != nil {
		return nil, 0, fmt.Errorf("counting active pricelists: %w", err)
	}

	var pricelists []models.Pricelist
	if err := query.Order("created_at DESC").Offset(offset).Limit(limit).Find(&pricelists).Error; err != nil {
		return nil, 0, fmt.Errorf("listing active pricelists: %w", err)
	}

	return r.toSummaries(pricelists), total, nil
}

// CountActive returns the number of active pricelists.
func (r *PricelistRepository) CountActive() (int64, error) {
	var total int64
	if err := r.db.Model(&models.Pricelist{}).Where("is_active = ?", true).Count(&total).Error; err != nil {
		return 0, fmt.Errorf("counting active pricelists: %w", err)
	}
	return total, nil
}

func (r *PricelistRepository) toSummaries(pricelists []models.Pricelist) []models.PricelistSummary {
	// Get item counts for each pricelist
	summaries := make([]models.PricelistSummary, len(pricelists))
	for i, pl := range pricelists {
		var itemCount int64
		r.db.Model(&models.PricelistItem{}).Where("pricelist_id = ?", pl.ID).Count(&itemCount)
		usageCount, _ := r.CountUsage(pl.ID)

		summaries[i] = models.PricelistSummary{
			ID:           pl.ID,
			Source:       pl.Source,
			Version:      pl.Version,
			Notification: pl.Notification,
			CreatedAt:    pl.CreatedAt,
			CreatedBy:    pl.CreatedBy,
			IsActive:     pl.IsActive,
			UsageCount:   int(usageCount),
			ExpiresAt:    pl.ExpiresAt,
			ItemCount:    itemCount,
		}
	}

	return summaries
}

// GetByID returns a pricelist by ID
func (r *PricelistRepository) GetByID(id uint) (*models.Pricelist, error) {
	var pricelist models.Pricelist
	if err := r.db.First(&pricelist, id).Error; err != nil {
		return nil, fmt.Errorf("getting pricelist: %w", err)
	}

	// Get item count
	var itemCount int64
	r.db.Model(&models.PricelistItem{}).Where("pricelist_id = ?", id).Count(&itemCount)
	pricelist.ItemCount = int(itemCount)
	if usageCount, err := r.CountUsage(id); err == nil {
		pricelist.UsageCount = int(usageCount)
	}

	return &pricelist, nil
}

// GetByVersion returns a pricelist by version string
func (r *PricelistRepository) GetByVersion(version string) (*models.Pricelist, error) {
	return r.GetBySourceAndVersion(string(models.PricelistSourceEstimate), version)
}

// GetBySourceAndVersion returns a pricelist by source/version.
func (r *PricelistRepository) GetBySourceAndVersion(source, version string) (*models.Pricelist, error) {
	var pricelist models.Pricelist
	if err := r.db.Where("source = ? AND version = ?", source, version).First(&pricelist).Error; err != nil {
		return nil, fmt.Errorf("getting pricelist by version: %w", err)
	}
	return &pricelist, nil
}

// GetLatestActive returns the most recent active pricelist
func (r *PricelistRepository) GetLatestActive() (*models.Pricelist, error) {
	return r.GetLatestActiveBySource(string(models.PricelistSourceEstimate))
}

// GetLatestActiveBySource returns the most recent active pricelist by source.
func (r *PricelistRepository) GetLatestActiveBySource(source string) (*models.Pricelist, error) {
	var pricelist models.Pricelist
	if err := r.db.Where("is_active = ? AND source = ?", true, source).Order("created_at DESC").First(&pricelist).Error; err != nil {
		return nil, fmt.Errorf("getting latest pricelist: %w", err)
	}
	return &pricelist, nil
}

// Create creates a new pricelist
func (r *PricelistRepository) Create(pricelist *models.Pricelist) error {
	if err := r.db.Create(pricelist).Error; err != nil {
		return fmt.Errorf("creating pricelist: %w", err)
	}
	return nil
}

// Update updates a pricelist
func (r *PricelistRepository) Update(pricelist *models.Pricelist) error {
	if err := r.db.Save(pricelist).Error; err != nil {
		return fmt.Errorf("updating pricelist: %w", err)
	}
	return nil
}

// Delete deletes a pricelist if usage_count is 0
func (r *PricelistRepository) Delete(id uint) error {
	usageCount, err := r.CountUsage(id)
	if err != nil {
		return err
	}

	if usageCount > 0 {
		return fmt.Errorf("cannot delete pricelist with usage_count > 0 (current: %d)", usageCount)
	}

	// Delete items first
	if err := r.db.Where("pricelist_id = ?", id).Delete(&models.PricelistItem{}).Error; err != nil {
		return fmt.Errorf("deleting pricelist items: %w", err)
	}

	// Delete pricelist
	if err := r.db.Delete(&models.Pricelist{}, id).Error; err != nil {
		return fmt.Errorf("deleting pricelist: %w", err)
	}

	return nil
}

// CreateItems batch inserts pricelist items
func (r *PricelistRepository) CreateItems(items []models.PricelistItem) error {
	if len(items) == 0 {
		return nil
	}

	// Use batch insert for better performance
	batchSize := 500
	for i := 0; i < len(items); i += batchSize {
		end := i + batchSize
		if end > len(items) {
			end = len(items)
		}
		if err := r.db.CreateInBatches(items[i:end], batchSize).Error; err != nil {
			return fmt.Errorf("batch inserting pricelist items: %w", err)
		}
	}
	return nil
}
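`CreateItems` above windows the slice manually before handing each window to `CreateInBatches`. The windowing step alone is plain slice arithmetic; a stdlib-only sketch (`batches` is an illustrative helper name):

```go
package main

import "fmt"

// batches splits items into windows of at most n elements,
// the same windowing CreateItems performs with batchSize = 500.
func batches(items []int, n int) [][]int {
	var out [][]int
	for i := 0; i < len(items); i += n {
		end := i + n
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[i:end])
	}
	return out
}

func main() {
	items := make([]int, 1250)
	for _, b := range batches(items, 500) {
		fmt.Println(len(b)) // 500, 500, 250
	}
}
```

Since each window already holds at most `batchSize` items, GORM's `CreateInBatches` receives a single batch per call; passing the full slice once as `CreateInBatches(items, 500)` would be equivalent, because GORM performs the chunking itself.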
|
|
||||||
|
// GetItems returns pricelist items with pagination
|
||||||
|
func (r *PricelistRepository) GetItems(pricelistID uint, offset, limit int, search string) ([]models.PricelistItem, int64, error) {
|
||||||
|
var total int64
|
||||||
|
query := r.db.Model(&models.PricelistItem{}).Where("pricelist_id = ?", pricelistID)
|
||||||
|
|
||||||
|
if search != "" {
|
||||||
|
query = query.Where("lot_name LIKE ?", "%"+search+"%")
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := query.Count(&total).Error; err != nil {
|
||||||
|
return nil, 0, fmt.Errorf("counting pricelist items: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var items []models.PricelistItem
|
||||||
|
if err := query.Order("lot_name").Offset(offset).Limit(limit).Find(&items).Error; err != nil {
|
||||||
|
return nil, 0, fmt.Errorf("listing pricelist items: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Enrich with lot descriptions
|
||||||
|
for i := range items {
|
||||||
|
var lot models.Lot
|
||||||
|
if err := r.db.Where("lot_name = ?", items[i].LotName).First(&lot).Error; err == nil {
|
||||||
|
items[i].LotDescription = lot.LotDescription
|
||||||
|
}
|
||||||
|
items[i].Category = strings.TrimSpace(items[i].LotCategory)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := r.enrichItemsWithStock(items); err != nil {
|
||||||
|
return nil, 0, fmt.Errorf("enriching pricelist items with stock: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return items, total, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *PricelistRepository) enrichItemsWithStock(items []models.PricelistItem) error {
	if len(items) == 0 {
		return nil
	}

	resolver, err := lotmatch.NewLotResolverFromDB(r.db)
	if err != nil {
		return err
	}

	type stockRow struct {
		Partnumber string   `gorm:"column:partnumber"`
		Qty        *float64 `gorm:"column:qty"`
	}
	rows := make([]stockRow, 0)
	if err := r.db.Raw(`
		SELECT s.partnumber, s.qty
		FROM stock_log s
		INNER JOIN (
			SELECT partnumber, MAX(date) AS max_date
			FROM stock_log
			GROUP BY partnumber
		) latest ON latest.partnumber = s.partnumber AND latest.max_date = s.date
		WHERE s.qty IS NOT NULL
	`).Scan(&rows).Error; err != nil {
		return err
	}

	lotTotals := make(map[string]float64, len(items))
	lotPartnumbers := make(map[string][]string, len(items))
	seenPartnumbers := make(map[string]map[string]struct{}, len(items))

	for i := range rows {
		row := rows[i]
		if strings.TrimSpace(row.Partnumber) == "" {
			continue
		}
		lotName, _, resolveErr := resolver.Resolve(row.Partnumber)
		if resolveErr != nil || strings.TrimSpace(lotName) == "" {
			continue
		}

		if row.Qty != nil {
			lotTotals[lotName] += *row.Qty
		}

		pn := strings.TrimSpace(row.Partnumber)
		if pn == "" {
			continue
		}
		if _, ok := seenPartnumbers[lotName]; !ok {
			seenPartnumbers[lotName] = make(map[string]struct{}, 4)
		}
		key := strings.ToLower(pn)
		if _, exists := seenPartnumbers[lotName][key]; exists {
			continue
		}
		seenPartnumbers[lotName][key] = struct{}{}
		lotPartnumbers[lotName] = append(lotPartnumbers[lotName], pn)
	}

	for i := range items {
		lotName := items[i].LotName
		if qty, ok := lotTotals[lotName]; ok {
			qtyCopy := qty
			items[i].AvailableQty = &qtyCopy
		}
		if partnumbers := lotPartnumbers[lotName]; len(partnumbers) > 0 {
			sort.Slice(partnumbers, func(a, b int) bool {
				return strings.ToLower(partnumbers[a]) < strings.ToLower(partnumbers[b])
			})
			items[i].Partnumbers = partnumbers
		}
	}

	return nil
}

// GetLotNames returns distinct lot names from pricelist items.
func (r *PricelistRepository) GetLotNames(pricelistID uint) ([]string, error) {
	var lotNames []string
	if err := r.db.Model(&models.PricelistItem{}).
		Where("pricelist_id = ?", pricelistID).
		Distinct("lot_name").
		Order("lot_name ASC").
		Pluck("lot_name", &lotNames).Error; err != nil {
		return nil, fmt.Errorf("listing pricelist lot names: %w", err)
	}
	return lotNames, nil
}

// GetPriceForLot returns item price for a lot within a pricelist.
func (r *PricelistRepository) GetPriceForLot(pricelistID uint, lotName string) (float64, error) {
	var item models.PricelistItem
	if err := r.db.Where("pricelist_id = ? AND lot_name = ?", pricelistID, lotName).First(&item).Error; err != nil {
		return 0, err
	}
	return item.Price, nil
}

// GetPricesForLots returns price map for given lots within a pricelist.
func (r *PricelistRepository) GetPricesForLots(pricelistID uint, lotNames []string) (map[string]float64, error) {
	result := make(map[string]float64, len(lotNames))
	if pricelistID == 0 || len(lotNames) == 0 {
		return result, nil
	}

	var rows []models.PricelistItem
	if err := r.db.Select("lot_name, price").
		Where("pricelist_id = ? AND lot_name IN ?", pricelistID, lotNames).
		Find(&rows).Error; err != nil {
		return nil, err
	}

	for _, row := range rows {
		if row.Price > 0 {
			result[row.LotName] = row.Price
		}
	}
	return result, nil
}

// SetActive toggles active flag on a pricelist.
func (r *PricelistRepository) SetActive(id uint, isActive bool) error {
	return r.db.Model(&models.Pricelist{}).Where("id = ?", id).Update("is_active", isActive).Error
}

// GenerateVersion generates a new version string in the format
// P-YYYY-MM-DD-NNN, where P is the source prefix ("E" for estimates).
func (r *PricelistRepository) GenerateVersion() (string, error) {
	return r.GenerateVersionBySource(string(models.PricelistSourceEstimate))
}

// GenerateVersionBySource generates a new version string in the format
// P-YYYY-MM-DD-NNN, scoped by source.
func (r *PricelistRepository) GenerateVersionBySource(source string) (string, error) {
	today := time.Now().Format("2006-01-02")
	prefix := versionPrefixBySource(source)

	var last models.Pricelist
	err := r.db.Model(&models.Pricelist{}).
		Select("version").
		Where("source = ? AND version LIKE ?", source, prefix+"-"+today+"-%").
		Order("version DESC").
		Limit(1).
		Take(&last).Error
	if err != nil {
		if errors.Is(err, gorm.ErrRecordNotFound) {
			return fmt.Sprintf("%s-%s-001", prefix, today), nil
		}
		return "", fmt.Errorf("loading latest today's pricelist version: %w", err)
	}

	parts := strings.Split(last.Version, "-")
	if len(parts) < 4 {
		return "", fmt.Errorf("invalid pricelist version format: %s", last.Version)
	}

	n, err := strconv.Atoi(parts[len(parts)-1])
	if err != nil {
		return "", fmt.Errorf("parsing pricelist sequence %q: %w", parts[len(parts)-1], err)
	}

	return fmt.Sprintf("%s-%s-%03d", prefix, today, n+1), nil
}

func versionPrefixBySource(source string) string {
	switch models.NormalizePricelistSource(source) {
	case models.PricelistSourceWarehouse:
		return "S"
	case models.PricelistSourceCompetitor:
		return "B"
	default:
		return "E"
	}
}

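The version scheme can be sketched outside the repository: the last dash-separated field of `P-YYYY-MM-DD-NNN` is a zero-padded sequence that is parsed and incremented, exactly as `GenerateVersionBySource` does after loading the latest row. `nextVersion` is a helper invented for this illustration, not a repository function:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextVersion increments the trailing NNN sequence of a version string,
// preserving the zero-padding via %03d.
func nextVersion(last string) (string, error) {
	parts := strings.Split(last, "-")
	if len(parts) < 4 {
		return "", fmt.Errorf("invalid version format: %s", last)
	}
	n, err := strconv.Atoi(parts[len(parts)-1])
	if err != nil {
		return "", err
	}
	prefix := strings.Join(parts[:len(parts)-1], "-")
	return fmt.Sprintf("%s-%03d", prefix, n+1), nil
}

func main() {
	v, _ := nextVersion("E-2026-02-07-003")
	fmt.Println(v) // E-2026-02-07-004
}
```

Because the sequence is zero-padded to three digits, `Order("version DESC")` on the string column yields the numerically latest version for up to 999 pricelists per source per day.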
// GetPriceForLotBySource returns item price for a lot from the latest active pricelist of source.
func (r *PricelistRepository) GetPriceForLotBySource(source, lotName string) (float64, uint, error) {
	latest, err := r.GetLatestActiveBySource(source)
	if err != nil {
		return 0, 0, err
	}
	price, err := r.GetPriceForLot(latest.ID, lotName)
	if err != nil {
		return 0, 0, err
	}
	return price, latest.ID, nil
}

// CanWrite checks if the current database user has INSERT permission on qt_pricelists.
func (r *PricelistRepository) CanWrite() bool {
	canWrite, _ := r.CanWriteDebug()
	return canWrite
}

// CanWriteDebug checks write permission and returns debug info.
// Uses raw SQL with explicit columns to avoid schema mismatch issues.
func (r *PricelistRepository) CanWriteDebug() (bool, string) {
	// Check if the table exists first.
	var count int64
	if err := r.db.Table("qt_pricelists").Count(&count).Error; err != nil {
		return false, fmt.Sprintf("table check failed: %v", err)
	}

	// Use raw SQL with only essential columns that always exist.
	// This avoids GORM model validation and schema mismatch issues.
	tx := r.db.Begin()
	if tx.Error != nil {
		return false, fmt.Sprintf("begin tx failed: %v", tx.Error)
	}
	defer tx.Rollback() // Always roll back - this is just a permission test.

	testVersion := fmt.Sprintf("test-%06d", time.Now().Unix()%1000000)

	// Raw SQL insert with only core columns.
	err := tx.Exec(`
		INSERT INTO qt_pricelists (version, created_by, is_active)
		VALUES (?, 'system', 1)
	`, testVersion).Error

	if err != nil {
		// Distinguish a permission error from other failures.
		errStr := err.Error()
		if strings.Contains(errStr, "INSERT command denied") ||
			strings.Contains(errStr, "Access denied") {
			return false, "no write permission"
		}
		return false, fmt.Sprintf("insert failed: %v", err)
	}

	return true, "ok"
}

// IncrementUsageCount increments the usage count for a pricelist.
func (r *PricelistRepository) IncrementUsageCount(id uint) error {
	return r.db.Model(&models.Pricelist{}).Where("id = ?", id).
		UpdateColumn("usage_count", gorm.Expr("usage_count + 1")).Error
}

// DecrementUsageCount decrements the usage count for a pricelist.
func (r *PricelistRepository) DecrementUsageCount(id uint) error {
	return r.db.Model(&models.Pricelist{}).Where("id = ?", id).
		UpdateColumn("usage_count", gorm.Expr("GREATEST(usage_count - 1, 0)")).Error
}

// CountUsage returns number of configurations referencing pricelist.
func (r *PricelistRepository) CountUsage(id uint) (int64, error) {
	var count int64
	if err := r.db.Table("qt_configurations").Where("pricelist_id = ?", id).Count(&count).Error; err != nil {
		return 0, fmt.Errorf("counting configurations for pricelist %d: %w", id, err)
	}
	return count, nil
}

// GetExpiredUnused returns pricelists that are expired and unused.
func (r *PricelistRepository) GetExpiredUnused() ([]models.Pricelist, error) {
	var pricelists []models.Pricelist
	if err := r.db.Where("expires_at < ? AND usage_count = 0", time.Now()).
		Find(&pricelists).Error; err != nil {
		return nil, fmt.Errorf("getting expired pricelists: %w", err)
	}
	return pricelists, nil
}
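`DecrementUsageCount` pushes the clamp into SQL with `GREATEST(usage_count - 1, 0)` so concurrent decrements can never drive the counter negative. The same invariant, sketched in Go for clarity (`clampedDecrement` is a helper invented for this illustration):

```go
package main

import "fmt"

// clampedDecrement mirrors GREATEST(n - 1, 0): decrement, but never
// return a negative counter.
func clampedDecrement(n int64) int64 {
	if n <= 0 {
		return 0
	}
	return n - 1
}

func main() {
	fmt.Println(clampedDecrement(5), clampedDecrement(0)) // 4 0
}
```

Doing the clamp in the `UPDATE` itself (rather than read-modify-write in Go) keeps the operation atomic under concurrent callers.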
140
internal/repository/pricelist_test.go
Normal file
@@ -0,0 +1,140 @@
package repository

import (
	"fmt"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/models"
	"github.com/glebarez/sqlite"
	"gorm.io/gorm"
)

func TestGenerateVersion_FirstOfDay(t *testing.T) {
	repo := newTestPricelistRepository(t)

	version, err := repo.GenerateVersionBySource(string(models.PricelistSourceEstimate))
	if err != nil {
		t.Fatalf("GenerateVersionBySource returned error: %v", err)
	}

	today := time.Now().Format("2006-01-02")
	want := fmt.Sprintf("E-%s-001", today)
	if version != want {
		t.Fatalf("expected %s, got %s", want, version)
	}
}

func TestGenerateVersion_UsesMaxSuffixNotCount(t *testing.T) {
	repo := newTestPricelistRepository(t)
	today := time.Now().Format("2006-01-02")

	seed := []models.Pricelist{
		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-001", today), CreatedBy: "test", IsActive: true},
		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-003", today), CreatedBy: "test", IsActive: true},
	}
	for _, pl := range seed {
		if err := repo.Create(&pl); err != nil {
			t.Fatalf("seed insert failed: %v", err)
		}
	}

	version, err := repo.GenerateVersionBySource(string(models.PricelistSourceEstimate))
	if err != nil {
		t.Fatalf("GenerateVersionBySource returned error: %v", err)
	}

	want := fmt.Sprintf("E-%s-004", today)
	if version != want {
		t.Fatalf("expected %s, got %s", want, version)
	}
}

func TestGenerateVersion_IsolatedBySource(t *testing.T) {
	repo := newTestPricelistRepository(t)
	today := time.Now().Format("2006-01-02")

	seed := []models.Pricelist{
		{Source: string(models.PricelistSourceEstimate), Version: fmt.Sprintf("E-%s-009", today), CreatedBy: "test", IsActive: true},
		{Source: string(models.PricelistSourceWarehouse), Version: fmt.Sprintf("S-%s-002", today), CreatedBy: "test", IsActive: true},
	}
	for _, pl := range seed {
		if err := repo.Create(&pl); err != nil {
			t.Fatalf("seed insert failed: %v", err)
		}
	}

	version, err := repo.GenerateVersionBySource(string(models.PricelistSourceWarehouse))
	if err != nil {
		t.Fatalf("GenerateVersionBySource returned error: %v", err)
	}

	want := fmt.Sprintf("S-%s-003", today)
	if version != want {
		t.Fatalf("expected %s, got %s", want, version)
	}
}

func TestGetItems_WarehouseAvailableQtyUsesPrefixResolver(t *testing.T) {
	repo := newTestPricelistRepository(t)
	db := repo.db

	warehouse := models.Pricelist{
		Source:    string(models.PricelistSourceWarehouse),
		Version:   "S-2026-02-07-001",
		CreatedBy: "test",
		IsActive:  true,
	}
	if err := db.Create(&warehouse).Error; err != nil {
		t.Fatalf("create pricelist: %v", err)
	}
	if err := db.Create(&models.PricelistItem{
		PricelistID: warehouse.ID,
		LotName:     "SSD_NVME_03.2T",
		Price:       100,
	}).Error; err != nil {
		t.Fatalf("create pricelist item: %v", err)
	}
	if err := db.Create(&models.Lot{LotName: "SSD_NVME_03.2T"}).Error; err != nil {
		t.Fatalf("create lot: %v", err)
	}
	qty := 5.0
	if err := db.Create(&models.StockLog{
		Partnumber: "SSD_NVME_03.2T_GEN3_P4610",
		Date:       time.Now(),
		Price:      200,
		Qty:        &qty,
	}).Error; err != nil {
		t.Fatalf("create stock log: %v", err)
	}

	items, total, err := repo.GetItems(warehouse.ID, 0, 20, "")
	if err != nil {
		t.Fatalf("GetItems: %v", err)
	}
	if total != 1 {
		t.Fatalf("expected total=1, got %d", total)
	}
	if len(items) != 1 {
		t.Fatalf("expected 1 item, got %d", len(items))
	}
	if items[0].AvailableQty == nil {
		t.Fatalf("expected available qty to be set")
	}
	if *items[0].AvailableQty != 5 {
		t.Fatalf("expected available qty=5, got %v", *items[0].AvailableQty)
	}
}

func newTestPricelistRepository(t *testing.T) *PricelistRepository {
	t.Helper()

	// Use a per-test named in-memory database: a bare
	// "file::memory:?cache=shared" DSN would share a single database across
	// every test in the process, letting seeded rows leak between tests.
	dsn := fmt.Sprintf("file:%s?mode=memory&cache=shared", t.Name())
	db, err := gorm.Open(sqlite.Open(dsn), &gorm.Config{})
	if err != nil {
		t.Fatalf("open sqlite: %v", err)
	}
	if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}, &models.Lot{}, &models.LotPartnumber{}, &models.StockLog{}); err != nil {
		t.Fatalf("migrate: %v", err)
	}
	return NewPricelistRepository(db)
}
196
internal/repository/project.go
Normal file
@@ -0,0 +1,196 @@
package repository

import (
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

type ProjectRepository struct {
	db *gorm.DB
}

func NewProjectRepository(db *gorm.DB) *ProjectRepository {
	return &ProjectRepository{db: db}
}

func (r *ProjectRepository) Create(project *models.Project) error {
	return r.db.Create(project).Error
}

func (r *ProjectRepository) Update(project *models.Project) error {
	return r.db.Save(project).Error
}

func (r *ProjectRepository) UpsertByUUID(project *models.Project) error {
	if err := r.db.Clauses(clause.OnConflict{
		Columns: []clause.Column{{Name: "uuid"}},
		DoUpdates: clause.AssignmentColumns([]string{
			"owner_username",
			"code",
			"variant",
			"name",
			"tracker_url",
			"is_active",
			"is_system",
			"updated_at",
		}),
	}).Create(project).Error; err != nil {
		return err
	}

	// Ensure caller always gets canonical server ID.
	var persisted models.Project
	if err := r.db.Where("uuid = ?", project.UUID).First(&persisted).Error; err != nil {
		return err
	}
	project.ID = persisted.ID
	return nil
}

func (r *ProjectRepository) GetByUUID(uuid string) (*models.Project, error) {
	var project models.Project
	if err := r.db.Where("uuid = ?", uuid).First(&project).Error; err != nil {
		return nil, err
	}
	return &project, nil
}

// GetSystemByOwner returns the owner's system project named "Без проекта" ("No project").
func (r *ProjectRepository) GetSystemByOwner(ownerUsername string) (*models.Project, error) {
	var project models.Project
	if err := r.db.Where("owner_username = ? AND is_system = ? AND name = ?", ownerUsername, true, "Без проекта").
		First(&project).Error; err != nil {
		return nil, err
	}
	return &project, nil
}

func (r *ProjectRepository) List(offset, limit int, includeArchived bool) ([]models.Project, int64, error) {
	var projects []models.Project
	var total int64

	query := r.db.Model(&models.Project{})
	if !includeArchived {
		query = query.Where("is_active = ?", true)
	}

	if err := query.Count(&total).Error; err != nil {
		return nil, 0, err
	}

	if err := query.Order("created_at DESC").Offset(offset).Limit(limit).Find(&projects).Error; err != nil {
		return nil, 0, err
	}

	return projects, total, nil
}

func (r *ProjectRepository) ListByOwner(ownerUsername string, includeArchived bool) ([]models.Project, error) {
	var projects []models.Project

	query := r.db.Where("owner_username = ?", ownerUsername)
	if !includeArchived {
		query = query.Where("is_active = ?", true)
	}

	if err := query.Order("created_at DESC").Find(&projects).Error; err != nil {
		return nil, err
	}
	return projects, nil
}

func (r *ProjectRepository) Archive(uuid string) error {
	return r.db.Model(&models.Project{}).Where("uuid = ?", uuid).Update("is_active", false).Error
}

func (r *ProjectRepository) Reactivate(uuid string) error {
	return r.db.Model(&models.Project{}).Where("uuid = ?", uuid).Update("is_active", true).Error
}

// PurgeEmptyNamelessProjects removes leftover service projects that have no
// configurations attached:
// 1) projects with empty names;
// 2) duplicate "Без проекта" ("No project") rows without configurations (case-insensitive, trimmed).
func (r *ProjectRepository) PurgeEmptyNamelessProjects() (int64, error) {
	tx := r.db.Exec(`
		DELETE p
		FROM qt_projects p
		WHERE (
			TRIM(COALESCE(p.name, '')) = ''
			OR LOWER(TRIM(COALESCE(p.name, ''))) = LOWER('Без проекта')
		)
		AND NOT EXISTS (
			SELECT 1
			FROM qt_configurations c
			WHERE c.project_uuid = p.uuid
		)`)
	return tx.RowsAffected, tx.Error
}

// EnsureSystemProjectsAndBackfillConfigurations ensures there is a single shared system project
// named "Без проекта" ("No project"), reassigns orphan/legacy links to it and removes duplicates.
func (r *ProjectRepository) EnsureSystemProjectsAndBackfillConfigurations() error {
	return r.db.Transaction(func(tx *gorm.DB) error {
		type row struct {
			UUID string `gorm:"column:uuid"`
		}
		var canonical row
		err := tx.Raw(`
			SELECT uuid
			FROM qt_projects
			WHERE LOWER(TRIM(COALESCE(name, ''))) = LOWER('Без проекта')
			  AND is_system = TRUE
			ORDER BY CASE WHEN TRIM(COALESCE(owner_username, '')) = '' THEN 0 ELSE 1 END, created_at ASC, id ASC
			LIMIT 1`).Scan(&canonical).Error
		if err != nil {
			return err
		}
		if canonical.UUID == "" {
			if err := tx.Exec(`
				INSERT INTO qt_projects (uuid, owner_username, name, is_active, is_system, created_at, updated_at)
				VALUES (UUID(), '', 'Без проекта', TRUE, TRUE, NOW(), NOW())`).Error; err != nil {
				return err
			}
			if err := tx.Raw(`
				SELECT uuid
				FROM qt_projects
				WHERE LOWER(TRIM(COALESCE(name, ''))) = LOWER('Без проекта')
				ORDER BY created_at DESC, id DESC
				LIMIT 1`).Scan(&canonical).Error; err != nil {
				return err
			}
			if canonical.UUID == "" {
				return gorm.ErrRecordNotFound
			}
		}

		if err := tx.Exec(`
			UPDATE qt_projects
			SET name = 'Без проекта',
				is_active = TRUE,
				is_system = TRUE
			WHERE uuid = ?`, canonical.UUID).Error; err != nil {
			return err
		}

		if err := tx.Exec(`
			UPDATE qt_configurations
			SET project_uuid = ?
			WHERE project_uuid IS NULL OR project_uuid = ''`, canonical.UUID).Error; err != nil {
			return err
		}

		if err := tx.Exec(`
			UPDATE qt_configurations c
			JOIN qt_projects p ON p.uuid = c.project_uuid
			SET c.project_uuid = ?
			WHERE LOWER(TRIM(COALESCE(p.name, ''))) = LOWER('Без проекта')
			  AND p.uuid <> ?`, canonical.UUID, canonical.UUID).Error; err != nil {
			return err
		}

		return tx.Exec(`
			DELETE FROM qt_projects
			WHERE LOWER(TRIM(COALESCE(name, ''))) = LOWER('Без проекта')
			  AND uuid <> ?`, canonical.UUID).Error
	})
}

@@ -3,7 +3,7 @@ package repository
 import (
 	"time"

-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )

@@ -90,3 +90,26 @@ func (r *StatsRepository) ResetMonthlyCounters() error {
 		Where("1 = 1").
 		Update("quotes_last_30d", 0).Error
 }
+
+// UpdatePopularityScores recalculates popularity_score in qt_lot_metadata
+// based on supplier quotes from the lot_log table.
+func (r *StatsRepository) UpdatePopularityScores() error {
+	// Formula: popularity_score = quotes_last_30d * 3 + quotes_last_90d * 1 + quotes_total * 0.1
+	// This gives more weight to recent supplier activity.
+	return r.db.Exec(`
+		UPDATE qt_lot_metadata m
+		LEFT JOIN (
+			SELECT
+				lot,
+				COUNT(*) as quotes_total,
+				SUM(CASE WHEN date >= DATE_SUB(NOW(), INTERVAL 30 DAY) THEN 1 ELSE 0 END) as quotes_last_30d,
+				SUM(CASE WHEN date >= DATE_SUB(NOW(), INTERVAL 90 DAY) THEN 1 ELSE 0 END) as quotes_last_90d
+			FROM lot_log
+			GROUP BY lot
+		) s ON m.lot_name = s.lot
+		SET m.popularity_score = COALESCE(
+			s.quotes_last_30d * 3 + s.quotes_last_90d * 1 + s.quotes_total * 0.1,
+			0
+		)
+	`).Error
+}
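The weighting that `UpdatePopularityScores` applies in SQL can be checked with a small standalone sketch (`popularityScore` is a helper invented for this illustration; the real computation happens inside the `UPDATE` statement):

```go
package main

import "fmt"

// popularityScore mirrors the SQL formula:
// quotes_last_30d * 3 + quotes_last_90d * 1 + quotes_total * 0.1,
// weighting recent supplier activity most heavily.
func popularityScore(last30, last90, total int) float64 {
	return float64(last30)*3 + float64(last90)*1 + float64(total)*0.1
}

func main() {
	// 2 quotes this month, 5 this quarter, 10 all-time:
	fmt.Println(popularityScore(2, 5, 10)) // 2*3 + 5*1 + 10*0.1 = 12
}
```

Note that the 30-day quotes are counted again by the 90-day and total buckets, so a recent quote effectively contributes 3 + 1 + 0.1 to the score.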
393
internal/repository/unified.go
Normal file
@@ -0,0 +1,393 @@
package repository

import (
	"encoding/json"
	"fmt"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"gorm.io/gorm"
)

// DataSource defines the unified interface for data access.
// It abstracts whether data comes from MariaDB (online) or SQLite (offline).
type DataSource interface {
	// Components
	GetComponents(filter ComponentFilter, offset, limit int) ([]models.LotMetadata, int64, error)
	GetComponent(lotName string) (*models.LotMetadata, error)

	// Configurations
	SaveConfiguration(cfg *models.Configuration) error
	GetConfigurations(ownerUsername string) ([]models.Configuration, error)
	GetConfigurationByUUID(uuid string) (*models.Configuration, error)
	DeleteConfiguration(uuid string) error

	// Pricelists (read-only in offline mode)
	GetPricelists() ([]models.PricelistSummary, error)
	GetPricelistByID(id uint) (*models.Pricelist, error)
	GetPricelistItems(pricelistID uint) ([]models.PricelistItem, error)
	GetLatestPricelist() (*models.Pricelist, error)
}

// UnifiedRepo implements DataSource with automatic online/offline switching.
type UnifiedRepo struct {
	mariaDB  *gorm.DB
	localDB  *localdb.LocalDB
	isOnline bool
}

// NewUnifiedRepo creates a new unified repository.
func NewUnifiedRepo(mariaDB *gorm.DB, localDB *localdb.LocalDB, isOnline bool) *UnifiedRepo {
	return &UnifiedRepo{
		mariaDB:  mariaDB,
		localDB:  localDB,
		isOnline: isOnline,
	}
}

// SetOnlineStatus updates the online/offline status.
func (r *UnifiedRepo) SetOnlineStatus(online bool) {
	r.isOnline = online
}

// IsOnline returns the current online/offline status.
func (r *UnifiedRepo) IsOnline() bool {
	return r.isOnline
}

// Component methods

// GetComponents returns components from MariaDB (online) or the local cache (offline).
func (r *UnifiedRepo) GetComponents(filter ComponentFilter, offset, limit int) ([]models.LotMetadata, int64, error) {
	if r.isOnline {
		return r.getComponentsOnline(filter, offset, limit)
	}
	return r.getComponentsOffline(filter, offset, limit)
}

func (r *UnifiedRepo) getComponentsOnline(filter ComponentFilter, offset, limit int) ([]models.LotMetadata, int64, error) {
	repo := NewComponentRepository(r.mariaDB)
	return repo.List(filter, offset, limit)
}

func (r *UnifiedRepo) getComponentsOffline(filter ComponentFilter, offset, limit int) ([]models.LotMetadata, int64, error) {
	var components []localdb.LocalComponent
	query := r.localDB.DB().Model(&localdb.LocalComponent{})

	// Apply filters
	if filter.Category != "" {
		query = query.Where("category = ?", filter.Category)
	}
	if filter.Search != "" {
		search := "%" + filter.Search + "%"
		query = query.Where("lot_name LIKE ? OR lot_description LIKE ? OR model LIKE ?", search, search, search)
	}
	var total int64
	if err := query.Count(&total).Error; err != nil {
		return nil, 0, fmt.Errorf("counting offline components: %w", err)
	}

	// Apply sorting
	sortDir := "ASC"
	if filter.SortDir == "desc" {
		sortDir = "DESC"
	}
	switch filter.SortField {
	case "lot_name":
		query = query.Order("lot_name " + sortDir)
	default:
		query = query.Order("lot_name ASC")
	}

	if err := query.Offset(offset).Limit(limit).Find(&components).Error; err != nil {
		return nil, 0, fmt.Errorf("fetching offline components: %w", err)
	}

	// Convert to models.LotMetadata
	result := make([]models.LotMetadata, len(components))
	for i, comp := range components {
		result[i] = models.LotMetadata{
			LotName: comp.LotName,
			Model:   comp.Model,
			Lot: &models.Lot{
				LotName:        comp.LotName,
				LotDescription: comp.LotDescription,
			},
		}
	}

	return result, total, nil
}

// GetComponent returns a single component by lot name.
func (r *UnifiedRepo) GetComponent(lotName string) (*models.LotMetadata, error) {
	if r.isOnline {
		repo := NewComponentRepository(r.mariaDB)
		return repo.GetByLotName(lotName)
	}

	var comp localdb.LocalComponent
	if err := r.localDB.DB().Where("lot_name = ?", lotName).First(&comp).Error; err != nil {
		return nil, fmt.Errorf("fetching offline component: %w", err)
	}

	return &models.LotMetadata{
		LotName: comp.LotName,
		Model:   comp.Model,
		Lot: &models.Lot{
			LotName:        comp.LotName,
			LotDescription: comp.LotDescription,
		},
	}, nil
}

// Configuration methods

// SaveConfiguration saves a configuration (online: MariaDB; offline: SQLite + pending_changes).
func (r *UnifiedRepo) SaveConfiguration(cfg *models.Configuration) error {
	if r.isOnline {
		repo := NewConfigurationRepository(r.mariaDB)
		return repo.Create(cfg)
	}

	// Offline: save to local SQLite and queue for sync.
	localCfg := &localdb.LocalConfiguration{
		UUID:             cfg.UUID,
		Name:             cfg.Name,
		TotalPrice:       cfg.TotalPrice,
		CustomPrice:      cfg.CustomPrice,
		Notes:            cfg.Notes,
		IsTemplate:       cfg.IsTemplate,
		ServerCount:      cfg.ServerCount,
		CreatedAt:        cfg.CreatedAt,
		UpdatedAt:        time.Now(),
		SyncStatus:       "pending",
		OriginalUsername: cfg.OwnerUsername,
	}

	// Convert items
	localItems := make(localdb.LocalConfigItems, len(cfg.Items))
	for i, item := range cfg.Items {
		localItems[i] = localdb.LocalConfigItem{
			LotName:   item.LotName,
			Quantity:  item.Quantity,
			UnitPrice: item.UnitPrice,
		}
	}
	localCfg.Items = localItems

	if err := r.localDB.SaveConfiguration(localCfg); err != nil {
		return fmt.Errorf("saving local configuration: %w", err)
	}

	// Add to the pending changes queue.
	payload, err := json.Marshal(cfg)
	if err != nil {
		return fmt.Errorf("marshaling configuration for sync: %w", err)
	}

	return r.localDB.AddPendingChange("configuration", cfg.UUID, "create", string(payload))
}

// GetConfigurations returns all configurations for a user.
func (r *UnifiedRepo) GetConfigurations(ownerUsername string) ([]models.Configuration, error) {
	if r.isOnline {
		repo := NewConfigurationRepository(r.mariaDB)
		configs, _, err := repo.ListByUser(ownerUsername, 0, 1000)
		return configs, err
	}

	// Offline: read from local SQLite.
	localConfigs, err := r.localDB.GetConfigurations()
	if err != nil {
		return nil, fmt.Errorf("fetching local configurations: %w", err)
	}

	// Convert to models.Configuration
	result := make([]models.Configuration, len(localConfigs))
	for i, lc := range localConfigs {
		items := make(models.ConfigItems, len(lc.Items))
		for j, item := range lc.Items {
			items[j] = models.ConfigItem{
				LotName:   item.LotName,
				Quantity:  item.Quantity,
				UnitPrice: item.UnitPrice,
			}
		}

		result[i] = models.Configuration{
			UUID:          lc.UUID,
			OwnerUsername: lc.OriginalUsername,
			Name:          lc.Name,
			Items:         items,
			TotalPrice:    lc.TotalPrice,
			CustomPrice:   lc.CustomPrice,
			Notes:         lc.Notes,
			IsTemplate:    lc.IsTemplate,
			ServerCount:   lc.ServerCount,
			CreatedAt:     lc.CreatedAt,
		}
	}

	return result, nil
}

// GetConfigurationByUUID returns a configuration by UUID
|
||||||
|
func (r *UnifiedRepo) GetConfigurationByUUID(uuid string) (*models.Configuration, error) {
|
||||||
|
if r.isOnline {
|
||||||
|
repo := NewConfigurationRepository(r.mariaDB)
|
||||||
|
return repo.GetByUUID(uuid)
|
||||||
|
}
|
||||||
|
|
||||||
|
localCfg, err := r.localDB.GetConfigurationByUUID(uuid)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching local configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
items := make(models.ConfigItems, len(localCfg.Items))
|
||||||
|
for i, item := range localCfg.Items {
|
||||||
|
items[i] = models.ConfigItem{
|
||||||
|
LotName: item.LotName,
|
||||||
|
Quantity: item.Quantity,
|
||||||
|
UnitPrice: item.UnitPrice,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return &models.Configuration{
|
||||||
|
UUID: localCfg.UUID,
|
||||||
|
Name: localCfg.Name,
|
||||||
|
Items: items,
|
||||||
|
TotalPrice: localCfg.TotalPrice,
|
||||||
|
CustomPrice: localCfg.CustomPrice,
|
||||||
|
Notes: localCfg.Notes,
|
||||||
|
IsTemplate: localCfg.IsTemplate,
|
||||||
|
ServerCount: localCfg.ServerCount,
|
||||||
|
CreatedAt: localCfg.CreatedAt,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// DeleteConfiguration deletes a configuration
|
||||||
|
func (r *UnifiedRepo) DeleteConfiguration(uuid string) error {
|
||||||
|
if r.isOnline {
|
||||||
|
// Get ID first
|
||||||
|
cfg, err := r.GetConfigurationByUUID(uuid)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
repo := NewConfigurationRepository(r.mariaDB)
|
||||||
|
return repo.Delete(cfg.ID)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offline: delete from local and queue sync
|
||||||
|
if err := r.localDB.DeleteConfiguration(uuid); err != nil {
|
||||||
|
return fmt.Errorf("deleting local configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r.localDB.AddPendingChange("configuration", uuid, "delete", "")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Pricelist methods
|
||||||
|
|
||||||
|
// GetPricelists returns all pricelists
|
||||||
|
func (r *UnifiedRepo) GetPricelists() ([]models.PricelistSummary, error) {
|
||||||
|
if r.isOnline {
|
||||||
|
repo := NewPricelistRepository(r.mariaDB)
|
||||||
|
summaries, _, err := repo.List(0, 1000)
|
||||||
|
return summaries, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offline: get from local cache
|
||||||
|
localPLs, err := r.localDB.GetLocalPricelists()
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching local pricelists: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
summaries := make([]models.PricelistSummary, len(localPLs))
|
||||||
|
for i, pl := range localPLs {
|
||||||
|
itemCount := r.localDB.CountLocalPricelistItems(pl.ID)
|
||||||
|
summaries[i] = models.PricelistSummary{
|
||||||
|
ID: pl.ServerID,
|
||||||
|
Version: pl.Version,
|
||||||
|
CreatedAt: pl.CreatedAt,
|
||||||
|
ItemCount: itemCount,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return summaries, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetPricelistByID returns a pricelist by ID
|
||||||
|
func (r *UnifiedRepo) GetPricelistByID(id uint) (*models.Pricelist, error) {
|
||||||
|
if r.isOnline {
|
||||||
|
repo := NewPricelistRepository(r.mariaDB)
|
||||||
|
return repo.GetByID(id)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offline: get from local cache
|
||||||
|
localPL, err := r.localDB.GetLocalPricelistByServerID(id)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching local pricelist: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
itemCount := r.localDB.CountLocalPricelistItems(localPL.ID)
|
||||||
|
return &models.Pricelist{
|
||||||
|
ID: localPL.ServerID,
|
||||||
|
Version: localPL.Version,
|
||||||
|
CreatedAt: localPL.CreatedAt,
|
||||||
|
ItemCount: int(itemCount),
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetPricelistItems returns items for a pricelist
|
||||||
|
func (r *UnifiedRepo) GetPricelistItems(pricelistID uint) ([]models.PricelistItem, error) {
|
||||||
|
if r.isOnline {
|
||||||
|
repo := NewPricelistRepository(r.mariaDB)
|
||||||
|
items, _, err := repo.GetItems(pricelistID, 0, 100000, "")
|
||||||
|
return items, err
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offline: get from local cache
|
||||||
|
// First find the local pricelist by server ID
|
||||||
|
localPL, err := r.localDB.GetLocalPricelistByServerID(pricelistID)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching local pricelist: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
localItems, err := r.localDB.GetLocalPricelistItems(localPL.ID)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching local pricelist items: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
items := make([]models.PricelistItem, len(localItems))
|
||||||
|
for i, item := range localItems {
|
||||||
|
items[i] = models.PricelistItem{
|
||||||
|
ID: item.ID,
|
||||||
|
PricelistID: pricelistID,
|
||||||
|
LotName: item.LotName,
|
||||||
|
Price: item.Price,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return items, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetLatestPricelist returns the latest pricelist
|
||||||
|
func (r *UnifiedRepo) GetLatestPricelist() (*models.Pricelist, error) {
|
||||||
|
if r.isOnline {
|
||||||
|
repo := NewPricelistRepository(r.mariaDB)
|
||||||
|
return repo.GetLatestActive()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Offline: get from local cache
|
||||||
|
localPL, err := r.localDB.GetLatestLocalPricelist()
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("fetching latest local pricelist: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
itemCount := r.localDB.CountLocalPricelistItems(localPL.ID)
|
||||||
|
return &models.Pricelist{
|
||||||
|
ID: localPL.ServerID,
|
||||||
|
Version: localPL.Version,
|
||||||
|
CreatedAt: localPL.CreatedAt,
|
||||||
|
ItemCount: int(itemCount),
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
@@ -1,7 +1,7 @@
 package repository
 
 import (
-	"github.com/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
 	"gorm.io/gorm"
 )
@@ -1,199 +0,0 @@
-package alerts
-
-import (
-	"fmt"
-	"time"
-
-	"github.com/mchus/quoteforge/internal/config"
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/repository"
-)
-
-type Service struct {
-	alertRepo     *repository.AlertRepository
-	componentRepo *repository.ComponentRepository
-	priceRepo     *repository.PriceRepository
-	statsRepo     *repository.StatsRepository
-	config        config.AlertsConfig
-	pricingConfig config.PricingConfig
-}
-
-func NewService(
-	alertRepo *repository.AlertRepository,
-	componentRepo *repository.ComponentRepository,
-	priceRepo *repository.PriceRepository,
-	statsRepo *repository.StatsRepository,
-	alertCfg config.AlertsConfig,
-	pricingCfg config.PricingConfig,
-) *Service {
-	return &Service{
-		alertRepo:     alertRepo,
-		componentRepo: componentRepo,
-		priceRepo:     priceRepo,
-		statsRepo:     statsRepo,
-		config:        alertCfg,
-		pricingConfig: pricingCfg,
-	}
-}
-
-func (s *Service) List(filter repository.AlertFilter, page, perPage int) ([]models.PricingAlert, int64, error) {
-	if page < 1 {
-		page = 1
-	}
-	if perPage < 1 || perPage > 100 {
-		perPage = 20
-	}
-	offset := (page - 1) * perPage
-
-	return s.alertRepo.List(filter, offset, perPage)
-}
-
-func (s *Service) Acknowledge(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusAcknowledged)
-}
-
-func (s *Service) Resolve(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusResolved)
-}
-
-func (s *Service) Ignore(id uint) error {
-	return s.alertRepo.UpdateStatus(id, models.AlertStatusIgnored)
-}
-
-func (s *Service) GetNewAlertsCount() (int64, error) {
-	return s.alertRepo.CountByStatus(models.AlertStatusNew)
-}
-
-// CheckAndGenerateAlerts scans components and creates alerts
-func (s *Service) CheckAndGenerateAlerts() error {
-	if !s.config.Enabled {
-		return nil
-	}
-
-	// Get top components by usage
-	topComponents, err := s.statsRepo.GetTopComponents(100)
-	if err != nil {
-		return err
-	}
-
-	for _, stats := range topComponents {
-		component, err := s.componentRepo.GetByLotName(stats.LotName)
-		if err != nil {
-			continue
-		}
-
-		// Check high demand + stale price
-		if err := s.checkHighDemandStalePrice(component, &stats); err != nil {
-			continue
-		}
-
-		// Check trending without price
-		if err := s.checkTrendingNoPrice(component, &stats); err != nil {
-			continue
-		}
-
-		// Check no recent quotes
-		if err := s.checkNoRecentQuotes(component, &stats); err != nil {
-			continue
-		}
-	}
-
-	return nil
-}
-
-func (s *Service) checkHighDemandStalePrice(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// high_demand_stale_price: >= 5 quotes/month AND price > 60 days old
-	if stats.QuotesLast30d < s.config.HighDemandThreshold {
-		return nil
-	}
-
-	if comp.PriceUpdatedAt == nil {
-		return nil
-	}
-
-	daysSinceUpdate := int(time.Since(*comp.PriceUpdatedAt).Hours() / 24)
-	if daysSinceUpdate <= s.pricingConfig.FreshnessYellowDays {
-		return nil
-	}
-
-	// Check if alert already exists
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertHighDemandStalePrice)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertHighDemandStalePrice,
-		Severity:  models.SeverityCritical,
-		Message:   fmt.Sprintf("Component %s: high demand (%d quotes/month), but the price is stale (%d days)", comp.LotName, stats.QuotesLast30d, daysSinceUpdate),
-		Details: models.AlertDetails{
-			"quotes_30d":        stats.QuotesLast30d,
-			"days_since_update": daysSinceUpdate,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
-
-func (s *Service) checkTrendingNoPrice(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// trending_no_price: trend > 50% AND no price
-	if stats.TrendDirection != models.TrendUp || stats.TrendPercent < float64(s.config.TrendingThresholdPercent) {
-		return nil
-	}
-
-	if comp.CurrentPrice != nil && *comp.CurrentPrice > 0 {
-		return nil
-	}
-
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertTrendingNoPrice)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertTrendingNoPrice,
-		Severity:  models.SeverityHigh,
-		Message:   fmt.Sprintf("Component %s: demand up +%.0f%%, but no price is set", comp.LotName, stats.TrendPercent),
-		Details: models.AlertDetails{
-			"trend_percent": stats.TrendPercent,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
-
-func (s *Service) checkNoRecentQuotes(comp *models.LotMetadata, stats *models.ComponentUsageStats) error {
-	// no_recent_quotes: popular component, no supplier quotes > 90 days
-	if stats.QuotesLast30d < 3 {
-		return nil
-	}
-
-	quoteCount, err := s.priceRepo.GetQuoteCount(comp.LotName, s.pricingConfig.FreshnessRedDays)
-	if err != nil {
-		return err
-	}
-
-	if quoteCount > 0 {
-		return nil
-	}
-
-	exists, _ := s.alertRepo.ExistsByLotAndType(comp.LotName, models.AlertNoRecentQuotes)
-	if exists {
-		return nil
-	}
-
-	alert := &models.PricingAlert{
-		LotName:   comp.LotName,
-		AlertType: models.AlertNoRecentQuotes,
-		Severity:  models.SeverityMedium,
-		Message:   fmt.Sprintf("Component %s: popular (%d quotes), but no new supplier quotes for >%d days", comp.LotName, stats.QuotesLast30d, s.pricingConfig.FreshnessRedDays),
-		Details: models.AlertDetails{
-			"quotes_30d":     stats.QuotesLast30d,
-			"no_quotes_days": s.pricingConfig.FreshnessRedDays,
-		},
-	}
-
-	return s.alertRepo.Create(alert)
-}
@@ -5,9 +5,9 @@ import (
 	"time"
 
 	"github.com/golang-jwt/jwt/v5"
-	"github.com/mchus/quoteforge/internal/config"
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/repository"
+	"git.mchus.pro/mchus/quoteforge/internal/config"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
 	"golang.org/x/crypto/bcrypt"
 )
@@ -1,10 +1,11 @@
 package services
 
 import (
+	"fmt"
 	"strings"
 
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/repository"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
 )
 
 type ComponentService struct {
@@ -25,19 +26,16 @@ func NewComponentService(
 	}
 }
 
-// ParsePartNumber extracts category, vendor, model from lot_name
-// "CPU_AMD_9654" → category="CPU", vendor="AMD", model="9654"
-// "MB_INTEL_4.Sapphire_2S_32xDDR5" → category="MB", vendor="INTEL", model="4.Sapphire_2S_32xDDR5"
-func ParsePartNumber(lotName string) (category, vendor, model string) {
-	parts := strings.SplitN(lotName, "_", 3)
+// ParsePartNumber extracts category and model from lot_name
+// "CPU_AMD_9654" → category="CPU", model="AMD_9654"
+// "MB_INTEL_4.Sapphire_2S_32xDDR5" → category="MB", model="INTEL_4.Sapphire_2S_32xDDR5"
+func ParsePartNumber(lotName string) (category, model string) {
+	parts := strings.SplitN(lotName, "_", 2)
 	if len(parts) >= 1 {
 		category = parts[0]
 	}
 	if len(parts) >= 2 {
-		vendor = parts[1]
-	}
-	if len(parts) >= 3 {
-		model = parts[2]
+		model = parts[1]
 	}
 	return
 }
@@ -50,25 +48,37 @@ type ComponentListResult struct {
 }
 
 type ComponentView struct {
 	LotName         string                `json:"lot_name"`
 	Description     string                `json:"description"`
 	Category        string                `json:"category"`
 	CategoryName    string                `json:"category_name"`
-	Vendor          string                `json:"vendor"`
 	Model           string                `json:"model"`
-	CurrentPrice    *float64              `json:"current_price"`
 	PriceFreshness  models.PriceFreshness `json:"price_freshness"`
 	PopularityScore float64               `json:"popularity_score"`
 	Specs           models.Specs          `json:"specs,omitempty"`
 }
 
 func (s *ComponentService) List(filter repository.ComponentFilter, page, perPage int) (*ComponentListResult, error) {
+	// If no database connection (offline mode), return empty list
+	// Components should be loaded via /api/sync/components first
+	if s.componentRepo == nil {
+		return &ComponentListResult{
+			Components: []ComponentView{},
+			Total:      0,
+			Page:       page,
+			PerPage:    perPage,
+		}, nil
+	}
+
 	if page < 1 {
 		page = 1
 	}
-	if perPage < 1 || perPage > 100 {
+	if perPage < 1 {
 		perPage = 20
 	}
+	if perPage > 5000 {
+		perPage = 5000
+	}
 	offset := (page - 1) * perPage
 
 	components, total, err := s.componentRepo.List(filter, offset, perPage)
@@ -80,9 +90,7 @@ func (s *ComponentService) List(filter repository.ComponentFilter, page, perPage
 	for i, c := range components {
 		view := ComponentView{
 			LotName:         c.LotName,
-			Vendor:          c.Vendor,
 			Model:           c.Model,
-			CurrentPrice:    c.CurrentPrice,
 			PriceFreshness:  c.GetPriceFreshness(30, 60, 90, 3),
 			PopularityScore: c.PopularityScore,
 			Specs:           c.Specs,
@@ -108,6 +116,11 @@ func (s *ComponentService) List(filter repository.ComponentFilter, page, perPage
 }
 
 func (s *ComponentService) GetByLotName(lotName string) (*ComponentView, error) {
+	// If no database connection (offline mode), return error
+	if s.componentRepo == nil {
+		return nil, fmt.Errorf("offline mode: component data not available")
+	}
+
 	c, err := s.componentRepo.GetByLotName(lotName)
 	if err != nil {
 		return nil, err
@@ -118,9 +131,7 @@ func (s *ComponentService) GetByLotName(lotName string) (*ComponentView, error)
 
 	view := &ComponentView{
 		LotName:         c.LotName,
-		Vendor:          c.Vendor,
 		Model:           c.Model,
-		CurrentPrice:    c.CurrentPrice,
 		PriceFreshness:  c.GetPriceFreshness(30, 60, 90, 3),
 		PopularityScore: c.PopularityScore,
 		Specs:           c.Specs,
@@ -138,15 +149,20 @@ func (s *ComponentService) GetByLotName(lotName string) (*ComponentView, error)
 }
 
 func (s *ComponentService) GetCategories() ([]models.Category, error) {
+	// If no database connection (offline mode), return default categories
+	if s.categoryRepo == nil {
+		return models.DefaultCategories, nil
+	}
 	return s.categoryRepo.GetAll()
 }
 
-func (s *ComponentService) GetVendors(category string) ([]string, error) {
-	return s.componentRepo.GetVendors(category)
-}
-
 // ImportFromLot creates metadata entries for lots that don't have them
 func (s *ComponentService) ImportFromLot() (int, error) {
+	// If no database connection (offline mode), return error
+	if s.componentRepo == nil || s.categoryRepo == nil {
+		return 0, fmt.Errorf("offline mode: import not available")
+	}
+
 	lots, err := s.componentRepo.GetLotsWithoutMetadata()
	if err != nil {
 		return 0, err
@@ -159,22 +175,36 @@ func (s *ComponentService) ImportFromLot() (int, error) {
 
 	categoryMap := make(map[string]uint)
 	for _, cat := range categories {
-		categoryMap[cat.Code] = cat.ID
+		categoryMap[strings.ToUpper(cat.Code)] = cat.ID
 	}
 
 	imported := 0
 	for _, lot := range lots {
-		category, vendor, model := ParsePartNumber(lot.LotName)
+		// Use lot_category from database if available, otherwise parse from lot_name
+		var category string
+		if lot.LotCategory != nil && *lot.LotCategory != "" {
+			category = strings.ToUpper(*lot.LotCategory)
+		} else {
+			category, _ = ParsePartNumber(lot.LotName)
+			category = strings.ToUpper(category)
+		}
+
+		_, model := ParsePartNumber(lot.LotName)
+
 		metadata := &models.LotMetadata{
 			LotName: lot.LotName,
-			Vendor:  vendor,
 			Model:   model,
 			Specs:   make(models.Specs),
 		}
 
 		if catID, ok := categoryMap[category]; ok {
 			metadata.CategoryID = &catID
+		} else {
+			// Create new category if it doesn't exist
+			newCat, err := s.categoryRepo.CreateIfNotExists(category)
+			if err == nil && newCat != nil {
+				metadata.CategoryID = &newCat.ID
+			}
 		}
 
 		if err := s.componentRepo.Create(metadata); err != nil {
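The ParsePartNumber hunk above drops the separate vendor return: instead of splitting a lot name into three parts, the new version splits on the first underscore only, so the vendor stays embedded in the model string. A standalone copy of the new two-part split, runnable outside the service:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePartNumber is a standalone copy of the post-change split from the
// diff above: everything before the first "_" is the category, the rest is
// the model. Vendor is no longer extracted as its own field.
func parsePartNumber(lotName string) (category, model string) {
	parts := strings.SplitN(lotName, "_", 2)
	if len(parts) >= 1 {
		category = parts[0]
	}
	if len(parts) >= 2 {
		model = parts[1]
	}
	return
}

func main() {
	c, m := parsePartNumber("CPU_AMD_9654")
	fmt.Println(c, m) // CPU AMD_9654

	c, m = parsePartNumber("MB_INTEL_4.Sapphire_2S_32xDDR5")
	fmt.Println(c, m) // MB INTEL_4.Sapphire_2S_32xDDR5
}
```

Using `strings.SplitN` with a limit of 2 is what keeps underscores inside the model intact: only the first separator is significant, so model strings like `4.Sapphire_2S_32xDDR5` survive unmangled.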
@@ -1,55 +1,104 @@
|
|||||||
package services
|
package services
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"encoding/json"
|
|
||||||
"errors"
|
"errors"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/models"
|
||||||
|
"git.mchus.pro/mchus/quoteforge/internal/repository"
|
||||||
"github.com/google/uuid"
|
"github.com/google/uuid"
|
||||||
"github.com/mchus/quoteforge/internal/models"
|
|
||||||
"github.com/mchus/quoteforge/internal/repository"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
var (
|
var (
|
||||||
ErrConfigNotFound = errors.New("configuration not found")
|
ErrConfigNotFound = errors.New("configuration not found")
|
||||||
ErrConfigForbidden = errors.New("access to configuration forbidden")
|
ErrConfigForbidden = errors.New("access to configuration forbidden")
|
||||||
)
|
)
|
||||||
|
|
||||||
|
// ConfigurationGetter is an interface for services that can retrieve configurations
|
||||||
|
// Used by handlers to work with both ConfigurationService and LocalConfigurationService
|
||||||
|
type ConfigurationGetter interface {
|
||||||
|
GetByUUID(uuid string, ownerUsername string) (*models.Configuration, error)
|
||||||
|
}
|
||||||
|
|
||||||
type ConfigurationService struct {
|
type ConfigurationService struct {
|
||||||
configRepo *repository.ConfigurationRepository
|
configRepo *repository.ConfigurationRepository
|
||||||
|
projectRepo *repository.ProjectRepository
|
||||||
componentRepo *repository.ComponentRepository
|
componentRepo *repository.ComponentRepository
|
||||||
|
pricelistRepo *repository.PricelistRepository
|
||||||
quoteService *QuoteService
|
quoteService *QuoteService
|
||||||
}
|
}
|
||||||
|
|
||||||
func NewConfigurationService(
|
func NewConfigurationService(
|
||||||
configRepo *repository.ConfigurationRepository,
|
configRepo *repository.ConfigurationRepository,
|
||||||
|
projectRepo *repository.ProjectRepository,
|
||||||
componentRepo *repository.ComponentRepository,
|
componentRepo *repository.ComponentRepository,
|
||||||
|
pricelistRepo *repository.PricelistRepository,
|
||||||
quoteService *QuoteService,
|
quoteService *QuoteService,
|
||||||
) *ConfigurationService {
|
) *ConfigurationService {
|
||||||
return &ConfigurationService{
|
return &ConfigurationService{
|
||||||
configRepo: configRepo,
|
configRepo: configRepo,
|
||||||
|
projectRepo: projectRepo,
|
||||||
componentRepo: componentRepo,
|
componentRepo: componentRepo,
|
||||||
|
pricelistRepo: pricelistRepo,
|
||||||
quoteService: quoteService,
|
quoteService: quoteService,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
type CreateConfigRequest struct {
|
type CreateConfigRequest struct {
|
||||||
Name string `json:"name"`
|
Name string `json:"name"`
|
||||||
Items models.ConfigItems `json:"items"`
|
Items models.ConfigItems `json:"items"`
|
||||||
Notes string `json:"notes"`
|
ProjectUUID *string `json:"project_uuid,omitempty"`
|
||||||
IsTemplate bool `json:"is_template"`
|
CustomPrice *float64 `json:"custom_price"`
|
||||||
|
Notes string `json:"notes"`
|
||||||
|
IsTemplate bool `json:"is_template"`
|
||||||
|
ServerCount int `json:"server_count"`
|
||||||
|
ServerModel string `json:"server_model,omitempty"`
|
||||||
|
SupportCode string `json:"support_code,omitempty"`
|
||||||
|
Article string `json:"article,omitempty"`
|
||||||
|
PricelistID *uint `json:"pricelist_id,omitempty"`
|
||||||
|
OnlyInStock bool `json:"only_in_stock"`
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *ConfigurationService) Create(userID uint, req *CreateConfigRequest) (*models.Configuration, error) {
|
type ArticlePreviewRequest struct {
|
||||||
|
Items models.ConfigItems `json:"items"`
|
||||||
|
ServerModel string `json:"server_model"`
|
||||||
|
SupportCode string `json:"support_code,omitempty"`
|
||||||
|
PricelistID *uint `json:"pricelist_id,omitempty"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *ConfigurationService) Create(ownerUsername string, req *CreateConfigRequest) (*models.Configuration, error) {
|
||||||
|
projectUUID, err := s.resolveProjectUUID(ownerUsername, req.ProjectUUID)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
pricelistID, err := s.resolvePricelistID(req.PricelistID)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
total := req.Items.Total()
|
total := req.Items.Total()
|
||||||
|
|
||||||
|
// If server count is greater than 1, multiply the total by server count
|
||||||
|
if req.ServerCount > 1 {
|
||||||
|
total *= float64(req.ServerCount)
|
||||||
|
}
|
||||||
|
|
||||||
config := &models.Configuration{
|
config := &models.Configuration{
|
||||||
UUID: uuid.New().String(),
|
UUID: uuid.New().String(),
|
||||||
UserID: userID,
|
OwnerUsername: ownerUsername,
|
||||||
Name: req.Name,
|
ProjectUUID: projectUUID,
|
||||||
Items: req.Items,
|
Name: req.Name,
|
||||||
TotalPrice: &total,
|
Items: req.Items,
|
||||||
Notes: req.Notes,
|
TotalPrice: &total,
|
||||||
IsTemplate: req.IsTemplate,
|
CustomPrice: req.CustomPrice,
|
||||||
|
Notes: req.Notes,
|
||||||
|
IsTemplate: req.IsTemplate,
|
||||||
|
ServerCount: req.ServerCount,
|
||||||
|
ServerModel: req.ServerModel,
|
||||||
|
SupportCode: req.SupportCode,
|
||||||
|
Article: req.Article,
|
||||||
|
PricelistID: pricelistID,
|
||||||
|
OnlyInStock: req.OnlyInStock,
|
||||||
}
|
}
|
||||||
|
|
||||||
if err := s.configRepo.Create(config); err != nil {
|
if err := s.configRepo.Create(config); err != nil {
|
||||||
@@ -62,37 +111,59 @@ func (s *ConfigurationService) Create(userID uint, req *CreateConfigRequest) (*m
|
|||||||
return config, nil
|
return config, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (s *ConfigurationService) GetByUUID(uuid string, userID uint) (*models.Configuration, error) {
|
func (s *ConfigurationService) GetByUUID(uuid string, ownerUsername string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	// Allow access if user owns config or it's a template
	if !s.isOwner(config, ownerUsername) && !config.IsTemplate {
		return nil, ErrConfigForbidden
	}

	return config, nil
}

func (s *ConfigurationService) Update(uuid string, ownerUsername string, req *CreateConfigRequest) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	if !s.isOwner(config, ownerUsername) {
		return nil, ErrConfigForbidden
	}

	projectUUID, err := s.resolveProjectUUID(ownerUsername, req.ProjectUUID)
	if err != nil {
		return nil, err
	}
	pricelistID, err := s.resolvePricelistID(req.PricelistID)
	if err != nil {
		return nil, err
	}

	total := req.Items.Total()

	// If server count is greater than 1, multiply the total by server count
	if req.ServerCount > 1 {
		total *= float64(req.ServerCount)
	}

	config.Name = req.Name
	config.ProjectUUID = projectUUID
	config.Items = req.Items
	config.TotalPrice = &total
	config.CustomPrice = req.CustomPrice
	config.Notes = req.Notes
	config.IsTemplate = req.IsTemplate
	config.ServerCount = req.ServerCount
	config.ServerModel = req.ServerModel
	config.SupportCode = req.SupportCode
	config.Article = req.Article
	config.PricelistID = pricelistID
	config.OnlyInStock = req.OnlyInStock

	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}
@@ -101,20 +172,86 @@ func (s *ConfigurationService) Update(uuid string, userID uint, req *CreateConfi
	return config, nil
}

func (s *ConfigurationService) Delete(uuid string, ownerUsername string) error {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return ErrConfigNotFound
	}

	if !s.isOwner(config, ownerUsername) {
		return ErrConfigForbidden
	}

	return s.configRepo.Delete(config.ID)
}

func (s *ConfigurationService) Rename(uuid string, ownerUsername string, newName string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	if !s.isOwner(config, ownerUsername) {
		return nil, ErrConfigForbidden
	}

	config.Name = newName

	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}

	return config, nil
}

func (s *ConfigurationService) Clone(configUUID string, ownerUsername string, newName string) (*models.Configuration, error) {
	return s.CloneToProject(configUUID, ownerUsername, newName, nil)
}

func (s *ConfigurationService) CloneToProject(configUUID string, ownerUsername string, newName string, projectUUID *string) (*models.Configuration, error) {
	original, err := s.GetByUUID(configUUID, ownerUsername)
	if err != nil {
		return nil, err
	}
	resolvedProjectUUID := original.ProjectUUID
	if projectUUID != nil {
		resolvedProjectUUID, err = s.resolveProjectUUID(ownerUsername, projectUUID)
		if err != nil {
			return nil, err
		}
	}

	// Create copy with new UUID and name
	total := original.Items.Total()

	// If server count is greater than 1, multiply the total by server count
	if original.ServerCount > 1 {
		total *= float64(original.ServerCount)
	}

	clone := &models.Configuration{
		UUID:          uuid.New().String(),
		OwnerUsername: ownerUsername,
		ProjectUUID:   resolvedProjectUUID,
		Name:          newName,
		Items:         original.Items,
		TotalPrice:    &total,
		CustomPrice:   original.CustomPrice,
		Notes:         original.Notes,
		IsTemplate:    false, // Clone is never a template
		ServerCount:   original.ServerCount,
		PricelistID:   original.PricelistID,
		OnlyInStock:   original.OnlyInStock,
	}

	if err := s.configRepo.Create(clone); err != nil {
		return nil, err
	}

	return clone, nil
}

func (s *ConfigurationService) ListByUser(ownerUsername string, page, perPage int) ([]models.Configuration, int64, error) {
	if page < 1 {
		page = 1
	}
@@ -123,7 +260,238 @@ func (s *ConfigurationService) ListByUser(userID uint, page, perPage int) ([]mod
	}
	offset := (page - 1) * perPage

	return s.configRepo.ListByUser(ownerUsername, offset, perPage)
}

// ListAll returns all configurations without user filter (for use when auth is disabled)
func (s *ConfigurationService) ListAll(page, perPage int) ([]models.Configuration, int64, error) {
	if page < 1 {
		page = 1
	}
	if perPage < 1 || perPage > 100 {
		perPage = 20
	}
	offset := (page - 1) * perPage

	return s.configRepo.ListAll(offset, perPage)
}

// GetByUUIDNoAuth returns configuration without ownership check (for use when auth is disabled)
func (s *ConfigurationService) GetByUUIDNoAuth(uuid string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}
	return config, nil
}

// UpdateNoAuth updates configuration without ownership check
func (s *ConfigurationService) UpdateNoAuth(uuid string, req *CreateConfigRequest) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	projectUUID, err := s.resolveProjectUUID(config.OwnerUsername, req.ProjectUUID)
	if err != nil {
		return nil, err
	}
	pricelistID, err := s.resolvePricelistID(req.PricelistID)
	if err != nil {
		return nil, err
	}

	total := req.Items.Total()
	if req.ServerCount > 1 {
		total *= float64(req.ServerCount)
	}

	config.Name = req.Name
	config.ProjectUUID = projectUUID
	config.Items = req.Items
	config.TotalPrice = &total
	config.CustomPrice = req.CustomPrice
	config.Notes = req.Notes
	config.IsTemplate = req.IsTemplate
	config.ServerCount = req.ServerCount
	config.PricelistID = pricelistID
	config.OnlyInStock = req.OnlyInStock

	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}

	return config, nil
}

// DeleteNoAuth deletes configuration without ownership check
func (s *ConfigurationService) DeleteNoAuth(uuid string) error {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return ErrConfigNotFound
	}
	return s.configRepo.Delete(config.ID)
}

// RenameNoAuth renames configuration without ownership check
func (s *ConfigurationService) RenameNoAuth(uuid string, newName string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	config.Name = newName
	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}

	return config, nil
}

// CloneNoAuth clones configuration without ownership check
func (s *ConfigurationService) CloneNoAuth(configUUID string, newName string, ownerUsername string) (*models.Configuration, error) {
	return s.CloneNoAuthToProject(configUUID, newName, ownerUsername, nil)
}

func (s *ConfigurationService) CloneNoAuthToProject(configUUID string, newName string, ownerUsername string, projectUUID *string) (*models.Configuration, error) {
	original, err := s.configRepo.GetByUUID(configUUID)
	if err != nil {
		return nil, ErrConfigNotFound
	}
	resolvedProjectUUID := original.ProjectUUID
	if projectUUID != nil {
		resolvedProjectUUID, err = s.resolveProjectUUID(ownerUsername, projectUUID)
		if err != nil {
			return nil, err
		}
	}

	total := original.Items.Total()
	if original.ServerCount > 1 {
		total *= float64(original.ServerCount)
	}

	clone := &models.Configuration{
		UUID:          uuid.New().String(),
		OwnerUsername: ownerUsername,
		ProjectUUID:   resolvedProjectUUID,
		Name:          newName,
		Items:         original.Items,
		TotalPrice:    &total,
		CustomPrice:   original.CustomPrice,
		Notes:         original.Notes,
		IsTemplate:    false,
		ServerCount:   original.ServerCount,
		PricelistID:   original.PricelistID,
		OnlyInStock:   original.OnlyInStock,
	}

	if err := s.configRepo.Create(clone); err != nil {
		return nil, err
	}

	return clone, nil
}

func (s *ConfigurationService) resolveProjectUUID(ownerUsername string, projectUUID *string) (*string, error) {
	_ = ownerUsername
	if s.projectRepo == nil {
		return projectUUID, nil
	}
	if projectUUID == nil || *projectUUID == "" {
		return nil, nil
	}

	project, err := s.projectRepo.GetByUUID(*projectUUID)
	if err != nil {
		return nil, ErrProjectNotFound
	}
	if !project.IsActive {
		return nil, errors.New("project is archived")
	}

	return &project.UUID, nil
}

func (s *ConfigurationService) resolvePricelistID(pricelistID *uint) (*uint, error) {
	if s.pricelistRepo == nil {
		return pricelistID, nil
	}
	if pricelistID != nil && *pricelistID > 0 {
		if _, err := s.pricelistRepo.GetByID(*pricelistID); err != nil {
			return nil, err
		}
		return pricelistID, nil
	}
	latest, err := s.pricelistRepo.GetLatestActive()
	if err != nil {
		return nil, nil
	}
	return &latest.ID, nil
}

// RefreshPricesNoAuth refreshes prices without ownership check
func (s *ConfigurationService) RefreshPricesNoAuth(uuid string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	var latestPricelistID *uint
	if s.pricelistRepo != nil {
		if pl, err := s.pricelistRepo.GetLatestActive(); err == nil {
			latestPricelistID = &pl.ID
		}
	}

	updatedItems := make(models.ConfigItems, len(config.Items))
	for i, item := range config.Items {
		if latestPricelistID != nil {
			if price, err := s.pricelistRepo.GetPriceForLot(*latestPricelistID, item.LotName); err == nil && price > 0 {
				updatedItems[i] = models.ConfigItem{
					LotName:   item.LotName,
					Quantity:  item.Quantity,
					UnitPrice: price,
				}
				continue
			}
		}

		if s.componentRepo == nil {
			updatedItems[i] = item
			continue
		}
		metadata, err := s.componentRepo.GetByLotName(item.LotName)
		if err != nil || metadata.CurrentPrice == nil {
			updatedItems[i] = item
			continue
		}

		updatedItems[i] = models.ConfigItem{
			LotName:   item.LotName,
			Quantity:  item.Quantity,
			UnitPrice: *metadata.CurrentPrice,
		}
	}

	config.Items = updatedItems
	total := updatedItems.Total()
	if config.ServerCount > 1 {
		total *= float64(config.ServerCount)
	}

	config.TotalPrice = &total
	if latestPricelistID != nil {
		config.PricelistID = latestPricelistID
	}
	now := time.Now()
	config.PriceUpdatedAt = &now

	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}

	return config, nil
}

func (s *ConfigurationService) ListTemplates(page, perPage int) ([]models.Configuration, int64, error) {
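The same pricing rule recurs in Update, CloneToProject, the NoAuth variants and both RefreshPrices paths: sum the per-server component costs, then multiply by the server count when it exceeds one. A minimal standalone sketch of that rule, with an illustrative `item` type standing in for `models.ConfigItem`:

```go
package main

import "fmt"

// item mirrors the Quantity/UnitPrice fields used by models.ConfigItem;
// the type name is illustrative, not the project's actual one.
type item struct {
	Quantity  int
	UnitPrice float64
}

// totalPrice reproduces the recurring pricing rule from the service:
// per-server component total, multiplied by the server count when
// more than one server is configured.
func totalPrice(items []item, serverCount int) float64 {
	var total float64
	for _, it := range items {
		total += it.UnitPrice * float64(it.Quantity)
	}
	if serverCount > 1 {
		total *= float64(serverCount)
	}
	return total
}

func main() {
	items := []item{{Quantity: 2, UnitPrice: 100}, {Quantity: 1, UnitPrice: 50}}
	fmt.Println(totalPrice(items, 1))  // 250
	fmt.Println(totalPrice(items, 10)) // 2500
}
```

Note that a server count of 0 is treated like 1, which matches the `serverCount < 1` clamping done on the export side.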
@@ -138,39 +506,129 @@ func (s *ConfigurationService) ListTemplates(page, perPage int) ([]models.Config
	return s.configRepo.ListTemplates(offset, perPage)
}

// RefreshPrices updates all component prices in the configuration with current prices
func (s *ConfigurationService) RefreshPrices(uuid string, ownerUsername string) (*models.Configuration, error) {
	config, err := s.configRepo.GetByUUID(uuid)
	if err != nil {
		return nil, ErrConfigNotFound
	}

	if !s.isOwner(config, ownerUsername) {
		return nil, ErrConfigForbidden
	}

	var latestPricelistID *uint
	if s.pricelistRepo != nil {
		if pl, err := s.pricelistRepo.GetLatestActive(); err == nil {
			latestPricelistID = &pl.ID
		}
	}

	// Update prices for all items
	updatedItems := make(models.ConfigItems, len(config.Items))
	for i, item := range config.Items {
		if latestPricelistID != nil {
			if price, err := s.pricelistRepo.GetPriceForLot(*latestPricelistID, item.LotName); err == nil && price > 0 {
				updatedItems[i] = models.ConfigItem{
					LotName:   item.LotName,
					Quantity:  item.Quantity,
					UnitPrice: price,
				}
				continue
			}
		}

		// Get current component price
		if s.componentRepo == nil {
			updatedItems[i] = item
			continue
		}
		metadata, err := s.componentRepo.GetByLotName(item.LotName)
		if err != nil || metadata.CurrentPrice == nil {
			// Keep original item if component not found or no price available
			updatedItems[i] = item
			continue
		}

		// Update item with current price
		updatedItems[i] = models.ConfigItem{
			LotName:   item.LotName,
			Quantity:  item.Quantity,
			UnitPrice: *metadata.CurrentPrice,
		}
	}

	// Update configuration
	config.Items = updatedItems
	total := updatedItems.Total()

	// If server count is greater than 1, multiply the total by server count
	if config.ServerCount > 1 {
		total *= float64(config.ServerCount)
	}

	config.TotalPrice = &total
	if latestPricelistID != nil {
		config.PricelistID = latestPricelistID
	}

	// Set price update timestamp
	now := time.Now()
	config.PriceUpdatedAt = &now

	if err := s.configRepo.Update(config); err != nil {
		return nil, err
	}

	return config, nil
}

func (s *ConfigurationService) isOwner(config *models.Configuration, ownerUsername string) bool {
	if config == nil || ownerUsername == "" {
		return false
	}
	if config.OwnerUsername != "" {
		return config.OwnerUsername == ownerUsername
	}
	if config.User != nil {
		return config.User.Username == ownerUsername
	}
	return false
}

// // Export configuration as JSON
// type ConfigExport struct {
// 	Name  string             `json:"name"`
// 	Notes string             `json:"notes"`
// 	Items models.ConfigItems `json:"items"`
// }
//
// func (s *ConfigurationService) ExportJSON(uuid string, userID uint) ([]byte, error) {
// 	config, err := s.GetByUUID(uuid, userID)
// 	if err != nil {
// 		return nil, err
// 	}
//
// 	export := ConfigExport{
// 		Name:  config.Name,
// 		Notes: config.Notes,
// 		Items: config.Items,
// 	}
//
// 	return json.MarshalIndent(export, "", " ")
// }
//
// func (s *ConfigurationService) ImportJSON(userID uint, data []byte) (*models.Configuration, error) {
// 	var export ConfigExport
// 	if err := json.Unmarshal(data, &export); err != nil {
// 		return nil, err
// 	}
//
// 	req := &CreateConfigRequest{
// 		Name:  export.Name,
// 		Notes: export.Notes,
// 		Items: export.Items,
// 	}
//
// 	return s.Create(userID, req)
// }
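The `isOwner` helper introduced by this change resolves ownership from two places: the new `OwnerUsername` column wins when set, and the legacy `User` relation is consulted as a fallback. A self-contained sketch of the same resolution order, using stand-in types for `models.Configuration` and `models.User`:

```go
package main

import "fmt"

// Minimal stand-ins for models.User and models.Configuration; the field
// names follow the service code, everything else is illustrative.
type user struct{ Username string }

type configuration struct {
	OwnerUsername string
	User          *user
}

// isOwner mirrors ConfigurationService.isOwner: a non-empty
// OwnerUsername field wins, then the legacy User relation, else false.
func isOwner(c *configuration, ownerUsername string) bool {
	if c == nil || ownerUsername == "" {
		return false
	}
	if c.OwnerUsername != "" {
		return c.OwnerUsername == ownerUsername
	}
	if c.User != nil {
		return c.User.Username == ownerUsername
	}
	return false
}

func main() {
	fmt.Println(isOwner(&configuration{OwnerUsername: "alice"}, "alice"))    // true
	fmt.Println(isOwner(&configuration{User: &user{Username: "bob"}}, "bob")) // true
	fmt.Println(isOwner(&configuration{}, "carol"))                           // false
}
```

An empty `ownerUsername` always denies, so anonymous callers can never pass the ownership check even against legacy rows.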
@@ -4,29 +4,32 @@ import (
	"bytes"
	"encoding/csv"
	"fmt"
	"io"
	"math"
	"strings"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/config"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
)

type ExportService struct {
	config       config.ExportConfig
	categoryRepo *repository.CategoryRepository
	localDB      *localdb.LocalDB
}

func NewExportService(cfg config.ExportConfig, categoryRepo *repository.CategoryRepository, local *localdb.LocalDB) *ExportService {
	return &ExportService{
		config:       cfg,
		categoryRepo: categoryRepo,
		localDB:      local,
	}
}

// ExportItem represents a single component in an export block.
type ExportItem struct {
	LotName     string
	Description string
@@ -36,140 +39,287 @@ type ExportItem struct {
	TotalPrice float64
}

// ConfigExportBlock represents one configuration (server) in the export.
type ConfigExportBlock struct {
	Article     string
	ServerCount int
	UnitPrice   float64 // sum of component prices for one server
	Items       []ExportItem
}

// ProjectExportData holds all configuration blocks for a project-level export.
type ProjectExportData struct {
	Configs   []ConfigExportBlock
	CreatedAt time.Time
}

// ToCSV writes project export data in the new structured CSV format.
//
// Format:
//
// Line;Type;p/n;Description;Qty (1 pcs.);Qty (total);Price (1 pcs.);Price (total)
// 10;;DL380-ARTICLE;;;10;10470;104 700
// ;;MB_INTEL_...;;1;;2074,5;
// ...
// (empty row)
// 20;;DL380-ARTICLE-2;;;2;10470;20 940
// ...
func (s *ExportService) ToCSV(w io.Writer, data *ProjectExportData) error {
	// Write UTF-8 BOM for Excel compatibility
	if _, err := w.Write([]byte{0xEF, 0xBB, 0xBF}); err != nil {
		return fmt.Errorf("failed to write BOM: %w", err)
	}

	csvWriter := csv.NewWriter(w)
	csvWriter.Comma = ';'
	defer csvWriter.Flush()

	// Header
	headers := []string{"Line", "Type", "p/n", "Description", "Qty (1 pcs.)", "Qty (total)", "Price (1 pcs.)", "Price (total)"}
	if err := csvWriter.Write(headers); err != nil {
		return fmt.Errorf("failed to write header: %w", err)
	}

	// Get category hierarchy for sorting
	categoryOrder := make(map[string]int)
	if s.categoryRepo != nil {
		categories, err := s.categoryRepo.GetAll()
		if err == nil {
			for _, cat := range categories {
				categoryOrder[cat.Code] = cat.DisplayOrder
			}
		}
	}

	for i, block := range data.Configs {
		lineNo := (i + 1) * 10

		serverCount := block.ServerCount
		if serverCount < 1 {
			serverCount = 1
		}

		totalPrice := block.UnitPrice * float64(serverCount)

		// Server summary row
		serverRow := []string{
			fmt.Sprintf("%d", lineNo),        // Line
			"",                               // Type
			block.Article,                    // p/n
			"",                               // Description
			"",                               // Qty (1 pcs.)
			fmt.Sprintf("%d", serverCount),   // Qty (total)
			formatPriceInt(block.UnitPrice),  // Price (1 pcs.)
			formatPriceWithSpace(totalPrice), // Price (total)
		}
		if err := csvWriter.Write(serverRow); err != nil {
			return fmt.Errorf("failed to write server row: %w", err)
		}

		// Sort items by category display order
		sortedItems := make([]ExportItem, len(block.Items))
		copy(sortedItems, block.Items)
		sortItemsByCategory(sortedItems, categoryOrder)

		// Component rows
		for _, item := range sortedItems {
			componentRow := []string{
				"",                               // Line
				item.Category,                    // Type
				item.LotName,                     // p/n
				"",                               // Description
				fmt.Sprintf("%d", item.Quantity), // Qty (1 pcs.)
				"",                               // Qty (total)
				formatPriceComma(item.UnitPrice), // Price (1 pcs.)
				"",                               // Price (total)
			}
			if err := csvWriter.Write(componentRow); err != nil {
				return fmt.Errorf("failed to write component row: %w", err)
			}
		}

		// Empty separator row between blocks (skip after last)
		if i < len(data.Configs)-1 {
			if err := csvWriter.Write([]string{"", "", "", "", "", "", "", ""}); err != nil {
				return fmt.Errorf("failed to write separator row: %w", err)
			}
		}
	}

	csvWriter.Flush()
	if err := csvWriter.Error(); err != nil {
		return fmt.Errorf("csv writer error: %w", err)
	}

	return nil
}

// ToCSVBytes is a backward-compatible wrapper that returns CSV data as bytes.
func (s *ExportService) ToCSVBytes(data *ProjectExportData) ([]byte, error) {
	var buf bytes.Buffer
	if err := s.ToCSV(&buf, data); err != nil {
		return nil, err
	}

	return buf.Bytes(), nil
}
||||||
|
|
||||||
func (s *ExportService) ConfigToExportData(config *models.Configuration) *ExportData {
|
// ConfigToExportData converts a single configuration into ProjectExportData.
|
||||||
items := make([]ExportItem, len(config.Items))
|
func (s *ExportService) ConfigToExportData(cfg *models.Configuration) *ProjectExportData {
|
||||||
var total float64
|
block := s.buildExportBlock(cfg)
|
||||||
|
return &ProjectExportData{
|
||||||
|
Configs: []ConfigExportBlock{block},
|
||||||
|
CreatedAt: cfg.CreatedAt,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
for i, item := range config.Items {
|
// ProjectToExportData converts multiple configurations into ProjectExportData.
|
||||||
|
func (s *ExportService) ProjectToExportData(configs []models.Configuration) *ProjectExportData {
|
||||||
|
blocks := make([]ConfigExportBlock, 0, len(configs))
|
||||||
|
for i := range configs {
|
||||||
|
blocks = append(blocks, s.buildExportBlock(&configs[i]))
|
||||||
|
}
|
||||||
|
return &ProjectExportData{
|
||||||
|
Configs: blocks,
|
||||||
|
CreatedAt: time.Now(),
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *ExportService) buildExportBlock(cfg *models.Configuration) ConfigExportBlock {
|
||||||
|
// Batch-fetch categories from local data (pricelist items → local_components fallback)
|
||||||
|
lotNames := make([]string, len(cfg.Items))
|
||||||
|
for i, item := range cfg.Items {
|
||||||
|
lotNames[i] = item.LotName
|
||||||
|
}
|
||||||
|
categories := s.resolveCategories(cfg.PricelistID, lotNames)
|
||||||
|
|
||||||
|
items := make([]ExportItem, len(cfg.Items))
|
||||||
|
var unitTotal float64
|
||||||
|
|
||||||
|
for i, item := range cfg.Items {
|
||||||
itemTotal := item.UnitPrice * float64(item.Quantity)
|
itemTotal := item.UnitPrice * float64(item.Quantity)
|
||||||
items[i] = ExportItem{
|
items[i] = ExportItem{
|
||||||
LotName: item.LotName,
|
LotName: item.LotName,
|
||||||
|
Category: categories[item.LotName],
|
||||||
Quantity: item.Quantity,
|
Quantity: item.Quantity,
|
||||||
UnitPrice: item.UnitPrice,
|
UnitPrice: item.UnitPrice,
|
||||||
TotalPrice: itemTotal,
|
TotalPrice: itemTotal,
|
||||||
}
|
}
|
||||||
total += itemTotal
|
unitTotal += itemTotal
|
||||||
}
|
}
|
||||||
|
|
||||||
return &ExportData{
|
serverCount := cfg.ServerCount
|
||||||
Name: config.Name,
|
if serverCount < 1 {
|
||||||
Items: items,
|
serverCount = 1
|
||||||
Total: total,
|
}
|
||||||
Notes: config.Notes,
|
|
||||||
CreatedAt: config.CreatedAt,
|
return ConfigExportBlock{
|
||||||
|
Article: cfg.Article,
|
||||||
|
ServerCount: serverCount,
|
||||||
|
UnitPrice: unitTotal,
|
||||||
|
Items: items,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// resolveCategories returns a lot_name → category map.
// Primary source: pricelist items (lot_category). Fallback: local_components table.
func (s *ExportService) resolveCategories(pricelistID *uint, lotNames []string) map[string]string {
	if len(lotNames) == 0 || s.localDB == nil {
		return map[string]string{}
	}

	categories := make(map[string]string, len(lotNames))

	// Primary: pricelist items
	if pricelistID != nil && *pricelistID > 0 {
		if cats, err := s.localDB.GetLocalLotCategoriesByServerPricelistID(*pricelistID, lotNames); err == nil {
			for lot, cat := range cats {
				if strings.TrimSpace(cat) != "" {
					categories[lot] = cat
				}
			}
		}
	}

	// Fallback: local_components for any still missing
	var missing []string
	for _, lot := range lotNames {
		if categories[lot] == "" {
			missing = append(missing, lot)
		}
	}
	if len(missing) > 0 {
		if fallback, err := s.localDB.GetLocalComponentCategoriesByLotNames(missing); err == nil {
			for lot, cat := range fallback {
				if strings.TrimSpace(cat) != "" {
					categories[lot] = cat
				}
			}
		}
	}

	return categories
}
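The two-tier lookup above reduces to a small merge: take the primary (pricelist) category when it is non-blank, then fill the remaining gaps from the fallback source. A standalone sketch, with plain maps as hypothetical stand-ins for the two repository calls:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveWithFallback mirrors the merge logic of resolveCategories, using plain
// maps as hypothetical stand-ins for the pricelist and local_components lookups.
func resolveWithFallback(primary, fallback map[string]string, lotNames []string) map[string]string {
	categories := make(map[string]string, len(lotNames))
	for _, lot := range lotNames {
		if cat, ok := primary[lot]; ok && strings.TrimSpace(cat) != "" {
			categories[lot] = cat
		}
	}
	for _, lot := range lotNames {
		if categories[lot] == "" {
			if cat, ok := fallback[lot]; ok && strings.TrimSpace(cat) != "" {
				categories[lot] = cat
			}
		}
	}
	return categories
}

func main() {
	primary := map[string]string{"LOT-001": "CPU", "LOT-002": " "} // blank category is ignored
	fallback := map[string]string{"LOT-002": "MEM", "LOT-003": "SSD"}
	got := resolveWithFallback(primary, fallback, []string{"LOT-001", "LOT-002", "LOT-003"})
	fmt.Println(got["LOT-001"], got["LOT-002"], got["LOT-003"]) // CPU MEM SSD
}
```

Note that a whitespace-only primary category is treated as missing, so the fallback still gets a chance to fill it.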

// sortItemsByCategory sorts items by category display order (items without category go to the end).
func sortItemsByCategory(items []ExportItem, categoryOrder map[string]int) {
	for i := 0; i < len(items)-1; i++ {
		for j := i + 1; j < len(items); j++ {
			orderI, hasI := categoryOrder[items[i].Category]
			orderJ, hasJ := categoryOrder[items[j].Category]

			if !hasI && hasJ {
				items[i], items[j] = items[j], items[i]
			} else if hasI && hasJ && orderI > orderJ {
				items[i], items[j] = items[j], items[i]
			}
		}
	}
}
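The swap-based ordering above can be traced standalone. This sketch uses a trimmed-down item type (hypothetical, not the real ExportItem) but the same comparison rule: items whose category has a display order come first, ascending; items with unknown categories are pushed to the end:

```go
package main

import "fmt"

type item struct{ Lot, Category string }

// sortByCategory reproduces the swap rule of sortItemsByCategory: an item whose
// category has no display order yields to one that does; otherwise lower order wins.
func sortByCategory(items []item, order map[string]int) {
	for i := 0; i < len(items)-1; i++ {
		for j := i + 1; j < len(items); j++ {
			oi, okI := order[items[i].Category]
			oj, okJ := order[items[j].Category]
			if (!okI && okJ) || (okI && okJ && oi > oj) {
				items[i], items[j] = items[j], items[i]
			}
		}
	}
}

func main() {
	order := map[string]int{"CPU": 0, "MEM": 1}
	items := []item{{"LOT-B", "MEM"}, {"LOT-X", ""}, {"LOT-A", "CPU"}}
	sortByCategory(items, order)
	for _, it := range items {
		fmt.Println(it.Lot) // LOT-A, then LOT-B, then LOT-X
	}
}
```

The O(n²) pairwise swap is fine here because export blocks hold at most a few dozen items.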

// formatPriceComma formats a price with comma as decimal separator (e.g., "2074,5").
// Trailing zeros after the comma are trimmed, and if the value is an integer, no comma is shown.
func formatPriceComma(value float64) string {
	if value == math.Trunc(value) {
		return fmt.Sprintf("%.0f", value)
	}
	s := fmt.Sprintf("%.2f", value)
	s = strings.ReplaceAll(s, ".", ",")
	// Trim trailing zero: "2074,50" -> "2074,5"
	s = strings.TrimRight(s, "0")
	s = strings.TrimRight(s, ",")
	return s
}
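A runnable sketch of the edge cases this formatter covers; the function body is copied verbatim from above so the example is self-contained:

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// formatPriceComma is a verbatim copy of the export helper, reproduced here so
// the example compiles on its own.
func formatPriceComma(value float64) string {
	if value == math.Trunc(value) {
		return fmt.Sprintf("%.0f", value)
	}
	s := fmt.Sprintf("%.2f", value)
	s = strings.ReplaceAll(s, ".", ",")
	s = strings.TrimRight(s, "0")
	s = strings.TrimRight(s, ",")
	return s
}

func main() {
	fmt.Println(formatPriceComma(2074.5)) // 2074,5
	fmt.Println(formatPriceComma(99.99))  // 99,99
	fmt.Println(formatPriceComma(100.0))  // 100
	// %.2f rounds 100.999 to "101,00"; trimming then strips ",00" entirely.
	fmt.Println(formatPriceComma(100.999)) // 101
}
```

The last case shows why the two TrimRight calls matter: rounding can leave ",00", which must disappear along with the comma.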

// formatPriceInt formats price as integer (rounded), no decimal.
func formatPriceInt(value float64) string {
	return fmt.Sprintf("%.0f", math.Round(value))
}

// formatPriceWithSpace formats a price as an integer with space as thousands separator (e.g., "104 700").
func formatPriceWithSpace(value float64) string {
	intVal := int64(math.Round(value))
	if intVal < 0 {
		return "-" + formatIntWithSpace(-intVal)
	}
	return formatIntWithSpace(intVal)
}

func formatIntWithSpace(n int64) string {
	s := fmt.Sprintf("%d", n)
	if len(s) <= 3 {
		return s
	}

	var result strings.Builder
	remainder := len(s) % 3
	if remainder > 0 {
		result.WriteString(s[:remainder])
	}
	for i := remainder; i < len(s); i += 3 {
		if result.Len() > 0 {
			result.WriteByte(' ')
		}
		result.WriteString(s[i : i+3])
	}
	return result.String()
}
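The grouping walk above (write the leading partial group, then fixed groups of three separated by spaces) can be exercised standalone; groupThousands is a hypothetical name for a verbatim copy of formatIntWithSpace so the sketch compiles on its own:

```go
package main

import (
	"fmt"
	"strings"
)

// groupThousands copies formatIntWithSpace: emit the leading group of
// len(s) % 3 digits, then groups of exactly three, space-separated.
func groupThousands(n int64) string {
	s := fmt.Sprintf("%d", n)
	if len(s) <= 3 {
		return s
	}
	var b strings.Builder
	rem := len(s) % 3
	if rem > 0 {
		b.WriteString(s[:rem])
	}
	for i := rem; i < len(s); i += 3 {
		if b.Len() > 0 {
			b.WriteByte(' ')
		}
		b.WriteString(s[i : i+3])
	}
	return b.String()
}

func main() {
	fmt.Println(groupThousands(999))     // 999
	fmt.Println(groupThousands(104700))  // 104 700
	fmt.Println(groupThousands(1234567)) // 1 234 567
}
```

The `b.Len() > 0` check covers the case where the length is a multiple of three and there is no leading partial group.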

406	internal/services/export_test.go	Normal file
@@ -0,0 +1,406 @@
package services

import (
	"bytes"
	"encoding/csv"
	"io"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/config"
)

func newTestProjectData(items []ExportItem, article string, serverCount int) *ProjectExportData {
	var unitTotal float64
	for _, item := range items {
		unitTotal += item.UnitPrice * float64(item.Quantity)
	}
	if serverCount < 1 {
		serverCount = 1
	}
	return &ProjectExportData{
		Configs: []ConfigExportBlock{
			{
				Article:     article,
				ServerCount: serverCount,
				UnitPrice:   unitTotal,
				Items:       items,
			},
		},
		CreatedAt: time.Now(),
	}
}

func TestToCSV_UTF8BOM(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{
			LotName:    "LOT-001",
			Category:   "CAT",
			Quantity:   1,
			UnitPrice:  100.0,
			TotalPrice: 100.0,
		},
	}, "TEST-ARTICLE", 1)

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	if len(csvBytes) < 3 {
		t.Fatalf("CSV too short to contain BOM")
	}

	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
	actualBOM := csvBytes[:3]
	if !bytes.Equal(actualBOM, expectedBOM) {
		t.Errorf("UTF-8 BOM mismatch. Expected %v, got %v", expectedBOM, actualBOM)
	}
}

func TestToCSV_SemicolonDelimiter(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{
			LotName:    "LOT-001",
			Category:   "CAT",
			Quantity:   2,
			UnitPrice:  100.50,
			TotalPrice: 201.00,
		},
	}, "TEST-ARTICLE", 1)

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
	reader.Comma = ';'

	// Read header
	header, err := reader.Read()
	if err != nil {
		t.Fatalf("Failed to read header: %v", err)
	}

	if len(header) != 8 {
		t.Errorf("Expected 8 columns, got %d", len(header))
	}

	expectedHeader := []string{"Line", "Type", "p/n", "Description", "Qty (1 pcs.)", "Qty (total)", "Price (1 pcs.)", "Price (total)"}
	for i, col := range expectedHeader {
		if i < len(header) && header[i] != col {
			t.Errorf("Column %d: expected %q, got %q", i, col, header[i])
		}
	}

	// Read server row
	serverRow, err := reader.Read()
	if err != nil {
		t.Fatalf("Failed to read server row: %v", err)
	}
	if serverRow[0] != "10" {
		t.Errorf("Expected line number 10, got %s", serverRow[0])
	}
	if serverRow[2] != "TEST-ARTICLE" {
		t.Errorf("Expected article TEST-ARTICLE, got %s", serverRow[2])
	}

	// Read component row
	itemRow, err := reader.Read()
	if err != nil {
		t.Fatalf("Failed to read item row: %v", err)
	}
	if itemRow[2] != "LOT-001" {
		t.Errorf("Lot name mismatch: expected LOT-001, got %s", itemRow[2])
	}
	if itemRow[4] != "2" {
		t.Errorf("Quantity mismatch: expected 2, got %s", itemRow[4])
	}
	if itemRow[6] != "100,5" {
		t.Errorf("Unit price mismatch: expected 100,5, got %s", itemRow[6])
	}
}

func TestToCSV_ServerRow(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{LotName: "LOT-001", Category: "CAT", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
		{LotName: "LOT-002", Category: "CAT", Quantity: 2, UnitPrice: 50.0, TotalPrice: 100.0},
	}, "DL380-ART", 10)

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
	reader.Comma = ';'

	// Skip header
	reader.Read()

	// Read server row
	serverRow, err := reader.Read()
	if err != nil {
		t.Fatalf("Failed to read server row: %v", err)
	}

	if serverRow[0] != "10" {
		t.Errorf("Expected line 10, got %s", serverRow[0])
	}
	if serverRow[2] != "DL380-ART" {
		t.Errorf("Expected article DL380-ART, got %s", serverRow[2])
	}
	if serverRow[5] != "10" {
		t.Errorf("Expected server count 10, got %s", serverRow[5])
	}
	// UnitPrice = 100 + 100 = 200
	if serverRow[6] != "200" {
		t.Errorf("Expected unit price 200, got %s", serverRow[6])
	}
	// TotalPrice = 200 * 10 = 2000
	if serverRow[7] != "2 000" {
		t.Errorf("Expected total price '2 000', got %q", serverRow[7])
	}
}

func TestToCSV_CategorySorting(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{LotName: "LOT-001", Category: "CAT-A", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
		{LotName: "LOT-002", Category: "CAT-C", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
		{LotName: "LOT-003", Category: "CAT-B", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
	}, "ART", 1)

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
	reader.Comma = ';'

	// Skip header and server row
	reader.Read()
	reader.Read()

	// Without category repo, items maintain original order
	row1, _ := reader.Read()
	if row1[2] != "LOT-001" {
		t.Errorf("Expected LOT-001 first, got %s", row1[2])
	}

	row2, _ := reader.Read()
	if row2[2] != "LOT-002" {
		t.Errorf("Expected LOT-002 second, got %s", row2[2])
	}

	row3, _ := reader.Read()
	if row3[2] != "LOT-003" {
		t.Errorf("Expected LOT-003 third, got %s", row3[2])
	}
}

func TestToCSV_EmptyData(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := &ProjectExportData{
		Configs:   []ConfigExportBlock{},
		CreatedAt: time.Now(),
	}

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
	reader.Comma = ';'

	header, err := reader.Read()
	if err != nil {
		t.Fatalf("Failed to read header: %v", err)
	}

	if len(header) != 8 {
		t.Errorf("Expected 8 columns, got %d", len(header))
	}

	// No more rows expected
	_, err = reader.Read()
	if err != io.EOF {
		t.Errorf("Expected EOF after header, got: %v", err)
	}
}

func TestToCSVBytes_BackwardCompat(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{LotName: "LOT-001", Category: "CAT", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
	}, "ART", 1)

	csvBytes, err := svc.ToCSVBytes(data)
	if err != nil {
		t.Fatalf("ToCSVBytes failed: %v", err)
	}

	if len(csvBytes) < 3 {
		t.Fatalf("CSV bytes too short")
	}

	expectedBOM := []byte{0xEF, 0xBB, 0xBF}
	actualBOM := csvBytes[:3]
	if !bytes.Equal(actualBOM, expectedBOM) {
		t.Errorf("UTF-8 BOM mismatch in ToCSVBytes")
	}
}

func TestToCSV_WriterError(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := newTestProjectData([]ExportItem{
		{LotName: "LOT-001", Category: "CAT", Quantity: 1, UnitPrice: 100.0, TotalPrice: 100.0},
	}, "ART", 1)

	fw := &failingWriter{}

	if err := svc.ToCSV(fw, data); err == nil {
		t.Errorf("Expected error from failing writer, got nil")
	}
}

func TestToCSV_MultipleBlocks(t *testing.T) {
	svc := NewExportService(config.ExportConfig{}, nil, nil)

	data := &ProjectExportData{
		Configs: []ConfigExportBlock{
			{
				Article:     "ART-1",
				ServerCount: 2,
				UnitPrice:   500.0,
				Items: []ExportItem{
					{LotName: "LOT-A", Category: "CPU", Quantity: 1, UnitPrice: 500.0, TotalPrice: 500.0},
				},
			},
			{
				Article:     "ART-2",
				ServerCount: 3,
				UnitPrice:   1000.0,
				Items: []ExportItem{
					{LotName: "LOT-B", Category: "MEM", Quantity: 2, UnitPrice: 500.0, TotalPrice: 1000.0},
				},
			},
		},
		CreatedAt: time.Now(),
	}

	var buf bytes.Buffer
	if err := svc.ToCSV(&buf, data); err != nil {
		t.Fatalf("ToCSV failed: %v", err)
	}

	csvBytes := buf.Bytes()
	reader := csv.NewReader(bytes.NewReader(csvBytes[3:]))
	reader.Comma = ';'
	reader.FieldsPerRecord = -1 // allow variable fields

	// Header
	reader.Read()

	// Block 1: server row
	srv1, _ := reader.Read()
	if srv1[0] != "10" {
		t.Errorf("Block 1 line: expected 10, got %s", srv1[0])
	}
	if srv1[7] != "1 000" {
		t.Errorf("Block 1 total: expected '1 000', got %q", srv1[7])
	}

	// Block 1: component row
	comp1, _ := reader.Read()
	if comp1[2] != "LOT-A" {
		t.Errorf("Block 1 component: expected LOT-A, got %s", comp1[2])
	}

	// Separator row
	sep, _ := reader.Read()
	allEmpty := true
	for _, v := range sep {
		if v != "" {
			allEmpty = false
		}
	}
	if !allEmpty {
		t.Errorf("Expected empty separator row, got %v", sep)
	}

	// Block 2: server row
	srv2, _ := reader.Read()
	if srv2[0] != "20" {
		t.Errorf("Block 2 line: expected 20, got %s", srv2[0])
	}
	if srv2[7] != "3 000" {
		t.Errorf("Block 2 total: expected '3 000', got %q", srv2[7])
	}
}

func TestFormatPriceWithSpace(t *testing.T) {
	tests := []struct {
		input    float64
		expected string
	}{
		{0, "0"},
		{100, "100"},
		{1000, "1 000"},
		{10470, "10 470"},
		{104700, "104 700"},
		{1000000, "1 000 000"},
	}

	for _, tt := range tests {
		result := formatPriceWithSpace(tt.input)
		if result != tt.expected {
			t.Errorf("formatPriceWithSpace(%v): expected %q, got %q", tt.input, tt.expected, result)
		}
	}
}

func TestFormatPriceComma(t *testing.T) {
	tests := []struct {
		input    float64
		expected string
	}{
		{100.0, "100"},
		{2074.5, "2074,5"},
		{100.50, "100,5"},
		{99.99, "99,99"},
		{0, "0"},
	}

	for _, tt := range tests {
		result := formatPriceComma(tt.input)
		if result != tt.expected {
			t.Errorf("formatPriceComma(%v): expected %q, got %q", tt.input, tt.expected, result)
		}
	}
}

// failingWriter always returns an error
type failingWriter struct{}

func (fw *failingWriter) Write(p []byte) (int, error) {
	return 0, io.EOF
}
1396	internal/services/local_configuration.go	Normal file
File diff suppressed because it is too large	Load Diff
567	internal/services/local_configuration_versioning_test.go	Normal file
@@ -0,0 +1,567 @@
package services

import (
	"encoding/json"
	"errors"
	"fmt"
	"path/filepath"
	"strings"
	"sync"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)

func TestSaveCreatesNewVersionAndUpdatesCurrentPointer(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "v1",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "v1",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 1000}},
		ServerCount: 1,
	}); err != nil {
		t.Fatalf("update config: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 2 {
		t.Fatalf("expected 2 versions, got %d", len(versions))
	}
	if versions[0].VersionNo != 1 || versions[1].VersionNo != 2 {
		t.Fatalf("expected version_no [1,2], got [%d,%d]", versions[0].VersionNo, versions[1].VersionNo)
	}

	cfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("load local config: %v", err)
	}
	if cfg.CurrentVersionID == nil || *cfg.CurrentVersionID != versions[1].ID {
		t.Fatalf("current_version_id should point to v2")
	}
}

func TestRollbackCreatesNewVersionWithTargetData(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "base",
		Items:       models.ConfigItems{{LotName: "RAM_A", Quantity: 2, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "base",
		Items:       models.ConfigItems{{LotName: "RAM_A", Quantity: 3, UnitPrice: 100}},
		ServerCount: 1,
	}); err != nil {
		t.Fatalf("update config: %v", err)
	}
	if _, err := service.RollbackToVersionWithNote(created.UUID, 1, "tester", "test rollback"); err != nil {
		t.Fatalf("rollback to v1: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 3 {
		t.Fatalf("expected 3 versions, got %d", len(versions))
	}
	if versions[2].VersionNo != 3 {
		t.Fatalf("expected v3 as rollback version, got v%d", versions[2].VersionNo)
	}
	if versions[2].Data != versions[0].Data {
		t.Fatalf("expected rollback snapshot data equal to v1 data")
	}
}

func TestUpdateNoAuthSkipsRevisionWhenSpecAndPriceUnchanged(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "dedupe",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	_, err = service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "dedupe",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("first update config: %v", err)
	}

	_, err = service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "dedupe",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("second update config: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 2 {
		t.Fatalf("expected 2 versions (create + first update), got %d", len(versions))
	}
	if versions[1].VersionNo != 2 {
		t.Fatalf("expected latest version_no=2, got %d", versions[1].VersionNo)
	}

	var pendingCount int64
	if err := local.DB().
		Table("pending_changes").
		Where("entity_type = ? AND entity_uuid = ?", "configuration", created.UUID).
		Count(&pendingCount).Error; err != nil {
		t.Fatalf("count pending changes: %v", err)
	}
	if pendingCount != 2 {
		t.Fatalf("expected 2 pending changes (create + first update), got %d", pendingCount)
	}
}

func TestAppendOnlyInvariantOldRowsUnchanged(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "initial",
		Items:       models.ConfigItems{{LotName: "SSD_A", Quantity: 1, UnitPrice: 300}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	versionsBefore := loadVersions(t, local, created.UUID)
	if len(versionsBefore) != 1 {
		t.Fatalf("expected exactly one version after create")
	}
	v1Before := versionsBefore[0]

	if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "initial",
		Items:       models.ConfigItems{{LotName: "SSD_A", Quantity: 2, UnitPrice: 300}},
		ServerCount: 1,
	}); err != nil {
		t.Fatalf("update config: %v", err)
	}
	if _, err := service.RollbackToVersion(created.UUID, 1, "tester"); err != nil {
		t.Fatalf("rollback: %v", err)
	}

	versionsAfter := loadVersions(t, local, created.UUID)
	if len(versionsAfter) != 3 {
		t.Fatalf("expected 3 versions, got %d", len(versionsAfter))
	}
	v1After := versionsAfter[0]

	if v1After.ID != v1Before.ID {
		t.Fatalf("v1 id changed: before=%s after=%s", v1Before.ID, v1After.ID)
	}
	if v1After.Data != v1Before.Data {
		t.Fatalf("v1 data changed")
	}
	if !v1After.CreatedAt.Equal(v1Before.CreatedAt) {
		t.Fatalf("v1 created_at changed")
	}
}

func TestConcurrentSaveNoDuplicateVersionNumbers(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "base",
		Items:       models.ConfigItems{{LotName: "NIC_A", Quantity: 1, UnitPrice: 150}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	const workers = 8
	start := make(chan struct{})
	errCh := make(chan error, workers)
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		i := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-start
			if err := updateWithRetry(service, created.UUID, i+2); err != nil {
				errCh <- err
			}
		}()
	}

	close(start)
	wg.Wait()
	close(errCh)

	for err := range errCh {
		if err != nil {
			t.Fatalf("concurrent save failed: %v", err)
		}
	}

	type counts struct {
		Total         int64
		DistinctCount int64
		Max           int
	}
	var c counts
	if err := local.DB().Raw(`
		SELECT
			COUNT(*) as total,
			COUNT(DISTINCT version_no) as distinct_count,
			COALESCE(MAX(version_no), 0) as max
		FROM local_configuration_versions
		WHERE configuration_uuid = ?`, created.UUID).Scan(&c).Error; err != nil {
		t.Fatalf("query version counts: %v", err)
	}

	if c.Total != c.DistinctCount {
		t.Fatalf("duplicate version numbers detected: total=%d distinct=%d", c.Total, c.DistinctCount)
	}
	expected := int64(workers + 1) // initial create version + each successful save
	if c.Total != expected || c.Max != int(expected) {
		t.Fatalf("expected total=max=%d, got total=%d max=%d", expected, c.Total, c.Max)
	}
}

func TestUpdateNoAuthKeepsProjectWhenProjectUUIDOmitted(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	project := &localdb.LocalProject{
		UUID:          "project-keep",
		OwnerUsername: "tester",
		Code:          "TEST-KEEP",
		Name:          ptrString("Keep Project"),
		IsActive:      true,
		CreatedAt:     time.Now(),
		UpdatedAt:     time.Now(),
		SyncStatus:    "synced",
	}
	if err := local.SaveProject(project); err != nil {
		t.Fatalf("save project: %v", err)
	}

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "cfg",
		ProjectUUID: &project.UUID,
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if created.ProjectUUID == nil || *created.ProjectUUID != project.UUID {
		t.Fatalf("expected created config project_uuid=%s", project.UUID)
	}

	updated, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "cfg-updated",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("update config without project_uuid: %v", err)
	}
	if updated.ProjectUUID == nil || *updated.ProjectUUID != project.UUID {
		t.Fatalf("expected project_uuid to stay %s after update, got %+v", project.UUID, updated.ProjectUUID)
	}
}

func TestUpdateNoAuthAllowsOrphanProjectWhenUUIDUnchanged(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	project := &localdb.LocalProject{
		UUID:          "project-orphan",
		OwnerUsername: "tester",
		Code:          "TEST-ORPHAN",
		Name:          ptrString("Orphan Project"),
		IsActive:      true,
		CreatedAt:     time.Now(),
		UpdatedAt:     time.Now(),
		SyncStatus:    "synced",
	}
	if err := local.SaveProject(project); err != nil {
		t.Fatalf("save project: %v", err)
	}

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "cfg",
		ProjectUUID: &project.UUID,
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	// Simulate missing project in local cache while config still references its UUID.
	if err := local.DB().Where("uuid = ?", project.UUID).Delete(&localdb.LocalProject{}).Error; err != nil {
		t.Fatalf("delete project: %v", err)
	}

	updated, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "cfg-updated",
		ProjectUUID: &project.UUID,
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("update config with orphan project_uuid: %v", err)
	}
	if updated.ProjectUUID == nil || *updated.ProjectUUID != project.UUID {
		t.Fatalf("expected project_uuid to stay %s after update, got %+v", project.UUID, updated.ProjectUUID)
	}
}

func TestUpdateNoAuthRecoversWhenCurrentVersionMissing(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "cfg",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	// Simulate corrupted/legacy versioning state:
	// local configuration exists, but all version rows are gone and pointer is stale.
	if err := local.DB().Where("configuration_uuid = ?", created.UUID).
		Delete(&localdb.LocalConfigurationVersion{}).Error; err != nil {
		t.Fatalf("delete versions: %v", err)
	}
	staleID := "missing-version-id"
	if err := local.DB().Model(&localdb.LocalConfiguration{}).
		Where("uuid = ?", created.UUID).
		Update("current_version_id", staleID).Error; err != nil {
		t.Fatalf("set stale current_version_id: %v", err)
	}

	updated, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "cfg-updated",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 100}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("update config with missing current version: %v", err)
	}

	if updated.Name != "cfg-updated" {
		t.Fatalf("expected updated name, got %q", updated.Name)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 1 {
		t.Fatalf("expected 1 recreated version, got %d", len(versions))
	}
	if versions[0].VersionNo != 1 {
		t.Fatalf("expected recreated version_no=1, got %d", versions[0].VersionNo)
	}
}

func ptrString(value string) *string {
	return &value
}

func newLocalConfigServiceForTest(t *testing.T) (*LocalConfigurationService, *localdb.LocalDB) {
	t.Helper()

	dbPath := filepath.Join(t.TempDir(), "local.db")
	local, err := localdb.New(dbPath)
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() {
		_ = local.Close()
	})

	return NewLocalConfigurationService(
		local,
		syncsvc.NewService(nil, local),
		&QuoteService{},
		func() bool { return false },
	), local
}

func loadVersions(t *testing.T, local *localdb.LocalDB, configurationUUID string) []localdb.LocalConfigurationVersion {
	t.Helper()
	var versions []localdb.LocalConfigurationVersion
	if err := local.DB().
		Where("configuration_uuid = ?", configurationUUID).
		Order("version_no ASC").
		Find(&versions).Error; err != nil {
		t.Fatalf("load versions: %v", err)
	}
	return versions
}

func updateWithRetry(service *LocalConfigurationService, uuid string, quantity int) error {
	var lastErr error
	for i := 0; i < 6; i++ {
		_, err := service.UpdateNoAuth(uuid, &CreateConfigRequest{
			Name:        "base",
			Items:       models.ConfigItems{{LotName: "NIC_A", Quantity: quantity, UnitPrice: 150}},
			ServerCount: 1,
		})
		if err == nil {
			return nil
		}
		lastErr = err
		if errors.Is(err, ErrVersionConflict) || strings.Contains(err.Error(), "database is locked") {
			time.Sleep(10 * time.Millisecond)
			continue
		}
		return err
	}
	return fmt.Errorf("update retries exhausted: %w", lastErr)
}

func TestRollbackVersionSnapshotJSONMatchesV1(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "initial",
		Items:       models.ConfigItems{{LotName: "GPU_A", Quantity: 1, UnitPrice: 2000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if _, err := service.UpdateNoAuth(created.UUID, &CreateConfigRequest{
		Name:        "initial",
		Items:       models.ConfigItems{{LotName: "GPU_A", Quantity: 2, UnitPrice: 2000}},
		ServerCount: 1,
	}); err != nil {
		t.Fatalf("update: %v", err)
	}
	if _, err := service.RollbackToVersion(created.UUID, 1, "tester"); err != nil {
		t.Fatalf("rollback: %v", err)
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 3 {
		t.Fatalf("expected 3 versions, got %d", len(versions))
	}

	var v1 map[string]any
	var v3 map[string]any
	if err := json.Unmarshal([]byte(versions[0].Data), &v1); err != nil {
		t.Fatalf("unmarshal v1: %v", err)
	}
	if err := json.Unmarshal([]byte(versions[2].Data), &v3); err != nil {
		t.Fatalf("unmarshal v3: %v", err)
	}
	if fmt.Sprintf("%v", v1["name"]) != fmt.Sprintf("%v", v3["name"]) {
		t.Fatalf("rollback snapshot differs from v1 snapshot by name")
	}
}

func TestDeleteMarksInactiveAndCreatesVersion(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "to-archive",
		Items:       models.ConfigItems{{LotName: "CPU_Z", Quantity: 1, UnitPrice: 500}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if err := service.DeleteNoAuth(created.UUID); err != nil {
		t.Fatalf("delete no auth: %v", err)
	}

	cfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("load archived config: %v", err)
	}
	if cfg.IsActive {
		t.Fatalf("expected config to be inactive after delete")
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 2 {
		t.Fatalf("expected 2 versions after archive, got %d", len(versions))
	}
	if versions[1].VersionNo != 2 {
		t.Fatalf("expected archive to create version 2, got %d", versions[1].VersionNo)
	}

	list, total, err := service.ListAll(1, 20)
	if err != nil {
		t.Fatalf("list all: %v", err)
	}
	if total != int64(len(list)) {
		t.Fatalf("unexpected total/list mismatch")
	}
	if len(list) != 0 {
		t.Fatalf("expected archived config to be hidden from list")
	}
}

func TestReactivateRestoresArchivedConfigurationAndCreatesVersion(t *testing.T) {
	service, local := newLocalConfigServiceForTest(t)

	created, err := service.Create("tester", &CreateConfigRequest{
		Name:        "to-reactivate",
		Items:       models.ConfigItems{{LotName: "CPU_R", Quantity: 1, UnitPrice: 700}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if err := service.DeleteNoAuth(created.UUID); err != nil {
		t.Fatalf("archive config: %v", err)
	}
	if _, err := service.ReactivateNoAuth(created.UUID); err != nil {
		t.Fatalf("reactivate config: %v", err)
	}

	cfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("load reactivated config: %v", err)
	}
	if !cfg.IsActive {
		t.Fatalf("expected config to be active after reactivation")
	}

	versions := loadVersions(t, local, created.UUID)
	if len(versions) != 3 {
		t.Fatalf("expected 3 versions after reactivation, got %d", len(versions))
	}
	if versions[2].VersionNo != 3 {
		t.Fatalf("expected reactivation version 3, got %d", versions[2].VersionNo)
	}

	list, _, err := service.ListAll(1, 20)
	if err != nil {
		t.Fatalf("list all after reactivation: %v", err)
	}
	if len(list) != 1 {
		t.Fatalf("expected reactivated config to be visible in list")
	}
}

@@ -1,121 +0,0 @@
package pricing

import (
	"math"
	"sort"
	"time"

	"github.com/mchus/quoteforge/internal/repository"
)

// CalculateMedian returns the median of prices
func CalculateMedian(prices []float64) float64 {
	if len(prices) == 0 {
		return 0
	}

	sorted := make([]float64, len(prices))
	copy(sorted, prices)
	sort.Float64s(sorted)

	n := len(sorted)
	if n%2 == 0 {
		return (sorted[n/2-1] + sorted[n/2]) / 2
	}
	return sorted[n/2]
}

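For an even number of samples the function averages the two middle values; for an odd count it takes the middle one. A quick standalone check, duplicating the function so it runs outside the deleted package:

```go
package main

import (
	"fmt"
	"sort"
)

// median mirrors CalculateMedian above: the middle value for an
// odd-length input, the average of the two middle values otherwise.
func median(prices []float64) float64 {
	if len(prices) == 0 {
		return 0
	}
	sorted := make([]float64, len(prices))
	copy(sorted, prices)
	sort.Float64s(sorted)
	n := len(sorted)
	if n%2 == 0 {
		return (sorted[n/2-1] + sorted[n/2]) / 2
	}
	return sorted[n/2]
}

func main() {
	fmt.Println(median([]float64{300, 100, 200}))      // 200
	fmt.Println(median([]float64{100, 200, 300, 400})) // 250
}
```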
// CalculateAverage returns the arithmetic mean of prices
func CalculateAverage(prices []float64) float64 {
	if len(prices) == 0 {
		return 0
	}

	var sum float64
	for _, p := range prices {
		sum += p
	}
	return sum / float64(len(prices))
}

// CalculateWeightedMedian calculates median with exponential decay weights
// More recent prices have higher weight
func CalculateWeightedMedian(points []repository.PricePoint, decayDays int) float64 {
	if len(points) == 0 {
		return 0
	}

	type weightedPrice struct {
		price  float64
		weight float64
	}

	now := time.Now()
	weighted := make([]weightedPrice, len(points))
	var totalWeight float64

	for i, p := range points {
		daysSince := now.Sub(p.Date).Hours() / 24
		// weight = e^(-days / decay_days)
		weight := math.Exp(-daysSince / float64(decayDays))
		weighted[i] = weightedPrice{price: p.Price, weight: weight}
		totalWeight += weight
	}

	// Sort by price
	sort.Slice(weighted, func(i, j int) bool {
		return weighted[i].price < weighted[j].price
	})

	// Find weighted median
	targetWeight := totalWeight / 2
	var cumulativeWeight float64

	for _, wp := range weighted {
		cumulativeWeight += wp.weight
		if cumulativeWeight >= targetWeight {
			return wp.price
		}
	}

	return weighted[len(weighted)-1].price
}

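The decay gives a quote from `decayDays` days ago a weight of 1/e relative to one from today, so a single fresh quote can outweigh several stale ones. A self-contained sketch of the same weighted-median walk, using fixed sample ages instead of `time.Now()` so the result is deterministic:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

type sample struct {
	price   float64
	ageDays float64
}

// weightedMedian mirrors the algorithm above: exponentially decayed
// weights, sort by price, then walk the cumulative weight until half
// of the total weight is covered.
func weightedMedian(samples []sample, decayDays float64) float64 {
	type wp struct{ price, weight float64 }
	weighted := make([]wp, len(samples))
	var total float64
	for i, s := range samples {
		w := math.Exp(-s.ageDays / decayDays) // 1.0 today, 1/e at decayDays
		weighted[i] = wp{s.price, w}
		total += w
	}
	sort.Slice(weighted, func(i, j int) bool { return weighted[i].price < weighted[j].price })
	cum := 0.0
	for _, x := range weighted {
		cum += x.weight
		if cum >= total/2 {
			return x.price
		}
	}
	return weighted[len(weighted)-1].price
}

func main() {
	// One fresh 200 quote outweighs two much older, cheaper quotes:
	// weights ≈ 0.05 (90d), 0.14 (60d), 1.0 (today) at decayDays=30.
	s := []sample{{price: 100, ageDays: 90}, {price: 110, ageDays: 60}, {price: 200, ageDays: 0}}
	fmt.Println(weightedMedian(s, 30)) // 200
}
```

With equal ages the walk degenerates to the ordinary median, which is a useful sanity check.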
// CalculatePercentile calculates the nth percentile of prices
func CalculatePercentile(prices []float64, percentile float64) float64 {
	if len(prices) == 0 {
		return 0
	}

	sorted := make([]float64, len(prices))
	copy(sorted, prices)
	sort.Float64s(sorted)

	index := (percentile / 100) * float64(len(sorted)-1)
	lower := int(math.Floor(index))
	upper := int(math.Ceil(index))

	if lower == upper {
		return sorted[lower]
	}

	fraction := index - float64(lower)
	return sorted[lower]*(1-fraction) + sorted[upper]*fraction
}

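This is linear interpolation between the two order statistics that bracket the fractional index, so percentile 0 is the minimum and percentile 100 the maximum. For four samples, the 25th percentile lands at index 0.75; a standalone check:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile mirrors CalculatePercentile above: linear interpolation
// between the two sorted values bracketing the fractional rank.
func percentile(prices []float64, p float64) float64 {
	if len(prices) == 0 {
		return 0
	}
	sorted := make([]float64, len(prices))
	copy(sorted, prices)
	sort.Float64s(sorted)

	index := (p / 100) * float64(len(sorted)-1)
	lower := int(math.Floor(index))
	upper := int(math.Ceil(index))
	if lower == upper {
		return sorted[lower]
	}
	fraction := index - float64(lower)
	return sorted[lower]*(1-fraction) + sorted[upper]*fraction
}

func main() {
	prices := []float64{10, 20, 30, 40}
	// index = 0.25 * 3 = 0.75 → 10*0.25 + 20*0.75 = 17.5
	fmt.Println(percentile(prices, 25))  // 17.5
	fmt.Println(percentile(prices, 0))   // 10 (min)
	fmt.Println(percentile(prices, 100)) // 40 (max)
}
```

This interpolation scheme is why `GetPriceStats` below can reuse the same function for `MinPrice` (percentile 0) and `MaxPrice` (percentile 100).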
// CalculateStdDev calculates standard deviation
func CalculateStdDev(prices []float64) float64 {
	if len(prices) < 2 {
		return 0
	}

	mean := CalculateAverage(prices)
	var sumSquares float64

	for _, p := range prices {
		diff := p - mean
		sumSquares += diff * diff
	}

	return math.Sqrt(sumSquares / float64(len(prices)-1))
}

@@ -1,178 +0,0 @@
package pricing

import (
	"time"

	"github.com/mchus/quoteforge/internal/config"
	"github.com/mchus/quoteforge/internal/models"
	"github.com/mchus/quoteforge/internal/repository"
)

type Service struct {
	componentRepo *repository.ComponentRepository
	priceRepo     *repository.PriceRepository
	config        config.PricingConfig
}

func NewService(
	componentRepo *repository.ComponentRepository,
	priceRepo *repository.PriceRepository,
	cfg config.PricingConfig,
) *Service {
	return &Service{
		componentRepo: componentRepo,
		priceRepo:     priceRepo,
		config:        cfg,
	}
}

// GetEffectivePrice returns the current effective price for a component
// Priority: active override > calculated price > nil
func (s *Service) GetEffectivePrice(lotName string) (*float64, error) {
	// Check for active override first
	override, err := s.priceRepo.GetPriceOverride(lotName)
	if err == nil && override != nil {
		return &override.Price, nil
	}

	// Get component metadata
	component, err := s.componentRepo.GetByLotName(lotName)
	if err != nil {
		return nil, err
	}

	return component.CurrentPrice, nil
}

// CalculatePrice calculates price using the specified method
func (s *Service) CalculatePrice(lotName string, method models.PriceMethod, periodDays int) (float64, error) {
	if periodDays == 0 {
		periodDays = s.config.DefaultPeriodDays
	}

	points, err := s.priceRepo.GetPriceHistory(lotName, periodDays)
	if err != nil {
		return 0, err
	}

	if len(points) == 0 {
		return 0, nil
	}

	prices := make([]float64, len(points))
	for i, p := range points {
		prices[i] = p.Price
	}

	switch method {
	case models.PriceMethodAverage:
		return CalculateAverage(prices), nil
	case models.PriceMethodWeightedMedian:
		return CalculateWeightedMedian(points, s.config.DefaultPeriodDays), nil
	case models.PriceMethodMedian:
		fallthrough
	default:
		return CalculateMedian(prices), nil
	}
}

// UpdateComponentPrice recalculates and updates the price for a component
func (s *Service) UpdateComponentPrice(lotName string) error {
	component, err := s.componentRepo.GetByLotName(lotName)
	if err != nil {
		return err
	}

	price, err := s.CalculatePrice(lotName, component.PriceMethod, component.PricePeriodDays)
	if err != nil {
		return err
	}

	now := time.Now()
	if price > 0 {
		component.CurrentPrice = &price
		component.PriceUpdatedAt = &now
	}

	return s.componentRepo.Update(component)
}

// SetManualPrice sets a manual price override
func (s *Service) SetManualPrice(lotName string, price float64, reason string, userID uint) error {
	override := &models.PriceOverride{
		LotName:   lotName,
		Price:     price,
		ValidFrom: time.Now(),
		Reason:    reason,
		CreatedBy: userID,
	}
	return s.priceRepo.CreatePriceOverride(override)
}

// UpdatePriceMethod changes the pricing method for a component
func (s *Service) UpdatePriceMethod(lotName string, method models.PriceMethod, periodDays int) error {
	component, err := s.componentRepo.GetByLotName(lotName)
	if err != nil {
		return err
	}

	component.PriceMethod = method
	if periodDays > 0 {
		component.PricePeriodDays = periodDays
	}

	if err := s.componentRepo.Update(component); err != nil {
		return err
	}

	return s.UpdateComponentPrice(lotName)
}

// GetPriceStats returns statistics for a component's price history
func (s *Service) GetPriceStats(lotName string, periodDays int) (*PriceStats, error) {
	if periodDays == 0 {
		periodDays = s.config.DefaultPeriodDays
	}

	points, err := s.priceRepo.GetPriceHistory(lotName, periodDays)
	if err != nil {
		return nil, err
	}

	if len(points) == 0 {
		return &PriceStats{QuoteCount: 0}, nil
	}

	prices := make([]float64, len(points))
	for i, p := range points {
		prices[i] = p.Price
	}

	return &PriceStats{
		QuoteCount:   len(points),
		MinPrice:     CalculatePercentile(prices, 0),
		MaxPrice:     CalculatePercentile(prices, 100),
		MedianPrice:  CalculateMedian(prices),
		AveragePrice: CalculateAverage(prices),
		StdDeviation: CalculateStdDev(prices),
		LatestPrice:  points[0].Price,
		LatestDate:   points[0].Date,
		OldestDate:   points[len(points)-1].Date,
		Percentile25: CalculatePercentile(prices, 25),
		Percentile75: CalculatePercentile(prices, 75),
	}, nil
}

type PriceStats struct {
	QuoteCount   int       `json:"quote_count"`
	MinPrice     float64   `json:"min_price"`
	MaxPrice     float64   `json:"max_price"`
	MedianPrice  float64   `json:"median_price"`
	AveragePrice float64   `json:"average_price"`
	StdDeviation float64   `json:"std_deviation"`
	LatestPrice  float64   `json:"latest_price"`
	LatestDate   time.Time `json:"latest_date"`
	OldestDate   time.Time `json:"oldest_date"`
	Percentile25 float64   `json:"percentile_25"`
	Percentile75 float64   `json:"percentile_75"`
}

391
internal/services/project.go
Normal file
@@ -0,0 +1,391 @@
package services

import (
	"encoding/json"
	"errors"
	"fmt"
	"net/url"
	"strings"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/services/sync"
	"github.com/google/uuid"
	"gorm.io/gorm"
)

var (
	ErrProjectNotFound         = errors.New("project not found")
	ErrProjectForbidden        = errors.New("access to project forbidden")
	ErrProjectCodeExists       = errors.New("project code and variant already exist")
	ErrCannotDeleteMainVariant = errors.New("cannot delete main variant")
)

type ProjectService struct {
	localDB *localdb.LocalDB
}

func NewProjectService(localDB *localdb.LocalDB) *ProjectService {
	return &ProjectService{localDB: localDB}
}

type CreateProjectRequest struct {
	Code       string  `json:"code"`
	Variant    string  `json:"variant,omitempty"`
	Name       *string `json:"name,omitempty"`
	TrackerURL string  `json:"tracker_url"`
}

type UpdateProjectRequest struct {
	Code       *string `json:"code,omitempty"`
	Variant    *string `json:"variant,omitempty"`
	Name       *string `json:"name,omitempty"`
	TrackerURL *string `json:"tracker_url,omitempty"`
}

type ProjectConfigurationsResult struct {
	ProjectUUID string                 `json:"project_uuid"`
	Configs     []models.Configuration `json:"configurations"`
	Total       float64                `json:"total"`
}

func (s *ProjectService) Create(ownerUsername string, req *CreateProjectRequest) (*models.Project, error) {
	var namePtr *string
	if req.Name != nil {
		name := strings.TrimSpace(*req.Name)
		if name != "" {
			namePtr = &name
		}
	}
	code := strings.TrimSpace(req.Code)
	if code == "" {
		return nil, fmt.Errorf("project code is required")
	}
	variant := strings.TrimSpace(req.Variant)
	if err := s.ensureUniqueProjectCodeVariant("", code, variant); err != nil {
		return nil, err
	}

	now := time.Now()
	localProject := &localdb.LocalProject{
		UUID:          uuid.NewString(),
		OwnerUsername: ownerUsername,
		Code:          code,
		Variant:       variant,
		Name:          namePtr,
		TrackerURL:    normalizeProjectTrackerURL(code, req.TrackerURL),
		IsActive:      true,
		IsSystem:      false,
		CreatedAt:     now,
		UpdatedAt:     now,
		SyncStatus:    "pending",
	}
	if err := s.localDB.SaveProject(localProject); err != nil {
		return nil, err
	}
	if err := s.enqueueProjectPendingChange(localProject, "create"); err != nil {
		return nil, err
	}
	return localdb.LocalToProject(localProject), nil
}

func (s *ProjectService) Update(projectUUID, ownerUsername string, req *UpdateProjectRequest) (*models.Project, error) {
	localProject, err := s.localDB.GetProjectByUUID(projectUUID)
	if err != nil {
		return nil, ErrProjectNotFound
	}

	if req.Code != nil {
		code := strings.TrimSpace(*req.Code)
		if code == "" {
			return nil, fmt.Errorf("project code is required")
		}
		localProject.Code = code
	}
	if req.Variant != nil {
		localProject.Variant = strings.TrimSpace(*req.Variant)
	}
	if err := s.ensureUniqueProjectCodeVariant(projectUUID, localProject.Code, localProject.Variant); err != nil {
		return nil, err
	}

	if req.Name != nil {
		name := strings.TrimSpace(*req.Name)
		if name == "" {
			localProject.Name = nil
		} else {
			localProject.Name = &name
		}
	}
	if req.TrackerURL != nil {
		localProject.TrackerURL = normalizeProjectTrackerURL(localProject.Code, *req.TrackerURL)
	} else if strings.TrimSpace(localProject.TrackerURL) == "" {
		localProject.TrackerURL = normalizeProjectTrackerURL(localProject.Code, "")
	}
	localProject.UpdatedAt = time.Now()
	localProject.SyncStatus = "pending"
	if err := s.localDB.SaveProject(localProject); err != nil {
		return nil, err
	}
	if err := s.enqueueProjectPendingChange(localProject, "update"); err != nil {
		return nil, err
	}
	return localdb.LocalToProject(localProject), nil
}

func (s *ProjectService) ensureUniqueProjectCodeVariant(excludeUUID, code, variant string) error {
	normalizedCode := normalizeProjectCode(code)
	normalizedVariant := normalizeProjectVariant(variant)
	if normalizedCode == "" {
		return fmt.Errorf("project code is required")
	}

	projects, err := s.localDB.GetAllProjects(true)
	if err != nil {
		return err
	}
	for i := range projects {
		project := projects[i]
		if excludeUUID != "" && project.UUID == excludeUUID {
			continue
		}
		if normalizeProjectCode(project.Code) == normalizedCode &&
			normalizeProjectVariant(project.Variant) == normalizedVariant {
			return ErrProjectCodeExists
		}
	}
	return nil
}

func normalizeProjectCode(code string) string {
	return strings.ToLower(strings.TrimSpace(code))
}

func normalizeProjectVariant(variant string) string {
	return strings.ToLower(strings.TrimSpace(variant))
}

func (s *ProjectService) Archive(projectUUID, ownerUsername string) error {
	return s.setProjectActive(projectUUID, ownerUsername, false)
}

func (s *ProjectService) Reactivate(projectUUID, ownerUsername string) error {
	return s.setProjectActive(projectUUID, ownerUsername, true)
}

func (s *ProjectService) DeleteVariant(projectUUID, ownerUsername string) error {
	localProject, err := s.localDB.GetProjectByUUID(projectUUID)
	if err != nil {
		return ErrProjectNotFound
	}
	if strings.TrimSpace(localProject.Variant) == "" {
		return ErrCannotDeleteMainVariant
	}
	return s.setProjectActive(projectUUID, ownerUsername, false)
}

func (s *ProjectService) setProjectActive(projectUUID, ownerUsername string, isActive bool) error {
	return s.localDB.DB().Transaction(func(tx *gorm.DB) error {
		var project localdb.LocalProject
		if err := tx.Where("uuid = ?", projectUUID).First(&project).Error; err != nil {
			return ErrProjectNotFound
		}
		if project.IsActive == isActive {
			return nil
		}

		project.IsActive = isActive
		project.UpdatedAt = time.Now()
		project.SyncStatus = "pending"
		if err := tx.Save(&project).Error; err != nil {
			return err
		}

		if err := s.enqueueProjectPendingChangeTx(tx, &project, boolToOp(isActive, "reactivate", "archive")); err != nil {
			return err
		}

		var configs []localdb.LocalConfiguration
		if err := tx.Where("project_uuid = ?", projectUUID).Find(&configs).Error; err != nil {
			return err
		}
		for i := range configs {
			cfg := configs[i]
			cfg.IsActive = isActive
			cfg.SyncStatus = "pending"
			cfg.UpdatedAt = time.Now()
			if err := tx.Save(&cfg).Error; err != nil {
				return err
			}

			modelCfg := localdb.LocalToConfiguration(&cfg)
			payload, err := json.Marshal(modelCfg)
			if err != nil {
				return err
			}
			change := &localdb.PendingChange{
				EntityType: "configuration",
				EntityUUID: cfg.UUID,
				Operation:  "update",
				Payload:    string(payload),
				CreatedAt:  time.Now(),
				Attempts:   0,
			}
			if err := tx.Create(change).Error; err != nil {
				return err
			}
		}

		return nil
	})
}

func (s *ProjectService) ListByUser(ownerUsername string, includeArchived bool) ([]models.Project, error) {
	localProjects, err := s.localDB.GetAllProjects(includeArchived)
	if err != nil {
		return nil, err
	}

	projects := make([]models.Project, 0, len(localProjects))
	for i := range localProjects {
		projects = append(projects, *localdb.LocalToProject(&localProjects[i]))
	}
	return projects, nil
}

func (s *ProjectService) GetByUUID(projectUUID, ownerUsername string) (*models.Project, error) {
	localProject, err := s.localDB.GetProjectByUUID(projectUUID)
	if err != nil {
		return nil, ErrProjectNotFound
	}
	return localdb.LocalToProject(localProject), nil
}

func (s *ProjectService) ListConfigurations(projectUUID, ownerUsername, status string) (*ProjectConfigurationsResult, error) {
	project, err := s.GetByUUID(projectUUID, ownerUsername)
	if err != nil {
		return nil, err
	}
	if !project.IsActive && status == "active" {
		return &ProjectConfigurationsResult{
			ProjectUUID: projectUUID,
			Configs:     []models.Configuration{},
			Total:       0,
		}, nil
	}

	var localConfigs []localdb.LocalConfiguration
	if err := s.localDB.DB().Preload("CurrentVersion").Order("created_at DESC").Find(&localConfigs).Error; err != nil {
		return nil, err
	}

	configs := make([]models.Configuration, 0, len(localConfigs))
	total := 0.0
	for i := range localConfigs {
		localCfg := localConfigs[i]
		if localCfg.ProjectUUID == nil || *localCfg.ProjectUUID != projectUUID {
			continue
		}
		switch status {
		case "active", "":
			if !localCfg.IsActive {
				continue
			}
		case "archived":
			if localCfg.IsActive {
				continue
			}
		case "all":
		default:
			if !localCfg.IsActive {
				continue
			}
		}

		cfg := localdb.LocalToConfiguration(&localCfg)
		if cfg.TotalPrice != nil {
			total += *cfg.TotalPrice
		}
		configs = append(configs, *cfg)
	}

	return &ProjectConfigurationsResult{
		ProjectUUID: projectUUID,
		Configs:     configs,
		Total:       total,
	}, nil
}

func (s *ProjectService) ResolveProjectUUID(ownerUsername string, projectUUID *string) (*string, error) {
	if projectUUID == nil || strings.TrimSpace(*projectUUID) == "" {
		project, err := s.localDB.EnsureDefaultProject(ownerUsername)
		if err != nil {
			return nil, err
		}
		return &project.UUID, nil
	}

	project, err := s.localDB.GetProjectByUUID(strings.TrimSpace(*projectUUID))
	if err != nil {
		return nil, ErrProjectNotFound
	}
	if project.OwnerUsername != ownerUsername {
		return nil, ErrProjectForbidden
	}
	if !project.IsActive {
		return nil, fmt.Errorf("project is archived")
	}

	resolved := project.UUID
	return &resolved, nil
}

func normalizeProjectTrackerURL(projectCode, trackerURL string) string {
	trimmedURL := strings.TrimSpace(trackerURL)
	if trimmedURL != "" {
		return trimmedURL
	}

	trimmedCode := strings.TrimSpace(projectCode)
	if trimmedCode == "" {
		return ""
	}

	return "https://tracker.yandex.ru/" + url.PathEscape(trimmedCode)
}

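The fallback URL runs the project code through `url.PathEscape`, so a code containing spaces or slashes still yields a single valid path segment. A standalone sketch of the same normalization (the tracker host is the one hard-coded above):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// trackerURL mirrors normalizeProjectTrackerURL above: an explicitly
// provided URL wins; otherwise one is derived from the project code.
func trackerURL(projectCode, explicit string) string {
	if trimmed := strings.TrimSpace(explicit); trimmed != "" {
		return trimmed
	}
	code := strings.TrimSpace(projectCode)
	if code == "" {
		return ""
	}
	// PathEscape keeps the code a single path segment
	// ("/" becomes %2F, " " becomes %20).
	return "https://tracker.yandex.ru/" + url.PathEscape(code)
}

func main() {
	fmt.Println(trackerURL("PROJ-42", ""))                          // https://tracker.yandex.ru/PROJ-42
	fmt.Println(trackerURL("a/b c", ""))                            // https://tracker.yandex.ru/a%2Fb%20c
	fmt.Println(trackerURL("PROJ-42", "https://example.com/board")) // explicit URL wins
}
```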
||||||
|
func (s *ProjectService) enqueueProjectPendingChange(project *localdb.LocalProject, operation string) error {
|
||||||
|
return s.enqueueProjectPendingChangeTx(s.localDB.DB(), project, operation)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s *ProjectService) enqueueProjectPendingChangeTx(tx *gorm.DB, project *localdb.LocalProject, operation string) error {
|
||||||
|
payload := sync.ProjectChangePayload{
|
||||||
|
EventID: uuid.NewString(),
|
||||||
|
ProjectUUID: project.UUID,
|
||||||
|
Operation: operation,
|
||||||
|
Snapshot: *localdb.LocalToProject(project),
|
||||||
|
CreatedAt: time.Now().UTC(),
|
||||||
|
IdempotencyKey: fmt.Sprintf("%s:%d:%s", project.UUID, project.UpdatedAt.UnixNano(), operation),
|
||||||
|
}
|
||||||
|
raw, err := json.Marshal(payload)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
change := &localdb.PendingChange{
|
||||||
|
EntityType: "project",
|
||||||
|
EntityUUID: project.UUID,
|
||||||
|
Operation: operation,
|
||||||
|
Payload: string(raw),
|
||||||
|
CreatedAt: time.Now(),
|
||||||
|
Attempts: 0,
|
||||||
|
}
|
||||||
|
return tx.Create(change).Error
|
||||||
|
}
|
||||||
|
|
||||||
|
func boolToOp(v bool, whenTrue, whenFalse string) string {
|
||||||
|
if v {
|
||||||
|
return whenTrue
|
||||||
|
}
|
||||||
|
return whenFalse
|
||||||
|
}
|
||||||
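The tracker-URL fallback above can be exercised in isolation. A minimal standalone sketch (the function body is copied from the diff; the `main` driver and the sample inputs are illustrative only):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// normalizeProjectTrackerURL prefers an explicit URL; otherwise it builds a
// Yandex Tracker link from the project code, path-escaping the code.
func normalizeProjectTrackerURL(projectCode, trackerURL string) string {
	if trimmed := strings.TrimSpace(trackerURL); trimmed != "" {
		return trimmed
	}
	code := strings.TrimSpace(projectCode)
	if code == "" {
		return ""
	}
	return "https://tracker.yandex.ru/" + url.PathEscape(code)
}

func main() {
	fmt.Println(normalizeProjectTrackerURL("QF-12", ""))            // built from code
	fmt.Println(normalizeProjectTrackerURL("QF-12", " https://x ")) // explicit URL wins
	fmt.Println(normalizeProjectTrackerURL("A/B", ""))              // "/" is path-escaped
}
```

Note that `url.PathEscape` escapes `/` inside a code, so a malformed project code cannot inject extra path segments into the tracker link.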
@@ -2,36 +2,59 @@ package services
 
 import (
 	"errors"
+	"fmt"
+	"sync"
+	"time"
 
-	"github.com/mchus/quoteforge/internal/models"
-	"github.com/mchus/quoteforge/internal/repository"
-	"github.com/mchus/quoteforge/internal/services/pricing"
+	"git.mchus.pro/mchus/quoteforge/internal/localdb"
+	"git.mchus.pro/mchus/quoteforge/internal/models"
+	"git.mchus.pro/mchus/quoteforge/internal/repository"
 )
 
 var (
 	ErrEmptyQuote        = errors.New("quote cannot be empty")
 	ErrComponentNotFound = errors.New("component not found")
 	ErrNoPriceAvailable  = errors.New("no price available for component")
 )
 
 type QuoteService struct {
 	componentRepo *repository.ComponentRepository
 	statsRepo     *repository.StatsRepository
-	pricingService *pricing.Service
+	pricelistRepo  *repository.PricelistRepository
+	localDB        *localdb.LocalDB
+	pricingService priceResolver
+	cacheMu        sync.RWMutex
+	priceCache     map[string]cachedLotPrice
+	cacheTTL       time.Duration
 }
 
+type priceResolver interface {
+	GetEffectivePrice(lotName string) (*float64, error)
+}
+
 func NewQuoteService(
 	componentRepo *repository.ComponentRepository,
 	statsRepo *repository.StatsRepository,
-	pricingService *pricing.Service,
+	pricelistRepo *repository.PricelistRepository,
+	localDB *localdb.LocalDB,
+	pricingService priceResolver,
 ) *QuoteService {
 	return &QuoteService{
 		componentRepo: componentRepo,
 		statsRepo:     statsRepo,
+		pricelistRepo:  pricelistRepo,
+		localDB:        localDB,
 		pricingService: pricingService,
+		priceCache:     make(map[string]cachedLotPrice, 4096),
+		cacheTTL:       10 * time.Second,
 	}
 }
+type cachedLotPrice struct {
+	price     *float64
+	expiresAt time.Time
+}
+
 type QuoteItem struct {
 	LotName  string `json:"lot_name"`
 	Quantity int    `json:"quantity"`
@@ -43,11 +66,11 @@ type QuoteItem struct {
 }
 
 type QuoteValidationResult struct {
 	Valid    bool        `json:"valid"`
 	Items    []QuoteItem `json:"items"`
 	Errors   []string    `json:"errors"`
 	Warnings []string    `json:"warnings"`
 	Total    float64     `json:"total"`
 }
 
 type QuoteRequest struct {
@@ -55,6 +78,36 @@ type QuoteRequest struct {
 		LotName  string `json:"lot_name"`
 		Quantity int    `json:"quantity"`
 	} `json:"items"`
+	PricelistID *uint `json:"pricelist_id,omitempty"` // Optional: use specific pricelist for pricing
 }
 
+type PriceLevelsRequest struct {
+	Items []struct {
+		LotName  string `json:"lot_name"`
+		Quantity int    `json:"quantity"`
+	} `json:"items"`
+	PricelistIDs map[string]uint `json:"pricelist_ids,omitempty"`
+	NoCache      bool            `json:"no_cache,omitempty"`
+}
+
+type PriceLevelsItem struct {
+	LotName              string   `json:"lot_name"`
+	Quantity             int      `json:"quantity"`
+	EstimatePrice        *float64 `json:"estimate_price"`
+	WarehousePrice       *float64 `json:"warehouse_price"`
+	CompetitorPrice      *float64 `json:"competitor_price"`
+	DeltaWhEstimateAbs   *float64 `json:"delta_wh_estimate_abs"`
+	DeltaWhEstimatePct   *float64 `json:"delta_wh_estimate_pct"`
+	DeltaCompEstimateAbs *float64 `json:"delta_comp_estimate_abs"`
+	DeltaCompEstimatePct *float64 `json:"delta_comp_estimate_pct"`
+	DeltaCompWhAbs       *float64 `json:"delta_comp_wh_abs"`
+	DeltaCompWhPct       *float64 `json:"delta_comp_wh_pct"`
+	PriceMissing         []string `json:"price_missing"`
+}
+
+type PriceLevelsResult struct {
+	Items                []PriceLevelsItem `json:"items"`
+	ResolvedPricelistIDs map[string]uint   `json:"resolved_pricelist_ids"`
+}
 
 func (s *QuoteService) ValidateAndCalculate(req *QuoteRequest) (*QuoteValidationResult, error) {
@@ -62,6 +115,70 @@ func (s *QuoteService) ValidateAndCalculate(req *QuoteRequest) (*QuoteValidation
 		return nil, ErrEmptyQuote
 	}
+
+	// Strict local-first path: calculations use local SQLite snapshot regardless of online status.
+	if s.localDB != nil {
+		result := &QuoteValidationResult{
+			Valid:    true,
+			Items:    make([]QuoteItem, 0, len(req.Items)),
+			Errors:   make([]string, 0),
+			Warnings: make([]string, 0),
+		}
+
+		// Determine which pricelist to use for pricing
+		pricelistID := req.PricelistID
+		if pricelistID == nil || *pricelistID == 0 {
+			// By default, use latest estimate pricelist
+			latestPricelist, err := s.localDB.GetLatestLocalPricelistBySource("estimate")
+			if err == nil && latestPricelist != nil {
+				pricelistID = &latestPricelist.ServerID
+			}
+		}
+
+		var total float64
+		for _, reqItem := range req.Items {
+			localComp, err := s.localDB.GetLocalComponent(reqItem.LotName)
+			if err != nil {
+				result.Valid = false
+				result.Errors = append(result.Errors, "Component not found: "+reqItem.LotName)
+				continue
+			}
+
+			item := QuoteItem{
+				LotName:     reqItem.LotName,
+				Quantity:    reqItem.Quantity,
+				Description: localComp.LotDescription,
+				Category:    localComp.Category,
+				HasPrice:    false,
+				UnitPrice:   0,
+				TotalPrice:  0,
+			}
+
+			// Get price from pricelist_items
+			if pricelistID != nil {
+				price, found := s.lookupPriceByPricelistID(*pricelistID, reqItem.LotName)
+				if found && price > 0 {
+					item.UnitPrice = price
+					item.TotalPrice = price * float64(reqItem.Quantity)
+					item.HasPrice = true
+					total += item.TotalPrice
+				} else {
+					result.Warnings = append(result.Warnings, "No price available for: "+reqItem.LotName)
+				}
+			} else {
+				result.Warnings = append(result.Warnings, "No pricelist available for: "+reqItem.LotName)
+			}
+
+			result.Items = append(result.Items, item)
+		}
+
+		result.Total = total
+		return result, nil
+	}
+
+	if s.componentRepo == nil || s.pricingService == nil {
+		return nil, errors.New("quote calculation not available")
+	}
 
 	result := &QuoteValidationResult{
 		Valid: true,
 		Items: make([]QuoteItem, 0, len(req.Items)),
@@ -127,8 +244,265 @@ func (s *QuoteService) ValidateAndCalculate(req *QuoteRequest) (*QuoteValidation
 	return result, nil
 }
 
+func (s *QuoteService) CalculatePriceLevels(req *PriceLevelsRequest) (*PriceLevelsResult, error) {
+	if len(req.Items) == 0 {
+		return nil, ErrEmptyQuote
+	}
+
+	lotNames := make([]string, 0, len(req.Items))
+	seenLots := make(map[string]struct{}, len(req.Items))
+	for _, reqItem := range req.Items {
+		if _, ok := seenLots[reqItem.LotName]; ok {
+			continue
+		}
+		seenLots[reqItem.LotName] = struct{}{}
+		lotNames = append(lotNames, reqItem.LotName)
+	}
+
+	result := &PriceLevelsResult{
+		Items:                make([]PriceLevelsItem, 0, len(req.Items)),
+		ResolvedPricelistIDs: map[string]uint{},
+	}
+
+	type levelState struct {
+		id     uint
+		prices map[string]float64
+	}
+	levelBySource := map[models.PricelistSource]*levelState{
+		models.PricelistSourceEstimate:   {prices: map[string]float64{}},
+		models.PricelistSourceWarehouse:  {prices: map[string]float64{}},
+		models.PricelistSourceCompetitor: {prices: map[string]float64{}},
+	}
+
+	for source, st := range levelBySource {
+		sourceKey := string(source)
+		if req.PricelistIDs != nil {
+			if explicitID, ok := req.PricelistIDs[sourceKey]; ok && explicitID > 0 {
+				st.id = explicitID
+				result.ResolvedPricelistIDs[sourceKey] = explicitID
+			}
+		}
+		if st.id == 0 && s.pricelistRepo != nil {
+			latest, err := s.pricelistRepo.GetLatestActiveBySource(sourceKey)
+			if err == nil {
+				st.id = latest.ID
+				result.ResolvedPricelistIDs[sourceKey] = latest.ID
+			}
+		}
+		if st.id == 0 {
+			continue
+		}
+		prices, err := s.lookupPricesByPricelistID(st.id, lotNames, req.NoCache)
+		if err == nil {
+			st.prices = prices
+		}
+	}
+
+	for _, reqItem := range req.Items {
+		item := PriceLevelsItem{
+			LotName:      reqItem.LotName,
+			Quantity:     reqItem.Quantity,
+			PriceMissing: make([]string, 0, 3),
+		}
+
+		if p, ok := levelBySource[models.PricelistSourceEstimate].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.EstimatePrice = &price
+		}
+		if p, ok := levelBySource[models.PricelistSourceWarehouse].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.WarehousePrice = &price
+		}
+		if p, ok := levelBySource[models.PricelistSourceCompetitor].prices[reqItem.LotName]; ok && p > 0 {
+			price := p
+			item.CompetitorPrice = &price
+		}
+
+		if item.EstimatePrice == nil {
+			item.PriceMissing = append(item.PriceMissing, string(models.PricelistSourceEstimate))
+		}
+		if item.WarehousePrice == nil {
+			item.PriceMissing = append(item.PriceMissing, string(models.PricelistSourceWarehouse))
+		}
+		if item.CompetitorPrice == nil {
+			item.PriceMissing = append(item.PriceMissing, string(models.PricelistSourceCompetitor))
+		}
+
+		item.DeltaWhEstimateAbs, item.DeltaWhEstimatePct = calculateDelta(item.WarehousePrice, item.EstimatePrice)
+		item.DeltaCompEstimateAbs, item.DeltaCompEstimatePct = calculateDelta(item.CompetitorPrice, item.EstimatePrice)
+		item.DeltaCompWhAbs, item.DeltaCompWhPct = calculateDelta(item.CompetitorPrice, item.WarehousePrice)
+
+		result.Items = append(result.Items, item)
+	}
+
+	return result, nil
+}
+func (s *QuoteService) lookupPricesByPricelistID(pricelistID uint, lotNames []string, noCache bool) (map[string]float64, error) {
+	result := make(map[string]float64, len(lotNames))
+	if pricelistID == 0 || len(lotNames) == 0 {
+		return result, nil
+	}
+
+	missing := make([]string, 0, len(lotNames))
+	if noCache {
+		missing = append(missing, lotNames...)
+	} else {
+		now := time.Now()
+		s.cacheMu.RLock()
+		for _, lotName := range lotNames {
+			if entry, ok := s.priceCache[s.cacheKey(pricelistID, lotName)]; ok && entry.expiresAt.After(now) {
+				if entry.price != nil && *entry.price > 0 {
+					result[lotName] = *entry.price
+				}
+				continue
+			}
+			missing = append(missing, lotName)
+		}
+		s.cacheMu.RUnlock()
+	}
+
+	if len(missing) == 0 {
+		return result, nil
+	}
+
+	loaded := make(map[string]float64, len(missing))
+	if s.pricelistRepo != nil {
+		prices, err := s.pricelistRepo.GetPricesForLots(pricelistID, missing)
+		if err == nil {
+			for lotName, price := range prices {
+				if price > 0 {
+					result[lotName] = price
+					loaded[lotName] = price
+				}
+			}
+			s.updateCache(pricelistID, missing, loaded)
+			return result, nil
+		}
+	}
+
+	// Fallback path (usually offline): local per-lot lookup.
+	if s.localDB != nil {
+		for _, lotName := range missing {
+			price, found := s.lookupPriceByPricelistID(pricelistID, lotName)
+			if found && price > 0 {
+				result[lotName] = price
+				loaded[lotName] = price
+			}
+		}
+		s.updateCache(pricelistID, missing, loaded)
+		return result, nil
+	}
+
+	return result, fmt.Errorf("price lookup unavailable for pricelist %d", pricelistID)
+}
+func (s *QuoteService) updateCache(pricelistID uint, requested []string, loaded map[string]float64) {
+	if len(requested) == 0 {
+		return
+	}
+	expiresAt := time.Now().Add(s.cacheTTL)
+	s.cacheMu.Lock()
+	defer s.cacheMu.Unlock()
+
+	for _, lotName := range requested {
+		if price, ok := loaded[lotName]; ok && price > 0 {
+			priceCopy := price
+			s.priceCache[s.cacheKey(pricelistID, lotName)] = cachedLotPrice{
+				price:     &priceCopy,
+				expiresAt: expiresAt,
+			}
+			continue
+		}
+		s.priceCache[s.cacheKey(pricelistID, lotName)] = cachedLotPrice{
+			price:     nil,
+			expiresAt: expiresAt,
+		}
+	}
+}
+
+func (s *QuoteService) cacheKey(pricelistID uint, lotName string) string {
+	return fmt.Sprintf("%d|%s", pricelistID, lotName)
+}
+func calculateDelta(target, base *float64) (*float64, *float64) {
+	if target == nil || base == nil {
+		return nil, nil
+	}
+	abs := *target - *base
+	if *base == 0 {
+		return &abs, nil
+	}
+	pct := (abs / *base) * 100
+	return &abs, &pct
+}
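`calculateDelta` returns the absolute and percentage difference of a target price against a base, with nil signalling "not computable" (either price missing, or a zero base for the percentage). A short worked example (the function body is as in the diff; the sample prices are illustrative):

```go
package main

import "fmt"

// calculateDelta: absolute and percent difference of target relative to base.
func calculateDelta(target, base *float64) (*float64, *float64) {
	if target == nil || base == nil {
		return nil, nil
	}
	abs := *target - *base
	if *base == 0 {
		return &abs, nil // percent is undefined against a zero base
	}
	pct := (abs / *base) * 100
	return &abs, &pct
}

func main() {
	wh, est := 120.0, 100.0
	abs, pct := calculateDelta(&wh, &est)
	fmt.Println(*abs, *pct) // 20 20
}
```

This matches the first price-levels test below: warehouse 120 vs estimate 100 yields delta_wh_estimate_abs = 20 and delta_wh_estimate_pct = 20.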
+func (s *QuoteService) lookupLevelPrice(source models.PricelistSource, lotName string, pricelistIDs map[string]uint) (*float64, uint) {
+	sourceKey := string(source)
+	if id, ok := pricelistIDs[sourceKey]; ok && id > 0 {
+		price, found := s.lookupPriceByPricelistID(id, lotName)
+		if found {
+			return &price, id
+		}
+		return nil, id
+	}
+
+	if s.pricelistRepo != nil {
+		price, id, err := s.pricelistRepo.GetPriceForLotBySource(sourceKey, lotName)
+		if err == nil && price > 0 {
+			return &price, id
+		}
+
+		latest, latestErr := s.pricelistRepo.GetLatestActiveBySource(sourceKey)
+		if latestErr == nil {
+			return nil, latest.ID
+		}
+	}
+
+	if s.localDB != nil {
+		localPL, err := s.localDB.GetLatestLocalPricelistBySource(sourceKey)
+		if err != nil {
+			return nil, 0
+		}
+		price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
+		if err != nil || price <= 0 {
+			return nil, localPL.ServerID
+		}
+		return &price, localPL.ServerID
+	}
+
+	return nil, 0
+}
+
+func (s *QuoteService) lookupPriceByPricelistID(pricelistID uint, lotName string) (float64, bool) {
+	if s.pricelistRepo != nil {
+		price, err := s.pricelistRepo.GetPriceForLot(pricelistID, lotName)
+		if err == nil && price > 0 {
+			return price, true
+		}
+	}
+
+	if s.localDB != nil {
+		localPL, err := s.localDB.GetLocalPricelistByServerID(pricelistID)
+		if err != nil {
+			return 0, false
+		}
+		price, err := s.localDB.GetLocalPriceForLot(localPL.ID, lotName)
+		if err == nil && price > 0 {
+			return price, true
+		}
+	}
+
+	return 0, false
+}
+
 // RecordUsage records that components were used in a quote
 func (s *QuoteService) RecordUsage(items []models.ConfigItem) error {
+	if s.statsRepo == nil {
+		// Offline mode: usage stats are unavailable and should not block config saves.
+		return nil
+	}
+
 	for _, item := range items {
 		revenue := item.UnitPrice * float64(item.Quantity)
 		if err := s.statsRepo.IncrementUsage(item.LotName, item.Quantity, revenue); err != nil {
124	internal/services/quote_price_levels_test.go	Normal file
@@ -0,0 +1,124 @@
package services

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/repository"
	"github.com/glebarez/sqlite"
	"gorm.io/gorm"
)

func TestCalculatePriceLevels_WithMissingLevel(t *testing.T) {
	db := newPriceLevelsTestDB(t)
	repo := repository.NewPricelistRepository(db)
	service := NewQuoteService(nil, nil, repo, nil, nil)

	seedPricelistWithItem(t, repo, "estimate", "CPU_X", 100)
	seedPricelistWithItem(t, repo, "warehouse", "CPU_X", 120)

	result, err := service.CalculatePriceLevels(&PriceLevelsRequest{
		Items: []struct {
			LotName  string `json:"lot_name"`
			Quantity int    `json:"quantity"`
		}{
			{LotName: "CPU_X", Quantity: 2},
		},
	})
	if err != nil {
		t.Fatalf("CalculatePriceLevels returned error: %v", err)
	}
	if len(result.Items) != 1 {
		t.Fatalf("expected 1 item, got %d", len(result.Items))
	}
	item := result.Items[0]
	if item.EstimatePrice == nil || *item.EstimatePrice != 100 {
		t.Fatalf("expected estimate 100, got %#v", item.EstimatePrice)
	}
	if item.WarehousePrice == nil || *item.WarehousePrice != 120 {
		t.Fatalf("expected warehouse 120, got %#v", item.WarehousePrice)
	}
	if item.CompetitorPrice != nil {
		t.Fatalf("expected competitor nil, got %#v", item.CompetitorPrice)
	}
	if len(item.PriceMissing) != 1 || item.PriceMissing[0] != "competitor" {
		t.Fatalf("expected price_missing [competitor], got %#v", item.PriceMissing)
	}
	if item.DeltaWhEstimateAbs == nil || *item.DeltaWhEstimateAbs != 20 {
		t.Fatalf("expected delta abs 20, got %#v", item.DeltaWhEstimateAbs)
	}
	if item.DeltaWhEstimatePct == nil || *item.DeltaWhEstimatePct != 20 {
		t.Fatalf("expected delta pct 20, got %#v", item.DeltaWhEstimatePct)
	}
}

func TestCalculatePriceLevels_UsesExplicitPricelistIDs(t *testing.T) {
	db := newPriceLevelsTestDB(t)
	repo := repository.NewPricelistRepository(db)
	service := NewQuoteService(nil, nil, repo, nil, nil)

	olderEstimate := seedPricelistWithItem(t, repo, "estimate", "CPU_Y", 80)
	seedPricelistWithItem(t, repo, "estimate", "CPU_Y", 90)

	result, err := service.CalculatePriceLevels(&PriceLevelsRequest{
		Items: []struct {
			LotName  string `json:"lot_name"`
			Quantity int    `json:"quantity"`
		}{
			{LotName: "CPU_Y", Quantity: 1},
		},
		PricelistIDs: map[string]uint{
			"estimate": olderEstimate.ID,
		},
	})
	if err != nil {
		t.Fatalf("CalculatePriceLevels returned error: %v", err)
	}
	item := result.Items[0]
	if item.EstimatePrice == nil || *item.EstimatePrice != 80 {
		t.Fatalf("expected explicit estimate 80, got %#v", item.EstimatePrice)
	}
}

func newPriceLevelsTestDB(t *testing.T) *gorm.DB {
	t.Helper()
	db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared"), &gorm.Config{})
	if err != nil {
		t.Fatalf("open sqlite: %v", err)
	}
	if err := db.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}); err != nil {
		t.Fatalf("migrate: %v", err)
	}
	return db
}

func seedPricelistWithItem(t *testing.T, repo *repository.PricelistRepository, source, lot string, price float64) *models.Pricelist {
	t.Helper()
	version, err := repo.GenerateVersionBySource(source)
	if err != nil {
		t.Fatalf("GenerateVersionBySource: %v", err)
	}
	expiresAt := time.Now().Add(24 * time.Hour)
	pl := &models.Pricelist{
		Source:    source,
		Version:   version,
		CreatedBy: "test",
		IsActive:  true,
		ExpiresAt: &expiresAt,
	}
	if err := repo.Create(pl); err != nil {
		t.Fatalf("create pricelist: %v", err)
	}
	if err := repo.CreateItems([]models.PricelistItem{
		{
			PricelistID: pl.ID,
			LotName:     lot,
			Price:       price,
		},
	}); err != nil {
		t.Fatalf("create items: %v", err)
	}
	return pl
}
410	internal/services/sync/readiness.go	Normal file
@@ -0,0 +1,410 @@
package sync

import (
	"bufio"
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
	"log/slog"
	"strconv"
	"strings"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/appmeta"
	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"gorm.io/gorm"
)

const (
	ReadinessReady   = "ready"
	ReadinessBlocked = "blocked"
	ReadinessUnknown = "unknown"
)

var ErrSyncBlockedByReadiness = errors.New("sync blocked by readiness guard")

type SyncReadiness struct {
	Status                string     `json:"status"`
	Blocked               bool       `json:"blocked"`
	ReasonCode            string     `json:"reason_code,omitempty"`
	ReasonText            string     `json:"reason_text,omitempty"`
	RequiredMinAppVersion *string    `json:"required_min_app_version,omitempty"`
	LastCheckedAt         *time.Time `json:"last_checked_at,omitempty"`
}

type SyncBlockedError struct {
	Readiness SyncReadiness
}

func (e *SyncBlockedError) Error() string {
	if e == nil {
		return ErrSyncBlockedByReadiness.Error()
	}
	if strings.TrimSpace(e.Readiness.ReasonText) != "" {
		return e.Readiness.ReasonText
	}
	return ErrSyncBlockedByReadiness.Error()
}

func (s *Service) EnsureReadinessForSync() (*SyncReadiness, error) {
	readiness, err := s.GetReadiness()
	if err != nil {
		return nil, err
	}
	if readiness.Blocked {
		return readiness, &SyncBlockedError{Readiness: *readiness}
	}
	return readiness, nil
}

func (s *Service) GetReadiness() (*SyncReadiness, error) {
	now := time.Now().UTC()
	if !s.isOnline() {
		return s.blockedReadiness(
			now,
			"OFFLINE_UNVERIFIED_SCHEMA",
			// "Sync unavailable: no server connection, so local DB migrations cannot be verified."
			"Синхронизация недоступна: нет соединения с сервером и нельзя проверить миграции локальной БД.",
			nil,
		)
	}

	mariaDB, err := s.getDB()
	if err != nil || mariaDB == nil {
		return s.blockedReadiness(
			now,
			"OFFLINE_UNVERIFIED_SCHEMA",
			"Синхронизация недоступна: нет соединения с сервером и нельзя проверить миграции локальной БД.",
			nil,
		)
	}

	migrations, err := listActiveClientMigrations(mariaDB)
	if err != nil {
		return s.blockedReadiness(
			now,
			"REMOTE_MIGRATION_REGISTRY_UNAVAILABLE",
			// "Sync blocked: the centralized local-DB migrations could not be verified."
			"Синхронизация заблокирована: не удалось проверить централизованные миграции локальной БД.",
			nil,
		)
	}

	for i := range migrations {
		m := migrations[i]
		if strings.TrimSpace(m.MinAppVersion) != "" {
			if compareVersions(appmeta.Version(), m.MinAppVersion) < 0 {
				min := m.MinAppVersion
				return s.blockedReadiness(
					now,
					"MIN_APP_VERSION_REQUIRED",
					// "An app update to version %s is required for safe sync."
					fmt.Sprintf("Требуется обновление приложения до версии %s для безопасной синхронизации.", m.MinAppVersion),
					&min,
				)
			}
		}
	}

	if err := s.applyMissingRemoteMigrations(migrations); err != nil {
		if strings.Contains(strings.ToLower(err.Error()), "checksum") {
			return s.blockedReadiness(
				now,
				"REMOTE_MIGRATION_CHECKSUM_MISMATCH",
				// "Sync blocked: migration checksum mismatch."
				"Синхронизация заблокирована: контрольная сумма миграции не совпадает.",
				nil,
			)
		}
		return s.blockedReadiness(
			now,
			"LOCAL_MIGRATION_APPLY_FAILED",
			// "Sync blocked: failed to apply local DB migrations."
			"Синхронизация заблокирована: не удалось применить миграции локальной БД.",
			nil,
		)
	}

	if err := s.reportClientSchemaState(mariaDB, now); err != nil {
		slog.Warn("failed to report client schema state", "error", err)
	}

	ready := &SyncReadiness{Status: ReadinessReady, Blocked: false, LastCheckedAt: &now}
	if setErr := s.localDB.SetSyncGuardState(ReadinessReady, "", "", nil, &now); setErr != nil {
		slog.Warn("failed to persist sync guard state", "error", setErr)
	}
	return ready, nil
}

func (s *Service) blockedReadiness(now time.Time, code, text string, minVersion *string) (*SyncReadiness, error) {
	readiness := &SyncReadiness{
		Status:                ReadinessBlocked,
		Blocked:               true,
		ReasonCode:            code,
		ReasonText:            text,
		RequiredMinAppVersion: minVersion,
		LastCheckedAt:         &now,
	}
	if err := s.localDB.SetSyncGuardState(ReadinessBlocked, code, text, minVersion, &now); err != nil {
		slog.Warn("failed to persist blocked sync guard state", "error", err)
	}
	return readiness, nil
}

func (s *Service) isOnline() bool {
	if s.directDB != nil {
		return true
	}
	if s.connMgr == nil {
		return false
	}
	return s.connMgr.IsOnline()
}
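The readiness check above relies on two helpers, `compareVersions` and `digestSQL`, whose definitions fall outside this hunk. Given the file's `strconv`, `crypto/sha256`, and `encoding/hex` imports, a plausible standalone sketch of both is shown below — this is an assumption about their behavior, not the repository's actual code:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
)

// digestSQL: hypothetical sha256-hex checksum of a migration's SQL text,
// matching the kind of value compared against qt_client_local_migrations.checksum.
func digestSQL(sqlText string) string {
	sum := sha256.Sum256([]byte(sqlText))
	return hex.EncodeToString(sum[:])
}

// compareVersions: hypothetical dotted-numeric comparison returning -1/0/1,
// tolerant of a leading "v" and of differing segment counts.
func compareVersions(a, b string) int {
	as := strings.Split(strings.TrimPrefix(a, "v"), ".")
	bs := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		var ai, bi int
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			if ai < bi {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.3.2", "1.4.0")) // -1: app older than required minimum
	fmt.Println(len(digestSQL("ALTER TABLE x ADD COLUMN y TEXT;"))) // 64 hex characters
}
```

Under this reading, `MIN_APP_VERSION_REQUIRED` fires whenever the running `appmeta.Version()` compares below any active migration's `min_app_version`, and a checksum mismatch between the registry row and the recomputed digest blocks sync entirely.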
type clientLocalMigration struct {
	ID            string    `gorm:"column:id"`
	Name          string    `gorm:"column:name"`
	SQLText       string    `gorm:"column:sql_text"`
	Checksum      string    `gorm:"column:checksum"`
	MinAppVersion string    `gorm:"column:min_app_version"`
	OrderNo       int       `gorm:"column:order_no"`
	CreatedAt     time.Time `gorm:"column:created_at"`
}

func listActiveClientMigrations(db *gorm.DB) ([]clientLocalMigration, error) {
	if strings.EqualFold(db.Dialector.Name(), "sqlite") {
		return []clientLocalMigration{}, nil
	}
	if err := ensureClientMigrationRegistryTable(db); err != nil {
		return nil, err
	}

	rows := make([]clientLocalMigration, 0)
	if err := db.Raw(`
		SELECT id, name, sql_text, checksum, COALESCE(min_app_version, '') AS min_app_version, order_no, created_at
		FROM qt_client_local_migrations
		WHERE is_active = 1
		ORDER BY order_no ASC, created_at ASC, id ASC
	`).Scan(&rows).Error; err != nil {
		return nil, fmt.Errorf("load client local migrations: %w", err)
	}

	return rows, nil
}

func ensureClientMigrationRegistryTable(db *gorm.DB) error {
	// Check if table exists instead of trying to create (avoids permission issues)
	if !tableExists(db, "qt_client_local_migrations") {
		if err := db.Exec(`
			CREATE TABLE IF NOT EXISTS qt_client_local_migrations (
				id VARCHAR(128) NOT NULL,
				name VARCHAR(255) NOT NULL,
				sql_text LONGTEXT NOT NULL,
				checksum VARCHAR(128) NOT NULL,
				min_app_version VARCHAR(64) NULL,
				order_no INT NOT NULL DEFAULT 0,
				is_active TINYINT(1) NOT NULL DEFAULT 1,
				created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
				PRIMARY KEY (id),
				INDEX idx_qt_client_local_migrations_active_order (is_active, order_no, created_at)
			)
		`).Error; err != nil {
			return fmt.Errorf("create qt_client_local_migrations table: %w", err)
		}
	}

	if !tableExists(db, "qt_client_schema_state") {
		if err := db.Exec(`
			CREATE TABLE IF NOT EXISTS qt_client_schema_state (
				username VARCHAR(100) NOT NULL,
				last_applied_migration_id VARCHAR(128) NULL,
				app_version VARCHAR(64) NULL,
				last_checked_at DATETIME NOT NULL,
				updated_at DATETIME NOT NULL,
				PRIMARY KEY (username),
				INDEX idx_qt_client_schema_state_checked (last_checked_at)
			)
		`).Error; err != nil {
			return fmt.Errorf("create qt_client_schema_state table: %w", err)
		}
	}
	return nil
}

func tableExists(db *gorm.DB, tableName string) bool {
	var count int64
	// For MariaDB/MySQL, check information_schema
	if err := db.Raw(`
		SELECT COUNT(*) FROM information_schema.TABLES
		WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ?
	`, tableName).Scan(&count).Error; err != nil {
		return false
	}
	return count > 0
}

func (s *Service) applyMissingRemoteMigrations(migrations []clientLocalMigration) error {
	for i := range migrations {
		m := migrations[i]
		computedChecksum := digestSQL(m.SQLText)
		checksum := strings.TrimSpace(m.Checksum)
		if checksum == "" {
			checksum = computedChecksum
		} else if !strings.EqualFold(checksum, computedChecksum) {
			return fmt.Errorf("checksum mismatch for migration %s", m.ID)
		}

		applied, err := s.localDB.GetRemoteMigrationApplied(m.ID)
		if err == nil {
			if strings.TrimSpace(applied.Checksum) != checksum {
				return fmt.Errorf("checksum mismatch for migration %s", m.ID)
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
if !errors.Is(err, gorm.ErrRecordNotFound) {
|
||||||
|
return fmt.Errorf("check local applied migration %s: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if strings.TrimSpace(m.SQLText) == "" {
|
||||||
|
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
|
||||||
|
return fmt.Errorf("mark empty migration %s as applied: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
statements := splitSQLStatementsLite(m.SQLText)
|
||||||
|
if err := s.localDB.DB().Transaction(func(tx *gorm.DB) error {
|
||||||
|
for _, stmt := range statements {
|
||||||
|
if err := tx.Exec(stmt).Error; err != nil {
|
||||||
|
return fmt.Errorf("apply migration %s statement %q: %w", m.ID, stmt, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := s.localDB.UpsertRemoteMigrationApplied(m.ID, checksum, appmeta.Version(), time.Now().UTC()); err != nil {
|
||||||
|
return fmt.Errorf("record applied migration %s: %w", m.ID, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func splitSQLStatementsLite(script string) []string {
	scanner := bufio.NewScanner(strings.NewReader(script))
	scanner.Buffer(make([]byte, 1024), 1024*1024)

	lines := make([]string, 0, 64)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "--") {
			continue
		}
		lines = append(lines, scanner.Text())
	}
	combined := strings.Join(lines, "\n")
	raw := strings.Split(combined, ";")
	stmts := make([]string, 0, len(raw))
	for _, stmt := range raw {
		trimmed := strings.TrimSpace(stmt)
		if trimmed == "" {
			continue
		}
		stmts = append(stmts, trimmed)
	}
	return stmts
}

func digestSQL(sqlText string) string {
	hash := sha256.Sum256([]byte(sqlText))
	return hex.EncodeToString(hash[:])
}
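A standalone copy of the splitter shows its behavior: comment-only and blank lines are dropped, then everything left is split on semicolons. It is deliberately "lite" — a `;` inside a string literal or a stored-procedure body would split incorrectly, so registry migrations have to stay simple DDL.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// splitSQLStatementsLite is a copy of the splitter above, included so the
// example is self-contained.
func splitSQLStatementsLite(script string) []string {
	scanner := bufio.NewScanner(strings.NewReader(script))
	scanner.Buffer(make([]byte, 1024), 1024*1024)

	lines := make([]string, 0, 64)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "--") {
			continue // drop blank and `--` comment lines
		}
		lines = append(lines, scanner.Text())
	}
	raw := strings.Split(strings.Join(lines, "\n"), ";")
	stmts := make([]string, 0, len(raw))
	for _, stmt := range raw {
		if trimmed := strings.TrimSpace(stmt); trimmed != "" {
			stmts = append(stmts, trimmed)
		}
	}
	return stmts
}

func main() {
	script := `-- add a column
ALTER TABLE a ADD COLUMN x INT;

CREATE INDEX idx_x ON a(x);`
	for _, s := range splitSQLStatementsLite(script) {
		fmt.Println(s) // two statements, comment and blank line removed
	}
}
```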
func compareVersions(left, right string) int {
	leftParts := normalizeVersionParts(left)
	rightParts := normalizeVersionParts(right)
	maxLen := len(leftParts)
	if len(rightParts) > maxLen {
		maxLen = len(rightParts)
	}
	for i := 0; i < maxLen; i++ {
		lv := 0
		rv := 0
		if i < len(leftParts) {
			lv = leftParts[i]
		}
		if i < len(rightParts) {
			rv = rightParts[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func (s *Service) reportClientSchemaState(mariaDB *gorm.DB, checkedAt time.Time) error {
	if strings.EqualFold(mariaDB.Dialector.Name(), "sqlite") {
		return nil
	}
	username := strings.TrimSpace(s.localDB.GetDBUser())
	if username == "" {
		return nil
	}
	lastMigrationID := ""
	if id, err := s.localDB.GetLatestAppliedRemoteMigrationID(); err == nil {
		lastMigrationID = id
	}
	return mariaDB.Exec(`
		INSERT INTO qt_client_schema_state (username, last_applied_migration_id, app_version, last_checked_at, updated_at)
		VALUES (?, ?, ?, ?, ?)
		ON DUPLICATE KEY UPDATE
			last_applied_migration_id = VALUES(last_applied_migration_id),
			app_version = VALUES(app_version),
			last_checked_at = VALUES(last_checked_at),
			updated_at = VALUES(updated_at)
	`, username, lastMigrationID, appmeta.Version(), checkedAt, checkedAt).Error
}

func normalizeVersionParts(v string) []int {
	trimmed := strings.TrimSpace(v)
	trimmed = strings.TrimPrefix(trimmed, "v")
	chunks := strings.Split(trimmed, ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		if clean == "" {
			parts = append(parts, 0)
			continue
		}
		n := 0
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i]
				break
			}
		}
		if clean != "" {
			if parsed, err := strconv.Atoi(clean); err == nil {
				n = parsed
			}
		}
		parts = append(parts, n)
	}
	return parts
}
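The version helpers above compare dot-separated numeric parts: a leading `v` is stripped, each chunk is truncated at the first non-digit, and missing chunks compare as zero. A self-contained sketch (lightly compacted copies of both helpers) illustrates why this beats lexicographic comparison for strings like `v1.3.2` vs `1.10.0`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeVersionParts and compareVersions are compacted copies of the
// helpers above, behavior-identical, included so the example runs standalone.
func normalizeVersionParts(v string) []int {
	trimmed := strings.TrimPrefix(strings.TrimSpace(v), "v")
	chunks := strings.Split(trimmed, ".")
	parts := make([]int, 0, len(chunks))
	for _, chunk := range chunks {
		clean := strings.TrimSpace(chunk)
		for i := 0; i < len(clean); i++ {
			if clean[i] < '0' || clean[i] > '9' {
				clean = clean[:i] // truncate at first non-digit ("0-rc1" -> "0")
				break
			}
		}
		n := 0
		if clean != "" {
			if parsed, err := strconv.Atoi(clean); err == nil {
				n = parsed
			}
		}
		parts = append(parts, n)
	}
	return parts
}

func compareVersions(left, right string) int {
	l, r := normalizeVersionParts(left), normalizeVersionParts(right)
	maxLen := len(l)
	if len(r) > maxLen {
		maxLen = len(r)
	}
	for i := 0; i < maxLen; i++ {
		lv, rv := 0, 0
		if i < len(l) {
			lv = l[i]
		}
		if i < len(r) {
			rv = r[i]
		}
		if lv < rv {
			return -1
		}
		if lv > rv {
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.3.2", "1.10.0")) // -1: numeric, not lexicographic
	fmt.Println(compareVersions("1.3", "1.3.0"))     // 0: missing parts count as zero
	fmt.Println(compareVersions("2.0-rc1", "2.0"))   // 0: suffix after digits is ignored
}
```

Note the last case: pre-release suffixes are discarded, so this comparator treats `2.0-rc1` and `2.0` as equal, which is fine for min-version gating but would not implement full semver precedence.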
func toReadinessFromState(state *localdb.LocalSyncGuardState) *SyncReadiness {
	if state == nil {
		return nil
	}
	blocked := state.Status == ReadinessBlocked
	return &SyncReadiness{
		Status:                state.Status,
		Blocked:               blocked,
		ReasonCode:            state.ReasonCode,
		ReasonText:            state.ReasonText,
		RequiredMinAppVersion: state.RequiredMinAppVersion,
		LastCheckedAt:         state.LastCheckedAt,
	}
}
internal/services/sync/service.go (new file, 1473 lines)
File diff suppressed because it is too large

internal/services/sync/service_order_test.go (new file, 25 lines)
@@ -0,0 +1,25 @@
package sync

import (
	"testing"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
)

func TestPrioritizeProjectChanges(t *testing.T) {
	changes := []localdb.PendingChange{
		{ID: 1, EntityType: "configuration"},
		{ID: 2, EntityType: "project"},
		{ID: 3, EntityType: "configuration"},
		{ID: 4, EntityType: "project"},
	}

	sorted := prioritizeProjectChanges(changes)
	if len(sorted) != 4 {
		t.Fatalf("unexpected sorted length: %d", len(sorted))
	}

	if sorted[0].EntityType != "project" || sorted[1].EntityType != "project" {
		t.Fatalf("expected project changes first, got order: %s, %s", sorted[0].EntityType, sorted[1].EntityType)
	}
}
@@ -0,0 +1,107 @@
package sync_test

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)

func TestSyncPricelists_BackfillsLotCategoryForUsedPricelistItems(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	if err := serverDB.AutoMigrate(
		&models.Pricelist{},
		&models.PricelistItem{},
		&models.Lot{},
		&models.LotPartnumber{},
		&models.StockLog{},
	); err != nil {
		t.Fatalf("migrate server tables: %v", err)
	}

	serverPL := models.Pricelist{
		Source:       "estimate",
		Version:      "2026-02-11-001",
		Notification: "server",
		CreatedBy:    "tester",
		IsActive:     true,
		CreatedAt:    time.Now().Add(-1 * time.Hour),
	}
	if err := serverDB.Create(&serverPL).Error; err != nil {
		t.Fatalf("create server pricelist: %v", err)
	}
	if err := serverDB.Create(&models.PricelistItem{
		PricelistID:  serverPL.ID,
		LotName:      "CPU_A",
		LotCategory:  "CPU",
		Price:        10,
		PriceMethod:  "",
		MetaPrices:   "",
		ManualPrice:  nil,
		AvailableQty: nil,
	}).Error; err != nil {
		t.Fatalf("create server pricelist item: %v", err)
	}

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  serverPL.ID,
		Source:    serverPL.Source,
		Version:   serverPL.Version,
		Name:      serverPL.Notification,
		CreatedAt: serverPL.CreatedAt,
		SyncedAt:  time.Now(),
		IsUsed:    false,
	}); err != nil {
		t.Fatalf("seed local pricelist: %v", err)
	}
	localPL, err := local.GetLocalPricelistByServerID(serverPL.ID)
	if err != nil {
		t.Fatalf("get local pricelist: %v", err)
	}

	if err := local.SaveLocalPricelistItems([]localdb.LocalPricelistItem{
		{
			PricelistID: localPL.ID,
			LotName:     "CPU_A",
			LotCategory: "",
			Price:       10,
		},
	}); err != nil {
		t.Fatalf("seed local pricelist items: %v", err)
	}

	if err := local.SaveConfiguration(&localdb.LocalConfiguration{
		UUID:             "cfg-1",
		OriginalUsername: "tester",
		Name:             "cfg",
		Items:            localdb.LocalConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 10}},
		IsActive:         true,
		PricelistID:      &serverPL.ID,
		SyncStatus:       "synced",
		CreatedAt:        time.Now().Add(-30 * time.Minute),
		UpdatedAt:        time.Now().Add(-30 * time.Minute),
	}); err != nil {
		t.Fatalf("seed local configuration with pricelist ref: %v", err)
	}

	svc := syncsvc.NewServiceWithDB(serverDB, local)
	if _, err := svc.SyncPricelists(); err != nil {
		t.Fatalf("sync pricelists: %v", err)
	}

	items, err := local.GetLocalPricelistItems(localPL.ID)
	if err != nil {
		t.Fatalf("load local items: %v", err)
	}
	if len(items) != 1 {
		t.Fatalf("expected 1 local item, got %d", len(items))
	}
	if items[0].LotCategory != "CPU" {
		t.Fatalf("expected lot_category backfilled to CPU, got %q", items[0].LotCategory)
	}
}
internal/services/sync/service_pricelist_cleanup_test.go (new file, 85 lines)
@@ -0,0 +1,85 @@
package sync_test

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
)

func TestSyncPricelistsDeletesMissingUnusedLocalPricelists(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)
	if err := serverDB.AutoMigrate(&models.Pricelist{}, &models.PricelistItem{}); err != nil {
		t.Fatalf("migrate server pricelist tables: %v", err)
	}

	serverPL := models.Pricelist{
		Source:       "estimate",
		Version:      "2026-01-01-001",
		Notification: "server",
		CreatedBy:    "tester",
		IsActive:     true,
		CreatedAt:    time.Now().Add(-1 * time.Hour),
	}
	if err := serverDB.Create(&serverPL).Error; err != nil {
		t.Fatalf("create server pricelist: %v", err)
	}
	if err := serverDB.Create(&models.PricelistItem{PricelistID: serverPL.ID, LotName: "CPU_A", Price: 10}).Error; err != nil {
		t.Fatalf("create server pricelist item: %v", err)
	}

	if err := local.SaveLocalPricelist(&localdb.LocalPricelist{
		ServerID:  9991,
		Source:    "estimate",
		Version:   "old-unused",
		Name:      "old-unused",
		CreatedAt: time.Now().Add(-2 * time.Hour),
		SyncedAt:  time.Now().Add(-2 * time.Hour),
		IsUsed:    false,
	}); err != nil {
		t.Fatalf("seed local missing pricelist: %v", err)
	}
	missingUsed := &localdb.LocalPricelist{
		ServerID:  9992,
		Source:    "estimate",
		Version:   "old-used",
		Name:      "old-used",
		CreatedAt: time.Now().Add(-2 * time.Hour),
		SyncedAt:  time.Now().Add(-2 * time.Hour),
		IsUsed:    false,
	}
	if err := local.SaveLocalPricelist(missingUsed); err != nil {
		t.Fatalf("seed local referenced pricelist: %v", err)
	}
	if err := local.SaveConfiguration(&localdb.LocalConfiguration{
		UUID:             "cfg-1",
		OriginalUsername: "tester",
		Name:             "cfg",
		Items:            localdb.LocalConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1}},
		IsActive:         true,
		PricelistID:      &missingUsed.ServerID,
		SyncStatus:       "synced",
		CreatedAt:        time.Now().Add(-30 * time.Minute),
		UpdatedAt:        time.Now().Add(-30 * time.Minute),
	}); err != nil {
		t.Fatalf("seed local configuration with pricelist ref: %v", err)
	}

	svc := syncsvc.NewServiceWithDB(serverDB, local)
	if _, err := svc.SyncPricelists(); err != nil {
		t.Fatalf("sync pricelists: %v", err)
	}

	if _, err := local.GetLocalPricelistByServerID(9991); err == nil {
		t.Fatalf("expected unused missing local pricelist to be deleted")
	}
	if _, err := local.GetLocalPricelistByServerID(9992); err != nil {
		t.Fatalf("expected local pricelist referenced by active config to stay: %v", err)
	}
	if _, err := local.GetLocalPricelistByServerID(serverPL.ID); err != nil {
		t.Fatalf("expected server pricelist to be synced locally: %v", err)
	}
}
internal/services/sync/service_projects_push_test.go (new file, 390 lines)
@@ -0,0 +1,390 @@
package sync_test

import (
	"encoding/json"
	"fmt"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/localdb"
	"git.mchus.pro/mchus/quoteforge/internal/models"
	"git.mchus.pro/mchus/quoteforge/internal/services"
	syncsvc "git.mchus.pro/mchus/quoteforge/internal/services/sync"
	"github.com/glebarez/sqlite"
	"gorm.io/gorm"
)

func TestPushPendingChangesProjectsBeforeConfigurations(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	localSync := syncsvc.NewService(nil, local)
	projectService := services.NewProjectService(local)
	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })

	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: ptrString("Project A"), Code: "PRJ-A"})
	if err != nil {
		t.Fatalf("create project: %v", err)
	}

	cfg, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "Cfg A",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
		ProjectUUID: &project.UUID,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	pushService := syncsvc.NewServiceWithDB(serverDB, local)
	pushed, err := pushService.PushPendingChanges()
	if err != nil {
		t.Fatalf("push pending changes: %v", err)
	}
	if pushed < 2 {
		t.Fatalf("expected at least 2 pushed changes, got %d", pushed)
	}

	var serverProject models.Project
	if err := serverDB.Where("uuid = ?", project.UUID).First(&serverProject).Error; err != nil {
		t.Fatalf("project not pushed to server: %v", err)
	}

	var serverCfg models.Configuration
	if err := serverDB.Where("uuid = ?", cfg.UUID).First(&serverCfg).Error; err != nil {
		t.Fatalf("configuration not pushed to server: %v", err)
	}
	if serverCfg.ProjectUUID == nil || *serverCfg.ProjectUUID != project.UUID {
		t.Fatalf("expected project_uuid=%s on pushed config, got %v", project.UUID, serverCfg.ProjectUUID)
	}

	if got := local.CountPendingChanges(); got != 0 {
		t.Fatalf("expected pending queue to be empty, got %d", got)
	}
}
func TestPushPendingChangesProjectCreateThenUpdateBeforeFirstPush(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	localSync := syncsvc.NewService(nil, local)
	projectService := services.NewProjectService(local)
	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
	pushService := syncsvc.NewServiceWithDB(serverDB, local)

	project, err := projectService.Create("tester", &services.CreateProjectRequest{Name: ptrString("Project v1"), Code: "PRJ-V1"})
	if err != nil {
		t.Fatalf("create project: %v", err)
	}
	if _, err := projectService.Update(project.UUID, "tester", &services.UpdateProjectRequest{Name: ptrString("Project v2")}); err != nil {
		t.Fatalf("update project: %v", err)
	}

	cfg, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "Cfg linked",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
		ProjectUUID: &project.UUID,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	if _, err := pushService.PushPendingChanges(); err != nil {
		t.Fatalf("push pending changes: %v", err)
	}

	var serverProject models.Project
	if err := serverDB.Where("uuid = ?", project.UUID).First(&serverProject).Error; err != nil {
		t.Fatalf("project not pushed to server: %v", err)
	}
	if serverProject.Name == nil || *serverProject.Name != "Project v2" {
		t.Fatalf("expected latest project name, got %v", serverProject.Name)
	}

	var serverCfg models.Configuration
	if err := serverDB.Where("uuid = ?", cfg.UUID).First(&serverCfg).Error; err != nil {
		t.Fatalf("configuration not pushed to server: %v", err)
	}
	if serverCfg.ProjectUUID == nil || *serverCfg.ProjectUUID != project.UUID {
		t.Fatalf("expected project_uuid=%s on pushed config, got %v", project.UUID, serverCfg.ProjectUUID)
	}
}
func TestPushPendingChangesSkipsStaleUpdateAndAppliesLatest(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	localSync := syncsvc.NewService(nil, local)
	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
	pushService := syncsvc.NewServiceWithDB(serverDB, local)

	created, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "Cfg v1",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 1, UnitPrice: 1000}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if _, err := pushService.PushPendingChanges(); err != nil {
		t.Fatalf("initial push: %v", err)
	}

	if _, err := configService.UpdateNoAuth(created.UUID, &services.CreateConfigRequest{
		Name:        "Cfg v2",
		Items:       models.ConfigItems{{LotName: "CPU_A", Quantity: 2, UnitPrice: 1000}},
		ServerCount: 1,
		ProjectUUID: created.ProjectUUID,
	}); err != nil {
		t.Fatalf("update config: %v", err)
	}

	localCfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("get local config: %v", err)
	}
	cfgSnapshot := localdb.LocalToConfiguration(localCfg)
	stalePayload := syncsvc.ConfigurationChangePayload{
		EventID:           "stale-event",
		IdempotencyKey:    fmt.Sprintf("%s:v1:update", created.UUID),
		ConfigurationUUID: created.UUID,
		ProjectUUID:       cfgSnapshot.ProjectUUID,
		Operation:         "update",
		CurrentVersionID:  "stale-v1",
		CurrentVersionNo:  1,
		ConflictPolicy:    "last_write_wins",
		Snapshot:          *cfgSnapshot,
		CreatedAt:         time.Now().UTC().Add(-2 * time.Second),
	}
	raw, err := json.Marshal(stalePayload)
	if err != nil {
		t.Fatalf("marshal stale payload: %v", err)
	}
	if err := local.DB().Create(&localdb.PendingChange{
		EntityType: "configuration",
		EntityUUID: created.UUID,
		Operation:  "update",
		Payload:    string(raw),
		CreatedAt:  time.Now().Add(-1 * time.Second),
	}).Error; err != nil {
		t.Fatalf("insert stale pending change: %v", err)
	}

	if _, err := pushService.PushPendingChanges(); err != nil {
		t.Fatalf("push pending with stale event: %v", err)
	}

	var serverCfg models.Configuration
	if err := serverDB.Where("uuid = ?", created.UUID).First(&serverCfg).Error; err != nil {
		t.Fatalf("get server config: %v", err)
	}
	if serverCfg.Name != "Cfg v2" {
		t.Fatalf("expected latest name to win, got %q", serverCfg.Name)
	}
	if got := local.CountPendingChanges(); got != 0 {
		t.Fatalf("expected empty pending queue, got %d", got)
	}
}
func TestPushPendingChangesCreateIsIdempotent(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	localSync := syncsvc.NewService(nil, local)
	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
	pushService := syncsvc.NewServiceWithDB(serverDB, local)

	created, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "Cfg Idempotent",
		Items:       models.ConfigItems{{LotName: "CPU_B", Quantity: 1, UnitPrice: 500}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}
	if _, err := pushService.PushPendingChanges(); err != nil {
		t.Fatalf("initial push: %v", err)
	}

	localCfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("get local config: %v", err)
	}
	currentVersionNo, currentVersionID := getCurrentVersionInfo(t, local, created.UUID, localCfg.CurrentVersionID)
	cfgSnapshot := localdb.LocalToConfiguration(localCfg)
	duplicatePayload := syncsvc.ConfigurationChangePayload{
		EventID:           "duplicate-create-event",
		IdempotencyKey:    fmt.Sprintf("%s:v%d:create", created.UUID, currentVersionNo),
		ConfigurationUUID: created.UUID,
		ProjectUUID:       cfgSnapshot.ProjectUUID,
		Operation:         "create",
		CurrentVersionID:  currentVersionID,
		CurrentVersionNo:  currentVersionNo,
		ConflictPolicy:    "last_write_wins",
		Snapshot:          *cfgSnapshot,
		CreatedAt:         time.Now().UTC(),
	}
	raw, err := json.Marshal(duplicatePayload)
	if err != nil {
		t.Fatalf("marshal duplicate payload: %v", err)
	}
	if err := local.AddPendingChange("configuration", created.UUID, "create", string(raw)); err != nil {
		t.Fatalf("add duplicate create pending change: %v", err)
	}

	if pushed, err := pushService.PushPendingChanges(); err != nil {
		t.Fatalf("push duplicate create: %v", err)
	} else if pushed != 1 {
		t.Fatalf("expected 1 pushed change for duplicate create, got %d", pushed)
	}

	var count int64
	if err := serverDB.Model(&models.Configuration{}).Where("uuid = ?", created.UUID).Count(&count).Error; err != nil {
		t.Fatalf("count server configs: %v", err)
	}
	if count != 1 {
		t.Fatalf("expected one server row after idempotent create, got %d", count)
	}
}
func TestPushPendingChangesCreateThenUpdateBeforeFirstPush(t *testing.T) {
	local := newLocalDBForSyncTest(t)
	serverDB := newServerDBForSyncTest(t)

	localSync := syncsvc.NewService(nil, local)
	configService := services.NewLocalConfigurationService(local, localSync, &services.QuoteService{}, func() bool { return false })
	pushService := syncsvc.NewServiceWithDB(serverDB, local)

	created, err := configService.Create("tester", &services.CreateConfigRequest{
		Name:        "Cfg v1",
		Items:       models.ConfigItems{{LotName: "CPU_X", Quantity: 1, UnitPrice: 700}},
		ServerCount: 1,
	})
	if err != nil {
		t.Fatalf("create config: %v", err)
	}

	if _, err := configService.UpdateNoAuth(created.UUID, &services.CreateConfigRequest{
		Name:        "Cfg v2",
		Items:       models.ConfigItems{{LotName: "CPU_X", Quantity: 3, UnitPrice: 700}},
		ServerCount: 1,
		ProjectUUID: created.ProjectUUID,
	}); err != nil {
		t.Fatalf("update config before first push: %v", err)
	}

	pushed, err := pushService.PushPendingChanges()
	if err != nil {
		t.Fatalf("push pending changes: %v", err)
	}
	if pushed < 1 {
		t.Fatalf("expected at least one pushed change, got %d", pushed)
	}

	var serverCfg models.Configuration
	if err := serverDB.Where("uuid = ?", created.UUID).First(&serverCfg).Error; err != nil {
		t.Fatalf("configuration not pushed to server: %v", err)
	}
	if serverCfg.Name != "Cfg v2" {
		t.Fatalf("expected latest update to be pushed, got %q", serverCfg.Name)
	}

	localCfg, err := local.GetConfigurationByUUID(created.UUID)
	if err != nil {
		t.Fatalf("get local config: %v", err)
	}
	if localCfg.ServerID == nil || *localCfg.ServerID == 0 {
		t.Fatalf("expected local configuration to have server_id after push, got %+v", localCfg.ServerID)
	}
}
func newLocalDBForSyncTest(t *testing.T) *localdb.LocalDB {
	t.Helper()
	localPath := filepath.Join(t.TempDir(), "local.db")
	local, err := localdb.New(localPath)
	if err != nil {
		t.Fatalf("init local db: %v", err)
	}
	t.Cleanup(func() { _ = local.Close() })
	return local
}

func newServerDBForSyncTest(t *testing.T) *gorm.DB {
	t.Helper()
	serverPath := filepath.Join(t.TempDir(), "server.db")
	db, err := gorm.Open(sqlite.Open(serverPath), &gorm.Config{})
	if err != nil {
		t.Fatalf("open server sqlite: %v", err)
	}
	if err := db.Exec(`
		CREATE TABLE qt_projects (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			uuid TEXT NOT NULL UNIQUE,
			owner_username TEXT NOT NULL,
			code TEXT NOT NULL,
			variant TEXT NOT NULL DEFAULT '',
			name TEXT NOT NULL,
			tracker_url TEXT NULL,
			is_active INTEGER NOT NULL DEFAULT 1,
			is_system INTEGER NOT NULL DEFAULT 0,
			created_at DATETIME,
			updated_at DATETIME
		);`).Error; err != nil {
		t.Fatalf("create qt_projects: %v", err)
	}
	if err := db.Exec(`CREATE UNIQUE INDEX idx_qt_projects_code_variant ON qt_projects(code, variant);`).Error; err != nil {
		t.Fatalf("create qt_projects index: %v", err)
	}
	if err := db.Exec(`
		CREATE TABLE qt_configurations (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			uuid TEXT NOT NULL UNIQUE,
			user_id INTEGER NULL,
			owner_username TEXT NOT NULL,
			project_uuid TEXT NULL,
			app_version TEXT NULL,
			name TEXT NOT NULL,
			items TEXT NOT NULL,
			total_price REAL NULL,
			custom_price REAL NULL,
			notes TEXT NULL,
			is_template INTEGER NOT NULL DEFAULT 0,
			server_count INTEGER NOT NULL DEFAULT 1,
			server_model TEXT NULL,
			support_code TEXT NULL,
			article TEXT NULL,
			pricelist_id INTEGER NULL,
			warehouse_pricelist_id INTEGER NULL,
			competitor_pricelist_id INTEGER NULL,
			disable_price_refresh INTEGER NOT NULL DEFAULT 0,
			only_in_stock INTEGER NOT NULL DEFAULT 0,
			price_updated_at DATETIME NULL,
			created_at DATETIME
		);`).Error; err != nil {
		t.Fatalf("create qt_configurations: %v", err)
	}
	return db
}

func ptrString(value string) *string {
	return &value
}

func getCurrentVersionInfo(t *testing.T, local *localdb.LocalDB, configurationUUID string, currentVersionID *string) (int, string) {
	t.Helper()
	if currentVersionID == nil || *currentVersionID == "" {
		t.Fatalf("current version id is empty for %s", configurationUUID)
	}

	var version localdb.LocalConfigurationVersion
	if err := local.DB().
		Where("id = ? AND configuration_uuid = ?", *currentVersionID, configurationUUID).
		First(&version).Error; err != nil {
		t.Fatalf("get current version info: %v", err)
	}

	return version.VersionNo, version.ID
}
102	internal/services/sync/worker.go	Normal file
@@ -0,0 +1,102 @@
package sync

import (
	"context"
	"log/slog"
	"time"

	"git.mchus.pro/mchus/quoteforge/internal/db"
)

// Worker performs background synchronization at regular intervals.
type Worker struct {
	service  *Service
	connMgr  *db.ConnectionManager
	interval time.Duration
	logger   *slog.Logger
	stopCh   chan struct{}
}

// NewWorker creates a new background sync worker.
func NewWorker(service *Service, connMgr *db.ConnectionManager, interval time.Duration) *Worker {
	return &Worker{
		service:  service,
		connMgr:  connMgr,
		interval: interval,
		logger:   slog.Default(),
		stopCh:   make(chan struct{}),
	}
}

// isOnline checks if the database connection is available.
func (w *Worker) isOnline() bool {
	return w.connMgr.IsOnline()
}

// Start begins the background sync loop; run it in a goroutine.
func (w *Worker) Start(ctx context.Context) {
	w.logger.Info("starting background sync worker", "interval", w.interval)

	ticker := time.NewTicker(w.interval)
	defer ticker.Stop()

	// Run once immediately.
	w.runSync()

	for {
		select {
		case <-ctx.Done():
			w.logger.Info("background sync worker stopped by context")
			return
		case <-w.stopCh:
			w.logger.Info("background sync worker stopped")
			return
		case <-ticker.C:
			w.runSync()
		}
	}
}

// Stop gracefully stops the worker.
func (w *Worker) Stop() {
	w.logger.Info("stopping background sync worker")
	close(w.stopCh)
}

// runSync performs a single sync iteration.
func (w *Worker) runSync() {
	// Check if online.
	if !w.isOnline() {
		w.logger.Debug("offline, skipping background sync")
		return
	}

	if readiness, err := w.service.EnsureReadinessForSync(); err != nil {
		w.logger.Warn("background sync: blocked by readiness guard",
			"error", err,
			"reason_code", readiness.ReasonCode,
			"reason_text", readiness.ReasonText,
		)
		return
	}

	// Push pending changes first.
	pushed, err := w.service.PushPendingChanges()
	if err != nil {
		w.logger.Warn("background sync: failed to push pending changes", "error", err)
	} else if pushed > 0 {
		w.logger.Info("background sync: pushed pending changes", "count", pushed)
	}

	// Then check for new pricelists.
	if err := w.service.SyncPricelistsIfNeeded(); err != nil {
		w.logger.Warn("background sync: failed to sync pricelists", "error", err)
		return
	}

	// Mark the user's sync heartbeat (used for online/offline status in the UI).
	w.service.RecordSyncHeartbeat()

	w.logger.Info("background sync cycle completed")
}
413	man/backup.md	Normal file
@@ -0,0 +1,413 @@
# AI Implementation Guide: Go Scheduled Backup Rotation (ZIP)

This document is written **for an AI** to replicate the same backup approach in another Go project. It contains the exact requirements, design notes, and full module listings you can copy.

## Requirements (Behavioral)
- Run backups on a daily schedule at a configured local time (default `00:00`).
- At startup, if there is no backup for the current period, create it immediately.
- Backup content must include:
  - Local SQLite DB file (e.g., `qfs.db`).
  - SQLite sidecars (`-wal`, `-shm`) if present.
  - Runtime config file (e.g., `config.yaml`) if present.
- Backups must be ZIP archives named:
  - `qfs-backp-YYYY-MM-DD.zip`
- Retention policy:
  - 7 daily, 4 weekly, 12 monthly, 10 yearly archives.
- Keep backups in period-specific directories:
  - `<backup root>/daily`, `/weekly`, `/monthly`, `/yearly`.
- Prevent duplicate backups for the same period via a marker file.
- Log success with the archive path, and log errors on failure.
## Configuration & Env
- Config key: `backup.time` with format `HH:MM` in local time. Default: `00:00`.
- Env overrides:
  - `QFS_BACKUP_DIR` — backup root directory.
  - `QFS_BACKUP_DISABLE` — disable backups (`1/true/yes`).

## Integration Steps (Minimal)
1. Add `BackupConfig` to your config struct.
2. Add a scheduler goroutine that:
   - On startup: runs a backup immediately if needed.
   - Then sleeps until the next configured time and runs daily.
3. Add the backup module (below).
4. Wire logs for success/failure.

---

# Full Go Listings

## 1) Backup Module (Drop-in)
Create: `internal/appstate/backup.go`

```go
package appstate

import (
	"archive/zip"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

type backupPeriod struct {
	name      string
	retention int
	key       func(time.Time) string
	date      func(time.Time) string
}

var backupPeriods = []backupPeriod{
	{
		name:      "daily",
		retention: 7,
		key:       func(t time.Time) string { return t.Format("2006-01-02") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "weekly",
		retention: 4,
		key: func(t time.Time) string {
			y, w := t.ISOWeek()
			return fmt.Sprintf("%04d-W%02d", y, w)
		},
		date: func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "monthly",
		retention: 12,
		key:       func(t time.Time) string { return t.Format("2006-01") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
	{
		name:      "yearly",
		retention: 10,
		key:       func(t time.Time) string { return t.Format("2006") },
		date:      func(t time.Time) string { return t.Format("2006-01-02") },
	},
}

const (
	envBackupDisable = "QFS_BACKUP_DISABLE"
	envBackupDir     = "QFS_BACKUP_DIR"
)

var backupNow = time.Now

// EnsureRotatingLocalBackup creates or refreshes daily/weekly/monthly/yearly backups
// for the local database and config. It keeps a limited number per period.
func EnsureRotatingLocalBackup(dbPath, configPath string) ([]string, error) {
	if isBackupDisabled() {
		return nil, nil
	}
	if dbPath == "" {
		return nil, nil
	}

	if _, err := os.Stat(dbPath); err != nil {
		if os.IsNotExist(err) {
			return nil, nil
		}
		return nil, fmt.Errorf("stat db: %w", err)
	}

	root := resolveBackupRoot(dbPath)
	now := backupNow()

	created := make([]string, 0)
	for _, period := range backupPeriods {
		newFiles, err := ensurePeriodBackup(root, period, now, dbPath, configPath)
		if err != nil {
			return created, err
		}
		if len(newFiles) > 0 {
			created = append(created, newFiles...)
		}
	}

	return created, nil
}

func resolveBackupRoot(dbPath string) string {
	if fromEnv := strings.TrimSpace(os.Getenv(envBackupDir)); fromEnv != "" {
		return filepath.Clean(fromEnv)
	}
	return filepath.Join(filepath.Dir(dbPath), "backups")
}

func isBackupDisabled() bool {
	val := strings.ToLower(strings.TrimSpace(os.Getenv(envBackupDisable)))
	return val == "1" || val == "true" || val == "yes"
}

func ensurePeriodBackup(root string, period backupPeriod, now time.Time, dbPath, configPath string) ([]string, error) {
	key := period.key(now)
	periodDir := filepath.Join(root, period.name)
	if err := os.MkdirAll(periodDir, 0755); err != nil {
		return nil, fmt.Errorf("create %s backup dir: %w", period.name, err)
	}

	if hasBackupForKey(periodDir, key) {
		return nil, nil
	}

	archiveName := fmt.Sprintf("qfs-backp-%s.zip", period.date(now))
	archivePath := filepath.Join(periodDir, archiveName)

	if err := createBackupArchive(archivePath, dbPath, configPath); err != nil {
		return nil, fmt.Errorf("create %s backup archive: %w", period.name, err)
	}

	if err := writePeriodMarker(periodDir, key); err != nil {
		return []string{archivePath}, err
	}

	if err := pruneOldBackups(periodDir, period.retention); err != nil {
		return []string{archivePath}, err
	}

	return []string{archivePath}, nil
}

func hasBackupForKey(periodDir, key string) bool {
	marker := periodMarker{Key: ""}
	data, err := os.ReadFile(periodMarkerPath(periodDir))
	if err != nil {
		return false
	}
	if err := json.Unmarshal(data, &marker); err != nil {
		return false
	}
	return marker.Key == key
}

type periodMarker struct {
	Key string `json:"key"`
}

func periodMarkerPath(periodDir string) string {
	return filepath.Join(periodDir, ".period.json")
}

func writePeriodMarker(periodDir, key string) error {
	data, err := json.MarshalIndent(periodMarker{Key: key}, "", " ")
	if err != nil {
		return err
	}
	return os.WriteFile(periodMarkerPath(periodDir), data, 0644)
}

func pruneOldBackups(periodDir string, keep int) error {
	entries, err := os.ReadDir(periodDir)
	if err != nil {
		return fmt.Errorf("read backups dir: %w", err)
	}

	files := make([]os.DirEntry, 0, len(entries))
	for _, entry := range entries {
		if entry.IsDir() {
			continue
		}
		if strings.HasSuffix(entry.Name(), ".zip") {
			files = append(files, entry)
		}
	}

	if len(files) <= keep {
		return nil
	}

	sort.Slice(files, func(i, j int) bool {
		infoI, errI := files[i].Info()
		infoJ, errJ := files[j].Info()
		if errI != nil || errJ != nil {
			return files[i].Name() < files[j].Name()
		}
		return infoI.ModTime().Before(infoJ.ModTime())
	})

	for i := 0; i < len(files)-keep; i++ {
		path := filepath.Join(periodDir, files[i].Name())
		if err := os.Remove(path); err != nil {
			return fmt.Errorf("remove old backup %s: %w", path, err)
		}
	}

	return nil
}

func createBackupArchive(destPath, dbPath, configPath string) error {
	file, err := os.Create(destPath)
	if err != nil {
		return err
	}
	defer file.Close()

	zipWriter := zip.NewWriter(file)
	if err := addZipFile(zipWriter, dbPath); err != nil {
		_ = zipWriter.Close()
		return err
	}
	_ = addZipOptionalFile(zipWriter, dbPath+"-wal")
	_ = addZipOptionalFile(zipWriter, dbPath+"-shm")

	if strings.TrimSpace(configPath) != "" {
		_ = addZipOptionalFile(zipWriter, configPath)
	}

	if err := zipWriter.Close(); err != nil {
		return err
	}
	return file.Sync()
}

func addZipOptionalFile(writer *zip.Writer, path string) error {
	if _, err := os.Stat(path); err != nil {
		return nil
	}
	return addZipFile(writer, path)
}

func addZipFile(writer *zip.Writer, path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	info, err := in.Stat()
	if err != nil {
		return err
	}

	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = filepath.Base(path)
	header.Method = zip.Deflate

	out, err := writer.CreateHeader(header)
	if err != nil {
		return err
	}

	_, err = io.Copy(out, in)
	return err
}
```

---

## 2) Scheduler Hook (Main)
Add this to your `main.go` (or equivalent). This schedules daily backups and logs success.

```go
func startBackupScheduler(ctx context.Context, cfg *config.Config, dbPath, configPath string) {
	if cfg == nil {
		return
	}

	hour, minute, err := parseBackupTime(cfg.Backup.Time)
	if err != nil {
		slog.Warn("invalid backup time; using 00:00", "value", cfg.Backup.Time, "error", err)
		hour = 0
		minute = 0
	}

	// Startup check: if no backup exists for the current periods, create one now.
	if created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath); backupErr != nil {
		slog.Error("local backup failed", "error", backupErr)
	} else if len(created) > 0 {
		for _, path := range created {
			slog.Info("local backup completed", "archive", path)
		}
	}

	for {
		next := nextBackupTime(time.Now(), hour, minute)
		timer := time.NewTimer(time.Until(next))

		select {
		case <-ctx.Done():
			timer.Stop()
			return
		case <-timer.C:
			start := time.Now()
			created, backupErr := appstate.EnsureRotatingLocalBackup(dbPath, configPath)
			duration := time.Since(start)
			if backupErr != nil {
				slog.Error("local backup failed", "error", backupErr, "duration", duration)
			} else {
				for _, path := range created {
					slog.Info("local backup completed", "archive", path, "duration", duration)
				}
			}
		}
	}
}

func parseBackupTime(value string) (int, int, error) {
	if strings.TrimSpace(value) == "" {
		return 0, 0, fmt.Errorf("empty backup time")
	}
	parsed, err := time.Parse("15:04", value)
	if err != nil {
		return 0, 0, err
	}
	return parsed.Hour(), parsed.Minute(), nil
}

func nextBackupTime(now time.Time, hour, minute int) time.Time {
	location := now.Location()
	target := time.Date(now.Year(), now.Month(), now.Day(), hour, minute, 0, 0, location)
	if !now.Before(target) {
		target = target.Add(24 * time.Hour)
	}
	return target
}
```

---

## 3) Config Struct (Minimal)
Add to config:

```go
type BackupConfig struct {
	Time string `yaml:"time"`
}
```

Default:
```go
if c.Backup.Time == "" {
	c.Backup.Time = "00:00"
}
```

---

## Notes for Replication
- Keep `backup.time` in local time. Do **not** parse with timezone offsets unless required.
- The `.period.json` marker is what prevents duplicate backups within the same period.
- The archive file name only contains the date. Uniqueness is ensured by per-period directories and the period marker.
- If you change naming or retention, update both the file naming and prune logic together.
93	markdown	Normal file
@@ -0,0 +1,93 @@
<p><strong>TECHNICAL SPECIFICATION</strong></p>
<p>1. Product requirements</p>
<p>The supplied equipment must be new, original, not previously used, and not refurbished. The warranty period is at least 12 months from the date of delivery. All components, including processors, memory, drives, and controllers, must be compatible and pre-tested for compatibility and performance as a single system.</p>
<p>2. Quantitative and qualitative characteristics</p>
<p>2.1. Base requirements for the server platform:</p>
<p>Model (part number): a server configured with a chassis supporting 12 x 2.5" bays for NVMe U.2/U.3 drives.</p>
<p>Form factor: 2U rack-mount.</p>
<p>Number of CPU sockets: 1.</p>
<p>2.2. Processor (CPU) requirements:</p>
<p>Quantity: 1 pc.</p>
<p>Model/family: AMD EPYC 9755, 2.7 GHz, 128 cores / 256 threads. L3 cache: 512 MB. Process node: 4 nm; CPU architecture: Zen 5 (Turin).</p>
<p>Minimum base clock: 2.7 GHz.</p>
<p>Maximum boost clock (turbo): 4.1 GHz.</p>
<p>To deliver full performance for all 12 NVMe drives and the network adapters, the processor and the platform as a whole must provide a sufficient number of PCIe 5.0 lanes.</p>
<p>2.3. Memory (RAM) requirements:</p>
<p>Memory type: DDR5 with error correction (ECC), RDIMM, 6000 MT/s.</p>
<p>Minimum RAM capacity: 2048 GB.</p>
<p>Configuration: the memory modules must be installed in an optimal configuration to provide full bandwidth for all PCIe 5.0 lanes from the NVMe drives. The platform must support installation of at least 16 DDR5 ECC REG 6000 MT/s modules for later scaling.</p>
<p>Supported speed: at least 6000 MT/s.</p>
<p>Approved memory modules: Samsung/Micron/Hynix, DDR5, 64 GB, RDIMM, ECC.</p>
<p>2.4. Storage requirements:</p>
<p>Chassis configuration: delivery is mandatory in a configuration with 12 hot-swap 2.5" bays supporting NVMe over PCIe 5.0 x4 and 2 hot-swap 2.5"/M.2 bays supporting SATA.</p>
<p>Additionally (for the OS): support for installing 2 x M.2 NVMe drives in dedicated slots on the motherboard for the operating system, separate from the main data storage.</p>
<p>2.5. Network interface (NIC) requirements:</p>
<p>Network expansion slots: at least 1 OCP 3.0 SFF slot for installing specialized network adapters.</p>
<p>Additional network adapters (mandatory delivery): one OCP 3.0 network adapter with 2 x 25 Gbit/s Ethernet ports, Intel 810 or Mellanox CX-6.</p>
<p>Built-in management ports: a 1 Gbit/s Ethernet port (RJ-45) for the iBMC management module.</p>
<p>2.6. Expansion interface (PCIe) requirements:</p>
<p>Number of PCIe slots: the 12-drive NVMe configuration consumes most of the PCIe lanes. Nevertheless, PCIe 5.0 slots must remain free for future expansion (for example, installing additional network cards or FPGA/GPU for specific workloads).</p>
<p>Bus architecture: the supplier must provide a diagram of PCIe 5.0 lane allocation between the processor, the RAID controller, and the expansion slots, confirming the absence of bottlenecks.</p>
<p>2.7. Management requirements:</p>
<p>Out-of-band management module: a dedicated iBMC chip.</p>
<p>Intelligent features: critically important are detailed health monitoring of the NVMe drives (SMART, temperature, wear, failure prediction) via the iBMC interface, and support for hot swap of NVMe drives.</p>
<p>Agentless KVM (HTML5).</p>
<p>Support for shared LAN (via the NCSI OCPv3 connector) with VLAN tagging is desirable; default network setting: DHCP IPv4.</p>
<p>Management of server operating parameters (fan mode, power consumption, etc.).</p>
<p>Two kinds of logging must be present:</p>
<p>All hardware events, including memory ECC errors and PCIe and SATA errors (IPMI/Hardware Event Log).</p>
<p>All authentication sessions and changes to system parameters (Audit/Security Log).</p>
<p>Firmware update functionality for the server (BIOS, BMC, CPLD (optional)), both with and without preserving settings.</p>
<p>2.8. Power and cooling requirements:</p>
<p>Power supplies (PSU):</p>
<p>Quantity: 2 pcs. (1+1 redundancy) with hot-swap capability.</p>
<p>Rated power of each: at least 1200 W with 80 Plus Platinum/Titanium certification. The power must be sufficient for simultaneous peak load from the processor, the 12 NVMe drives, and the other components.</p>
<p>Cooling: at least N+1 fan redundancy.</p>
<p>3. Packaging and labeling:</p>
<p>The equipment must be packaged so as to prevent damage during transport.</p>
<p>4. Delivery kit requirements:</p>
<p>Rail kit: tool-less, horizontal loading.</p>
<p>Equipment: C19-C20 or C13-C14 power cables, 1-2 m depending on the PSU; a 19" rack-mount kit; a set of screws; all bays fitted with drive carriers.</p>
41	memory.md	Normal file
@@ -0,0 +1,41 @@
# Changes summary (2026-02-11)

Implemented a strict `lot_category` flow using `pricelist_items.lot_category` only (no parsing from `lot_name`), plus local caching and backfill:

1. Local DB schema + migrations
   - Added a `lot_category` column to `local_pricelist_items` via the `LocalPricelistItem` model.
   - Added local migration `2026_02_11_local_pricelist_item_category` to add the column if missing and create indexes:
     - `idx_local_pricelist_items_pricelist_lot (pricelist_id, lot_name)`
     - `idx_local_pricelist_items_lot_category (lot_category)`

2. Server model/repository
   - Added a `LotCategory` field to `models.PricelistItem`.
   - `PricelistRepository.GetItems` now sets `Category` from `LotCategory` (no parsing from `lot_name`).

3. Sync + local DB helpers
   - `SyncPricelistItems` now saves `lot_category` into the local cache via `PricelistItemToLocal`.
   - Added `LocalDB.CountLocalPricelistItemsWithEmptyCategory` and `LocalDB.ReplaceLocalPricelistItems`.
   - Added `LocalDB.GetLocalLotCategoriesByServerPricelistID` for strict category lookup.
   - Added a `SyncPricelists` backfill step: for used active pricelists with empty categories, force-refresh items from the server.

4. API handler
   - `GET /api/pricelists/:id/items` returns `category` from `local_pricelist_items.lot_category` (no parsing from `lot_name`).

5. Article category foundation
   - New package `internal/article`:
     - `ResolveLotCategoriesStrict` pulls categories from local pricelist items and errors on a missing category.
     - `GroupForLotCategory` maps only allowed codes (CPU/MEM/GPU/M2/SSD/HDD/EDSFF/HHHL/NIC/HCA/DPU/PSU/PS) to article groups; excludes `SFP`.
     - Error type `MissingCategoryForLotError` with base `ErrMissingCategoryForLot`.
6. Tests
   - Added unit tests for the converters and the article category resolver.
   - Added a handler test to ensure `/api/pricelists/:id/items` returns `lot_category`.
   - Added a sync test for category backfill on used pricelist items.
   - `go test ./...` passed.

Additional fixes (2026-02-11):
- Fixed an article parsing bug: the CPU and GPU parsers were swapped in `internal/article/generator.go`. CPU now uses the last token from the CPU lot; GPU uses model+memory from `GPU_vendor_model_mem_iface`.
- Adjusted the configurator base tab layout to align labels on the same row (separate label row + input row grid).

UI rule (2026-02-19):
- In all breadcrumbs, truncate long specification/configuration names to 16 characters using an ellipsis.