21 Commits

Author SHA1 Message Date
92134a6cc1 Support TXT uploads and extend XigmaNAS event parsing 2026-02-04 22:25:43 +03:00
ae588ae75a Register xigmanas vendor parser 2026-02-04 22:15:45 +03:00
b64a8d8709 Add XigmaNAS log parser and tests 2026-02-04 22:14:14 +03:00
Mikhail Chusavitin
f9230e12f3 Update README and CLAUDE docs for current Redfish workflow 2026-02-04 19:49:05 +03:00
Mikhail Chusavitin
bb48b03677 Redfish snapshot/export overhaul and portable release build 2026-02-04 19:43:51 +03:00
Mikhail Chusavitin
c89ee0118f Add pluggable live collectors and simplify API connect form 2026-02-04 19:00:03 +03:00
Mikhail Chusavitin
60c52b18b1 docs: sync README and CLAUDE with current CLI and live API behavior 2026-02-04 11:58:56 +03:00
Mikhail Chusavitin
f6a10d4eac fix: align live flow contracts and preserve existing result state
Closes #9
2026-02-04 11:38:35 +03:00
Mikhail Chusavitin
53849032fe test(server): add smoke and regression tests for archive and live flows
Closes #8
2026-02-04 10:14:55 +03:00
Mikhail Chusavitin
c54abf11b7 chore(test): ignore local helper mains in go test 2026-02-04 10:11:13 +03:00
Mikhail Chusavitin
596eda709c feat(models): add source metadata to analysis result
Closes #7
2026-02-04 10:09:15 +03:00
Mikhail Chusavitin
d38d0c9d30 feat(backend): add in-memory collect job manager and mock executor 2026-02-04 10:01:51 +03:00
Mikhail Chusavitin
aa3c82d9ba feat(api): add live collection contract endpoints 2026-02-04 09:54:48 +03:00
Mikhail Chusavitin
5a982d7ca8 feat(ui): add live collection job status mock screen 2026-02-04 09:50:46 +03:00
Mikhail Chusavitin
601e21f184 feat(ui): validate API form and improve error UX 2026-02-04 09:46:07 +03:00
Mikhail Chusavitin
c8772d97ed feat(ui): add archive/api data source switch 2026-02-04 09:39:04 +03:00
Mikhail Chusavitin
8e99c36888 Draft a plan for modernizing the interface 2026-02-04 09:20:30 +03:00
241e4e3605 Update README.md with comprehensive documentation 2026-01-31 00:35:10 +03:00
eeed509b43 Update documentation to reflect current implementation 2026-01-31 00:21:48 +03:00
Mikhail Chusavitin
70cd541d9e v1.3.0: Add multiple vendor parsers and enhanced hardware detection
New parsers:
- NVIDIA Field Diagnostics parser with dmidecode output support
- NVIDIA Bug Report parser with comprehensive hardware extraction
- Supermicro crashdump (CDump.txt) parser
- Generic fallback parser for unrecognized text files

Enhanced GPU parsing (nvidia-bug-report):
- Model and manufacturer detection (NVIDIA H100 80GB HBM3)
- UUID, Video BIOS version, IRQ information
- Bus location (BDF), DMA size/mask, device minor
- PCIe bus type details

New hardware detection (nvidia-bug-report):
- System Information: server S/N, UUID, manufacturer, product name
- CPU: model, S/N, cores, threads, frequencies from dmidecode
- Memory: P/N, S/N, manufacturer, speed for all DIMMs
- Power Supplies: manufacturer, model, S/N, wattage, status
- Network Adapters: Ethernet/InfiniBand controllers with VPD data
  - Model, P/N, S/N from lspci Vital Product Data
  - Port count/type detection (QSFP56, OSFP, etc.)
  - Support for ConnectX-6/7 adapters

Archive handling improvements:
- Plain .gz file support (not just tar.gz)
- Increased size limit for plain gzip files (50MB)
- Better error handling for mixed archive formats

Web interface enhancements:
- Display parser name and filename badges
- Improved file info section with visual indicators

Co-Authored-By: Claude (qwen3-coder:480b) <noreply@anthropic.com>
2026-01-30 17:19:47 +03:00
Mikhail Chusavitin
21f4e5a67e v1.2.0: Enhanced Inspur/Kaytus parser with GPU, PCIe, and storage support
Major improvements:
- Add CSV SEL event parser for Kaytus firmware format
- Add PCIe device parser with link speed/width detection
- Add GPU temperature and PCIe link monitoring
- Add disk backplane parser for storage bay information
- Fix memory module detection (only show installed DIMMs)

Parser enhancements:
- Parse RESTful PCIe Device info (max/current link width/speed)
- Parse GPU sensor data (core and memory temperatures)
- Parse diskbackplane info (slot count, installed drives)
- Parse SEL events from CSV format (selelist.csv)
- Fix memory Present status logic (check mem_mod_status)

Web interface improvements:
- Add PCIe link degradation highlighting (red when current < max)
- Add storage table with Present status and location
- Update memory specification to show only installed modules with frequency
- Sort events from newest to oldest
- Filter out N/A serial numbers from display

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-30 12:30:18 +03:00
59 changed files with 8768 additions and 345 deletions

CLAUDE.md
View File

@@ -1,193 +1,95 @@
# BMC Analyzer - Instructions for Claude Code
# LOGPile - Engineering Notes (for Claude/Codex)
## Project description
## Project summary
An application for analyzing diagnostic information from BMC servers (IPMI).
It is a standalone Go binary with an embedded web interface.
LOGPile is a standalone Go app for BMC diagnostics analysis with embedded web UI.
### Functionality
Current product modes:
1. Upload and parse vendor archives / JSON snapshots.
2. Collect live data via Redfish and analyze/export it.
**Input:**
- An archive (tar.gz/zip) with IPMI server diagnostic data
## Runtime architecture
**Processing:**
- Parse the System Event Log (SEL), the IPMI event journal
- Parse FRU (Field Replaceable Unit) data: component serial numbers
- Parse the server configuration (CPU, RAM, disks, etc.)
- Go + `net/http` (`http.ServeMux`)
- Embedded UI (`web/embed.go`, `//go:embed templates static`)
- In-memory state (`Server.result`, `Server.detectedVendor`)
- Job manager for live collect status/logs
**Output:**
- A web interface with human-readable information
- Log export to TXT/JSON
- Configuration export to JSON
- Serial number export to CSV
Default port: `8082`.
## Architecture
## Key flows
- **Type:** Standalone binary with an embedded web server
- **Language:** Go
- **UI:** Embedded HTML + CSS + vanilla JS (or Alpine.js)
- **Port:** localhost:8080 (default)
### Upload flow (`POST /api/upload`)
- Accepts multipart file field `archive`.
- If the file looks like JSON, it is parsed as a `models.AnalysisResult` snapshot.
- Otherwise passed to archive parser (`parser.NewBMCParser().ParseFromReader(...)`).
- Result stored in memory and exposed by API/UI.
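A minimal client sketch of this flow (assumptions: the server runs locally on the default port `8082`; only the endpoint and the `archive` field name come from the handler description above):

```go
package main

import (
	"bytes"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("diag.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	// The field name must be "archive" (see the upload flow above).
	part, err := mw.CreateFormFile("archive", "diag.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(part, f); err != nil {
		log.Fatal(err)
	}
	mw.Close()

	resp, err := http.Post("http://localhost:8082/api/upload", mw.FormDataContentType(), &body)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("upload status:", resp.Status)
}
```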
## Project structure
### Live flow (`POST /api/collect`)
- Validates request (`host/protocol/port/username/auth_type/tls_mode`).
- Runs collector asynchronously with progress callback.
- On success:
- source metadata is set (`source_type=api`, protocol/host/date),
- the result becomes the current in-memory dataset.
- On failure/cancel, the previous dataset stays unchanged.
```
bmc-analyzer/
├── cmd/bmc-analyzer/main.go # entry point
├── internal/
│ ├── parser/ # archive and IPMI data parsing
│ ├── models/ # data models
│ ├── analyzer/ # analysis logic
│ ├── exporter/ # data export
│ └── server/ # HTTP server and handlers
├── web/ # embedded web interface
│ ├── static/ # CSS, JS, images
│ └── templates/ # HTML templates
├── testdata/ # sample archives for tests
├── go.mod
├── Makefile
└── README.md
```
## Collectors
## Tech stack
Registry: `internal/collector/registry.go`
### Backend
- Go 1.21+
- Standard library (net/http, archive/tar, compress/gzip)
- embed for bundling web assets
- Optional: fiber or gin for routing (your call)
- `redfish` (real collector):
- dynamic discovery of Systems/Chassis/Managers,
- CPU/RAM/Storage/GPU/PSU/NIC/PCIe/Firmware mapping,
- raw Redfish snapshot (`result.RawPayloads["redfish_tree"]`) for future offline analysis,
- progress logs include active collection stage and snapshot progress.
- `ipmi` is currently a mock collector scaffold.
### Frontend
- Vanilla JavaScript or Alpine.js (minimalism)
- CSS (Tailwind CSS via CDN is an option)
- No bundlers: everything embedded in the binary
## Export behavior
### IPMI parsing
- SEL format: usually the text output of `ipmitool sel list`, or binary
- FRU format: the output of `ipmitool fru print`
- Configuration: assorted text files from the archive
Endpoints:
- `/api/export/csv`
- `/api/export/json`
- `/api/export/txt`
## Development stages
Filename pattern for all exports:
`YYYY-MM-DD (SERVER MODEL) - SERVER SN.<ext>`
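A hypothetical helper mirroring this pattern; the actual exporter function name and signature are not shown in this diff:

```go
package exporter // illustrative placement, not the real file

import (
	"fmt"
	"time"
)

// exportFilename builds "YYYY-MM-DD (MODEL) - SN.ext". Name and signature
// are assumptions for illustration only.
func exportFilename(model, serial, ext string, t time.Time) string {
	return fmt.Sprintf("%s (%s) - %s.%s", t.Format("2006-01-02"), model, serial, ext)
}
```

For example, `exportFilename("SYS-421GE-TNHR2", "C8X123456789", "json", t)` with a 2026-02-04 date yields `2026-02-04 (SYS-421GE-TNHR2) - C8X123456789.json`, matching the README example.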
### 1. Base structure ✓
- [x] Directory structure created
- [ ] go.mod initialized
- [ ] Makefile created
Notes:
- JSON export contains full `AnalysisResult`, including `raw_payloads`.
- TXT export is tabular and mirrors UI sections (no raw JSON section).
### 2. Archive parser
- [ ] tar.gz extraction
- [ ] zip extraction
- [ ] File type detection inside the archive
## CLI flags (`cmd/logpile/main.go`)
### 3. IPMI data parsers
- [ ] SEL parser (System Event Log)
- [ ] FRU parser (serial numbers)
- [ ] Config parser (server configuration)
- `--port`
- `--file` (reserved/preload, not active workflow)
- `--version`
- `--no-browser`
- `--hold-on-crash` (default true on Windows) — keeps console open on fatal crash for debugging.
### 4. Data models
- [ ] Event (SEL events)
- [ ] Hardware (configuration)
- [ ] SerialNumber (component serials)
## Build / release
### 5. Web server
- [ ] HTTP server with embedded files
- [ ] Upload handler for archives
- [ ] API endpoints for fetching data
- [ ] Export handlers
- `make build` -> single local binary (`CGO_ENABLED=0`).
- `make build-all` -> cross-platform binaries.
- Tags/releases are published with `tea`.
- Release notes live in `docs/releases/<tag>.md`.
### 6. Web interface
- [ ] Main page with an upload form
- [ ] Event display (timeline/table)
- [ ] Configuration display
- [ ] Serial number table
- [ ] Export buttons
## Testing expectations
### 7. Exporters
- [ ] CSV export (serials)
- [ ] JSON export (config, events)
- [ ] TXT report (logs)
### 8. Testing and build
- [ ] Unit tests for parsers
- [ ] Integration tests
- [ ] Cross-platform builds (Linux, Windows, Mac)
## Usage examples
Before merge:
```bash
# Plain run
./bmc-analyzer
# With an explicit port
./bmc-analyzer --port 9000
# With a preloaded file
./bmc-analyzer --file /path/to/bmc-archive.tar.gz
# Cross-compilation
make build-all
go test ./...
```
## IPMI data format
If touching collectors/handlers, prefer adding or updating tests in:
- `internal/collector/*_test.go`
- `internal/server/*_test.go`
### SEL (System Event Log)
```
SEL Record ID : 0001
Record Type : 02
Timestamp : 01/15/2025 14:23:45
Generator ID : 0020
EvM Revision : 04
Sensor Type : Temperature
Sensor Number : 01
Event Type : Threshold
Event Direction : Assertion Event
Event Data : 010000
Description : Upper Critical - going high
```
## Practical coding guidance
### FRU (Field Replaceable Unit)
```
FRU Device Description : Builtin FRU Device (ID 0)
Board Mfg Date : Mon Jan 1 00:00:00 1996
Board Mfg : Supermicro
Board Product : X11DPH-T
Board Serial : WM194S001234
Board Part Number : X11DPH-TQ
```
## API Endpoints (planned)
```
POST /api/upload # Upload an archive
GET /api/events # Get the event list
GET /api/config # Get the configuration
GET /api/serials # Get serial numbers
GET /api/export/csv # Export to CSV
GET /api/export/json # Export to JSON
GET /api/export/txt # Export a text report
DELETE /api/clear # Clear loaded data
```
## Next steps
1. Initialize the Go module
2. Create the base package structure
3. Implement the archive parser (tar.gz)
4. Build a simple HTTP server with an upload form
5. Implement SEL log parsing
6. Add a web interface for displaying the data
## Notes
- All web interface files must be embedded in the binary via `//go:embed`
- Prioritize simplicity and a minimum of dependencies
- Security: validate uploaded archives (size, file types)
- The UI should be simple and functional, not overloaded
- Russian language support in the interface
## Open questions
1. Which BMC vendors exactly are in use? (Supermicro, Dell iDRAC, HP iLO, etc.)
2. Are there real sample archives for testing?
3. Is support for different SEL formats needed (text vs binary)?
4. Which metrics/events matter most for analysis?
5. Is event filtering by severity needed (Critical, Warning, Info)?
- Keep API contracts stable with frontend (`web/static/js/app.js`).
- When adding Redfish mappings, prefer tolerant/fallback parsing:
- alternate collection paths,
- `@odata.id` references and embedded members,
- deduping by serial/BDF/slot+model.
- Avoid breaking snapshot backward compatibility (`AnalysisResult` JSON shape).

View File

@@ -6,7 +6,7 @@ COMMIT=$(shell git rev-parse --short HEAD 2>/dev/null || echo "none")
LDFLAGS=-ldflags "-X main.version=$(VERSION) -X main.commit=$(COMMIT)"
build:
go build $(LDFLAGS) -o bin/$(BINARY_NAME) ./cmd/logpile
CGO_ENABLED=0 go build $(LDFLAGS) -o bin/$(BINARY_NAME) ./cmd/logpile
run: build
./bin/$(BINARY_NAME)
@@ -19,11 +19,11 @@ test:
# Cross-platform builds
build-all: clean
GOOS=linux GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-linux-amd64 ./cmd/logpile
GOOS=linux GOARCH=arm64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-linux-arm64 ./cmd/logpile
GOOS=darwin GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-darwin-amd64 ./cmd/logpile
GOOS=darwin GOARCH=arm64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-darwin-arm64 ./cmd/logpile
GOOS=windows GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-windows-amd64.exe ./cmd/logpile
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-linux-amd64 ./cmd/logpile
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-linux-arm64 ./cmd/logpile
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-darwin-amd64 ./cmd/logpile
CGO_ENABLED=0 GOOS=darwin GOARCH=arm64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-darwin-arm64 ./cmd/logpile
CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build $(LDFLAGS) -o bin/$(BINARY_NAME)-windows-amd64.exe ./cmd/logpile
dev:
go run ./cmd/logpile

README.md
View File

@@ -1,18 +1,151 @@
# logpile
# LOGPile
BMC Log analyzer
LOGPile is a standalone Go application for analyzing BMC diagnostic data.
## Running from source
Two scenarios are supported:
1. Uploading archives/snapshots and analyzing them offline in the web UI.
2. Live collection via the Redfish API, followed by export and offline re-upload.
## What it does
- Standalone binary with an embedded UI (no external static files).
- Vendor archive parsing (Supermicro, Inspur/Kaytus, NVIDIA, generic fallback).
- Live Redfish collection (`/api/collect`) with progress and a step log.
- Extended Redfish snapshot:
- normalized data (CPU/RAM/Storage/GPU/PSU/NIC/PCIe/Firmware),
- the raw `redfish_tree` for future analysis.
- Re-uploading a JSON snapshot via `/api/upload` for offline work.
- Export to CSV / JSON / TXT.
## Requirements
- Go 1.22+
## Build
```bash
# Build
make build
# Start the web server
./bin/logpile serve
# Open in the browser
open http://localhost:8080
```
Requirements: Go 1.22+
The binary ends up at `bin/logpile`.
For cross-builds:
```bash
make build-all
```
Artifacts:
- `bin/logpile-linux-amd64`
- `bin/logpile-linux-arm64`
- `bin/logpile-darwin-amd64`
- `bin/logpile-darwin-arm64`
- `bin/logpile-windows-amd64.exe`
## Run
```bash
./bin/logpile
./bin/logpile --port 8082
./bin/logpile --no-browser
./bin/logpile --version
```
Crash debugging (so the console stays open):
```bash
./bin/logpile --hold-on-crash
```
> On Windows, `--hold-on-crash` is enabled by default.
## Upload formats
`POST /api/upload` accepts:
- archives: `.tar`, `.tar.gz`, `.tgz`
- a JSON snapshot (`AnalysisResult`)
## Live Redfish
Start a live collection:
```http
POST /api/collect
```
Example body:
```json
{
"host": "bmc01.example.local",
"protocol": "redfish",
"port": 443,
"username": "admin",
"auth_type": "password",
"password": "secret",
"tls_mode": "insecure"
}
```
Job lifecycle:
`queued -> running -> success|failed|canceled`
Status and progress:
- `GET /api/collect/{id}`
- `POST /api/collect/{id}/cancel`
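A client-side polling sketch; the `status` and `progress` JSON field names are assumptions, since the exact response shape is not shown in this diff:

```go
package logpileclient

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// waitForJob polls GET /api/collect/{id} until the job reaches a terminal state.
func waitForJob(baseURL, id string) error {
	for {
		resp, err := http.Get(baseURL + "/api/collect/" + id)
		if err != nil {
			return err
		}
		var st struct {
			Status   string `json:"status"`   // assumed field name
			Progress int    `json:"progress"` // assumed field name
		}
		err = json.NewDecoder(resp.Body).Decode(&st)
		resp.Body.Close()
		if err != nil {
			return err
		}
		switch st.Status {
		case "success":
			return nil
		case "failed", "canceled":
			return fmt.Errorf("collect job ended as %q at %d%%", st.Status, st.Progress)
		}
		time.Sleep(time.Second)
	}
}
```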
## Export
- `GET /api/export/csv`: serial numbers
- `GET /api/export/json`: the full `AnalysisResult` (including `raw_payloads`)
- `GET /api/export/txt`: a tabular report mirroring the UI sections
Export filenames:
`YYYY-MM-DD (SERVER MODEL) - SERVER SN.<ext>`
Example:
`2026-02-04 (SYS-421GE-TNHR2) - C8X123456789.json`
## API
```text
POST /api/upload
POST /api/collect
GET /api/collect/{id}
POST /api/collect/{id}/cancel
GET /api/status
GET /api/parsers
GET /api/events
GET /api/sensors
GET /api/config
GET /api/serials
GET /api/firmware
GET /api/export/csv
GET /api/export/json
GET /api/export/txt
DELETE /api/clear
POST /api/shutdown
```
`/api/status` and `/api/config` carry source metadata:
- `source_type`: `archive` | `api`
- `protocol`: `redfish` | `ipmi` (may be empty for archives)
- `target_host`
- `collected_at`
## Structure
```text
cmd/logpile/main.go # entrypoint
internal/collector/ # live collectors (redfish, ipmi mock)
internal/parser/ # archive parsers
internal/server/ # HTTP handlers
internal/exporter/ # CSV/JSON/TXT export
internal/models/ # data contracts
web/ # embedded templates/static
```
## License
MIT; see `LICENSE`.

View File

@@ -1,6 +1,7 @@
package main
import (
"bufio"
"flag"
"fmt"
"log"
@@ -21,7 +22,8 @@ var (
)
func main() {
port := flag.Int("port", 8080, "HTTP server port")
holdOnCrash := flag.Bool("hold-on-crash", runtime.GOOS == "windows", "Wait for Enter on crash to keep console open")
port := flag.Int("port", 8082, "HTTP server port")
file := flag.String("file", "", "Pre-load archive file")
showVersion := flag.Bool("version", false, "Show version")
noBrowser := flag.Bool("no-browser", false, "Don't open browser automatically")
@@ -54,11 +56,22 @@ func main() {
}()
}
if err := srv.Run(); err != nil {
log.Fatalf("Server error: %v", err)
if err := runServer(srv); err != nil {
log.Printf("FATAL: %v", err)
maybeWaitForCrashInput(*holdOnCrash)
os.Exit(1)
}
}
func runServer(srv *server.Server) (runErr error) {
defer func() {
if recovered := recover(); recovered != nil {
runErr = fmt.Errorf("panic: %v", recovered)
}
}()
return srv.Run()
}
// openBrowser opens the default browser with the given URL
func openBrowser(url string) {
var cmd *exec.Cmd
@@ -76,3 +89,23 @@ func openBrowser(url string) {
log.Printf("Failed to open browser: %v", err)
}
}
func maybeWaitForCrashInput(enabled bool) {
if !enabled || !isInteractiveConsole() {
return
}
fmt.Fprintln(os.Stderr, "\nApplication crashed. Press Enter to close...")
_, _ = bufio.NewReader(os.Stdin).ReadString('\n')
}
func isInteractiveConsole() bool {
stdinInfo, err := os.Stdin.Stat()
if err != nil {
return false
}
stderrInfo, err := os.Stderr.Stat()
if err != nil {
return false
}
return (stdinInfo.Mode()&os.ModeCharDevice) != 0 && (stderrInfo.Mode()&os.ModeCharDevice) != 0
}

docs/releases/v1.2.1.md
View File

@@ -0,0 +1,24 @@
# LOGPile v1.2.1
Release date: 2026-02-04
## Highlights
- Redfish collection significantly expanded: dynamic Systems/Chassis/Managers discovery, PSU/GPU/PCIe inventory mapping, improved NVMe and storage parsing (including SimpleStorage and chassis drive fallbacks).
- Added Redfish snapshot support with broad raw Redfish tree capture for future offline analysis.
- Upload flow now accepts JSON snapshots in addition to archives, enabling offline re-open of live Redfish collections.
- Export UX improved:
- Export filenames now follow `YYYY-MM-DD (SERVER MODEL) - SERVER SN`.
- TXT export now outputs tabular sections matching web UI views (no raw JSON dump).
- Live API UI improvements: parser/file badges for Redfish sessions and clearer upload format messaging.
- Redfish progress logs are more informative (snapshot stage and active top-level roots).
- Build/distribution hardening:
- Cross-platform builds via `make build-all`.
- `CGO_ENABLED=0` for more portable single-binary distribution.
- Crash hold option to keep console open for debugging (`-hold-on-crash`, enabled by default on Windows).
## Artifacts
- `bin/logpile-linux-amd64`
- `bin/logpile-linux-arm64`
- `bin/logpile-darwin-amd64`
- `bin/logpile-darwin-arm64`
- `bin/logpile-windows-amd64.exe`

View File

@@ -0,0 +1,18 @@
package collector
import (
"context"
"time"
)
func sleepWithContext(ctx context.Context, d time.Duration) bool {
timer := time.NewTimer(d)
defer timer.Stop()
select {
case <-ctx.Done():
return false
case <-timer.C:
return true
}
}
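// Usage sketch (illustrative, not part of the original file): a collector
// step can delay in a cancellable way; false means the context was done
// before the full duration elapsed, as in the IPMI mock below:
//
//	if !sleepWithContext(ctx, 150*time.Millisecond) {
//		return nil, ctx.Err()
//	}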

View File

@@ -0,0 +1,42 @@
package collector
import (
"context"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
type IPMIMockConnector struct{}
func NewIPMIMockConnector() *IPMIMockConnector {
return &IPMIMockConnector{}
}
func (c *IPMIMockConnector) Protocol() string {
return "ipmi"
}
func (c *IPMIMockConnector) Collect(ctx context.Context, req Request, emit ProgressFn) (*models.AnalysisResult, error) {
steps := []Progress{
{Status: "running", Progress: 20, Message: "IPMI: подключение к BMC..."},
{Status: "running", Progress: 55, Message: "IPMI: чтение инвентаря..."},
{Status: "running", Progress: 85, Message: "IPMI: нормализация данных..."},
}
for _, step := range steps {
if !sleepWithContext(ctx, 150*time.Millisecond) {
return nil, ctx.Err()
}
if emit != nil {
emit(step)
}
}
return &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
Hardware: &models.HardwareConfig{},
}, nil
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,202 @@
package collector
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
)
func TestRedfishConnectorCollect(t *testing.T) {
mux := http.NewServeMux()
register := func(path string, payload interface{}) {
mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(payload)
})
}
register("/redfish/v1", map[string]interface{}{"Name": "ServiceRoot"})
register("/redfish/v1/Systems/1", map[string]interface{}{
"Manufacturer": "Supermicro",
"Model": "SYS-TEST",
"SerialNumber": "SYS123",
"BiosVersion": "2.1a",
})
register("/redfish/v1/Systems/1/Bios", map[string]interface{}{"Version": "2.1a"})
register("/redfish/v1/Systems/1/SecureBoot", map[string]interface{}{"SecureBootCurrentBoot": "Enabled"})
register("/redfish/v1/Systems/1/Processors", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/Processors/CPU1"},
},
})
register("/redfish/v1/Systems/1/Processors/CPU1", map[string]interface{}{
"Name": "CPU1",
"Model": "Xeon Gold",
"TotalCores": 32,
"TotalThreads": 64,
"MaxSpeedMHz": 3600,
})
register("/redfish/v1/Systems/1/Memory", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/Memory/DIMM1"},
},
})
register("/redfish/v1/Systems/1/Memory/DIMM1", map[string]interface{}{
"Name": "DIMM A1",
"CapacityMiB": 32768,
"MemoryDeviceType": "DDR5",
"OperatingSpeedMhz": 4800,
"Status": map[string]interface{}{
"Health": "OK",
},
})
register("/redfish/v1/Systems/1/Storage", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/Storage/1"},
},
})
register("/redfish/v1/Systems/1/Storage/1", map[string]interface{}{
"Drives": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/Storage/1/Drives/1"},
},
})
register("/redfish/v1/Systems/1/Storage/1/Drives/1", map[string]interface{}{
"Name": "Drive1",
"Model": "NVMe Test",
"MediaType": "SSD",
"Protocol": "NVMe",
"CapacityGB": 960,
"SerialNumber": "SN123",
})
register("/redfish/v1/Systems/1/PCIeDevices", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/PCIeDevices/GPU1"},
},
})
register("/redfish/v1/Systems/1/PCIeDevices/GPU1", map[string]interface{}{
"Id": "GPU1",
"Name": "NVIDIA H100",
"Model": "NVIDIA H100 PCIe",
"Manufacturer": "NVIDIA",
"SerialNumber": "GPU-SN-001",
"PCIeFunctions": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1/PCIeDevices/GPU1/PCIeFunctions",
},
})
register("/redfish/v1/Systems/1/PCIeDevices/GPU1/PCIeFunctions", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/PCIeFunctions/GPU1F0"},
},
})
register("/redfish/v1/Systems/1/PCIeFunctions/GPU1F0", map[string]interface{}{
"FunctionId": "0000:65:00.0",
"VendorId": "0x10DE",
"DeviceId": "0x2331",
"ClassCode": "0x030200",
"CurrentLinkWidth": 16,
"CurrentLinkSpeed": "16.0 GT/s",
"MaxLinkWidth": 16,
"MaxLinkSpeed": "16.0 GT/s",
})
register("/redfish/v1/Chassis/1/NetworkAdapters", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/1"},
},
})
register("/redfish/v1/Chassis/1/Power", map[string]interface{}{
"PowerSupplies": []map[string]interface{}{
{
"MemberId": "PSU1",
"Name": "PSU Slot 1",
"Model": "PWS-2K01A-1R",
"Manufacturer": "Delta",
"PowerCapacityWatts": 2000,
"PowerInputWatts": 1600,
"LastPowerOutputWatts": 1200,
"LineInputVoltage": 230,
"Status": map[string]interface{}{
"Health": "OK",
"State": "Enabled",
},
},
},
})
register("/redfish/v1/Chassis/1/NetworkAdapters/1", map[string]interface{}{
"Name": "Mellanox",
"Model": "ConnectX-6",
"SerialNumber": "NIC123",
})
register("/redfish/v1/Managers/1", map[string]interface{}{
"FirmwareVersion": "1.25",
})
register("/redfish/v1/Managers/1/NetworkProtocol", map[string]interface{}{
"Id": "NetworkProtocol",
})
ts := httptest.NewServer(mux)
defer ts.Close()
c := NewRedfishConnector()
result, err := c.Collect(context.Background(), Request{
Host: ts.URL,
Port: 443,
Protocol: "redfish",
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "strict",
}, nil)
if err != nil {
t.Fatalf("collect failed: %v", err)
}
if result.Hardware == nil {
t.Fatalf("expected hardware config")
}
if result.Hardware.BoardInfo.ProductName != "SYS-TEST" {
t.Fatalf("unexpected board model: %q", result.Hardware.BoardInfo.ProductName)
}
if len(result.Hardware.CPUs) != 1 {
t.Fatalf("expected one CPU, got %d", len(result.Hardware.CPUs))
}
if len(result.Hardware.Memory) != 1 {
t.Fatalf("expected one DIMM, got %d", len(result.Hardware.Memory))
}
if len(result.Hardware.Storage) != 1 {
t.Fatalf("expected one drive, got %d", len(result.Hardware.Storage))
}
if len(result.Hardware.NetworkAdapters) != 1 {
t.Fatalf("expected one nic, got %d", len(result.Hardware.NetworkAdapters))
}
if len(result.Hardware.GPUs) != 1 {
t.Fatalf("expected one gpu, got %d", len(result.Hardware.GPUs))
}
if result.Hardware.GPUs[0].BDF != "0000:65:00.0" {
t.Fatalf("unexpected gpu BDF: %q", result.Hardware.GPUs[0].BDF)
}
if len(result.Hardware.PCIeDevices) != 1 {
t.Fatalf("expected one pcie device, got %d", len(result.Hardware.PCIeDevices))
}
if len(result.Hardware.PowerSupply) != 1 {
t.Fatalf("expected one psu, got %d", len(result.Hardware.PowerSupply))
}
if result.Hardware.PowerSupply[0].WattageW != 2000 {
t.Fatalf("unexpected psu wattage: %d", result.Hardware.PowerSupply[0].WattageW)
}
if len(result.Hardware.Firmware) == 0 {
t.Fatalf("expected firmware entries")
}
if result.RawPayloads == nil {
t.Fatalf("expected raw payloads")
}
treeAny, ok := result.RawPayloads["redfish_tree"]
if !ok {
t.Fatalf("expected redfish_tree in raw payloads")
}
tree, ok := treeAny.(map[string]interface{})
if !ok || len(tree) == 0 {
t.Fatalf("expected non-empty redfish_tree, got %#v", treeAny)
}
}

View File

@@ -0,0 +1,37 @@
package collector
import "sync"
type Registry struct {
mu sync.RWMutex
connectors map[string]Connector
}
func NewRegistry() *Registry {
return &Registry{
connectors: make(map[string]Connector),
}
}
func NewDefaultRegistry() *Registry {
r := NewRegistry()
r.Register(NewRedfishConnector())
r.Register(NewIPMIMockConnector())
return r
}
func (r *Registry) Register(connector Connector) {
if connector == nil {
return
}
r.mu.Lock()
r.connectors[connector.Protocol()] = connector
r.mu.Unlock()
}
func (r *Registry) Get(protocol string) (Connector, bool) {
r.mu.RLock()
connector, ok := r.connectors[protocol]
r.mu.RUnlock()
return connector, ok
}
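// Usage sketch (illustrative, not part of the original file): handlers
// resolve a connector by the request's protocol string; NewDefaultRegistry
// wires in the real Redfish connector plus the IPMI mock.
//
//	reg := NewDefaultRegistry()
//	if c, ok := reg.Get("redfish"); ok {
//		result, err := c.Collect(ctx, req, emitProgress)
//		// handle result/err
//	}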

View File

@@ -0,0 +1,31 @@
package collector
import (
"context"
"git.mchus.pro/mchus/logpile/internal/models"
)
type Request struct {
Host string
Protocol string
Port int
Username string
AuthType string
Password string
Token string
TLSMode string
}
type Progress struct {
Status string
Progress int
Message string
}
type ProgressFn func(Progress)
type Connector interface {
Protocol() string
Collect(ctx context.Context, req Request, emit ProgressFn) (*models.AnalysisResult, error)
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"io"
"text/tabwriter"
"git.mchus.pro/mchus/logpile/internal/models"
)
@@ -125,13 +126,16 @@ func (e *Exporter) ExportTXT(w io.Writer) error {
return nil
}
fmt.Fprintf(w, "File: %s\n", e.result.Filename)
fmt.Fprintf(w, "File:\t%s\n", e.result.Filename)
fmt.Fprintf(w, "Source:\t%s\n", e.result.SourceType)
fmt.Fprintf(w, "Protocol:\t%s\n", e.result.Protocol)
fmt.Fprintf(w, "Target:\t%s\n", e.result.TargetHost)
fmt.Fprintln(w)
// Server model and serial number
if e.result.Hardware != nil && e.result.Hardware.BoardInfo.ProductName != "" {
fmt.Fprintln(w)
fmt.Fprintf(w, "Server Model: %s\n", e.result.Hardware.BoardInfo.ProductName)
fmt.Fprintf(w, "Serial Number: %s\n", e.result.Hardware.BoardInfo.SerialNumber)
fmt.Fprintf(w, "Server Model:\t%s\n", e.result.Hardware.BoardInfo.ProductName)
fmt.Fprintf(w, "Serial Number:\t%s\n", e.result.Hardware.BoardInfo.SerialNumber)
}
fmt.Fprintln(w)
@@ -139,118 +143,172 @@ func (e *Exporter) ExportTXT(w io.Writer) error {
if e.result.Hardware != nil {
hw := e.result.Hardware
// Firmware
// Firmware tab
if len(hw.Firmware) > 0 {
fmt.Fprintln(w, "FIRMWARE VERSIONS")
fmt.Fprintln(w, "-----------------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Component\tVersion\tBuild Time")
for _, fw := range hw.Firmware {
fmt.Fprintf(w, " %s: %s\n", fw.DeviceName, fw.Version)
fmt.Fprintf(tw, "%s\t%s\t%s\n", fw.DeviceName, fw.Version, fw.BuildTime)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// CPUs
// CPU tab
if len(hw.CPUs) > 0 {
fmt.Fprintln(w, "PROCESSORS")
fmt.Fprintln(w, "----------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Socket\tModel\tCores\tThreads\tFreq MHz\tTurbo MHz\tTDP W\tPPIN/SN")
for _, cpu := range hw.CPUs {
fmt.Fprintf(w, " Socket %d: %s\n", cpu.Socket, cpu.Model)
fmt.Fprintf(w, " Cores: %d, Threads: %d, Freq: %d MHz (Turbo: %d MHz)\n",
cpu.Cores, cpu.Threads, cpu.FrequencyMHz, cpu.MaxFreqMHz)
fmt.Fprintf(w, " TDP: %dW, L3 Cache: %d KB\n", cpu.TDP, cpu.L3CacheKB)
id := cpu.SerialNumber
if id == "" {
id = cpu.PPIN
}
fmt.Fprintf(tw, "CPU%d\t%s\t%d\t%d\t%d\t%d\t%d\t%s\n",
cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz, cpu.MaxFreqMHz, cpu.TDP, id)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// Memory
// Memory tab
if len(hw.Memory) > 0 {
fmt.Fprintln(w, "MEMORY")
fmt.Fprintln(w, "------")
totalMB := 0
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tPresent\tSize MB\tType\tSpeed MHz\tVendor\tModel/PN\tSerial\tStatus")
for _, mem := range hw.Memory {
totalMB += mem.SizeMB
location := mem.Location
if location == "" {
location = mem.Slot
}
fmt.Fprintf(tw, "%s\t%t\t%d\t%s\t%d\t%s\t%s\t%s\t%s\n",
location, mem.Present, mem.SizeMB, mem.Type, mem.CurrentSpeedMHz, mem.Manufacturer, mem.PartNumber, mem.SerialNumber, mem.Status)
}
fmt.Fprintf(w, " Total: %d GB (%d DIMMs)\n", totalMB/1024, len(hw.Memory))
fmt.Fprintf(w, " Type: %s @ %d MHz\n", hw.Memory[0].Type, hw.Memory[0].CurrentSpeedMHz)
fmt.Fprintf(w, " Manufacturer: %s\n", hw.Memory[0].Manufacturer)
_ = tw.Flush()
fmt.Fprintln(w)
}
// Storage
// Power tab
if len(hw.PowerSupply) > 0 {
fmt.Fprintln(w, "POWER SUPPLIES")
fmt.Fprintln(w, "--------------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tPresent\tVendor\tModel\tWattage W\tInput W\tOutput W\tInput V\tTemp C\tStatus\tSerial")
for _, psu := range hw.PowerSupply {
fmt.Fprintf(tw, "%s\t%t\t%s\t%s\t%d\t%d\t%d\t%.0f\t%d\t%s\t%s\n",
psu.Slot, psu.Present, psu.Vendor, psu.Model, psu.WattageW, psu.InputPowerW, psu.OutputPowerW, psu.InputVoltage, psu.TemperatureC, psu.Status, psu.SerialNumber)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// Storage tab
if len(hw.Storage) > 0 {
fmt.Fprintln(w, "STORAGE")
fmt.Fprintln(w, "-------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tPresent\tType\tInterface\tModel\tSize GB\tVendor\tFirmware\tSerial")
for _, stor := range hw.Storage {
fmt.Fprintf(w, " %s: %s (%d GB) - S/N: %s\n",
stor.Slot, stor.Model, stor.SizeGB, stor.SerialNumber)
fmt.Fprintf(tw, "%s\t%t\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n",
stor.Slot, stor.Present, stor.Type, stor.Interface, stor.Model, stor.SizeGB, stor.Manufacturer, stor.Firmware, stor.SerialNumber)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// PCIe
// GPU tab
if len(hw.GPUs) > 0 {
fmt.Fprintln(w, "GPUS")
fmt.Fprintln(w, "----")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tModel\tVendor\tBDF\tPCIe\tSerial\tStatus")
for _, gpu := range hw.GPUs {
link := fmt.Sprintf("x%d %s", gpu.CurrentLinkWidth, gpu.CurrentLinkSpeed)
if gpu.MaxLinkWidth > 0 || gpu.MaxLinkSpeed != "" {
link = fmt.Sprintf("%s / x%d %s", link, gpu.MaxLinkWidth, gpu.MaxLinkSpeed)
}
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\t%s\t%s\n",
gpu.Slot, gpu.Model, gpu.Manufacturer, gpu.BDF, link, gpu.SerialNumber, gpu.Status)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// Network tab
if len(hw.NetworkAdapters) > 0 {
fmt.Fprintln(w, "NETWORK ADAPTERS")
fmt.Fprintln(w, "----------------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tLocation\tModel\tVendor\tPorts\tType\tStatus\tSerial")
for _, nic := range hw.NetworkAdapters {
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n",
nic.Slot, nic.Location, nic.Model, nic.Vendor, nic.PortCount, nic.PortType, nic.Status, nic.SerialNumber)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// Device inventory tab
if len(hw.PCIeDevices) > 0 {
fmt.Fprintln(w, "PCIE DEVICES")
fmt.Fprintln(w, "------------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Slot\tBDF\tClass\tVendor\tVID:DID\tLink\tSerial")
for _, pcie := range hw.PCIeDevices {
fmt.Fprintf(w, " %s: %s (x%d %s)\n",
pcie.Slot, pcie.DeviceClass, pcie.LinkWidth, pcie.LinkSpeed)
if pcie.SerialNumber != "" {
fmt.Fprintf(w, " S/N: %s\n", pcie.SerialNumber)
}
if len(pcie.MACAddresses) > 0 {
fmt.Fprintf(w, " MACs: %v\n", pcie.MACAddresses)
}
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%04x:%04x\tx%d %s / x%d %s\t%s\n",
pcie.Slot, pcie.BDF, pcie.DeviceClass, pcie.Manufacturer, pcie.VendorID, pcie.DeviceID,
pcie.LinkWidth, pcie.LinkSpeed, pcie.MaxLinkWidth, pcie.MaxLinkSpeed, pcie.SerialNumber)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
}
// Sensors summary
// Sensors tab
if len(e.result.Sensors) > 0 {
fmt.Fprintln(w, "SENSOR READINGS")
fmt.Fprintln(w, "---------------")
// Group by type
byType := make(map[string][]models.SensorReading)
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Type\tName\tValue\tUnit\tRaw\tStatus")
for _, s := range e.result.Sensors {
byType[s.Type] = append(byType[s.Type], s)
}
for stype, sensors := range byType {
fmt.Fprintf(w, "\n %s:\n", stype)
for _, s := range sensors {
if s.Value != 0 {
fmt.Fprintf(w, " %s: %.0f %s [%s]\n", s.Name, s.Value, s.Unit, s.Status)
} else if s.RawValue != "" {
fmt.Fprintf(w, " %s: %s [%s]\n", s.Name, s.RawValue, s.Status)
}
}
fmt.Fprintf(tw, "%s\t%s\t%.0f\t%s\t%s\t%s\n", s.Type, s.Name, s.Value, s.Unit, s.RawValue, s.Status)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// FRU summary
// Serials/FRU tab
if len(e.result.FRU) > 0 {
fmt.Fprintln(w, "FRU COMPONENTS")
fmt.Fprintln(w, "--------------")
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Description\tManufacturer\tProduct\tSerial\tPart Number")
for _, fru := range e.result.FRU {
name := fru.ProductName
if name == "" {
name = fru.Description
}
fmt.Fprintf(w, " %s\n", name)
if fru.SerialNumber != "" {
fmt.Fprintf(w, " Serial: %s\n", fru.SerialNumber)
}
if fru.Manufacturer != "" {
fmt.Fprintf(w, " Manufacturer: %s\n", fru.Manufacturer)
}
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\n", fru.Description, fru.Manufacturer, name, fru.SerialNumber, fru.PartNumber)
}
_ = tw.Flush()
fmt.Fprintln(w)
}
// Events summary
// Events tab
fmt.Fprintf(w, "EVENTS: %d total\n", len(e.result.Events))
if len(e.result.Events) > 0 {
tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
fmt.Fprintln(tw, "Time\tSeverity\tSource\tType\tName\tDescription")
for _, ev := range e.result.Events {
fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\t%s\n",
ev.Timestamp.Format("2006-01-02 15:04:05"), ev.Severity, ev.Source, ev.SensorType, ev.SensorName, ev.Description)
}
_ = tw.Flush()
}
var critical, warning, info int
for _, ev := range e.result.Events {
switch ev.Severity {

View File

@@ -2,13 +2,23 @@ package models
import "time"
const (
SourceTypeArchive = "archive"
SourceTypeAPI = "api"
)
// AnalysisResult contains all parsed data from an archive
type AnalysisResult struct {
Filename string `json:"filename"`
Events []Event `json:"events"`
FRU []FRUInfo `json:"fru"`
Sensors []SensorReading `json:"sensors"`
Hardware *HardwareConfig `json:"hardware"`
Filename string `json:"filename"`
SourceType string `json:"source_type,omitempty"` // archive | api
Protocol string `json:"protocol,omitempty"` // redfish | ipmi
TargetHost string `json:"target_host,omitempty"` // BMC host for live collect
CollectedAt time.Time `json:"collected_at,omitempty"` // Collection/upload timestamp
RawPayloads map[string]any `json:"raw_payloads,omitempty"` // Additional source payloads (e.g. Redfish tree)
Events []Event `json:"events"`
FRU []FRUInfo `json:"fru"`
Sensors []SensorReading `json:"sensors"`
Hardware *HardwareConfig `json:"hardware"`
}
// Event represents a single log event
@@ -78,12 +88,14 @@ type FirmwareInfo struct {
BuildTime string `json:"build_time,omitempty"`
}
// BoardInfo represents motherboard information
// BoardInfo represents motherboard/system information
type BoardInfo struct {
Manufacturer string `json:"manufacturer,omitempty"`
ProductName string `json:"product_name,omitempty"`
SerialNumber string `json:"serial_number,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Version string `json:"version,omitempty"`
UUID string `json:"uuid,omitempty"`
}
// CPU represents processor information
@@ -129,6 +141,9 @@ type Storage struct {
Manufacturer string `json:"manufacturer,omitempty"`
Firmware string `json:"firmware,omitempty"`
Interface string `json:"interface,omitempty"`
Present bool `json:"present"`
Location string `json:"location,omitempty"` // Front/Rear
BackplaneID int `json:"backplane_id,omitempty"`
}
// PCIeDevice represents a PCIe device
@@ -159,35 +174,52 @@ type NIC struct {
// PSU represents a power supply unit
type PSU struct {
Slot string `json:"slot"`
Present bool `json:"present"`
Model string `json:"model"`
Vendor string `json:"vendor,omitempty"`
WattageW int `json:"wattage_w,omitempty"`
SerialNumber string `json:"serial_number,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Firmware string `json:"firmware,omitempty"`
Status string `json:"status,omitempty"`
InputType string `json:"input_type,omitempty"`
InputPowerW int `json:"input_power_w,omitempty"`
OutputPowerW int `json:"output_power_w,omitempty"`
InputVoltage float64 `json:"input_voltage,omitempty"`
OutputVoltage float64 `json:"output_voltage,omitempty"`
TemperatureC int `json:"temperature_c,omitempty"`
Slot string `json:"slot"`
Present bool `json:"present"`
Model string `json:"model"`
Vendor string `json:"vendor,omitempty"`
WattageW int `json:"wattage_w,omitempty"`
SerialNumber string `json:"serial_number,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Firmware string `json:"firmware,omitempty"`
Status string `json:"status,omitempty"`
InputType string `json:"input_type,omitempty"`
InputPowerW int `json:"input_power_w,omitempty"`
OutputPowerW int `json:"output_power_w,omitempty"`
InputVoltage float64 `json:"input_voltage,omitempty"`
OutputVoltage float64 `json:"output_voltage,omitempty"`
TemperatureC int `json:"temperature_c,omitempty"`
}
// GPU represents a graphics processing unit
type GPU struct {
Slot string `json:"slot"`
Model string `json:"model"`
Manufacturer string `json:"manufacturer,omitempty"`
VendorID int `json:"vendor_id,omitempty"`
DeviceID int `json:"device_id,omitempty"`
BDF string `json:"bdf,omitempty"`
SerialNumber string `json:"serial_number,omitempty"`
PartNumber string `json:"part_number,omitempty"`
LinkWidth int `json:"link_width,omitempty"`
LinkSpeed string `json:"link_speed,omitempty"`
Slot string `json:"slot"`
Location string `json:"location,omitempty"`
Model string `json:"model"`
Manufacturer string `json:"manufacturer,omitempty"`
VendorID int `json:"vendor_id,omitempty"`
DeviceID int `json:"device_id,omitempty"`
BDF string `json:"bdf,omitempty"`
UUID string `json:"uuid,omitempty"`
SerialNumber string `json:"serial_number,omitempty"`
PartNumber string `json:"part_number,omitempty"`
Firmware string `json:"firmware,omitempty"`
VideoBIOS string `json:"video_bios,omitempty"`
IRQ int `json:"irq,omitempty"`
BusType string `json:"bus_type,omitempty"`
DMASize string `json:"dma_size,omitempty"`
DMAMask string `json:"dma_mask,omitempty"`
DeviceMinor int `json:"device_minor,omitempty"`
Temperature int `json:"temperature,omitempty"` // GPU core temp
MemTemperature int `json:"mem_temperature,omitempty"` // GPU memory temp
Power int `json:"power,omitempty"` // Current power draw (W)
MaxPower int `json:"max_power,omitempty"` // TDP (W)
ClockSpeed int `json:"clock_speed,omitempty"` // Operating speed MHz
MaxLinkWidth int `json:"max_link_width,omitempty"`
MaxLinkSpeed string `json:"max_link_speed,omitempty"`
CurrentLinkWidth int `json:"current_link_width,omitempty"`
CurrentLinkSpeed string `json:"current_link_speed,omitempty"`
Status string `json:"status,omitempty"`
}
// NetworkAdapter represents a network adapter with detailed info

View File

@@ -3,6 +3,7 @@ package parser
import (
"archive/tar"
"archive/zip"
"bytes"
"compress/gzip"
"fmt"
"io"
@@ -11,6 +12,8 @@ import (
"strings"
)
const maxSingleFileSize = 10 * 1024 * 1024
// ExtractedFile represents a file extracted from archive
type ExtractedFile struct {
Path string
@@ -24,8 +27,12 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
switch ext {
case ".gz", ".tgz":
return extractTarGz(archivePath)
case ".tar":
return extractTar(archivePath)
case ".zip":
return extractZip(archivePath)
case ".txt", ".log":
return extractSingleFile(archivePath)
default:
return nil, fmt.Errorf("unsupported archive format: %s", ext)
}
@@ -37,7 +44,11 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
switch ext {
case ".gz", ".tgz":
return extractTarGzFromReader(r)
return extractTarGzFromReader(r, filename)
case ".tar":
return extractTarFromReader(r)
case ".txt", ".log":
return extractSingleFileFromReader(r, filename)
default:
return nil, fmt.Errorf("unsupported archive format: %s", ext)
}
@@ -50,17 +61,21 @@ func extractTarGz(archivePath string) ([]ExtractedFile, error) {
}
defer f.Close()
return extractTarGzFromReader(f)
return extractTarGzFromReader(f, filepath.Base(archivePath))
}
func extractTarGzFromReader(r io.Reader) ([]ExtractedFile, error) {
gzr, err := gzip.NewReader(r)
func extractTar(archivePath string) ([]ExtractedFile, error) {
f, err := os.Open(archivePath)
if err != nil {
return nil, fmt.Errorf("gzip reader: %w", err)
return nil, fmt.Errorf("open archive: %w", err)
}
defer gzr.Close()
defer f.Close()
tr := tar.NewReader(gzr)
return extractTarFromReader(f)
}
func extractTarFromReader(r io.Reader) ([]ExtractedFile, error) {
tr := tar.NewReader(r)
var files []ExtractedFile
for {
@@ -96,6 +111,75 @@ func extractTarGzFromReader(r io.Reader) ([]ExtractedFile, error) {
return files, nil
}
func extractTarGzFromReader(r io.Reader, filename string) ([]ExtractedFile, error) {
gzr, err := gzip.NewReader(r)
if err != nil {
return nil, fmt.Errorf("gzip reader: %w", err)
}
defer gzr.Close()
// Read all decompressed content into buffer
// Limit to 50MB for plain gzip files, 10MB per file for tar.gz
decompressed, err := io.ReadAll(io.LimitReader(gzr, 50*1024*1024))
if err != nil {
return nil, fmt.Errorf("read gzip content: %w", err)
}
// Try to read as tar archive
tr := tar.NewReader(bytes.NewReader(decompressed))
var files []ExtractedFile
header, err := tr.Next()
if err != nil {
// Not a tar archive - treat as a single gzipped file
if strings.Contains(err.Error(), "invalid tar header") || err == io.EOF {
// Get base filename without .gz extension
baseName := strings.TrimSuffix(filename, ".gz")
if gzr.Name != "" {
baseName = gzr.Name
}
return []ExtractedFile{
{
Path: baseName,
Content: decompressed,
},
}, nil
}
return nil, fmt.Errorf("tar read: %w", err)
}
// It's a valid tar archive, process it
for {
// Skip directories
if header.Typeflag != tar.TypeDir {
// Skip large files (>10MB)
if header.Size <= 10*1024*1024 {
content, err := io.ReadAll(tr)
if err != nil {
return nil, fmt.Errorf("read file %s: %w", header.Name, err)
}
files = append(files, ExtractedFile{
Path: header.Name,
Content: content,
})
}
}
// Read next header
header, err = tr.Next()
if err == io.EOF {
break
}
if err != nil {
return nil, fmt.Errorf("tar read: %w", err)
}
}
return files, nil
}
func extractZip(archivePath string) ([]ExtractedFile, error) {
r, err := zip.OpenReader(archivePath)
if err != nil {
@@ -135,6 +219,33 @@ func extractZip(archivePath string) ([]ExtractedFile, error) {
return files, nil
}
func extractSingleFile(path string) ([]ExtractedFile, error) {
f, err := os.Open(path)
if err != nil {
return nil, fmt.Errorf("open file: %w", err)
}
defer f.Close()
return extractSingleFileFromReader(f, filepath.Base(path))
}
func extractSingleFileFromReader(r io.Reader, filename string) ([]ExtractedFile, error) {
content, err := io.ReadAll(io.LimitReader(r, maxSingleFileSize+1))
if err != nil {
return nil, fmt.Errorf("read file content: %w", err)
}
if len(content) > maxSingleFileSize {
return nil, fmt.Errorf("file too large: max %d bytes", maxSingleFileSize)
}
return []ExtractedFile{
{
Path: filepath.Base(filename),
Content: content,
},
}, nil
}
// FindFileByPattern finds files matching pattern in extracted files
func FindFileByPattern(files []ExtractedFile, patterns ...string) []ExtractedFile {
var result []ExtractedFile

View File

@@ -0,0 +1,48 @@
package parser
import (
"os"
"path/filepath"
"strings"
"testing"
)
func TestExtractArchiveFromReaderTXT(t *testing.T) {
content := "loader_brand=\"XigmaNAS\"\nSystem uptime:\n"
files, err := ExtractArchiveFromReader(strings.NewReader(content), "xigmanas.txt")
if err != nil {
t.Fatalf("extract txt from reader: %v", err)
}
if len(files) != 1 {
t.Fatalf("expected 1 file, got %d", len(files))
}
if files[0].Path != "xigmanas.txt" {
t.Fatalf("expected filename xigmanas.txt, got %q", files[0].Path)
}
if string(files[0].Content) != content {
t.Fatalf("content mismatch")
}
}
func TestExtractArchiveTXT(t *testing.T) {
dir := t.TempDir()
path := filepath.Join(dir, "sample.txt")
want := "plain text log"
if err := os.WriteFile(path, []byte(want), 0o600); err != nil {
t.Fatalf("write sample txt: %v", err)
}
files, err := ExtractArchive(path)
if err != nil {
t.Fatalf("extract txt file: %v", err)
}
if len(files) != 1 {
t.Fatalf("expected 1 file, got %d", len(files))
}
if files[0].Path != "sample.txt" {
t.Fatalf("expected sample.txt, got %q", files[0].Path)
}
if string(files[0].Content) != want {
t.Fatalf("content mismatch")
}
}

View File

@@ -0,0 +1,72 @@
# Generic Text File Parser
A fallback parser for text files that are not recognized by other parsers.
## Purpose
This parser handles any text files that:
- Are not vendor-specific archives
- Contain textual information (not binary data)
- Arrive as single .gz files or plain text files
## Priority
**Confidence score: 15** (low priority)
This parser fires only when no other parser matched with a higher confidence; see the sketch below.
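A hypothetical dispatcher sketch of that selection; the real code lives in `internal/parser` and is not shown in this README (the `VendorParser` name follows the interface this parser implements):

```go
// pickParser returns the registered parser with the highest Detect()
// confidence, or nil if nothing matched.
func pickParser(registered []parser.VendorParser, files []parser.ExtractedFile) parser.VendorParser {
	var best parser.VendorParser
	bestScore := 0
	for _, p := range registered {
		if score := p.Detect(files); score > bestScore {
			best, bestScore = p, score
		}
	}
	// With a score of 15, the generic parser wins only as a last resort.
	return best
}
```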
## Supported files
### Automatically recognized types
1. **NVIDIA Bug Report** (`nvidia-bug-report-*.log.gz`)
- Extracts NVIDIA driver information
- Finds GPU devices
- Shows the driver version
2. **Any text file**
- Verifies that the content is text (not binary data)
- Shows basic information about the file
## Extracted data
### Events
- **Text File**: basic information about the uploaded file
- **Driver Info**: NVIDIA driver information (for nvidia-bug-report)
- **GPU Device**: detected GPU devices (for nvidia-bug-report)
## Usage example
```bash
# Run with an nvidia-bug-report
./logpile --file nvidia-bug-report-*.log.gz
# Run with any text file
./logpile --file system.log.gz
```
## Versioning
**Current parser version:** 1.0.0
## Limitations
1. This parser provides only basic information
2. It does not perform deep content analysis
3. For detailed analysis of specific logs, write a dedicated parser
## Extending
To add support for a new file type:
1. Add a check to the `Parse()` function
2. Create a `parseXXX()` function that extracts the type-specific information
3. Bump the parser version
Example:
```go
if strings.Contains(strings.ToLower(file.Path), "custom-log") {
parseCustomLog(content, result)
}
```

View File

@@ -0,0 +1,147 @@
// Package generic provides a fallback parser for unrecognized text files
package generic
import (
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
// parserVersion - version of this parser module
const parserVersion = "1.0.0"
func init() {
parser.Register(&Parser{})
}
// Parser implements VendorParser for generic text files
type Parser struct{}
// Name returns human-readable parser name
func (p *Parser) Name() string {
return "Generic Text File Parser"
}
// Vendor returns vendor identifier
func (p *Parser) Vendor() string {
return "generic"
}
// Version returns parser version
func (p *Parser) Version() string {
return parserVersion
}
// Detect checks if this is a text file (fallback with low confidence)
// Returns confidence 0-100
func (p *Parser) Detect(files []parser.ExtractedFile) int {
// Only detect if there's exactly one file (plain .gz or single file)
if len(files) != 1 {
return 0
}
file := files[0]
// Check if content looks like text (not binary)
if !isLikelyText(file.Content) {
return 0
}
// Return low confidence so other parsers have priority
return 15
}
// isLikelyText checks if content is likely text (not binary)
func isLikelyText(content []byte) bool {
// Check first 512 bytes for binary data
sample := content
if len(content) > 512 {
sample = content[:512]
}
binaryCount := 0
for _, b := range sample {
// Count non-printable characters (excluding common whitespace)
if b < 32 && b != '\n' && b != '\r' && b != '\t' {
binaryCount++
}
if b == 0 { // NULL byte is a strong indicator of binary
binaryCount += 10
}
}
// If less than 5% binary, consider it text
return binaryCount < len(sample)/20
}
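// Worked example of the threshold above: for a full 512-byte sample the
// cutoff is 512/20 = 25, so up to 24 flagged bytes still count as text.
// Each NUL byte contributes 11 (1 from the non-printable check plus 10
// from the NUL check), so three NULs alone push such a sample over the limit.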
// Parse parses generic text file
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
}
// Initialize hardware config
result.Hardware = &models.HardwareConfig{}
if len(files) == 0 {
return result, nil
}
file := files[0]
content := string(file.Content)
// Create a single event with file info
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "File",
EventType: "Text File",
Description: "Generic text file loaded",
Severity: models.SeverityInfo,
RawData: "Filename: " + file.Path,
})
// Try to extract some basic info from common file types
if strings.Contains(strings.ToLower(file.Path), "nvidia-bug-report") {
parseNvidiaBugReport(content, result)
}
return result, nil
}
// parseNvidiaBugReport extracts info from nvidia-bug-report files
func parseNvidiaBugReport(content string, result *models.AnalysisResult) {
lines := strings.Split(content, "\n")
// Look for GPU information
for i, line := range lines {
// Find NVIDIA driver version
if strings.Contains(line, "NVRM version:") || strings.Contains(line, "nvidia-smi") {
if i+5 < len(lines) {
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "NVIDIA Driver",
EventType: "Driver Info",
Description: "NVIDIA driver information found",
Severity: models.SeverityInfo,
RawData: strings.TrimSpace(line),
})
}
}
// Find GPU devices
if strings.Contains(line, "/proc/driver/nvidia/gpus/") && strings.Contains(line, "***") {
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "GPU",
EventType: "GPU Device",
Description: "GPU device detected",
Severity: models.SeverityInfo,
RawData: strings.TrimSpace(line),
})
}
}
}

View File

@@ -207,8 +207,8 @@ func ParseAssetJSON(content []byte) (*models.HardwareConfig, error) {
VendorID: pcie.VendorId,
DeviceID: pcie.DeviceId,
BDF: formatBDF(pcie.BusNumber, pcie.DeviceNumber, pcie.FunctionNumber),
LinkWidth: pcie.NegotiatedLinkWidth,
LinkSpeed: pcieLinkSpeedToString(pcie.CurrentLinkSpeed),
LinkWidth: pcie.NegotiatedLinkWidth,
LinkSpeed: pcieLinkSpeedToString(pcie.CurrentLinkSpeed),
MaxLinkWidth: pcie.MaxLinkWidth,
MaxLinkSpeed: pcieLinkSpeedToString(pcie.MaxLinkSpeed),
DeviceClass: pcieClassToString(pcie.ClassCode, pcie.SubClassCode),
@@ -242,8 +242,10 @@ func ParseAssetJSON(content []byte) (*models.HardwareConfig, error) {
VendorID: pcie.VendorId,
DeviceID: pcie.DeviceId,
BDF: formatBDF(pcie.BusNumber, pcie.DeviceNumber, pcie.FunctionNumber),
LinkWidth: pcie.NegotiatedLinkWidth,
LinkSpeed: pcieLinkSpeedToString(pcie.CurrentLinkSpeed),
CurrentLinkWidth: pcie.NegotiatedLinkWidth,
CurrentLinkSpeed: pcieLinkSpeedToString(pcie.CurrentLinkSpeed),
MaxLinkWidth: pcie.MaxLinkWidth,
MaxLinkSpeed: pcieLinkSpeedToString(pcie.MaxLinkSpeed),
}
if pcie.PartNumber != nil {
gpu.PartNumber = strings.TrimSpace(*pcie.PartNumber)

View File

@@ -27,6 +27,9 @@ func ParseComponentLog(content []byte, hw *models.HardwareConfig) {
// Parse RESTful HDD info
parseHDDInfo(text, hw)
// Parse RESTful diskbackplane info
parseDiskBackplaneInfo(text, hw)
// Parse RESTful Network Adapter info
parseNetworkAdapterInfo(text, hw)
@@ -52,6 +55,7 @@ type MemoryRESTInfo struct {
MemModID int `json:"mem_mod_id"`
ConfigStatus int `json:"config_status"`
MemModSlot string `json:"mem_mod_slot"`
MemModStatus int `json:"mem_mod_status"`
MemModSize int `json:"mem_mod_size"`
MemModType string `json:"mem_mod_type"`
MemModTechnology string `json:"mem_mod_technology"`
@@ -90,7 +94,7 @@ func parseMemoryInfo(text string, hw *models.HardwareConfig) {
hw.Memory = append(hw.Memory, models.MemoryDIMM{
Slot: mem.MemModSlot,
Location: mem.MemModSlot,
Present: mem.ConfigStatus == 1,
Present: mem.MemModStatus == 1 && mem.MemModSize > 0,
SizeMB: mem.MemModSize * 1024, // Convert GB to MB
Type: mem.MemModType,
Technology: strings.TrimSpace(mem.MemModTechnology),
@@ -420,3 +424,56 @@ func extractComponentFirmware(text string, hw *models.HardwareConfig) {
}
}
}
// DiskBackplaneRESTInfo represents the RESTful diskbackplane info structure
type DiskBackplaneRESTInfo []struct {
PortCount int `json:"port_count"`
DriverCount int `json:"driver_count"`
Front int `json:"front"`
BackplaneIndex int `json:"backplane_index"`
Present int `json:"present"`
CPLDVersion string `json:"cpld_version"`
Temperature int `json:"temperature"`
}
func parseDiskBackplaneInfo(text string, hw *models.HardwareConfig) {
// Find RESTful diskbackplane info section
re := regexp.MustCompile(`RESTful diskbackplane info:\s*(\[[\s\S]*?\])\s*BMC`)
match := re.FindStringSubmatch(text)
if match == nil {
return
}
jsonStr := match[1]
jsonStr = strings.ReplaceAll(jsonStr, "\n", "")
var backplaneInfo DiskBackplaneRESTInfo
if err := json.Unmarshal([]byte(jsonStr), &backplaneInfo); err != nil {
return
}
// Create storage entries based on backplane info
for _, bp := range backplaneInfo {
if bp.Present != 1 {
continue
}
location := "Rear"
if bp.Front == 1 {
location = "Front"
}
// Create entries for each port (disk slot)
for i := 0; i < bp.PortCount; i++ {
isPresent := i < bp.DriverCount
hw.Storage = append(hw.Storage, models.Storage{
Slot: fmt.Sprintf("%d", i),
Present: isPresent,
Location: location,
BackplaneID: bp.BackplaneIndex,
Type: "HDD",
})
}
}
}

View File

@@ -6,6 +6,7 @@
package inspur
import (
"fmt"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
@@ -91,12 +92,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
Sensors: make([]models.SensorReading, 0),
}
// Parse devicefrusdr.log (contains SDR and FRU data)
if f := parser.FindFileByName(files, "devicefrusdr.log"); f != nil {
p.parseDeviceFruSDR(f.Content, result)
}
// Parse asset.json
// Parse asset.json first (base hardware info)
if f := parser.FindFileByName(files, "asset.json"); f != nil {
if hw, err := ParseAssetJSON(f.Content); err == nil {
result.Hardware = hw
@@ -107,6 +103,12 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
if result.Hardware == nil {
result.Hardware = &models.HardwareConfig{}
}
+ // Parse devicefrusdr.log (contains SDR, FRU, PCIe and additional data)
+ if f := parser.FindFileByName(files, "devicefrusdr.log"); f != nil {
+ p.parseDeviceFruSDR(f.Content, result)
+ }
extractBoardInfo(result.FRU, result.Hardware)
// Extract PlatformId (server model) from ThermalConfig
@@ -129,6 +131,12 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
result.Events = append(result.Events, idlEvents...)
}
+ // Parse SEL list (selelist.csv)
+ if f := parser.FindFileByName(files, "selelist.csv"); f != nil {
+ selEvents := ParseSELList(f.Content)
+ result.Events = append(result.Events, selEvents...)
+ }
// Parse syslog files
syslogFiles := parser.FindFileByPattern(files, "syslog/alert", "syslog/warning", "syslog/notice", "syslog/info")
for _, f := range syslogFiles {
@@ -161,4 +169,70 @@ func (p *Parser) parseDeviceFruSDR(content []byte, result *models.AnalysisResult
fruContent := lines[fruStart:]
result.FRU = ParseFRU([]byte(fruContent))
}
// Parse PCIe devices from RESTful PCIE Device info
// This supplements data from asset.json with serial numbers, firmware, etc.
pcieDevicesFromREST := ParsePCIeDevices(content)
// Merge PCIe data: keep asset.json data but add RESTful data if available
if result.Hardware != nil {
// If asset.json didn't have PCIe devices, use RESTful data
if len(result.Hardware.PCIeDevices) == 0 && len(pcieDevicesFromREST) > 0 {
result.Hardware.PCIeDevices = pcieDevicesFromREST
}
// If both sources are present, keep the asset.json data, which carries more
// detail; a field-level merge with the RESTful data is a future improvement.
}
// Parse GPU devices and add temperature data from sensors
if len(result.Sensors) > 0 && result.Hardware != nil {
// Use existing GPU data from asset.json and enrich with sensor data
for i := range result.Hardware.GPUs {
gpu := &result.Hardware.GPUs[i]
// Extract GPU number from slot name
slotNum := extractSlotNumberFromGPU(gpu.Slot)
// Find temperature sensors for this GPU
for _, sensor := range result.Sensors {
sensorName := strings.ToUpper(sensor.Name)
// Match GPU temperature sensor
if strings.Contains(sensorName, fmt.Sprintf("GPU%d_TEMP", slotNum)) && !strings.Contains(sensorName, "MEM") {
if sensor.RawValue != "" {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.Temperature)
}
}
// Match GPU memory temperature
if strings.Contains(sensorName, fmt.Sprintf("GPU%d_MEM_TEMP", slotNum)) {
if sensor.RawValue != "" {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.MemTemperature)
}
}
// Match PCIe slot temperature as fallback
if strings.Contains(sensorName, fmt.Sprintf("PCIE%d_GPU_TLM_T", slotNum)) && gpu.Temperature == 0 {
if sensor.RawValue != "" {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.Temperature)
}
}
}
}
}
}
// extractSlotNumberFromGPU extracts slot number from GPU slot string
func extractSlotNumberFromGPU(slot string) int {
parts := strings.Split(slot, "_")
for _, part := range parts {
if strings.HasPrefix(part, "PCIE") {
var num int
fmt.Sscanf(part, "PCIE%d", &num)
if num > 0 {
return num
}
}
}
return 0
}

internal/parser/vendors/inspur/pcie.go
View File

@@ -0,0 +1,214 @@
package inspur
import (
"encoding/json"
"fmt"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// PCIeRESTInfo represents the RESTful PCIE Device info structure
type PCIeRESTInfo []struct {
ID int `json:"id"`
Present int `json:"present"`
Enable int `json:"enable"`
Status int `json:"status"`
VendorID int `json:"vendor_id"`
VendorName string `json:"vendor_name"`
DeviceID int `json:"device_id"`
DeviceName string `json:"device_name"`
BusNum int `json:"bus_num"`
DevNum int `json:"dev_num"`
FuncNum int `json:"func_num"`
MaxLinkWidth int `json:"max_link_width"`
MaxLinkSpeed int `json:"max_link_speed"`
CurrentLinkWidth int `json:"current_link_width"`
CurrentLinkSpeed int `json:"current_link_speed"`
Slot int `json:"slot"`
Location string `json:"location"`
DeviceLocator string `json:"DeviceLocator"`
DevType int `json:"dev_type"`
DevSubtype int `json:"dev_subtype"`
PartNum string `json:"part_num"`
SerialNum string `json:"serial_num"`
FwVer string `json:"fw_ver"`
}
// ParsePCIeDevices parses RESTful PCIE Device info from devicefrusdr.log
func ParsePCIeDevices(content []byte) []models.PCIeDevice {
text := string(content)
// Find RESTful PCIE Device info section
startMarker := "RESTful PCIE Device info:"
endMarker := "BMC sdr Info:"
startIdx := strings.Index(text, startMarker)
if startIdx == -1 {
return nil
}
endIdx := strings.Index(text[startIdx:], endMarker)
if endIdx == -1 {
endIdx = len(text) - startIdx
}
jsonText := text[startIdx+len(startMarker) : startIdx+endIdx]
jsonText = strings.TrimSpace(jsonText)
var pcieInfo PCIeRESTInfo
if err := json.Unmarshal([]byte(jsonText), &pcieInfo); err != nil {
return nil
}
var devices []models.PCIeDevice
for _, pcie := range pcieInfo {
if pcie.Present != 1 {
continue
}
// Convert PCIe speed to GEN notation
maxSpeed := fmt.Sprintf("GEN%d", pcie.MaxLinkSpeed)
currentSpeed := fmt.Sprintf("GEN%d", pcie.CurrentLinkSpeed)
// Determine device class based on dev_type
deviceClass := determineDeviceClass(pcie.DevType, pcie.DevSubtype, pcie.DeviceName)
// Build BDF string
bdf := fmt.Sprintf("%04x/%02x/%02x/%02x", 0, pcie.BusNum, pcie.DevNum, pcie.FuncNum)
device := models.PCIeDevice{
Slot: pcie.Location,
VendorID: pcie.VendorID,
DeviceID: pcie.DeviceID,
BDF: bdf,
DeviceClass: deviceClass,
Manufacturer: pcie.VendorName,
LinkWidth: pcie.CurrentLinkWidth,
LinkSpeed: currentSpeed,
MaxLinkWidth: pcie.MaxLinkWidth,
MaxLinkSpeed: maxSpeed,
PartNumber: strings.TrimSpace(pcie.PartNum),
SerialNumber: strings.TrimSpace(pcie.SerialNum),
}
devices = append(devices, device)
}
return devices
}
// determineDeviceClass maps device type to human-readable class
func determineDeviceClass(devType, devSubtype int, deviceName string) string {
// dev_type mapping:
// 1 = Mass Storage Controller
// 2 = Network Controller
// 3 = Display Controller (GPU)
// 4 = Multimedia Controller
switch devType {
case 1:
if devSubtype == 4 {
return "RAID Controller"
}
return "Storage Controller"
case 2:
return "Network Controller"
case 3:
// GPU
if strings.Contains(strings.ToUpper(deviceName), "H100") {
return "GPU (H100)"
}
if strings.Contains(strings.ToUpper(deviceName), "A100") {
return "GPU (A100)"
}
if strings.Contains(strings.ToUpper(deviceName), "NVIDIA") {
return "GPU"
}
return "Display Controller"
case 4:
return "Multimedia Controller"
default:
return "Unknown"
}
}
// ParseGPUs extracts GPU data from PCIe devices and sensors
func ParseGPUs(pcieDevices []models.PCIeDevice, sensors []models.SensorReading) []models.GPU {
var gpus []models.GPU
// Find GPU devices
for _, pcie := range pcieDevices {
if !strings.Contains(strings.ToLower(pcie.DeviceClass), "gpu") &&
!strings.Contains(strings.ToLower(pcie.DeviceClass), "display") {
continue
}
// Skip integrated graphics (ASPEED, etc.)
if strings.Contains(pcie.Manufacturer, "ASPEED") {
continue
}
gpu := models.GPU{
Slot: pcie.Slot,
Location: pcie.Slot,
Model: pcie.DeviceClass,
Manufacturer: pcie.Manufacturer,
SerialNumber: pcie.SerialNumber,
MaxLinkWidth: pcie.MaxLinkWidth,
MaxLinkSpeed: pcie.MaxLinkSpeed,
CurrentLinkWidth: pcie.LinkWidth,
CurrentLinkSpeed: pcie.LinkSpeed,
Status: "OK",
}
// Extract GPU number from slot name (e.g., "PCIE7" -> 7)
slotNum := extractSlotNumber(pcie.Slot)
// Find temperature sensors for this GPU
for _, sensor := range sensors {
sensorName := strings.ToUpper(sensor.Name)
// Match GPU temperature sensor (e.g., "GPU7_Temp")
if strings.Contains(sensorName, fmt.Sprintf("GPU%d_TEMP", slotNum)) {
if sensor.RawValue != "" {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.Temperature)
}
}
// Match GPU memory temperature (e.g., "GPU7_Mem_Temp")
if strings.Contains(sensorName, fmt.Sprintf("GPU%d_MEM_TEMP", slotNum)) {
if sensor.RawValue != "" {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.MemTemperature)
}
}
// Match PCIe slot temperature (e.g., "PCIE7_GPU_TLM_T")
if strings.Contains(sensorName, fmt.Sprintf("PCIE%d_GPU_TLM_T", slotNum)) {
if sensor.RawValue != "" && gpu.Temperature == 0 {
fmt.Sscanf(sensor.RawValue, "%d", &gpu.Temperature)
}
}
}
gpus = append(gpus, gpu)
}
return gpus
}
// extractSlotNumber extracts slot number from location string
// e.g., "CPU0_PE3_AC_PCIE7" -> 7
func extractSlotNumber(location string) int {
parts := strings.Split(location, "_")
for _, part := range parts {
if strings.HasPrefix(part, "PCIE") || strings.HasPrefix(part, "#CPU") {
var num int
fmt.Sscanf(part, "PCIE%d", &num)
if num > 0 {
return num
}
}
}
return 0
}

View File

@@ -46,6 +46,7 @@ func ParseSDR(content []byte) []models.SensorReading {
if v, err := strconv.ParseFloat(vm[1], 64); err == nil {
reading.Value = v
reading.Unit = strings.TrimSpace(vm[2])
+ reading.RawValue = valueStr // Keep original string for reference
}
}
} else if strings.HasPrefix(valueStr, "0x") {

internal/parser/vendors/inspur/sel.go
View File

@@ -0,0 +1,174 @@
package inspur
import (
"encoding/csv"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
// ParseSELList parses selelist.csv file with SEL events
// Format: ID, Date (MM/DD/YYYY), Time (HH:MM:SS), Sensor, Event, Status
// Example: 1,04/18/2025,09:31:18,Event Logging Disabled SEL_Status,Log area reset/cleared,Asserted
func ParseSELList(content []byte) []models.Event {
var events []models.Event
text := string(content)
lines := strings.Split(text, "\n")
// Skip header line(s) if present
startIdx := 0
for i, line := range lines {
if strings.Contains(strings.ToLower(line), "sel elist") {
startIdx = i + 1
break
}
}
// Parse CSV data
for i := startIdx; i < len(lines); i++ {
line := strings.TrimSpace(lines[i])
if line == "" {
continue
}
// Parse CSV line
r := csv.NewReader(strings.NewReader(line))
records, err := r.Read()
if err != nil || len(records) < 6 {
continue
}
eventID := strings.TrimSpace(records[0])
dateStr := strings.TrimSpace(records[1])
timeStr := strings.TrimSpace(records[2])
sensorStr := strings.TrimSpace(records[3])
eventDesc := strings.TrimSpace(records[4])
status := strings.TrimSpace(records[5])
// Parse timestamp: MM/DD/YYYY HH:MM:SS
timestamp := parseSELTimestamp(dateStr, timeStr)
// Extract sensor type and name
sensorType, sensorName := parseSensorInfo(sensorStr)
// Determine severity
severity := determineSELSeverity(sensorStr, eventDesc, status)
// Build full description
description := buildSELDescription(eventDesc, status)
events = append(events, models.Event{
ID: eventID,
Timestamp: timestamp,
Source: "SEL",
SensorType: sensorType,
SensorName: sensorName,
EventType: eventDesc,
Severity: severity,
Description: description,
RawData: line,
})
}
return events
}
// parseSELTimestamp parses MM/DD/YYYY and HH:MM:SS into time.Time
func parseSELTimestamp(dateStr, timeStr string) time.Time {
// Combine date and time: MM/DD/YYYY HH:MM:SS
timestampStr := dateStr + " " + timeStr
// Try parsing with MM/DD/YYYY format
t, err := time.Parse("01/02/2006 15:04:05", timestampStr)
if err != nil {
// Fallback to current time
return time.Now()
}
return t
}
// parseSensorInfo extracts sensor type and name from sensor string
// Example: "Event Logging Disabled SEL_Status" -> ("sel", "SEL_Status")
// Example: "Power Supply PSU0_Status" -> ("power_supply", "PSU0_Status")
func parseSensorInfo(sensorStr string) (sensorType, sensorName string) {
parts := strings.Fields(sensorStr)
if len(parts) == 0 {
return "unknown", sensorStr
}
// Last part is usually the sensor name
sensorName = parts[len(parts)-1]
// First parts form the sensor type
if len(parts) > 1 {
sensorType = strings.ToLower(strings.Join(parts[:len(parts)-1], "_"))
} else {
sensorType = "system"
}
return
}
// determineSELSeverity determines event severity based on sensor and event description
func determineSELSeverity(sensorStr, eventDesc, status string) models.Severity {
lowerSensor := strings.ToLower(sensorStr)
lowerEvent := strings.ToLower(eventDesc)
lowerStatus := strings.ToLower(status)
// Critical indicators
criticalKeywords := []string{
"critical", "failure", "fault", "error",
"ac lost", "predictive failure", "redundancy lost",
"going high", "going low", "transition to critical",
}
for _, keyword := range criticalKeywords {
if strings.Contains(lowerSensor, keyword) ||
strings.Contains(lowerEvent, keyword) ||
strings.Contains(lowerStatus, keyword) {
return models.SeverityCritical
}
}
// Warning indicators
warningKeywords := []string{
"warning", "disabled", "non-recoverable",
"device removed", "device absent",
}
for _, keyword := range warningKeywords {
if strings.Contains(lowerSensor, keyword) ||
strings.Contains(lowerEvent, keyword) ||
strings.Contains(lowerStatus, keyword) {
return models.SeverityWarning
}
}
// Info indicators (normal operations)
infoKeywords := []string{
"presence detected", "device present", "asserted",
"initiated by", "state asserted", "s0/g0: working",
"power button pressed",
}
for _, keyword := range infoKeywords {
if strings.Contains(lowerEvent, keyword) ||
strings.Contains(lowerStatus, keyword) {
return models.SeverityInfo
}
}
// Default to info
return models.SeverityInfo
}
// buildSELDescription builds human-readable description
func buildSELDescription(eventDesc, status string) string {
if status == "Asserted" || status == "Deasserted" {
return eventDesc
}
return eventDesc + " (" + status + ")"
}

internal/parser/vendors/nvidia/README.md
View File

@@ -0,0 +1,175 @@
# NVIDIA Field Diagnostics Parser
Parser for NVIDIA HGX Field Diagnostics archives.
A universal parser that is not tied to any particular server vendor.
## Supported Archives
- NVIDIA HGX Field Diag (works with servers from any vendor: Supermicro, Dell, HPE, etc.)
- Archives with NVIDIA GPU diagnostics results
## Archive Format
The parser accepts archives in the following formats (a minimal extraction sketch follows the list):
- `.tar` (uncompressed tar)
- `.tar.gz` (gzip-compressed)
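A minimal extraction sketch, assuming only that the repository's `parser.ExtractedFile` carries exported `Path` and `Content` fields (the real extraction lives in the shared archive-handling code):
```go
package nvidia

import (
	"archive/tar"
	"compress/gzip"
	"io"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

// extractTar reads a .tar or .tar.gz stream into memory; set gzipped for .tar.gz.
func extractTar(r io.Reader, gzipped bool) ([]parser.ExtractedFile, error) {
	if gzipped {
		gz, err := gzip.NewReader(r)
		if err != nil {
			return nil, err
		}
		defer gz.Close()
		r = gz
	}
	var files []parser.ExtractedFile
	tr := tar.NewReader(r)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break // end of archive
		}
		if err != nil {
			return nil, err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // skip directories, symlinks, etc.
		}
		data, err := io.ReadAll(tr)
		if err != nil {
			return nil, err
		}
		files = append(files, parser.ExtractedFile{Path: hdr.Name, Content: data})
	}
	return files, nil
}
```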
## Recognized Files
### Primary Files
1. **output.log** - dmidecode output with system information
   - Server manufacturer (Manufacturer)
   - Server model (Product Name) - e.g., SYS-821GE-TNHR
   - Server serial number (Serial Number) - e.g., A514359X5A07900
   - UUID, SKU Number, Family
2. **unified_summary.json** - detailed system and component information
   - GPU details (model, manufacturer, VBIOS, PCI addresses)
   - NVSwitch details (VendorID, DeviceID, link speed/width)
   - Server manufacturer and model
3. **summary.json** - diagnostic test results
   - GPU test results (inforom, checkinforom, gpumem, gpustress, pcie, nvlink, nvswitch, power)
   - Error codes and test statuses
4. **summary.csv** - alternative format for test results
### Additional Files
- `gpu_fieldiag/*.log` - detailed diagnostics logs for each GPU
- `inventory/*.json` - additional configuration information
## Extracted Data
### Hardware Configuration
#### GPUs
```json
{
"slot": "GPUSXM1",
"model": "NVIDIA Device 2335",
"manufacturer": "NVIDIA Corporation",
"firmware": "96.00.D0.00.03",
"bdf": "0000:3a:00.0"
}
```
#### NVSwitch (as PCIe devices)
```json
{
"slot": "NVSWITCHNVSWITCH0",
"device_class": "NVSwitch",
"manufacturer": "NVIDIA Corporation",
"vendor_id": 4318,
"device_id": 8867,
"bdf": "0000:05:00.0",
"link_speed": "16GT/s",
"link_width": 2
}
```
### Events
Events are created for:
- **Warnings and errors** reported by diagnostic tests
- Example events:
   - `Row remapping failed` - GPU memory error (Warning)
   - Various tests: connectivity, gpumem, gpustress, pcie, nvlink, nvswitch, power
Severity levels:
- `info` - informational events (tests passed successfully)
- `warning` - warnings (e.g., Row remapping failed)
- `critical` - critical errors (error codes 300+)
## Usage Example
```bash
# Start the web interface
./logpile --file /path/to/A514359X5A07900_logs-20260122-074208.tar
# The web interface will be available at http://localhost:8082
```
## Auto-Detection
The parser automatically detects NVIDIA Field Diag archives by the presence of:
- `unified_summary.json` with the "HGX Field Diag" marker
- `summary.json` and `summary.csv` with test results
- A `gpu_fieldiag/` directory
Confidence score (a parser-selection sketch follows the list):
- `unified_summary.json` with the "HGX Field Diag" marker: +40
- `summary.json`: +20
- `summary.csv`: +15
- `gpu_fieldiag/` directory: +15
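A sketch of how a caller could pick the winning parser; the `detectBest` helper is hypothetical, and only `parser.Register` plus the `Detect(files) int` contract of the `VendorParser` interface appear in this repository:
```go
// detectBest asks every registered parser for a confidence score and
// returns the highest-scoring one (nil if nothing scores above zero).
func detectBest(candidates []parser.VendorParser, files []parser.ExtractedFile) parser.VendorParser {
	var best parser.VendorParser
	bestScore := 0
	for _, p := range candidates {
		if score := p.Detect(files); score > bestScore {
			best, bestScore = p, score
		}
	}
	return best
}
```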
## Versioning
**Current parser version:** 1.1.0
When modifying parser logic, increment the version in the `parserVersion` constant in `parser.go`.
### Version History
- **1.1.0** - Added output.log (dmidecode) parsing to extract the server model and serial number
- **1.0.0** - Initial version parsing unified_summary.json and summary.json/csv
## Data Examples
### Example unified_summary.json
```json
{
"runInfo": {
"diagVersion": "24287-XXXX-FLD-42658",
"diagName": "HGX Field Diag",
"finalResult": "FAIL",
"errorCode": 363
},
"tests": [{
"virtualId": "inventory",
"components": [{
"componentId": "GPUSXM1",
"properties": [
{"id": "Manufacturer", "value": "Any Server Vendor"},
{"id": "VendorID", "value": "10de"},
{"id": "DeviceID", "value": "2335"}
]
}]
}]
}
```
### Example summary.json
```json
[
{
"Error Code": "005-000-1-000000000363",
"Test": "gpumem",
"Component ID": "SXM5_SN_1653925025497",
"Notes": "Row remapping failed",
"Virtual ID": "gpumem"
}
]
```
## Known Limitations
1. The parser focuses on data from `unified_summary.json` and `summary.json`
2. Detailed logs from `gpu_fieldiag/*.log` are not parsed yet
3. CPU, memory, and disk information is not extracted (it is absent from the archive)
## Development
### Adding New Fields
1. Study the JSON structure in the archive
2. Add fields to the `Component` or `Property` structures
3. Update the `parseGPUComponent` or `parseNVSwitchComponent` functions (see the sketch below)
4. Increment the parser version
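For example, step 3 for a hypothetical `SerialNumber` property could look like this (the property id is illustrative; verify it against a real archive first; this sketch lives in the same package as `parseGPUComponent`):
```go
// applyGPUProperties sketches mapping a new property id onto the GPU model.
func applyGPUProperties(comp Component, gpu *models.GPU) {
	for _, prop := range comp.Properties {
		switch prop.ID {
		case "DeviceName":
			gpu.Model = prop.GetValueAsString()
		case "SerialNumber": // hypothetical property id
			gpu.SerialNumber = prop.GetValueAsString()
		}
	}
}
```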
### Adding New File Types
1. Create a new file with its parser (e.g., `gpu_logs.go`)
2. Add the parsing call to the `Parse()` function in `parser.go` (sketch below)
3. Update the documentation
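Step 2 then follows the existing pattern in `Parse()`; `gpu_logs.json` and `ParseGPULogs` below are illustrative names only:
```go
// Hypothetical fragment added alongside the other file checks in Parse():
if f := parser.FindFileByName(files, "gpu_logs.json"); f != nil {
	events := ParseGPULogs(f.Content) // implemented in the new gpu_logs.go
	result.Events = append(result.Events, events...)
}
```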

View File

@@ -0,0 +1,68 @@
package nvidia
import (
"bufio"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// ParseOutputLog parses output.log file which contains dmidecode output
func ParseOutputLog(content []byte, result *models.AnalysisResult) error {
scanner := bufio.NewScanner(strings.NewReader(string(content)))
inSystemInfo := false
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Detect "System Information" section
if strings.Contains(trimmed, "System Information") {
inSystemInfo = true
continue
}
// Exit section when we hit another Handle or empty section
if inSystemInfo && strings.HasPrefix(trimmed, "Handle ") {
inSystemInfo = false
continue
}
// Parse fields in System Information section
if inSystemInfo && strings.Contains(line, ":") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
if value == "" {
continue
}
switch field {
case "Manufacturer":
result.Hardware.BoardInfo.Manufacturer = value
case "Product Name":
result.Hardware.BoardInfo.ProductName = value
case "Serial Number":
result.Hardware.BoardInfo.SerialNumber = value
case "Version":
// Store version in part number if needed
if result.Hardware.BoardInfo.PartNumber == "" {
result.Hardware.BoardInfo.PartNumber = value
}
case "UUID":
// Store UUID somewhere if needed (we don't have a field for it yet)
// Could add to FRU or as a custom field
case "Family":
// Could store family info if needed
}
}
}
return scanner.Err()
}

internal/parser/vendors/nvidia/parser.go
View File

@@ -0,0 +1,166 @@
// Package nvidia provides parser for NVIDIA Field Diagnostics archives
// Tested with: HGX Field Diag (works with various server vendors)
//
// IMPORTANT: Increment parserVersion when modifying parser logic!
// This helps track which version was used to parse specific logs.
package nvidia
import (
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
const parserVersion = "1.1.0"
func init() {
parser.Register(&Parser{})
}
// Parser implements VendorParser for NVIDIA Field Diagnostics
type Parser struct{}
// Name returns human-readable parser name
func (p *Parser) Name() string {
return "NVIDIA Field Diagnostics Parser"
}
// Vendor returns vendor identifier
func (p *Parser) Vendor() string {
return "nvidia"
}
// Version returns parser version
// IMPORTANT: Update parserVersion constant when modifying parser logic!
func (p *Parser) Version() string {
return parserVersion
}
// Detect checks if archive matches NVIDIA Field Diagnostics format
// Returns confidence 0-100
func (p *Parser) Detect(files []parser.ExtractedFile) int {
confidence := 0
for _, f := range files {
path := strings.ToLower(f.Path)
// Strong indicators for NVIDIA Field Diagnostics format
if strings.HasSuffix(path, "unified_summary.json") {
// Check if it's really NVIDIA Field Diag format
if containsNvidiaFieldDiagMarkers(f.Content) {
confidence += 40
}
}
if strings.HasSuffix(path, "summary.json") && !strings.Contains(path, "unified_") {
confidence += 20
}
if strings.HasSuffix(path, "summary.csv") {
confidence += 15
}
if strings.Contains(path, "gpu_fieldiag/") {
confidence += 15
}
if strings.HasSuffix(path, "output.log") {
// Check if it contains dmidecode output
if strings.Contains(string(f.Content), "dmidecode") ||
strings.Contains(string(f.Content), "System Information") {
confidence += 10
}
}
// Cap at 100
if confidence >= 100 {
return 100
}
}
return confidence
}
// containsNvidiaFieldDiagMarkers checks if content has NVIDIA Field Diag markers
func containsNvidiaFieldDiagMarkers(content []byte) bool {
s := string(content)
// Check for typical NVIDIA Field Diagnostics structure
return strings.Contains(s, "runInfo") &&
strings.Contains(s, "diagVersion") &&
strings.Contains(s, "HGX Field Diag")
}
// Parse parses NVIDIA Field Diagnostics archive
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
}
// Initialize hardware config
result.Hardware = &models.HardwareConfig{
GPUs: make([]models.GPU, 0),
}
// Parse output.log first (contains dmidecode system info)
// Find the output.log file that contains dmidecode output
outputLogFile := findDmidecodeOutputLog(files)
if outputLogFile != nil {
if err := ParseOutputLog(outputLogFile.Content, result); err != nil {
// Log error but continue parsing other files
_ = err // Ignore error for now
}
}
// Parse unified_summary.json (contains detailed component info)
if f := parser.FindFileByName(files, "unified_summary.json"); f != nil {
if err := ParseUnifiedSummary(f.Content, result); err != nil {
// Log error but continue parsing other files
_ = err // Ignore error for now
}
}
// Parse summary.json (test results summary)
if f := parser.FindFileByName(files, "summary.json"); f != nil {
events := ParseSummaryJSON(f.Content)
result.Events = append(result.Events, events...)
}
// Parse summary.csv (alternative format)
if f := parser.FindFileByName(files, "summary.csv"); f != nil {
csvEvents := ParseSummaryCSV(f.Content)
result.Events = append(result.Events, csvEvents...)
}
// Parse GPU field diagnostics logs
gpuFieldiagFiles := parser.FindFileByPattern(files, "gpu_fieldiag/", ".log")
for _, f := range gpuFieldiagFiles {
// Parse individual GPU diagnostic logs if needed
// For now, we focus on summary files
_ = f
}
return result, nil
}
// findDmidecodeOutputLog finds the output.log file that contains dmidecode output
func findDmidecodeOutputLog(files []parser.ExtractedFile) *parser.ExtractedFile {
for _, f := range files {
// Look for output.log files
if !strings.HasSuffix(strings.ToLower(f.Path), "output.log") {
continue
}
// Check if it contains dmidecode output
content := string(f.Content)
if strings.Contains(content, "dmidecode") &&
strings.Contains(content, "System Information") {
return &f
}
}
return nil
}

View File

@@ -0,0 +1,152 @@
package nvidia
import (
"encoding/csv"
"encoding/json"
"fmt"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
// SummaryEntry represents a single test result entry
type SummaryEntry struct {
ErrorCode string `json:"Error Code"`
Test string `json:"Test"`
ComponentID string `json:"Component ID"`
Notes string `json:"Notes"`
VirtualID string `json:"Virtual ID"`
IgnoreError string `json:"Ignore Error"`
}
// ParseSummaryJSON parses summary.json file and returns events
func ParseSummaryJSON(content []byte) []models.Event {
var entries []SummaryEntry
if err := json.Unmarshal(content, &entries); err != nil {
return nil
}
events := make([]models.Event, 0)
timestamp := time.Now() // Use current time as we don't have exact timestamps in summary
for _, entry := range entries {
// Only create events for failures or warnings
if entry.Notes != "OK" || entry.ErrorCode != "001-000-1-000000000000" {
event := models.Event{
Timestamp: timestamp,
Source: "GPU Field Diagnostics",
EventType: entry.Test,
Description: formatSummaryDescription(entry),
Severity: getSeverityFromErrorCode(entry.ErrorCode, entry.Notes),
RawData: fmt.Sprintf("Test: %s, Component: %s, Error: %s", entry.Test, entry.ComponentID, entry.ErrorCode),
}
events = append(events, event)
}
}
return events
}
// ParseSummaryCSV parses summary.csv file and returns events
func ParseSummaryCSV(content []byte) []models.Event {
reader := csv.NewReader(strings.NewReader(string(content)))
records, err := reader.ReadAll()
if err != nil {
return nil
}
events := make([]models.Event, 0)
timestamp := time.Now()
// Skip header row
for i, record := range records {
if i == 0 {
continue // Skip header
}
// CSV format: ErrorCode,Test,VirtualID,SubTest,Type,ComponentID,Notes,Level,,,IgnoreError
if len(record) < 7 {
continue
}
errorCode := record[0]
test := record[1]
componentID := record[5]
notes := record[6]
// Only create events for failures or warnings
if notes != "OK" || (errorCode != "0" && !strings.HasPrefix(errorCode, "048-000-0") && !strings.HasPrefix(errorCode, "001-000-1")) {
event := models.Event{
Timestamp: timestamp,
Source: "GPU Field Diagnostics",
EventType: test,
Description: formatCSVDescription(test, componentID, notes, errorCode),
Severity: getSeverityFromErrorCode(errorCode, notes),
RawData: fmt.Sprintf("Test: %s, Component: %s, Error: %s", test, componentID, errorCode),
}
events = append(events, event)
}
}
return events
}
// formatSummaryDescription creates a human-readable description from summary entry
func formatSummaryDescription(entry SummaryEntry) string {
component := entry.ComponentID
if component == "" {
component = entry.VirtualID
}
if entry.Notes == "OK" {
return fmt.Sprintf("%s test passed for %s", entry.Test, component)
}
return fmt.Sprintf("%s test failed for %s: %s (Error: %s)", entry.Test, component, entry.Notes, entry.ErrorCode)
}
// formatCSVDescription creates a human-readable description from CSV record
func formatCSVDescription(test, component, notes, errorCode string) string {
if notes == "OK" {
return fmt.Sprintf("%s test passed for %s", test, component)
}
return fmt.Sprintf("%s test failed for %s: %s (Error: %s)", test, component, notes, errorCode)
}
// getSeverityFromErrorCode determines severity based on error code and notes
func getSeverityFromErrorCode(errorCode, notes string) models.Severity {
// Parse error code format: XXX-YYY-Z-ZZZZZZZZZZZZ
// First digit indicates severity in some cases
if notes == "OK" {
return models.SeverityInfo
}
// Row remapping failed is a warning
if strings.Contains(notes, "Row remapping failed") {
return models.SeverityWarning
}
// Check error code
if errorCode == "" || errorCode == "0" {
return models.SeverityInfo
}
// Known informational code prefixes are treated as info
if strings.HasPrefix(errorCode, "001-000-1") || strings.HasPrefix(errorCode, "048-000-0") {
return models.SeverityInfo
}
// Non-zero error codes are typically warnings or errors
// If code is in 300+ range, it's likely an error
if len(errorCode) > 2 {
firstDigits := errorCode[:3]
if firstDigits >= "300" {
return models.SeverityCritical
}
}
return models.SeverityWarning
}

View File

@@ -0,0 +1,281 @@
package nvidia
import (
"encoding/json"
"fmt"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// UnifiedSummaryData represents the structure of unified_summary.json
type UnifiedSummaryData struct {
RunInfo RunInfo `json:"runInfo"`
Tests []Test `json:"tests"`
}
// RunInfo contains information about the diagnostic run
type RunInfo struct {
TimeInfo struct {
StartTime string `json:"startTime"`
EndTime string `json:"endTime"`
TotalDuration string `json:"totalDuration"`
} `json:"timeInfo"`
DiagVersion string `json:"diagVersion"`
BaseVersion string `json:"baseVersion"`
FinalResult string `json:"finalResult"`
ErrorCode int `json:"errorCode"`
DiagName string `json:"diagName"`
RunLevel string `json:"runLevel"`
}
// Test represents a diagnostic test
type Test struct {
VirtualID string `json:"virtualId"`
Action string `json:"action"`
StartTime string `json:"startTime"`
EndTime string `json:"endTime"`
Components []Component `json:"components"`
}
// Component represents a hardware component
type Component struct {
ComponentID string `json:"componentId"`
ErrorCode string `json:"errorCode"`
Notes string `json:"notes"`
Result string `json:"result"`
Properties []Property `json:"properties"`
}
// Property represents a component property
type Property struct {
ID string `json:"id"`
Value interface{} `json:"value"` // Can be string or number
}
// GetValueAsString returns the value as a string
func (p *Property) GetValueAsString() string {
switch v := p.Value.(type) {
case string:
return v
case float64:
return fmt.Sprintf("%.0f", v)
case int:
return fmt.Sprintf("%d", v)
default:
return fmt.Sprintf("%v", v)
}
}
// ParseUnifiedSummary parses unified_summary.json file
func ParseUnifiedSummary(content []byte, result *models.AnalysisResult) error {
var data UnifiedSummaryData
if err := json.Unmarshal(content, &data); err != nil {
return fmt.Errorf("failed to parse unified_summary.json: %w", err)
}
// Set default board info only if not already set (from output.log)
if result.Hardware.BoardInfo.ProductName == "" {
result.Hardware.BoardInfo.ProductName = "GPU Server (Field Diag)"
}
// Parse inventory test for hardware details
for _, test := range data.Tests {
if test.VirtualID == "inventory" || test.Action == "inventory" {
parseInventoryComponents(test.Components, result)
}
}
return nil
}
// parseInventoryComponents extracts hardware info from inventory test
func parseInventoryComponents(components []Component, result *models.AnalysisResult) {
for _, comp := range components {
// Parse system/board information
if parseSystemInfo(comp, result) {
// System info was found and parsed
continue
}
// Parse GPU components
if strings.HasPrefix(comp.ComponentID, "GPUSXM") {
gpu := parseGPUComponent(comp)
if gpu != nil {
result.Hardware.GPUs = append(result.Hardware.GPUs, *gpu)
}
}
// Parse NVSwitch components
if strings.HasPrefix(comp.ComponentID, "NVSWITCHNVSWITCH") {
nvswitch := parseNVSwitchComponent(comp)
if nvswitch != nil {
// Add as PCIe device for now
result.Hardware.PCIeDevices = append(result.Hardware.PCIeDevices, *nvswitch)
}
}
}
}
// parseSystemInfo extracts system/board information from a component
// Returns true if this component contains system info
func parseSystemInfo(comp Component, result *models.AnalysisResult) bool {
compID := strings.ToUpper(comp.ComponentID)
// Check if this is a system/board component
isSystemComponent := strings.Contains(compID, "BASEBOARD") ||
strings.Contains(compID, "SYSTEM") ||
strings.Contains(compID, "MOTHERBOARD") ||
strings.Contains(compID, "BOARD") ||
comp.ComponentID == "Inventory"
if !isSystemComponent {
return false
}
// Extract system properties
for _, prop := range comp.Properties {
propID := prop.ID
value := prop.GetValueAsString()
if value == "" {
continue
}
switch propID {
case "Manufacturer", "BoardManufacturer", "SystemManufacturer":
// Only set if not already populated (e.g., from output.log)
if result.Hardware.BoardInfo.Manufacturer == "" {
result.Hardware.BoardInfo.Manufacturer = value
}
case "ProductName", "Product", "Model", "ModelName", "BoardProduct", "SystemProduct":
// Don't overwrite real data from output.log with generic data
// Only set if empty or still has the default placeholder value
if result.Hardware.BoardInfo.ProductName == "" ||
result.Hardware.BoardInfo.ProductName == "GPU Server (Field Diag)" {
result.Hardware.BoardInfo.ProductName = value
}
case "SerialNumber", "Serial", "BoardSerial", "SystemSerial":
// Only set if not already populated (e.g., from output.log)
if result.Hardware.BoardInfo.SerialNumber == "" {
result.Hardware.BoardInfo.SerialNumber = value
}
case "PartNumber", "BoardPartNumber":
// Only set if not already populated
if result.Hardware.BoardInfo.PartNumber == "" {
result.Hardware.BoardInfo.PartNumber = value
}
}
}
return true
}
// parseGPUComponent parses GPU component information
func parseGPUComponent(comp Component) *models.GPU {
gpu := &models.GPU{
Slot: comp.ComponentID, // e.g., "GPUSXM1"
}
var deviceID, vbios, pciID string
for _, prop := range comp.Properties {
switch prop.ID {
case "DeviceID":
deviceID = prop.GetValueAsString()
case "Vendor":
gpu.Manufacturer = prop.GetValueAsString()
case "DeviceName":
gpu.Model = prop.GetValueAsString()
case "VBIOS_version":
vbios = prop.GetValueAsString()
case "PCIID":
pciID = prop.GetValueAsString()
}
}
// Build model string from vendor/device IDs
if gpu.Model == "" || strings.Contains(gpu.Model, "Device") {
if deviceID != "" {
gpu.Model = fmt.Sprintf("NVIDIA Device %s", strings.ToUpper(deviceID))
}
}
// Add firmware info
if vbios != "" {
gpu.Firmware = vbios
}
// Add PCI info
if pciID != "" {
gpu.BDF = pciID
}
return gpu
}
// parseNVSwitchComponent parses NVSwitch component information
func parseNVSwitchComponent(comp Component) *models.PCIeDevice {
device := &models.PCIeDevice{
Slot: comp.ComponentID, // e.g., "NVSWITCHNVSWITCH0"
}
var vendorIDStr, deviceIDStr, vbios, pciID string
var pciSpeedStr, pciWidthStr string
var vendor string
for _, prop := range comp.Properties {
switch prop.ID {
case "VendorID":
vendorIDStr = prop.GetValueAsString()
case "DeviceID":
deviceIDStr = prop.GetValueAsString()
case "Vendor":
vendor = prop.GetValueAsString()
case "VBIOS_version":
vbios = prop.GetValueAsString()
case "InfoROM_version":
// Store in part number field as we don't have a better place
case "PCIID":
pciID = prop.GetValueAsString()
device.BDF = pciID
case "PCISpeed":
pciSpeedStr = prop.GetValueAsString()
device.LinkSpeed = pciSpeedStr
device.MaxLinkSpeed = pciSpeedStr
case "PCIWidth":
pciWidthStr = prop.GetValueAsString()
}
}
// Parse vendor ID
if vendorIDStr != "" {
fmt.Sscanf(vendorIDStr, "%x", &device.VendorID)
}
// Parse device ID
if deviceIDStr != "" {
fmt.Sscanf(deviceIDStr, "%x", &device.DeviceID)
}
// Set manufacturer
if vendor != "" {
device.Manufacturer = vendor
}
// Set device class
device.DeviceClass = "NVSwitch"
// Parse link width
if pciWidthStr != "" {
fmt.Sscanf(pciWidthStr, "x%d", &device.LinkWidth)
device.MaxLinkWidth = device.LinkWidth
}
// No dedicated firmware field on PCIeDevice, so store the VBIOS version in PartNumber
if vbios != "" {
device.PartNumber = vbios
}
return device
}

View File

@@ -0,0 +1,275 @@
# NVIDIA Bug Report Parser
Parser for nvidia-bug-report files generated by the `nvidia-bug-report.sh` script.
## Purpose
This parser processes NVIDIA driver diagnostic logs and extracts:
- Memory module information (from dmidecode)
- GPU device information
- The NVIDIA driver version
## File Format
- File name: `nvidia-bug-report-*.log.gz`
- Format: gzip-compressed text file
- Generated by: the `nvidia-bug-report.sh` script
A minimal decompression sketch follows the list.
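A minimal decompression sketch using only the standard library (the actual gzip handling is shared archive code):
```go
package nvidia_bug_report

import (
	"compress/gzip"
	"io"
)

// readBugReport decompresses a nvidia-bug-report-*.log.gz stream and
// returns the full report text.
func readBugReport(r io.Reader) (string, error) {
	gz, err := gzip.NewReader(r)
	if err != nil {
		return "", err
	}
	defer gz.Close()
	data, err := io.ReadAll(gz)
	if err != nil {
		return "", err
	}
	return string(data), nil
}
```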
## Confidence Score
**85** - high priority for nvidia-bug-report files
## Extracted Data
### 1. System Information (from dmidecode)
Server information:
- **Serial Number**: server serial number (e.g., 2KD501412)
- **UUID**: unique system identifier (e.g., 2e4054bc-1dd2-11b2-0284-6b0a21737950)
- **Manufacturer**: server manufacturer
- **Product Name**: server model
- **Version**: system version
### 2. CPU Information (from dmidecode)
For each processor the parser extracts:
- **Model**: processor model (e.g., Intel(R) Xeon(R) Platinum 8480+)
- **Serial Number**: serial number (e.g., 5DB0D6C0DD30ABD8)
- **Core Count**: number of cores (e.g., 56)
- **Thread Count**: number of threads (e.g., 112)
- **Max Speed**: maximum frequency (e.g., 3800 MHz)
- **Current Speed**: current frequency (e.g., 2000 MHz)
Example:
```
Socket 0: Intel(R) Xeon(R) Platinum 8480+
Serial Number: 5DB0D6C0DD30ABD8
Cores: 56, Threads: 112
Frequency: 2000 MHz (Max: 3800 MHz)
```
### 3. Memory Modules (from dmidecode)
For each memory module the parser extracts:
- **Slot/Location**: e.g., CPU0_C0D0
- **Size**: size in GB (e.g., 64 GB)
- **Type**: memory type (DDR5, DDR4, etc.)
- **Manufacturer**: manufacturer (Hynix, Samsung, Micron, etc.)
- **Part Number**: module P/N (e.g., HMCG94AGBRA179N)
- **Serial Number**: module S/N (e.g., 80AD0224322B3834E6)
- **Speed**: max/current speed (e.g., 5600/4400 MHz)
- **Ranks**: number of ranks
Example:
```
Slot: CPU0_C0D0
Size: 64 GB
Type: DDR5
Manufacturer: Hynix
Part Number: HMCG94AGBRA179N
Serial Number: 80AD0224322B3834E6
Speed: 5600 MT/s (configured: 4400 MT/s)
Ranks: 2
```
### 4. Power Supplies (from dmidecode)
For each power supply the parser extracts:
- **Location**: position (e.g., PSU0, PSU1)
- **Manufacturer**: manufacturer (e.g., DELTA, Great Wall)
- **Model Part Number**: PSU model (e.g., V0310DT000000000)
- **Serial Number**: serial number (e.g., DGPLV251500LZ)
- **Max Power Capacity**: maximum power (e.g., 2700 W)
- **Revision**: firmware revision (e.g., 00.01.04)
- **Status**: status (e.g., Present, OK)
Example:
```
PSU0: V0310DT000000000 (DELTA)
Serial Number: DGPLV251500LZ
Power: 2700 W, Revision: 00.01.04
Status: Present, OK
```
### 5. Network Adapters (from lspci)
For each network adapter (Ethernet, Network, InfiniBand) the parser extracts:
- **Model**: full model name from VPD (e.g., "NVIDIA ConnectX-7 HHHL Adapter card, 400GbE / NDR IB (default mode), Single-port OSFP, PCIe 5.0 x16")
- **Location**: PCI BDF address (e.g., 0000:0e:00.0)
- **Slot**: physical slot (e.g., 108)
- **Part Number**: adapter P/N (e.g., MCX75310AAS-NEAT)
- **Serial Number**: adapter S/N (e.g., MT2430600249)
- **Vendor**: manufacturer (Mellanox, NVIDIA)
- **Vendor ID / Device ID**: PCI identifiers (e.g., 15b3:1021)
- **Port Count**: number of ports (derived from the model name: Dual-port = 2, Single-port = 1)
- **Port Type**: port type (QSFP56, OSFP, SFP+)
Example:
```
0000:0e:00.0: NVIDIA ConnectX-7 HHHL Adapter card, 400GbE / NDR IB (default mode), Single-port OSFP
Slot: 108
P/N: MCX75310AAS-NEAT
S/N: MT2430600249
Ports: 1 x OSFP
```
### 6. GPU Devices
For each GPU the parser extracts:
- **Model**: GPU model (e.g., NVIDIA H100 80GB HBM3)
- **BDF (Bus:Device.Function)**: PCI address (e.g., 0000:0f:00.0)
- **UUID**: unique GPU identifier (e.g., GPU-64674e47-e036-c12a-3e8d-55a2a9ac8db3)
- **Video BIOS**: video BIOS version (e.g., 96.00.99.00.01)
- **IRQ**: interrupt (e.g., 17)
- **Bus Type**: bus type (PCIe)
- **DMA Size**: DMA size (e.g., 52 bits)
- **DMA Mask**: DMA mask (e.g., 0xfffffffffffff)
- **Device Minor**: device number (e.g., 0)
- **Manufacturer**: NVIDIA
Example:
```
0000:0f:00.0: NVIDIA H100 80GB HBM3
UUID: GPU-64674e47-e036-c12a-3e8d-55a2a9ac8db3
Video BIOS: 96.00.99.00.01
IRQ: 17
```
### 7. Events
- **Memory Configuration**: memory module summary (count, manufacturers, total size)
- **GPU Detection**: detected GPU devices
- **Driver Version**: NVIDIA driver version
## Usage Example
```bash
# Run with an nvidia-bug-report file
./logpile --file nvidia-bug-report-2KD501412.log.gz
# The web interface will be available at http://localhost:8082
```
## Example Output
```
✓ Detected vendor: NVIDIA Bug Report Parser
✓ CPUs: 2
✓ Memory: 32 modules
✓ Power Supplies: 8
✓ GPUs: 8
✓ Network Adapters: 12
System Information:
Serial Number: 2KD501412
UUID: 2e4054bc-1dd2-11b2-0284-6b0a21737950
Version: 0
CPU Information:
Socket 0: Intel(R) Xeon(R) Platinum 8480+
S/N: 5DB0D6C0DD30ABD8, Cores: 56, Threads: 112
Socket 1: Intel(R) Xeon(R) Platinum 8480+
S/N: 5DB017C05685B3ED, Cores: 56, Threads: 112
Power Supplies:
PSU0: V0310DT000000000 (DELTA)
S/N: DGPLV251500LZ
Power: 2700 W, Revision: 00.01.04
Status: Present, OK
PSU1: V0310DT000000000 (DELTA)
S/N: DGPLV251500GY
Power: 2700 W, Revision: 00.01.04
Status: Present, OK
[... 6 more PSUs ...]
Memory Modules:
CPU0_C0D0: 64 GB, Hynix
P/N: HMCG94AGBRA179N, S/N: 80AD0224322B3834E6
Type: DDR5, Speed: 4400/5600 MHz
[... 31 more modules ...]
Network Adapters: 12 devices
0000:0e:00.0: NVIDIA ConnectX-7 HHHL Adapter card, 400GbE / NDR IB (default mode), Single-port OSFP
Slot: 108
P/N: MCX75310AAS-NEAT
S/N: MT2430600249
Ports: 1 x OSFP
0000:1f:00.0: ConnectX-6 Dx EN adapter card, 100GbE, Dual-port QSFP56
Slot: 12
P/N: MCX623106AN-CDAT
S/N: MT2434J00PCD
Ports: 2 x QSFP56
[... 10 more adapters ...]
GPUs: 8 devices
0000:0f:00.0: NVIDIA H100 80GB HBM3
UUID: GPU-64674e47-e036-c12a-3e8d-55a2a9ac8db3
Video BIOS: 96.00.99.00.01
IRQ: 17
0000:34:00.0: NVIDIA H100 80GB HBM3
UUID: GPU-fa796345-c23a-54aa-1b67-709ac2542852
Video BIOS: 96.00.99.00.01
IRQ: 16
[... 6 more GPUs ...]
```
## Versioning
**Current parser version:** 1.0.0
### Version History
- **1.0.0** - Initial version parsing System Info, CPU, Memory, PSU, GPU, Network Adapters, and the driver version
## Data Structure
The parser relies on the following sections of the bug report:
1. **dmidecode output (System Information)** - server information
2. **dmidecode output (Processor Information)** - CPU information
3. **dmidecode output (Memory Device)** - memory information
4. **dmidecode output (System Power Supply)** - power supply information
5. **lspci -vvv output (Ethernet/Network/Infiniband controller)** - network adapter information
6. **lspci VPD (Vital Product Data)** - network adapter P/N, S/N, and model
7. **/proc/driver/nvidia/gpus/.../information** - detailed GPU information
8. **NVRM version** - driver version
## Known Limitations
1. Errors and warnings are not extracted from the logs yet
2. Some GPU-specific metrics (temperature, utilization) are not parsed
3. GPU performance data and metrics require parsing additional sections
## Extension
To add new capabilities:
1. **Driver errors**: parse NVIDIA driver error sections
2. **nvidia-smi output**: extract detailed nvidia-smi data (temperature, utilization); see the sketch below
3. **GPU performance**: parse GPU performance and memory usage metrics
4. **PCIe information**: extract PCIe configuration details (link speed, width)
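For instance, extension idea 2 could start from a small temperature scan over an embedded `nvidia-smi -q` dump (hypothetical: this section is not parsed today, so the line layout should be confirmed against a real report):
```go
package nvidia_bug_report

import (
	"regexp"
	"strconv"
)

// parseSMITemperatures pulls "GPU Current Temp : 45 C" style lines from
// nvidia-smi -q output. Sketch only; not implemented in the parser.
func parseSMITemperatures(content string) []int {
	re := regexp.MustCompile(`GPU Current Temp\s*:\s*(\d+)\s*C`)
	var temps []int
	for _, m := range re.FindAllStringSubmatch(content, -1) {
		if t, err := strconv.Atoi(m[1]); err == nil {
			temps = append(temps, t)
		}
	}
	return temps
}
```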
## Example File Structure
```
Start of NVIDIA bug report log file
nvidia-bug-report.sh Version: 34275561
Date: Thu Jul 17 18:18:18 EDT 2025
[... system info ...]
Memory Device
Data Width: 64 bits
Size: 64 GB
Form Factor: DIMM
Locator: CPU0_C0D0
Type: DDR5
Speed: 5600 MT/s
Manufacturer: Hynix
Serial Number: 80AD0224322B3834E6
Part Number: HMCG94AGBRA179N
[... more memory modules ...]
*** /proc/driver/nvidia/./gpus/0000:0f:00.0/power
[... GPU info ...]
```

View File

@@ -0,0 +1,140 @@
package nvidia_bug_report
import (
"bufio"
"strconv"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parseCPUInfo extracts CPU information from dmidecode output
func parseCPUInfo(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
var currentCPU *models.CPU
inProcessorInfo := false
cpuSocket := 0
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Start of Processor Information section
if strings.Contains(trimmed, "Processor Information") {
inProcessorInfo = true
currentCPU = &models.CPU{
Socket: cpuSocket,
}
cpuSocket++
continue
}
// End of current section (empty line or new section with Handle)
if inProcessorInfo && (trimmed == "" || strings.HasPrefix(trimmed, "Handle ")) {
// Save CPU if it has valid data
if currentCPU != nil && currentCPU.Model != "" {
result.Hardware.CPUs = append(result.Hardware.CPUs, *currentCPU)
}
inProcessorInfo = false
currentCPU = nil
continue
}
// Parse fields within Processor Information section
if inProcessorInfo && currentCPU != nil && strings.Contains(line, ":") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
if value == "" || value == "Not Specified" || value == "Unknown" || value == "UNKNOWN" || value == "<OUT OF SPEC>" {
continue
}
switch field {
case "Version":
// CPU model name
currentCPU.Model = value
case "Serial Number":
currentCPU.SerialNumber = value
case "Part Number":
// Store part number if available
// Could be stored in a custom field if needed
case "Core Count":
if cores, err := strconv.Atoi(value); err == nil {
currentCPU.Cores = cores
}
case "Core Enabled":
// Could store this if needed
case "Thread Count":
if threads, err := strconv.Atoi(value); err == nil {
currentCPU.Threads = threads
}
case "Max Speed":
// Parse speed like "3800 MHz"
if speed := parseCPUSpeed(value); speed > 0 {
currentCPU.MaxFreqMHz = speed
}
case "Current Speed":
// Parse current speed like "2000 MHz"
if speed := parseCPUSpeed(value); speed > 0 {
currentCPU.FrequencyMHz = speed
}
case "Voltage":
// Could parse voltage if needed (e.g., "1.6 V")
case "Status":
// Status like "Populated, Enabled"
// Check if CPU is enabled
if !strings.Contains(value, "Populated") {
// Skip unpopulated CPUs
currentCPU = nil
inProcessorInfo = false
}
}
}
}
// Save last CPU if exists
if currentCPU != nil && currentCPU.Model != "" {
result.Hardware.CPUs = append(result.Hardware.CPUs, *currentCPU)
}
}
// parseCPUSpeed parses CPU speed strings like "3800 MHz" or "2.0 GHz"
func parseCPUSpeed(speedStr string) int {
parts := strings.Fields(speedStr)
if len(parts) < 2 {
return 0
}
// Try to parse the number (may be int or float)
speedStr = parts[0]
var speed float64
var err error
if strings.Contains(speedStr, ".") {
speed, err = strconv.ParseFloat(speedStr, 64)
} else {
var speedInt int
speedInt, err = strconv.Atoi(speedStr)
speed = float64(speedInt)
}
if err != nil {
return 0
}
unit := strings.ToUpper(parts[1])
switch unit {
case "MHZ":
return int(speed)
case "GHZ":
return int(speed * 1000)
default:
return 0
}
}

View File

@@ -0,0 +1,170 @@
package nvidia_bug_report
import (
"bufio"
"regexp"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parseGPUInfo extracts GPU information from the bug report
func parseGPUInfo(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
var currentGPU *models.GPU
inGPUInfo := false
for scanner.Scan() {
line := scanner.Text()
// Look for GPU information section markers (but skip ls listings)
if strings.Contains(line, "/proc/driver/nvidia") && strings.Contains(line, "/gpus/") &&
strings.Contains(line, "/information") && !strings.Contains(line, "ls:") {
// Extract PCI address
re := regexp.MustCompile(`/gpus/([\da-f]{4}:[\da-f]{2}:[\da-f]{2}\.[\da-f])`)
matches := re.FindStringSubmatch(line)
if len(matches) > 1 {
pciAddr := matches[1]
// Save previous GPU if exists
if currentGPU != nil {
result.Hardware.GPUs = append(result.Hardware.GPUs, *currentGPU)
}
// Start new GPU entry
currentGPU = &models.GPU{
BDF: pciAddr,
Manufacturer: "NVIDIA",
}
inGPUInfo = true
continue
}
}
// End of GPU info section (separator line or new section, but not ls lines)
if inGPUInfo && (strings.HasPrefix(line, "___") || (strings.HasPrefix(line, "***") && !strings.Contains(line, "ls:"))) {
inGPUInfo = false
continue
}
// Parse GPU fields within information section
if inGPUInfo && currentGPU != nil && strings.Contains(line, ":") {
// Split on first colon and trim whitespace/tabs
parts := strings.SplitN(line, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
if value == "" {
continue
}
switch field {
case "Model":
currentGPU.Model = value
case "IRQ":
if irq, err := strconv.Atoi(value); err == nil {
currentGPU.IRQ = irq
}
case "GPU UUID":
currentGPU.UUID = value
case "Video BIOS":
currentGPU.VideoBIOS = value
case "Bus Type":
currentGPU.BusType = value
case "DMA Size":
currentGPU.DMASize = value
case "DMA Mask":
currentGPU.DMAMask = value
case "Bus Location":
// BDF already set from path, but verify consistency
if currentGPU.BDF != value {
// Use the value from the information section as it's more explicit
currentGPU.BDF = value
}
case "Device Minor":
if minor, err := strconv.Atoi(value); err == nil {
currentGPU.DeviceMinor = minor
}
case "GPU Excluded":
// Store as status if "Yes"
if strings.ToLower(value) == "yes" {
currentGPU.Status = "Excluded"
}
}
}
}
// Save last GPU if exists
if currentGPU != nil {
result.Hardware.GPUs = append(result.Hardware.GPUs, *currentGPU)
}
// Create event for GPU summary
if len(result.Hardware.GPUs) > 0 {
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "NVIDIA Driver",
EventType: "GPU Detection",
Description: "NVIDIA GPUs detected",
Severity: models.SeverityInfo,
RawData: formatGPUSummary(result.Hardware.GPUs),
})
}
}
// parseDriverVersion extracts NVIDIA driver version
func parseDriverVersion(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
for scanner.Scan() {
line := scanner.Text()
// Look for NVRM version line
if strings.Contains(line, "NVRM version:") {
// Extract version info
parts := strings.Split(line, "NVRM version:")
if len(parts) > 1 {
version := strings.TrimSpace(parts[1])
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "NVIDIA Driver",
EventType: "Driver Version",
Description: "NVIDIA driver version detected",
Severity: models.SeverityInfo,
RawData: version,
})
break
}
}
}
}
// formatGPUSummary creates a summary string for GPUs
func formatGPUSummary(gpus []models.GPU) string {
if len(gpus) == 0 {
return ""
}
var summary strings.Builder
for i, gpu := range gpus {
if i > 0 {
summary.WriteString("; ")
}
summary.WriteString(gpu.BDF)
if gpu.Model != "" {
summary.WriteString(" (")
summary.WriteString(gpu.Model)
summary.WriteString(")")
}
}
return summary.String()
}

View File

@@ -0,0 +1,183 @@
package nvidia_bug_report
import (
"bufio"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parseMemoryModules extracts memory module information from dmidecode output
func parseMemoryModules(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
var currentModule *models.MemoryDIMM
inMemoryDevice := false
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Start of Memory Device section
if strings.Contains(trimmed, "Memory Device") && !strings.Contains(trimmed, "Array") {
inMemoryDevice = true
currentModule = &models.MemoryDIMM{
Present: true,
}
continue
}
// End of current section (empty line or new section)
if inMemoryDevice && (trimmed == "" || strings.HasPrefix(trimmed, "Handle ")) {
// Save module if it has valid data
if currentModule != nil && currentModule.Slot != "" && currentModule.SizeMB > 0 {
result.Hardware.Memory = append(result.Hardware.Memory, *currentModule)
}
inMemoryDevice = false
currentModule = nil
continue
}
// Parse fields within Memory Device section
if inMemoryDevice && currentModule != nil && strings.Contains(line, ":") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
if value == "" || value == "Not Specified" || value == "Unknown" || value == "NO DIMM" {
continue
}
switch field {
case "Size":
// Parse size like "64 GB" or "32768 MB"
currentModule.SizeMB = parseMemorySize(value)
case "Locator":
currentModule.Slot = value
currentModule.Location = value
case "Bank Locator":
// Store in location if slot is empty
if currentModule.Location == "" {
currentModule.Location = value
}
case "Type":
currentModule.Type = value
case "Type Detail":
currentModule.Technology = value
case "Speed":
// Parse speed like "5600 MT/s"
currentModule.MaxSpeedMHz = parseMemorySpeed(value)
case "Configured Memory Speed":
currentModule.CurrentSpeedMHz = parseMemorySpeed(value)
case "Manufacturer":
currentModule.Manufacturer = value
case "Serial Number":
currentModule.SerialNumber = value
case "Part Number":
currentModule.PartNumber = strings.TrimSpace(value)
case "Rank":
// Parse rank
if rank, err := strconv.Atoi(value); err == nil {
currentModule.Ranks = rank
}
}
}
}
// Save last module if exists
if currentModule != nil && currentModule.Slot != "" && currentModule.SizeMB > 0 {
result.Hardware.Memory = append(result.Hardware.Memory, *currentModule)
}
// Create event for memory summary
if len(result.Hardware.Memory) > 0 {
totalMemoryGB := 0
for _, mem := range result.Hardware.Memory {
totalMemoryGB += mem.SizeMB / 1024
}
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "DMI",
EventType: "Memory Configuration",
Description: "Memory modules detected",
Severity: models.SeverityInfo,
RawData: formatMemorySummary(result.Hardware.Memory, totalMemoryGB),
})
}
}
// parseMemorySize parses memory size strings like "64 GB" or "32768 MB"
func parseMemorySize(sizeStr string) int {
parts := strings.Fields(sizeStr)
if len(parts) < 2 {
return 0
}
size, err := strconv.Atoi(parts[0])
if err != nil {
return 0
}
unit := strings.ToUpper(parts[1])
switch unit {
case "GB":
return size * 1024
case "MB":
return size
case "TB":
return size * 1024 * 1024
default:
return 0
}
}
// parseMemorySpeed parses speed strings like "5600 MT/s" or "4400 MHz"
func parseMemorySpeed(speedStr string) int {
parts := strings.Fields(speedStr)
if len(parts) < 1 {
return 0
}
speed, err := strconv.Atoi(parts[0])
if err != nil {
return 0
}
return speed
}
// formatMemorySummary creates a summary string for memory modules
func formatMemorySummary(modules []models.MemoryDIMM, totalGB int) string {
if len(modules) == 0 {
return ""
}
// Group by manufacturer
manufacturerCount := make(map[string]int)
for _, mem := range modules {
if mem.Manufacturer != "" {
manufacturerCount[mem.Manufacturer]++
}
}
summary := ""
for mfr, count := range manufacturerCount {
if summary != "" {
summary += ", "
}
summary += mfr + ": " + strconv.Itoa(count) + " modules"
}
if summary == "" {
summary = strconv.Itoa(len(modules)) + " modules"
}
return summary + ", Total: " + strconv.Itoa(totalGB) + " GB"
}

View File

@@ -0,0 +1,160 @@
package nvidia_bug_report
import (
"bufio"
"regexp"
"strconv"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parseNetworkAdapters extracts network adapter information from lspci output
func parseNetworkAdapters(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
var currentAdapter *models.NetworkAdapter
inVPD := false
currentBDF := ""
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Check if this is a new PCI device line
re := regexp.MustCompile(`^([\da-f]{4}:[\da-f]{2}:[\da-f]{2}\.[\da-f])\s+`)
matches := re.FindStringSubmatch(line)
if len(matches) > 0 {
// Save previous adapter if exists before processing new device
if currentAdapter != nil && currentAdapter.Model != "" {
result.Hardware.NetworkAdapters = append(result.Hardware.NetworkAdapters, *currentAdapter)
}
currentAdapter = nil
inVPD = false
}
// Match PCI device line: "0000:1f:00.0 Ethernet controller [0200]: Mellanox Technologies..."
if strings.Contains(line, "Ethernet controller") || strings.Contains(line, "Network controller") || strings.Contains(line, "Infiniband controller") {
// Extract BDF (Bus:Device.Function)
if len(matches) > 1 {
currentBDF = matches[1]
currentAdapter = &models.NetworkAdapter{
Location: currentBDF,
Present: true,
}
// Extract vendor and device info
// Format: "Vendor description [DeviceClass]: Vendor Name Device Name [VendorID:DeviceID]"
re2 := regexp.MustCompile(`:\s+(.+?)\s+\[([0-9a-f]{4}):([0-9a-f]{4})\]`)
matches2 := re2.FindStringSubmatch(line)
if len(matches2) > 3 {
// Vendor name is the first word of the description
vendorDesc := matches2[1]
if idx := strings.Index(vendorDesc, " "); idx > 0 {
currentAdapter.Vendor = vendorDesc[:idx]
}
// Parse vendor ID and device ID
if vendorID, err := strconv.ParseInt(matches2[2], 16, 32); err == nil {
currentAdapter.VendorID = int(vendorID)
}
if deviceID, err := strconv.ParseInt(matches2[3], 16, 32); err == nil {
currentAdapter.DeviceID = int(deviceID)
}
}
continue
}
}
// Skip if not processing an adapter
if currentAdapter == nil {
continue
}
// Parse Physical Slot
if strings.HasPrefix(trimmed, "Physical Slot:") {
slotStr := strings.TrimPrefix(trimmed, "Physical Slot:")
currentAdapter.Slot = strings.TrimSpace(slotStr)
continue
}
// Start of Vital Product Data section
if strings.Contains(trimmed, "Vital Product Data") {
inVPD = true
continue
}
// "End" terminates the VPD section; Capabilities lines inside it are skipped
if inVPD && (trimmed == "End" || strings.HasPrefix(trimmed, "Capabilities:")) {
if trimmed == "End" {
inVPD = false
}
continue
}
// Parse Product Name in VPD
if inVPD && strings.HasPrefix(trimmed, "Product Name:") {
productName := strings.TrimPrefix(trimmed, "Product Name:")
currentAdapter.Model = strings.TrimSpace(productName)
// Extract port count from model name
if strings.Contains(currentAdapter.Model, "Dual-port") {
currentAdapter.PortCount = 2
} else if strings.Contains(currentAdapter.Model, "Single-port") {
currentAdapter.PortCount = 1
} else if strings.Contains(currentAdapter.Model, "Quad-port") {
currentAdapter.PortCount = 4
}
// Extract port type from model name
if strings.Contains(currentAdapter.Model, "QSFP56") {
currentAdapter.PortType = "QSFP56"
} else if strings.Contains(currentAdapter.Model, "QSFP28") {
currentAdapter.PortType = "QSFP28"
} else if strings.Contains(currentAdapter.Model, "OSFP") {
currentAdapter.PortType = "OSFP"
} else if strings.Contains(currentAdapter.Model, "SFP") {
currentAdapter.PortType = "SFP+"
}
continue
}
// Parse VPD fields
if inVPD && strings.HasPrefix(trimmed, "[") {
// Match pattern: [TAG] Description: Value
re := regexp.MustCompile(`^\[([A-Z0-9]+)\]\s+([^:]+):\s+(.+)`)
matches := re.FindStringSubmatch(trimmed)
if len(matches) > 3 {
tag := matches[1]
value := strings.TrimSpace(matches[3])
switch tag {
case "PN":
// Part number
currentAdapter.PartNumber = value
case "SN":
// Serial number
currentAdapter.SerialNumber = value
case "EC":
// Engineering changes - could be stored as firmware/revision
if currentAdapter.Firmware == "" {
currentAdapter.Firmware = value
}
}
}
continue
}
// End of current device section (empty line followed by hex dump or new device)
if currentAdapter != nil && trimmed == "" {
// Check if next lines are hex dump (config space)
continue
}
}
// Save last adapter if exists
if currentAdapter != nil && currentAdapter.Model != "" {
result.Hardware.NetworkAdapters = append(result.Hardware.NetworkAdapters, *currentAdapter)
}
}

View File

@@ -0,0 +1,107 @@
// Package nvidia_bug_report provides a parser for NVIDIA bug report files
// Generated by nvidia-bug-report.sh script
package nvidia_bug_report
import (
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
// parserVersion - version of this parser module
const parserVersion = "1.0.0"
func init() {
parser.Register(&Parser{})
}
// Parser implements VendorParser for NVIDIA bug reports
type Parser struct{}
// Name returns human-readable parser name
func (p *Parser) Name() string {
return "NVIDIA Bug Report Parser"
}
// Vendor returns vendor identifier
func (p *Parser) Vendor() string {
return "nvidia_bug_report"
}
// Version returns parser version
func (p *Parser) Version() string {
return parserVersion
}
// Detect checks if this is an NVIDIA bug report
// Returns confidence 0-100
func (p *Parser) Detect(files []parser.ExtractedFile) int {
// Only detect if there's exactly one file
if len(files) != 1 {
return 0
}
file := files[0]
// Check filename
if !strings.Contains(strings.ToLower(file.Path), "nvidia-bug-report") {
return 0
}
// Check content markers
content := string(file.Content)
if !strings.Contains(content, "nvidia-bug-report.sh") ||
!strings.Contains(content, "NVIDIA bug report log file") {
return 0
}
// High confidence for nvidia-bug-report files
return 85
}
// Parse parses NVIDIA bug report file
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
}
// Initialize hardware config
result.Hardware = &models.HardwareConfig{
CPUs: make([]models.CPU, 0),
Memory: make([]models.MemoryDIMM, 0),
GPUs: make([]models.GPU, 0),
PowerSupply: make([]models.PSU, 0),
}
if len(files) == 0 {
return result, nil
}
content := string(files[0].Content)
// Parse system information
parseSystemInfo(content, result)
// Parse CPU information
parseCPUInfo(content, result)
// Parse memory modules
parseMemoryModules(content, result)
// Parse power supplies
parsePSUInfo(content, result)
// Parse GPU information
parseGPUInfo(content, result)
// Parse network adapters
parseNetworkAdapters(content, result)
// Parse driver version
parseDriverVersion(content, result)
return result, nil
}

View File

@@ -0,0 +1,116 @@
package nvidia_bug_report
import (
"bufio"
"strconv"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parsePSUInfo extracts Power Supply information from dmidecode output
func parsePSUInfo(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
var currentPSU *models.PSU
inPowerSupply := false
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Start of System Power Supply section
if strings.Contains(trimmed, "System Power Supply") {
inPowerSupply = true
currentPSU = &models.PSU{}
continue
}
// End of current section (empty line or new section with Handle)
if inPowerSupply && (trimmed == "" || strings.HasPrefix(trimmed, "Handle ")) {
// Save PSU if it has valid data
if currentPSU != nil && currentPSU.Slot != "" {
// Only add if PSU is present
if strings.Contains(strings.ToLower(currentPSU.Status), "present") {
result.Hardware.PowerSupply = append(result.Hardware.PowerSupply, *currentPSU)
}
}
inPowerSupply = false
currentPSU = nil
continue
}
// Parse fields within System Power Supply section
if inPowerSupply && currentPSU != nil && strings.Contains(line, ":") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
if value == "" || value == "Not Specified" || value == "Unknown" || value == "UNKNOWN" {
continue
}
switch field {
case "Location":
currentPSU.Slot = value
case "Name":
// Use Name as Model if Model is not set later
if currentPSU.Model == "" {
currentPSU.Model = value
}
case "Manufacturer":
currentPSU.Vendor = value
case "Serial Number":
currentPSU.SerialNumber = value
case "Model Part Number":
// Use Model Part Number as the primary model identifier
currentPSU.Model = value
case "Revision":
currentPSU.Firmware = value
case "Max Power Capacity":
// Parse wattage like "2700 W"
if wattage := parsePowerWattage(value); wattage > 0 {
currentPSU.WattageW = wattage
}
case "Status":
currentPSU.Status = value
case "Type":
// Could store PSU type if needed (e.g., "Switching")
case "Plugged":
// Could track if PSU is plugged
case "Hot Replaceable":
// Could track if hot-swappable
}
}
}
// Save last PSU if exists
if currentPSU != nil && currentPSU.Slot != "" {
if strings.Contains(strings.ToLower(currentPSU.Status), "present") {
result.Hardware.PowerSupply = append(result.Hardware.PowerSupply, *currentPSU)
}
}
}
// parsePowerWattage parses power capacity strings like "2700 W" or "1200 Watts"
func parsePowerWattage(powerStr string) int {
parts := strings.Fields(powerStr)
if len(parts) < 1 {
return 0
}
// Try to parse the number
wattageStr := parts[0]
wattage, err := strconv.Atoi(wattageStr)
if err != nil {
return 0
}
// Check if unit is specified (W, Watts, etc.) and convert if needed
// For now, assume it's always in Watts
return wattage
}

View File

@@ -0,0 +1,61 @@
package nvidia_bug_report
import (
"bufio"
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
)
// parseSystemInfo extracts System Information from dmidecode output
func parseSystemInfo(content string, result *models.AnalysisResult) {
scanner := bufio.NewScanner(strings.NewReader(content))
inSystemInfo := false
for scanner.Scan() {
line := scanner.Text()
trimmed := strings.TrimSpace(line)
// Start of System Information section
if trimmed == "System Information" {
inSystemInfo = true
continue
}
// End of section (empty line or new Handle)
if inSystemInfo && (trimmed == "" || strings.HasPrefix(trimmed, "Handle ")) {
inSystemInfo = false
continue
}
// Parse fields within System Information section
if inSystemInfo && strings.Contains(line, ":") {
parts := strings.SplitN(trimmed, ":", 2)
if len(parts) != 2 {
continue
}
field := strings.TrimSpace(parts[0])
value := strings.TrimSpace(parts[1])
// Skip empty, NULL, or "Not specified" values
if value == "" || value == "NULL" || value == "Not specified" || value == "Not Specified" {
continue
}
switch field {
case "Manufacturer":
result.Hardware.BoardInfo.Manufacturer = value
case "Product Name":
result.Hardware.BoardInfo.ProductName = value
case "Version":
result.Hardware.BoardInfo.Version = value
case "Serial Number":
result.Hardware.BoardInfo.SerialNumber = value
case "UUID":
result.Hardware.BoardInfo.UUID = value
}
}
}
}

View File

@@ -0,0 +1,133 @@
# SMC Crash Dump Parser
Parser for Supermicro (SMC) BMC Crash Dump archives.
## Supported Servers
- Supermicro SYS-821GE-TNHR
- Other Supermicro servers with BMC Crashdump functionality
## Archive Format
The parser handles archives in the following formats:
- `.tgz` / `.tar.gz` (compressed tar)
- `.tar` (uncompressed tar)
## Recognized Files
### Primary Files
1. **CDump.txt** - JSON file with the crashdump data
   - Metadata (BMC, BIOS, ME firmware versions)
   - CPU information (CPUID, core count, microcode version, PPIN)
   - MCA (Machine Check Architecture) data - processor errors
## Extracted Data
### Hardware Configuration
#### CPUs
```json
{
"slot": "CPU0",
"model": "CPUID: 0xc06f2",
"cores": 56,
"manufacturer": "Intel",
"firmware": "Microcode: 0x210002b3"
}
```
### FRU Information
- BMC Firmware Version
- BIOS Version
- ME Firmware Version
- CPU PPIN (Protected Processor Inventory Number)
### Events
Events are created for:
- **Crashdump collection** - when the crashdump was gathered
- **MCA Errors** - Machine Check Architecture errors
   - Corrected errors (Warning severity)
   - Uncorrected errors (Critical severity)
Severity levels:
- `info` - informational events (on-demand crashdump)
- `warning` - warnings (corrected MCA errors, reset detected)
- `critical` - critical errors (uncorrected MCA errors)
## Usage Example
```bash
# Start the web interface
./logpile --file /path/to/CDump_090859_01302026.tgz
# The web interface will be available at http://localhost:8082
```
## Auto-Detection
The parser automatically detects SMC Crash Dump archives by the presence of:
- `CDump.txt` containing the markers "crash_data", "METADATA", and "bmc_fw_ver" or "crashdump_ver"
Confidence score:
- `CDump.txt` with crashdump markers: +80
## Versioning
**Current parser version:** 1.0.0
When modifying the parser logic, increment the version in the `parserVersion` constant in `parser.go`.
## Data Examples
### Example CDump.txt (metadata)
```json
{
"crash_data": {
"METADATA": {
"cpu0": {
"cpuid": "0xc06f2",
"core_count": "0x38",
"ppin": "0xa3ccbe7d45026592",
"ucode_patch_ver": "0x210002b3"
},
"bmc_fw_ver": "01.03.18",
"bios_id": "BIOS Date: 08/04/2025 Rev 2.7",
"me_fw_ver": "6.1.4.204",
"timestamp": "2026-01-30T09:06:52Z",
"trigger_type": "On-Demand"
}
}
}
```
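Hex metadata fields are converted to integers during parsing; for example, `core_count` `0x38` becomes 56 cores. A minimal sketch of the conversion used in `parseCPUInfo`:
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Same conversion parseCPUInfo applies to hex fields such as core_count.
	count, err := strconv.ParseInt(strings.TrimPrefix("0x38", "0x"), 16, 64)
	if err == nil {
		fmt.Println(count) // 56
	}
}
```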
### MCA Error Detection
The parser checks the MCA status registers for errors (see the sketch below):
- Bit 63 (Valid) - valid error indicator
- Bit 61 (UC) - uncorrected error
- Bit 60 (EN) - error enabled
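For illustration, a minimal standalone sketch of this check (the status value is invented; `parseMCAErrors` applies the same masks):
```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Hypothetical MCi_STATUS value; bit 63 (Valid) and bit 61 (UC) are set.
	statusStr := "0xa600000000010011"
	status, err := strconv.ParseUint(strings.TrimPrefix(statusStr, "0x"), 16, 64)
	if err != nil {
		return
	}
	if status&(1<<63) != 0 { // Valid: a logged error is present
		if status&(1<<61) != 0 { // UC: uncorrected -> critical event
			fmt.Println("uncorrected MCA error")
		} else { // corrected -> warning event
			fmt.Println("corrected MCA error")
		}
	}
}
```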
## Known Limitations
1. The parser focuses on data from `CDump.txt`
2. Detailed MCA error analysis is still simplified (only the status registers are checked)
3. TOR dumps and other extended data are not parsed yet
## Development
### Adding New Fields
1. Study the JSON structure in CDump.txt
2. Add fields to the `Metadata`, `CPUMetadata`, or `MCAData` structures
3. Update the parsing functions
4. Increment the parser version
### Extending MCA Analysis
For a more detailed MCA analysis you could:
1. Add decoding of the MCA error codes (a sketch follows this list)
2. Parse the MISC and ADDR registers
3. Add error correlation across banks
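A possible first step, sketched under the assumption of the standard Intel MCi_STATUS layout (not implemented in this parser):
```go
package mca

// splitMCACode splits an MCi_STATUS value into the architectural MCA error
// code (bits 15:0) and the model-specific error code (bits 31:16).
func splitMCACode(status uint64) (mcaCode, modelCode uint16) {
	return uint16(status & 0xFFFF), uint16((status >> 16) & 0xFFFF)
}
```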

View File

@@ -0,0 +1,261 @@
package supermicro
import (
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
// CrashDumpData represents the structure of CDump.txt
type CrashDumpData struct {
CrashData struct {
METADATA Metadata `json:"METADATA"`
PROCESSORS ProcessorsData `json:"PROCESSORS"`
} `json:"crash_data"`
}
// ProcessorsData contains processor crash data
type ProcessorsData struct {
Version string `json:"_version"`
CPU0 Processors `json:"cpu0"`
CPU1 Processors `json:"cpu1"`
}
// Metadata contains crashdump metadata
type Metadata struct {
CPU0 CPUMetadata `json:"cpu0"`
CPU1 CPUMetadata `json:"cpu1"`
BMCFWVer string `json:"bmc_fw_ver"`
BIOSId string `json:"bios_id"`
MEFWVer string `json:"me_fw_ver"`
Timestamp string `json:"timestamp"`
TriggerType string `json:"trigger_type"`
PlatformName string `json:"platform_name"`
CrashdumpVer string `json:"crashdump_ver"`
ResetDetected string `json:"_reset_detected"`
}
// CPUMetadata contains CPU metadata
type CPUMetadata struct {
CPUID string `json:"cpuid"`
CoreMask string `json:"core_mask"`
CHACount string `json:"cha_count"`
CoreCount string `json:"core_count"`
PPIN string `json:"ppin"`
UcodePatchVer string `json:"ucode_patch_ver"`
}
// Processors contains processor crash data
type Processors struct {
MCA MCAData `json:"MCA"`
}
// MCAData contains Machine Check Architecture data
type MCAData struct {
Uncore map[string]interface{} `json:"uncore"`
}
// ParseCrashDump parses CDump.txt file
func ParseCrashDump(content []byte, result *models.AnalysisResult) error {
var data CrashDumpData
if err := json.Unmarshal(content, &data); err != nil {
return fmt.Errorf("failed to parse CDump.txt: %w", err)
}
// Initialize Hardware.Firmware slice if nil
if result.Hardware.Firmware == nil {
result.Hardware.Firmware = make([]models.FirmwareInfo, 0)
}
// Parse metadata
parseMetadata(&data.CrashData.METADATA, result)
// Parse CPU information
parseCPUInfo(&data.CrashData.METADATA, result)
// Parse MCA errors
parseMCAErrors(&data.CrashData, result)
return nil
}
// parseMetadata extracts metadata information
func parseMetadata(metadata *Metadata, result *models.AnalysisResult) {
// Store firmware versions in HardwareConfig.Firmware
if metadata.BMCFWVer != "" {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: "BMC",
Version: metadata.BMCFWVer,
})
}
if metadata.BIOSId != "" {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: "BIOS",
Version: metadata.BIOSId,
})
}
if metadata.MEFWVer != "" {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: "ME",
Version: metadata.MEFWVer,
})
}
// Create event for crashdump trigger
timestamp := time.Now()
if metadata.Timestamp != "" {
if t, err := time.Parse(time.RFC3339, metadata.Timestamp); err == nil {
timestamp = t
}
}
triggerType := metadata.TriggerType
if triggerType == "" {
triggerType = "Unknown"
}
severity := models.SeverityInfo
if metadata.ResetDetected != "" && metadata.ResetDetected != "NONE" {
severity = models.SeverityWarning
}
result.Events = append(result.Events, models.Event{
Timestamp: timestamp,
Source: "Crashdump",
EventType: "System Crashdump",
Description: fmt.Sprintf("Crashdump collected (%s)", triggerType),
Severity: severity,
RawData: fmt.Sprintf("Version: %s, Reset: %s", metadata.CrashdumpVer, metadata.ResetDetected),
})
}
// parseCPUInfo extracts CPU information
func parseCPUInfo(metadata *Metadata, result *models.AnalysisResult) {
cpus := []struct {
socket int
data CPUMetadata
}{
{0, metadata.CPU0},
{1, metadata.CPU1},
}
for _, cpu := range cpus {
if cpu.data.CPUID == "" {
continue
}
// Parse core count
coreCount := 0
if cpu.data.CoreCount != "" {
if count, err := strconv.ParseInt(strings.TrimPrefix(cpu.data.CoreCount, "0x"), 16, 64); err == nil {
coreCount = int(count)
}
}
cpuModel := models.CPU{
Socket: cpu.socket,
Model: fmt.Sprintf("Intel CPU (CPUID: %s)", cpu.data.CPUID),
Cores: coreCount,
}
// Add PPIN
if cpu.data.PPIN != "" && cpu.data.PPIN != "0x0" {
cpuModel.PPIN = cpu.data.PPIN
}
result.Hardware.CPUs = append(result.Hardware.CPUs, cpuModel)
// Add microcode version to firmware list
if cpu.data.UcodePatchVer != "" {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: fmt.Sprintf("CPU%d Microcode", cpu.socket),
Version: cpu.data.UcodePatchVer,
})
}
}
}
// parseMCAErrors extracts Machine Check Architecture errors
func parseMCAErrors(crashData *struct {
METADATA Metadata `json:"METADATA"`
PROCESSORS ProcessorsData `json:"PROCESSORS"`
}, result *models.AnalysisResult) {
timestamp := time.Now()
if crashData.METADATA.Timestamp != "" {
if t, err := time.Parse(time.RFC3339, crashData.METADATA.Timestamp); err == nil {
timestamp = t
}
}
// Parse each CPU's MCA data
cpuProcs := []struct {
name string
data Processors
}{
{"cpu0", crashData.PROCESSORS.CPU0},
{"cpu1", crashData.PROCESSORS.CPU1},
}
for _, cpu := range cpuProcs {
if cpu.data.MCA.Uncore == nil {
continue
}
// Check each MCA bank for errors
for bankName, bankDataRaw := range cpu.data.MCA.Uncore {
bankData, ok := bankDataRaw.(map[string]interface{})
if !ok {
continue
}
// Look for status register
statusKey := strings.ToLower(bankName) + "_status"
statusRaw, ok := bankData[statusKey]
if !ok {
continue
}
statusStr, ok := statusRaw.(string)
if !ok {
continue
}
// Parse status value
status, err := strconv.ParseUint(strings.TrimPrefix(statusStr, "0x"), 16, 64)
if err != nil {
continue
}
// Check if MCA error is valid (bit 63 = Valid)
if status&(1<<63) != 0 {
// MCA error detected
severity := models.SeverityWarning
if status&(1<<61) != 0 { // UC bit = uncorrected error
severity = models.SeverityCritical
}
description := fmt.Sprintf("MCA Error in %s bank %s", cpu.name, bankName)
if status&(1<<61) != 0 {
description += " (Uncorrected)"
} else {
description += " (Corrected)"
}
result.Events = append(result.Events, models.Event{
Timestamp: timestamp,
Source: "MCA",
EventType: "Machine Check",
Description: description,
Severity: severity,
RawData: fmt.Sprintf("Status: %s, CPU: %s, Bank: %s", statusStr, cpu.name, bankName),
})
}
}
}
}

View File

@@ -0,0 +1,98 @@
// Package supermicro provides a parser for Supermicro BMC crashdump archives
// Tested with: Supermicro SYS-821GE-TNHR (Crashdump format)
//
// IMPORTANT: Increment parserVersion when modifying parser logic!
// This helps track which version was used to parse specific logs.
package supermicro
import (
"strings"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
const parserVersion = "1.0.0"
func init() {
parser.Register(&Parser{})
}
// Parser implements VendorParser for Supermicro servers
type Parser struct{}
// Name returns human-readable parser name
func (p *Parser) Name() string {
return "SMC Crash Dump Parser"
}
// Vendor returns vendor identifier
func (p *Parser) Vendor() string {
return "supermicro"
}
// Version returns parser version
// IMPORTANT: Update parserVersion constant when modifying parser logic!
func (p *Parser) Version() string {
return parserVersion
}
// Detect checks if archive matches Supermicro crashdump format
// Returns confidence 0-100
func (p *Parser) Detect(files []parser.ExtractedFile) int {
confidence := 0
for _, f := range files {
path := strings.ToLower(f.Path)
// Strong indicator for Supermicro Crashdump format
if strings.HasSuffix(path, "cdump.txt") {
// Check if it's really Supermicro crashdump format
if containsCrashdumpMarkers(f.Content) {
confidence += 80
}
}
// Cap at 100
if confidence >= 100 {
return 100
}
}
return confidence
}
// containsCrashdumpMarkers checks if content has Supermicro crashdump markers
func containsCrashdumpMarkers(content []byte) bool {
s := string(content)
// Check for typical Supermicro Crashdump structure
return strings.Contains(s, "crash_data") &&
strings.Contains(s, "METADATA") &&
(strings.Contains(s, "bmc_fw_ver") || strings.Contains(s, "crashdump_ver"))
}
// Parse parses Supermicro crashdump archive
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
}
// Initialize hardware config
result.Hardware = &models.HardwareConfig{
CPUs: make([]models.CPU, 0),
}
// Parse CDump.txt (JSON crashdump)
if f := parser.FindFileByName(files, "CDump.txt"); f != nil {
if err := ParseCrashDump(f.Content, result); err != nil {
// Ignore the error so a malformed CDump.txt does not abort parsing
_ = err
}
}
return result, nil
}

View File

@@ -5,9 +5,15 @@ package vendors
import (
// Import vendor modules to trigger their init() registration
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/supermicro"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
// Generic fallback parser (must be last for lowest priority)
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"
// Future vendors:
// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/supermicro"
// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe"
// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo"

View File

@@ -0,0 +1,46 @@
# XigmaNAS Parser
Parser for XigmaNAS (FreeBSD-based NAS) system logs.
## Supported Files
- `xigmanas` - Main system log file with configuration and status information
- `dmesg` - Kernel messages and hardware initialization information
- SMART data from disk monitoring
## Features
This parser extracts the following information from XigmaNAS logs:
### System Information
- Firmware version
- System uptime
- CPU model and specifications
- Memory configuration
- Hardware platform information
### Storage Information
- Disk models and serial numbers
- Disk capacity and health status
- SMART temperature readings
### Hardware Configuration
- CPU information
- Memory modules
- Storage devices
## Detection Logic
The parser detects the XigmaNAS format by looking for:
- Files named `xigmanas` or ending in `dmesg`
- The `loader_brand="xigmanas"` loader variable (strongest marker)
- The "XigmaNAS kernel build" string
- A "System uptime:" section together with "Routing tables:"
- `S.M.A.R.T. [/dev/...]` report sections
## Example Output
The parser populates the following fields in AnalysisResult:
- `Hardware.Firmware` - Firmware versions
- `Hardware.CPUs` - CPU information
- `Hardware.Memory` - Memory configuration
- `Hardware.Storage` - Storage devices with SMART data
- `Sensors` - Temperature readings from SMART data
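A minimal usage sketch (the path and wiring are illustrative; in the application the registry dispatches parsers automatically):
```go
package main

import (
	"fmt"
	"os"

	"git.mchus.pro/mchus/logpile/internal/parser"
	"git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
)

func main() {
	raw, err := os.ReadFile("example/xigmanas.txt")
	if err != nil {
		panic(err)
	}
	files := []parser.ExtractedFile{{Path: "xigmanas", Content: raw}}
	p := &xigmanas.Parser{}
	if p.Detect(files) == 0 {
		return // not a XigmaNAS dump
	}
	result, err := p.Parse(files)
	if err != nil {
		panic(err)
	}
	fmt.Printf("events=%d sensors=%d storage=%d\n",
		len(result.Events), len(result.Sensors), len(result.Hardware.Storage))
}
```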

View File

@@ -0,0 +1,525 @@
// Package xigmanas provides a parser for XigmaNAS diagnostic dumps.
package xigmanas
import (
"regexp"
"strconv"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
// parserVersion - increment when parsing logic changes.
const parserVersion = "2.1.0"
func init() {
parser.Register(&Parser{})
}
// Parser implements VendorParser for XigmaNAS logs.
type Parser struct{}
func (p *Parser) Name() string { return "XigmaNAS Parser" }
func (p *Parser) Vendor() string { return "xigmanas" }
func (p *Parser) Version() string {
return parserVersion
}
// Detect checks if files contain typical XigmaNAS markers.
func (p *Parser) Detect(files []parser.ExtractedFile) int {
confidence := 0
for _, f := range files {
path := strings.ToLower(f.Path)
content := strings.ToLower(string(f.Content))
if strings.Contains(path, "xigmanas") || strings.HasSuffix(path, "dmesg") {
confidence += 20
}
if strings.Contains(content, `loader_brand="xigmanas"`) {
confidence += 70
}
if strings.Contains(content, "xigmanas kernel build") {
confidence += 35
}
if strings.Contains(content, "system uptime:") && strings.Contains(content, "routing tables:") {
confidence += 20
}
if strings.Contains(content, "s.m.a.r.t. [/dev/") {
confidence += 10
}
if confidence >= 100 {
return 100
}
}
return confidence
}
// Parse parses XigmaNAS logs and returns normalized data.
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
Hardware: &models.HardwareConfig{
Firmware: make([]models.FirmwareInfo, 0),
CPUs: make([]models.CPU, 0),
Memory: make([]models.MemoryDIMM, 0),
Storage: make([]models.Storage, 0),
},
}
content := joinFileContents(files)
if strings.TrimSpace(content) == "" {
return result, nil
}
parseSystemInfo(content, result)
parseCPU(content, result)
parseMemory(content, result)
parseUptime(content, result)
parseZFSState(content, result)
parseStorageAndSMART(content, result)
parseJournalLogSections(content, result)
return result, nil
}
func joinFileContents(files []parser.ExtractedFile) string {
var b strings.Builder
for _, f := range files {
b.Write(f.Content)
b.WriteString("\n")
}
return b.String()
}
func parseSystemInfo(content string, result *models.AnalysisResult) {
if m := regexp.MustCompile(`(?m)^Version:\s*\n-+\s*\n([^\n]+)`).FindStringSubmatch(content); len(m) == 2 {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: "XigmaNAS",
Version: strings.TrimSpace(m[1]),
})
}
if m := regexp.MustCompile(`(?m)^smbios\.bios\.version="([^"]+)"`).FindStringSubmatch(content); len(m) == 2 {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
DeviceName: "System BIOS",
Version: strings.TrimSpace(m[1]),
})
}
board := models.BoardInfo{}
if m := regexp.MustCompile(`(?m)^smbios\.system\.maker="([^"]+)"`).FindStringSubmatch(content); len(m) == 2 {
board.Manufacturer = strings.TrimSpace(m[1])
}
if m := regexp.MustCompile(`(?m)^smbios\.system\.product="([^"]+)"`).FindStringSubmatch(content); len(m) == 2 {
board.ProductName = strings.TrimSpace(m[1])
}
if m := regexp.MustCompile(`(?m)^smbios\.system\.serial="([^"]+)"`).FindStringSubmatch(content); len(m) == 2 {
board.SerialNumber = strings.TrimSpace(m[1])
}
if m := regexp.MustCompile(`(?m)^smbios\.system\.uuid="([^"]+)"`).FindStringSubmatch(content); len(m) == 2 {
board.UUID = strings.TrimSpace(m[1])
}
result.Hardware.BoardInfo = board
}
func parseCPU(content string, result *models.AnalysisResult) {
var cores, threads int
if m := regexp.MustCompile(`(?m)^FreeBSD/SMP:\s+\d+\s+package\(s\)\s+x\s+(\d+)\s+core\(s\)`).FindStringSubmatch(content); len(m) == 2 {
cores = parseInt(m[1])
threads = cores // the SMP line reports cores only; assume one thread per core
}
seen := map[string]struct{}{}
cpuRe := regexp.MustCompile(`(?m)^CPU:\s+(.+?)\s+\(([\d.]+)-MHz`)
for _, m := range cpuRe.FindAllStringSubmatch(content, -1) {
model := strings.TrimSpace(m[1])
if _, ok := seen[model]; ok {
continue
}
seen[model] = struct{}{}
result.Hardware.CPUs = append(result.Hardware.CPUs, models.CPU{
Socket: len(result.Hardware.CPUs),
Model: model,
Cores: cores,
Threads: threads,
FrequencyMHz: int(parseFloat(m[2])),
})
}
}
func parseMemory(content string, result *models.AnalysisResult) {
if m := regexp.MustCompile(`(?m)^real memory\s*=\s*\d+\s+\((\d+)\s+MB\)`).FindStringSubmatch(content); len(m) == 2 {
result.Hardware.Memory = append(result.Hardware.Memory, models.MemoryDIMM{
Slot: "system",
Present: true,
SizeMB: parseInt(m[1]),
Type: "DRAM",
Status: "ok",
})
return
}
// Fallback for logs that only have active/inactive breakdown.
if m := regexp.MustCompile(`(?m)^Mem:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
totalMB := 0
tokenRe := regexp.MustCompile(`(\d+)M`)
for _, t := range tokenRe.FindAllStringSubmatch(m[1], -1) {
totalMB += parseInt(t[1])
}
if totalMB > 0 {
result.Hardware.Memory = append(result.Hardware.Memory, models.MemoryDIMM{
Slot: "system",
Present: true,
SizeMB: totalMB,
Type: "DRAM",
Status: "estimated",
})
}
}
}
func parseUptime(content string, result *models.AnalysisResult) {
upRe := regexp.MustCompile(`(?m)^(\d+:\d+(?:AM|PM))\s+up\s+(.+?),\s+(\d+)\s+users?,\s+load averages?:\s+([\d.]+),\s+([\d.]+),\s+([\d.]+)$`)
m := upRe.FindStringSubmatch(content)
if len(m) != 7 {
return
}
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "System",
EventType: "Uptime",
Severity: models.SeverityInfo,
Description: "System uptime and load averages parsed",
RawData: "time=" + m[1] + "; uptime=" + m[2] + "; users=" + m[3] + "; load=" + m[4] + "," + m[5] + "," + m[6],
})
}
func parseZFSState(content string, result *models.AnalysisResult) {
m := regexp.MustCompile(`(?m)^state:\s+([A-Z]+)$`).FindStringSubmatch(content)
if len(m) != 2 {
return
}
state := m[1]
severity := models.SeverityInfo
if state != "ONLINE" {
severity = models.SeverityWarning
}
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "ZFS",
EventType: "Pool State",
Severity: severity,
Description: "ZFS pool state: " + state,
RawData: state,
})
}
func parseStorageAndSMART(content string, result *models.AnalysisResult) {
type smartInfo struct {
model string
serial string
firmware string
health string
tempC int
capacityB int64
}
storageBySlot := make(map[string]*models.Storage)
scsiRe := regexp.MustCompile(`(?m)^<([^>]+)>\s+at\s+scbus\d+\s+target\s+\d+\s+lun\s+\d+\s+\(([^,]+),([^)]+)\)$`)
for _, m := range scsiRe.FindAllStringSubmatch(content, -1) {
slot := strings.TrimSpace(m[3])
model, fw := splitModelAndFirmware(strings.TrimSpace(m[1]))
entry := &models.Storage{
Slot: slot,
Type: guessStorageType(slot),
Model: model,
Firmware: fw,
Present: true,
Interface: "SCSI/SATA",
}
storageBySlot[slot] = entry
}
smartBySlot := make(map[string]smartInfo)
sectionRe := regexp.MustCompile(`(?m)^S\.M\.A\.R\.T\.\s+\[(/dev/[^\]]+)\]:\s*\n-+\n`)
sections := sectionRe.FindAllStringSubmatchIndex(content, -1)
for i, sec := range sections {
// sec indexes:
// [0]=full start, [1]=full end, [2]=capture 1 start, [3]=capture 1 end
if len(sec) < 4 {
continue
}
slot := strings.TrimPrefix(strings.TrimSpace(content[sec[2]:sec[3]]), "/dev/")
bodyStart := sec[1]
bodyEnd := len(content)
if i+1 < len(sections) {
bodyEnd = sections[i+1][0]
}
body := content[bodyStart:bodyEnd]
info := smartInfo{
model: findFirst(body, `(?m)^Device Model:\s+(.+)$`),
serial: findFirst(body, `(?m)^Serial Number:\s+(.+)$`),
firmware: findFirst(body, `(?m)^Firmware Version:\s+(.+)$`),
health: findFirst(body, `(?m)^SMART overall-health self-assessment test result:\s+(.+)$`),
}
info.capacityB = parseCapacityBytes(findFirst(body, `(?m)^User Capacity:\s+([\d,]+)\s+bytes`))
if t := findFirst(body, `(?m)^\s*194\s+Temperature_Celsius.*?-\s+(\d+)(?:\s|\()`); t != "" {
info.tempC = parseInt(t)
}
smartBySlot[slot] = info
if info.tempC > 0 {
status := "ok"
if info.health != "" && !strings.EqualFold(info.health, "PASSED") {
status = "warning"
}
result.Sensors = append(result.Sensors, models.SensorReading{
Name: "disk_temp_" + slot,
Type: "temperature",
Value: float64(info.tempC),
Unit: "C",
Status: status,
RawValue: strconv.Itoa(info.tempC),
})
}
if info.health != "" && !strings.EqualFold(info.health, "PASSED") {
result.Events = append(result.Events, models.Event{
Timestamp: time.Now(),
Source: "SMART",
EventType: "Disk Health",
Severity: models.SeverityWarning,
Description: "SMART health is not PASSED for " + slot,
RawData: info.health,
})
}
}
// Merge SMART data into storage entries and add missing entries.
for slot, info := range smartBySlot {
s := storageBySlot[slot]
if s == nil {
s = &models.Storage{
Slot: slot,
Type: guessStorageType(slot),
Present: true,
Interface: "SATA",
}
storageBySlot[slot] = s
}
if s.Model == "" && info.model != "" {
s.Model = info.model
}
if info.serial != "" {
s.SerialNumber = info.serial
}
if s.Firmware == "" && info.firmware != "" {
s.Firmware = info.firmware
}
if info.capacityB > 0 {
s.SizeGB = int(info.capacityB / 1_000_000_000)
}
}
for _, s := range storageBySlot {
result.Hardware.Storage = append(result.Hardware.Storage, *s)
}
}
func parseJournalLogSections(content string, result *models.AnalysisResult) {
sections := []struct {
heading string
eventType string
source string
}{
{heading: "Last 275 System log entries:", eventType: "System Log", source: "system.log"},
{heading: "Last 275 SMARTD log entries:", eventType: "SMARTD Log", source: "smartd.log"},
{heading: "Last 275 Daemon log entries:", eventType: "Daemon Log", source: "daemon.log"},
}
for _, sec := range sections {
body := extractLogSection(content, sec.heading)
if body == "" {
continue
}
for _, line := range strings.Split(body, "\n") {
line = strings.TrimSpace(line)
if line == "" {
continue
}
msg := extractSyslogMessage(line)
if msg == "" {
msg = line
}
result.Events = append(result.Events, models.Event{
Timestamp: parseEventTimestamp(line),
Source: sec.source,
EventType: sec.eventType,
Severity: classifyEventSeverity(line),
Description: msg,
RawData: line,
})
}
}
}
func extractLogSection(content, heading string) string {
start := strings.Index(content, heading)
if start == -1 {
return ""
}
tail := content[start+len(heading):]
lines := strings.Split(tail, "\n")
i := 0
for i < len(lines) && strings.TrimSpace(lines[i]) == "" {
i++
}
if i < len(lines) && isDashLine(lines[i]) {
i++
}
out := make([]string, 0, 64)
for ; i < len(lines); i++ {
line := lines[i]
trimmed := strings.TrimSpace(line)
if strings.HasPrefix(trimmed, "Last 275 ") && strings.HasSuffix(trimmed, " log entries:") {
break
}
out = append(out, line)
}
return strings.TrimSpace(strings.Join(out, "\n"))
}
func isDashLine(s string) bool {
s = strings.TrimSpace(s)
if s == "" {
return false
}
for _, r := range s {
if r != '-' {
return false
}
}
return true
}
func parseEventTimestamp(line string) time.Time {
isoRe := regexp.MustCompile(`\b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?[+-]\d{2}:\d{2}\b`)
if iso := isoRe.FindString(line); iso != "" {
if ts, err := time.Parse(time.RFC3339Nano, iso); err == nil {
return ts
}
}
prefixRe := regexp.MustCompile(`^[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}`)
if prefix := prefixRe.FindString(line); prefix != "" {
year := time.Now().Year()
if ts, err := time.Parse("Jan 2 15:04:05 2006", prefix+" "+strconv.Itoa(year)); err == nil {
return ts
}
}
return time.Now()
}
func classifyEventSeverity(line string) models.Severity {
lower := strings.ToLower(line)
switch {
case strings.Contains(lower, "panic"), strings.Contains(lower, "fatal"), strings.Contains(lower, "critical"):
return models.SeverityCritical
case strings.Contains(lower, "warning"),
strings.Contains(lower, "error"),
strings.Contains(lower, "failed"),
strings.Contains(lower, "failure"),
strings.Contains(lower, "login failure"),
strings.Contains(lower, "limiting open port"):
return models.SeverityWarning
default:
return models.SeverityInfo
}
}
func extractSyslogMessage(line string) string {
if idx := strings.Index(line, ": "); idx != -1 && idx+2 < len(line) {
return strings.TrimSpace(line[idx+2:])
}
// RFC5424-like segment in XigmaNAS dumps: "... <host> <proc> <pid> - - <message>"
fields := strings.Fields(line)
if len(fields) > 10 {
return strings.TrimSpace(strings.Join(fields[10:], " "))
}
return strings.TrimSpace(line)
}
func splitModelAndFirmware(raw string) (string, string) {
fields := strings.Fields(raw)
if len(fields) < 2 {
return raw, ""
}
last := fields[len(fields)-1]
// Firmware token is usually compact (e.g. GKAOAB0A, 1.00).
if regexp.MustCompile(`^[A-Za-z0-9._-]{2,12}$`).MatchString(last) {
return strings.TrimSpace(strings.Join(fields[:len(fields)-1], " ")), last
}
return raw, ""
}
func guessStorageType(slot string) string {
switch {
case strings.HasPrefix(slot, "cd"):
return "optical"
case strings.HasPrefix(slot, "da"), strings.HasPrefix(slot, "ada"):
return "hdd"
default:
return "unknown"
}
}
func findFirst(content, expr string) string {
m := regexp.MustCompile(expr).FindStringSubmatch(content)
if len(m) != 2 {
return ""
}
return strings.TrimSpace(m[1])
}
func parseCapacityBytes(s string) int64 {
clean := strings.ReplaceAll(strings.TrimSpace(s), ",", "")
if clean == "" {
return 0
}
v, err := strconv.ParseInt(clean, 10, 64)
if err != nil {
return 0
}
return v
}
func parseInt(s string) int {
v, _ := strconv.Atoi(strings.TrimSpace(s))
return v
}
func parseFloat(s string) float64 {
v, _ := strconv.ParseFloat(strings.TrimSpace(s), 64)
return v
}

View File

@@ -0,0 +1,116 @@
package xigmanas
import (
"os"
"path/filepath"
"strings"
"testing"
"git.mchus.pro/mchus/logpile/internal/parser"
)
func TestParserDetect(t *testing.T) {
p := &Parser{}
files := []parser.ExtractedFile{
{
Path: "xigmanas",
Content: []byte(`Version:
--------
14.3.0.5
loader_brand="XigmaNAS"`),
},
}
if got := p.Detect(files); got < 70 {
t.Fatalf("expected high confidence, got %d", got)
}
files2 := []parser.ExtractedFile{
{
Path: "random_file.txt",
Content: []byte("Some random content"),
},
}
if got := p.Detect(files2); got != 0 {
t.Fatalf("expected zero confidence, got %d", got)
}
}
func TestParserParseExample(t *testing.T) {
p := &Parser{}
examplePath := filepath.Join("..", "..", "..", "..", "example", "xigmanas.txt")
raw, err := os.ReadFile(examplePath)
if err != nil {
t.Fatalf("read example file: %v", err)
}
files := []parser.ExtractedFile{
{Path: "xigmanas", Content: raw},
}
result, err := p.Parse(files)
if err != nil {
t.Fatalf("parse failed: %v", err)
}
if result == nil || result.Hardware == nil {
t.Fatal("expected non-nil result with hardware")
}
if len(result.Hardware.Firmware) == 0 {
t.Fatal("expected firmware data")
}
foundXigmaVersion := false
for _, fw := range result.Hardware.Firmware {
if fw.DeviceName == "XigmaNAS" && fw.Version == "14.3.0.5" {
foundXigmaVersion = true
}
}
if !foundXigmaVersion {
t.Fatalf("expected XigmaNAS firmware version 14.3.0.5, got %+v", result.Hardware.Firmware)
}
if result.Hardware.BoardInfo.Manufacturer != "HP" {
t.Fatalf("expected board manufacturer HP, got %q", result.Hardware.BoardInfo.Manufacturer)
}
if len(result.Hardware.CPUs) == 0 {
t.Fatal("expected at least one CPU")
}
if !strings.Contains(strings.ToLower(result.Hardware.CPUs[0].Model), "athlon") {
t.Fatalf("expected CPU model to contain athlon, got %q", result.Hardware.CPUs[0].Model)
}
if len(result.Hardware.Storage) < 4 {
t.Fatalf("expected at least 4 storage devices, got %d", len(result.Hardware.Storage))
}
if len(result.Sensors) == 0 {
t.Fatal("expected SMART temperature sensors")
}
if len(result.Events) == 0 {
t.Fatal("expected events from uptime/zfs sections")
}
var hasSystemLog, hasSmartdLog, hasDaemonLog, hasLoginFailure bool
for _, ev := range result.Events {
if ev.EventType == "System Log" {
hasSystemLog = true
}
if ev.EventType == "SMARTD Log" {
hasSmartdLog = true
}
if ev.EventType == "Daemon Log" {
hasDaemonLog = true
}
if strings.Contains(strings.ToLower(ev.Description), "login failure") {
hasLoginFailure = true
}
}
if !hasSystemLog || !hasSmartdLog || !hasDaemonLog {
t.Fatalf("expected events from System/SMARTD/Daemon sections, got system=%v smartd=%v daemon=%v", hasSystemLog, hasSmartdLog, hasDaemonLog)
}
if !hasLoginFailure {
t.Fatal("expected to parse login failure event from system log section")
}
}

View File

@@ -0,0 +1,260 @@
package server
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
func newCollectTestServer() (*Server, *httptest.Server) {
s := &Server{
jobManager: NewJobManager(),
collectors: testCollectorRegistry(),
}
mux := http.NewServeMux()
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
return s, httptest.NewServer(mux)
}
func TestCollectLifecycleToTerminal(t *testing.T) {
_, ts := newCollectTestServer()
defer ts.Close()
body := `{"host":"bmc01.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
resp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusAccepted {
t.Fatalf("expected 202, got %d", resp.StatusCode)
}
var created CollectJobResponse
if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
t.Fatalf("decode create response: %v", err)
}
if created.JobID == "" {
t.Fatalf("expected job id")
}
status := waitForTerminalStatus(t, ts.URL, created.JobID, 4*time.Second)
if status.Status != CollectStatusSuccess {
t.Fatalf("expected success, got %q (error=%q)", status.Status, status.Error)
}
if status.Progress == nil || *status.Progress != 100 {
t.Fatalf("expected progress 100, got %#v", status.Progress)
}
if len(status.Logs) < 4 {
t.Fatalf("expected detailed logs, got %v", status.Logs)
}
}
func TestCollectCancel(t *testing.T) {
_, ts := newCollectTestServer()
defer ts.Close()
body := `{"host":"bmc02.local","protocol":"ipmi","port":623,"username":"operator","auth_type":"token","token":"keep-me-secret","tls_mode":"insecure"}`
resp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect failed: %v", err)
}
defer resp.Body.Close()
var created CollectJobResponse
if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
t.Fatalf("decode create response: %v", err)
}
cancelResp, err := http.Post(ts.URL+"/api/collect/"+created.JobID+"/cancel", "application/json", nil)
if err != nil {
t.Fatalf("cancel collect failed: %v", err)
}
defer cancelResp.Body.Close()
if cancelResp.StatusCode != http.StatusOK {
t.Fatalf("expected 200 cancel, got %d", cancelResp.StatusCode)
}
var canceled CollectJobStatusResponse
if err := json.NewDecoder(cancelResp.Body).Decode(&canceled); err != nil {
t.Fatalf("decode cancel response: %v", err)
}
if canceled.Status != CollectStatusCanceled {
t.Fatalf("expected canceled, got %q", canceled.Status)
}
time.Sleep(500 * time.Millisecond)
final := getCollectStatus(t, ts.URL, created.JobID, http.StatusOK)
if final.Status != CollectStatusCanceled {
t.Fatalf("expected canceled to stay terminal, got %q", final.Status)
}
}
func TestCollectNotFoundAndSecretLeak(t *testing.T) {
_, ts := newCollectTestServer()
defer ts.Close()
notFound := getCollectStatus(t, ts.URL, "job_notfound123", http.StatusNotFound)
if notFound.JobID != "" || notFound.Status != "" {
t.Fatalf("unexpected body for not found: %+v", notFound)
}
cancelResp, err := http.Post(ts.URL+"/api/collect/job_notfound123/cancel", "application/json", nil)
if err != nil {
t.Fatalf("cancel not found request failed: %v", err)
}
cancelResp.Body.Close()
if cancelResp.StatusCode != http.StatusNotFound {
t.Fatalf("expected 404 for cancel not found, got %d", cancelResp.StatusCode)
}
body := `{"host":"need-fail.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"ultra-secret","tls_mode":"strict"}`
resp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect failed: %v", err)
}
defer resp.Body.Close()
var created CollectJobResponse
if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
t.Fatalf("decode create response: %v", err)
}
status := waitForTerminalStatus(t, ts.URL, created.JobID, 4*time.Second)
if status.Status != CollectStatusFailed {
t.Fatalf("expected failed by host toggle, got %q", status.Status)
}
raw, err := json.Marshal(status)
if err != nil {
t.Fatalf("marshal status: %v", err)
}
if strings.Contains(string(raw), "ultra-secret") || strings.Contains(strings.Join(status.Logs, " "), "ultra-secret") {
t.Fatalf("secret leaked into API response or logs")
}
}
func TestCollectStartPreservesCurrentResultUntilSuccess(t *testing.T) {
s, ts := newCollectTestServer()
defer ts.Close()
existing := &models.AnalysisResult{
Filename: "archive.tar.gz",
SourceType: models.SourceTypeArchive,
CollectedAt: time.Now().UTC(),
}
s.SetResult(existing)
body := `{"host":"bmc-success.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
resp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect failed: %v", err)
}
defer resp.Body.Close()
var created CollectJobResponse
if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
t.Fatalf("decode create response: %v", err)
}
current := s.GetResult()
if current != existing {
t.Fatalf("expected current result to stay unchanged before success")
}
status := waitForTerminalStatus(t, ts.URL, created.JobID, 4*time.Second)
if status.Status != CollectStatusSuccess {
t.Fatalf("expected success, got %q", status.Status)
}
finalResult := s.GetResult()
if finalResult == nil {
t.Fatalf("expected result to be set on success")
}
if finalResult.SourceType != models.SourceTypeAPI {
t.Fatalf("expected api source type after success, got %q", finalResult.SourceType)
}
if finalResult.TargetHost != "bmc-success.local" {
t.Fatalf("expected target host to be updated, got %q", finalResult.TargetHost)
}
}
func TestCollectFailedDoesNotOverwriteCurrentResult(t *testing.T) {
s, ts := newCollectTestServer()
defer ts.Close()
existing := &models.AnalysisResult{
Filename: "still-archive.tar.gz",
SourceType: models.SourceTypeArchive,
CollectedAt: time.Now().UTC(),
}
s.SetResult(existing)
body := `{"host":"contains-fail.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
resp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect failed: %v", err)
}
defer resp.Body.Close()
var created CollectJobResponse
if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
t.Fatalf("decode create response: %v", err)
}
status := waitForTerminalStatus(t, ts.URL, created.JobID, 4*time.Second)
if status.Status != CollectStatusFailed {
t.Fatalf("expected failed, got %q", status.Status)
}
finalResult := s.GetResult()
if finalResult != existing {
t.Fatalf("expected existing result to remain on failed job")
}
}
func waitForTerminalStatus(t *testing.T, baseURL, jobID string, timeout time.Duration) CollectJobStatusResponse {
t.Helper()
deadline := time.Now().Add(timeout)
for time.Now().Before(deadline) {
status := getCollectStatus(t, baseURL, jobID, http.StatusOK)
if status.Status == CollectStatusSuccess || status.Status == CollectStatusFailed || status.Status == CollectStatusCanceled {
return status
}
time.Sleep(100 * time.Millisecond)
}
t.Fatalf("job %s did not reach terminal status before timeout", jobID)
return CollectJobStatusResponse{}
}
func getCollectStatus(t *testing.T, baseURL, jobID string, expectedCode int) CollectJobStatusResponse {
t.Helper()
resp, err := http.Get(baseURL + "/api/collect/" + jobID)
if err != nil {
t.Fatalf("get collect status failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != expectedCode {
t.Fatalf("expected status %d, got %d", expectedCode, resp.StatusCode)
}
if expectedCode != http.StatusOK {
return CollectJobStatusResponse{}
}
var status CollectJobStatusResponse
if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
t.Fatalf("decode collect status: %v", err)
}
return status
}

View File

@@ -0,0 +1,63 @@
package server
import (
"context"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/collector"
"git.mchus.pro/mchus/logpile/internal/models"
)
type mockConnector struct {
protocol string
}
func (c *mockConnector) Protocol() string {
return c.protocol
}
func (c *mockConnector) Collect(ctx context.Context, req collector.Request, emit collector.ProgressFn) (*models.AnalysisResult, error) {
steps := []collector.Progress{
{Status: CollectStatusRunning, Progress: 20, Message: "Connecting..."},
{Status: CollectStatusRunning, Progress: 50, Message: "Collecting inventory..."},
{Status: CollectStatusRunning, Progress: 80, Message: "Normalizing..."},
}
for _, step := range steps {
if !collectorSleep(ctx, 100*time.Millisecond) {
return nil, ctx.Err()
}
if emit != nil {
emit(step)
}
}
if strings.Contains(strings.ToLower(req.Host), "fail") {
return nil, context.DeadlineExceeded
}
return &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
Hardware: &models.HardwareConfig{},
}, nil
}
func testCollectorRegistry() *collector.Registry {
r := collector.NewRegistry()
r.Register(&mockConnector{protocol: "redfish"})
r.Register(&mockConnector{protocol: "ipmi"})
return r
}
func collectorSleep(ctx context.Context, d time.Duration) bool {
timer := time.NewTimer(d)
defer timer.Stop()
select {
case <-ctx.Done():
return false
case <-timer.C:
return true
}
}

View File

@@ -0,0 +1,83 @@
package server
import "time"
const (
CollectStatusQueued = "queued"
CollectStatusRunning = "running"
CollectStatusSuccess = "success"
CollectStatusFailed = "failed"
CollectStatusCanceled = "canceled"
)
type CollectRequest struct {
Host string `json:"host"`
Protocol string `json:"protocol"`
Port int `json:"port"`
Username string `json:"username"`
AuthType string `json:"auth_type"`
Password string `json:"password,omitempty"`
Token string `json:"token,omitempty"`
TLSMode string `json:"tls_mode"`
}
type CollectJobResponse struct {
JobID string `json:"job_id"`
Status string `json:"status"`
Message string `json:"message,omitempty"`
CreatedAt time.Time `json:"created_at"`
}
type CollectJobStatusResponse struct {
JobID string `json:"job_id"`
Status string `json:"status"`
Progress *int `json:"progress,omitempty"`
Logs []string `json:"logs,omitempty"`
Error string `json:"error,omitempty"`
CreatedAt time.Time `json:"created_at,omitempty"`
UpdatedAt time.Time `json:"updated_at"`
}
type CollectRequestMeta struct {
Host string `json:"host"`
Protocol string `json:"protocol"`
Port int `json:"port"`
Username string `json:"username"`
AuthType string `json:"auth_type"`
TLSMode string `json:"tls_mode"`
}
type Job struct {
ID string
Status string
Progress int
Logs []string
Error string
CreatedAt time.Time
UpdatedAt time.Time
RequestMeta CollectRequestMeta
cancel func()
}
func (j *Job) toStatusResponse() CollectJobStatusResponse {
progress := j.Progress
resp := CollectJobStatusResponse{
JobID: j.ID,
Status: j.Status,
Progress: &progress,
Logs: append([]string(nil), j.Logs...),
Error: j.Error,
CreatedAt: j.CreatedAt,
UpdatedAt: j.UpdatedAt,
}
return resp
}
func (j *Job) toJobResponse(message string) CollectJobResponse {
return CollectJobResponse{
JobID: j.ID,
Status: j.Status,
Message: message,
CreatedAt: j.CreatedAt,
}
}
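// exampleCollectRequest is an illustrative sketch of the JSON contract accepted
// by POST /api/collect (hypothetical values, mirroring the handler tests).
func exampleCollectRequest() CollectRequest {
return CollectRequest{
Host: "bmc01.local",
Protocol: "redfish",
Port: 443,
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "strict",
}
}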

View File

@@ -1,14 +1,22 @@
package server
import (
"bytes"
"context"
"crypto/rand"
"encoding/json"
"fmt"
"html/template"
"io"
"net/http"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/collector"
"git.mchus.pro/mchus/logpile/internal/exporter"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
@@ -50,22 +58,48 @@ func (s *Server) handleUpload(w http.ResponseWriter, r *http.Request) {
}
defer file.Close()
// Parse archive
p := parser.NewBMCParser()
if err := p.ParseFromReader(file, header.Filename); err != nil {
jsonError(w, "Failed to parse archive: "+err.Error(), http.StatusBadRequest)
payload, err := io.ReadAll(file)
if err != nil {
jsonError(w, "Failed to read file", http.StatusBadRequest)
return
}
result := p.Result()
var (
result *models.AnalysisResult
vendor string
)
if looksLikeJSONSnapshot(header.Filename, payload) {
snapshotResult, snapshotErr := parseUploadedSnapshot(payload)
if snapshotErr != nil {
jsonError(w, "Failed to parse snapshot: "+snapshotErr.Error(), http.StatusBadRequest)
return
}
result = snapshotResult
vendor = strings.TrimSpace(snapshotResult.Protocol)
if vendor == "" {
vendor = "snapshot"
}
} else {
// Parse archive
p := parser.NewBMCParser()
if err := p.ParseFromReader(bytes.NewReader(payload), header.Filename); err != nil {
jsonError(w, "Failed to parse archive: "+err.Error(), http.StatusBadRequest)
return
}
result = p.Result()
applyArchiveSourceMetadata(result)
vendor = p.DetectedVendor()
}
s.SetResult(result)
s.SetDetectedVendor(p.DetectedVendor())
s.SetDetectedVendor(vendor)
jsonResponse(w, map[string]interface{}{
"status": "ok",
"message": "File uploaded and parsed successfully",
"filename": header.Filename,
"vendor": p.DetectedVendor(),
"vendor": vendor,
"stats": map[string]int{
"events": len(result.Events),
"sensors": len(result.Sensors),
@@ -86,7 +120,17 @@ func (s *Server) handleGetEvents(w http.ResponseWriter, r *http.Request) {
jsonResponse(w, []interface{}{})
return
}
jsonResponse(w, result.Events)
// Sort a copy of the events by timestamp (newest first)
events := make([]models.Event, len(result.Events))
copy(events, result.Events)
sort.Slice(events, func(i, j int) bool {
return events[i].Timestamp.After(events[j].Timestamp)
})
jsonResponse(w, events)
}
func (s *Server) handleGetSensors(w http.ResponseWriter, r *http.Request) {
@@ -100,18 +144,31 @@ func (s *Server) handleGetSensors(w http.ResponseWriter, r *http.Request) {
func (s *Server) handleGetConfig(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
if result == nil || result.Hardware == nil {
if result == nil {
jsonResponse(w, map[string]interface{}{})
return
}
response := map[string]interface{}{
"source_type": result.SourceType,
"protocol": result.Protocol,
"target_host": result.TargetHost,
"collected_at": result.CollectedAt,
}
if result.Hardware == nil {
response["hardware"] = map[string]interface{}{}
response["specification"] = []SpecLine{}
jsonResponse(w, response)
return
}
// Build specification summary
spec := buildSpecification(result)
jsonResponse(w, map[string]interface{}{
"hardware": result.Hardware,
"specification": spec,
})
response["hardware"] = result.Hardware
response["specification"] = spec
jsonResponse(w, response)
}
// SpecLine represents a single line in specification
@@ -145,10 +202,20 @@ func buildSpecification(result *models.AnalysisResult) []SpecLine {
spec = append(spec, SpecLine{Category: "Процессор", Name: name, Quantity: count})
}
// Memory - group by size and type
// Memory - group by size, type and frequency (only installed modules)
memGroups := make(map[string]int)
for _, mem := range hw.Memory {
key := fmt.Sprintf("%s %dGB", mem.Type, mem.SizeMB/1024)
// Skip empty slots (not present or 0 size)
if !mem.Present || mem.SizeMB == 0 {
continue
}
// Include frequency if available
key := ""
if mem.CurrentSpeedMHz > 0 {
key = fmt.Sprintf("%s %dGB %dMHz", mem.Type, mem.SizeMB/1024, mem.CurrentSpeedMHz)
} else {
key = fmt.Sprintf("%s %dGB", mem.Type, mem.SizeMB/1024)
}
memGroups[key]++
}
for key, count := range memGroups {
@@ -471,9 +538,13 @@ func (s *Server) handleGetStatus(w http.ResponseWriter, r *http.Request) {
}
jsonResponse(w, map[string]interface{}{
"loaded": true,
"filename": result.Filename,
"vendor": s.GetDetectedVendor(),
"loaded": true,
"filename": result.Filename,
"vendor": s.GetDetectedVendor(),
"source_type": result.SourceType,
"protocol": result.Protocol,
"target_host": result.TargetHost,
"collected_at": result.CollectedAt,
"stats": map[string]int{
"events": len(result.Events),
"sensors": len(result.Sensors),
@@ -486,7 +557,7 @@ func (s *Server) handleExportCSV(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
w.Header().Set("Content-Type", "text/csv; charset=utf-8")
w.Header().Set("Content-Disposition", "attachment; filename=serials.csv")
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", exportFilename(result, "csv")))
exp := exporter.New(result)
exp.ExportCSV(w)
@@ -496,7 +567,7 @@ func (s *Server) handleExportJSON(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
w.Header().Set("Content-Type", "application/json")
w.Header().Set("Content-Disposition", "attachment; filename=report.json")
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", exportFilename(result, "json")))
exp := exporter.New(result)
exp.ExportJSON(w)
@@ -506,7 +577,7 @@ func (s *Server) handleExportTXT(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Disposition", "attachment; filename=report.txt")
w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", exportFilename(result, "txt")))
exp := exporter.New(result)
exp.ExportTXT(w)
@@ -535,6 +606,240 @@ func (s *Server) handleShutdown(w http.ResponseWriter, r *http.Request) {
}()
}
func (s *Server) handleCollectStart(w http.ResponseWriter, r *http.Request) {
var req CollectRequest
decoder := json.NewDecoder(r.Body)
decoder.DisallowUnknownFields()
if err := decoder.Decode(&req); err != nil {
jsonError(w, "Invalid JSON body", http.StatusBadRequest)
return
}
if err := validateCollectRequest(req); err != nil {
jsonError(w, err.Error(), http.StatusUnprocessableEntity)
return
}
job := s.jobManager.CreateJob(req)
s.startCollectionJob(job.ID, req)
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusAccepted)
_ = json.NewEncoder(w).Encode(job.toJobResponse("Collection job accepted"))
}
func (s *Server) handleCollectStatus(w http.ResponseWriter, r *http.Request) {
jobID := strings.TrimSpace(r.PathValue("id"))
if !isValidCollectJobID(jobID) {
jsonError(w, "Invalid collect job id", http.StatusBadRequest)
return
}
job, ok := s.jobManager.GetJob(jobID)
if !ok {
jsonError(w, "Collect job not found", http.StatusNotFound)
return
}
jsonResponse(w, job.toStatusResponse())
}
func (s *Server) handleCollectCancel(w http.ResponseWriter, r *http.Request) {
jobID := strings.TrimSpace(r.PathValue("id"))
if !isValidCollectJobID(jobID) {
jsonError(w, "Invalid collect job id", http.StatusBadRequest)
return
}
job, ok := s.jobManager.CancelJob(jobID)
if !ok {
jsonError(w, "Collect job not found", http.StatusNotFound)
return
}
jsonResponse(w, job.toStatusResponse())
}
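Together these three handlers form the live-collection contract: POST /api/collect answers 202 with a job id, GET /api/collect/{id} reports progress, and POST /api/collect/{id}/cancel stops the job. A minimal client sketch, assuming the usual imports (encoding/json, fmt, net/http, strings, time); the jobResponse type below is illustrative and only names the JSON fields the web UI actually reads:

// jobResponse mirrors the response fields consumed by the frontend (hypothetical type).
type jobResponse struct {
	JobID    string `json:"job_id"`
	Status   string `json:"status"`
	Progress int    `json:"progress"`
	Error    string `json:"error"`
}

func collectOnce(base string) error {
	payload := strings.NewReader(`{"host":"bmc01.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"insecure"}`)
	resp, err := http.Post(base+"/api/collect", "application/json", payload)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var job jobResponse
	if err := json.NewDecoder(resp.Body).Decode(&job); err != nil {
		return err
	}
	// Poll until the job reaches a terminal status.
	for {
		time.Sleep(time.Second)
		st, err := http.Get(base + "/api/collect/" + job.JobID)
		if err != nil {
			return err
		}
		var cur jobResponse
		decodeErr := json.NewDecoder(st.Body).Decode(&cur)
		st.Body.Close()
		if decodeErr != nil {
			return decodeErr
		}
		switch cur.Status {
		case "success":
			return nil
		case "failed", "canceled":
			return fmt.Errorf("collect %s: %s", cur.Status, cur.Error)
		}
	}
}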
func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
ctx, cancel := context.WithCancel(context.Background())
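// The job can reach a terminal state (e.g. an immediate cancel) before the
// cancel func is attached; in that case release the context and do not start.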
if attached := s.jobManager.AttachJobCancel(jobID, cancel); !attached {
cancel()
return
}
go func() {
connector, ok := s.getCollector(req.Protocol)
if !ok {
s.jobManager.UpdateJobStatus(jobID, CollectStatusFailed, 100, "Коннектор для протокола не зарегистрирован")
s.jobManager.AppendJobLog(jobID, "Сбор завершен с ошибкой")
return
}
emitProgress := func(update collector.Progress) {
if job, ok := s.jobManager.GetJob(jobID); !ok || isTerminalCollectStatus(job.Status) {
return
}
status := update.Status
if status == "" {
status = CollectStatusRunning
}
s.jobManager.UpdateJobStatus(jobID, status, update.Progress, "")
if update.Message != "" {
s.jobManager.AppendJobLog(jobID, update.Message)
}
}
result, err := connector.Collect(ctx, toCollectorRequest(req), emitProgress)
if err != nil {
if ctx.Err() != nil {
return
}
if job, ok := s.jobManager.GetJob(jobID); !ok || isTerminalCollectStatus(job.Status) {
return
}
s.jobManager.UpdateJobStatus(jobID, CollectStatusFailed, 100, err.Error())
s.jobManager.AppendJobLog(jobID, "Сбор завершен с ошибкой")
return
}
if job, ok := s.jobManager.GetJob(jobID); !ok || isTerminalCollectStatus(job.Status) {
return
}
applyCollectSourceMetadata(result, req)
s.jobManager.UpdateJobStatus(jobID, CollectStatusSuccess, 100, "")
s.jobManager.AppendJobLog(jobID, "Сбор завершен")
s.SetResult(result)
s.SetDetectedVendor(req.Protocol)
}()
}
func validateCollectRequest(req CollectRequest) error {
if strings.TrimSpace(req.Host) == "" {
return fmt.Errorf("field 'host' is required")
}
switch req.Protocol {
case "redfish", "ipmi":
default:
return fmt.Errorf("field 'protocol' must be one of: redfish, ipmi")
}
if req.Port < 1 || req.Port > 65535 {
return fmt.Errorf("field 'port' must be in range 1..65535")
}
if strings.TrimSpace(req.Username) == "" {
return fmt.Errorf("field 'username' is required")
}
switch req.AuthType {
case "password":
if strings.TrimSpace(req.Password) == "" {
return fmt.Errorf("field 'password' is required when auth_type=password")
}
case "token":
if strings.TrimSpace(req.Token) == "" {
return fmt.Errorf("field 'token' is required when auth_type=token")
}
default:
return fmt.Errorf("field 'auth_type' must be one of: password, token")
}
switch req.TLSMode {
case "strict", "insecure":
default:
return fmt.Errorf("field 'tls_mode' must be one of: strict, insecure")
}
return nil
}
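Two payloads against this validation, as a sketch (all values made up):

// Accepted: every required field set, known protocol/auth_type/tls_mode values.
ok := CollectRequest{Host: "10.0.0.10", Protocol: "redfish", Port: 443,
	Username: "admin", AuthType: "password", Password: "secret", TLSMode: "strict"}
_ = validateCollectRequest(ok) // nil

// Rejected (the handler answers 422): token auth without a token.
bad := ok
bad.AuthType, bad.Password, bad.Token = "token", "", ""
_ = validateCollectRequest(bad) // "field 'token' is required when auth_type=token"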
var collectJobIDPattern = regexp.MustCompile(`^job_[a-zA-Z0-9_-]{8,}$`)
func isValidCollectJobID(id string) bool {
return collectJobIDPattern.MatchString(id)
}
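// Note: both ID forms produced below satisfy collectJobIDPattern: 16 hex chars
// from crypto/rand, or the UnixNano decimal fallback, each well past the
// 8-character minimum.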
func generateJobID() string {
buf := make([]byte, 8)
if _, err := rand.Read(buf); err != nil {
return fmt.Sprintf("job_%d", time.Now().UnixNano())
}
return fmt.Sprintf("job_%x", buf)
}
func applyArchiveSourceMetadata(result *models.AnalysisResult) {
if result == nil {
return
}
result.SourceType = models.SourceTypeArchive
result.Protocol = ""
result.TargetHost = ""
result.CollectedAt = time.Now().UTC()
}
func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectRequest) {
if result == nil {
return
}
result.SourceType = models.SourceTypeAPI
result.Protocol = req.Protocol
result.TargetHost = req.Host
result.CollectedAt = time.Now().UTC()
if strings.TrimSpace(result.Filename) == "" {
result.Filename = fmt.Sprintf("%s://%s", req.Protocol, req.Host)
}
}
func toCollectorRequest(req CollectRequest) collector.Request {
return collector.Request{
Host: req.Host,
Protocol: req.Protocol,
Port: req.Port,
Username: req.Username,
AuthType: req.AuthType,
Password: req.Password,
Token: req.Token,
TLSMode: req.TLSMode,
}
}
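// looksLikeJSONSnapshot accepts a payload as a snapshot either by file extension
// or by sniffing the first non-whitespace byte, so a JSON object uploaded as
// "report.txt" is still routed to the snapshot path.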
func looksLikeJSONSnapshot(filename string, payload []byte) bool {
ext := strings.ToLower(filepath.Ext(filename))
if ext == ".json" {
return true
}
trimmed := bytes.TrimSpace(payload)
return len(trimmed) > 0 && (trimmed[0] == '{' || trimmed[0] == '[')
}
func parseUploadedSnapshot(payload []byte) (*models.AnalysisResult, error) {
var result models.AnalysisResult
if err := json.Unmarshal(payload, &result); err != nil {
return nil, err
}
if result.Hardware == nil && len(result.Events) == 0 && len(result.Sensors) == 0 && len(result.FRU) == 0 {
return nil, fmt.Errorf("unsupported snapshot format")
}
if strings.TrimSpace(result.SourceType) == "" {
if result.Protocol != "" {
result.SourceType = models.SourceTypeAPI
} else {
result.SourceType = models.SourceTypeArchive
}
}
if result.CollectedAt.IsZero() {
result.CollectedAt = time.Now().UTC()
}
if strings.TrimSpace(result.Filename) == "" {
result.Filename = "uploaded_snapshot.json"
}
return &result, nil
}
func (s *Server) getCollector(protocol string) (collector.Connector, bool) {
if s.collectors == nil {
s.collectors = collector.NewDefaultRegistry()
}
return s.collectors.Get(protocol)
}
func jsonResponse(w http.ResponseWriter, data interface{}) {
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(data)
@@ -567,3 +872,59 @@ func isGPUDevice(deviceClass string) bool {
}
return false
}
func exportFilename(result *models.AnalysisResult, ext string) string {
date := time.Now().UTC().Format("2006-01-02")
model := "SERVER MODEL"
sn := "SERVER SN"
if result != nil {
if !result.CollectedAt.IsZero() {
date = result.CollectedAt.UTC().Format("2006-01-02")
}
if result.Hardware != nil {
if m := strings.TrimSpace(result.Hardware.BoardInfo.ProductName); m != "" {
model = m
}
if serial := strings.TrimSpace(result.Hardware.BoardInfo.SerialNumber); serial != "" {
sn = serial
}
}
}
model = sanitizeFilenamePart(model)
sn = sanitizeFilenamePart(sn)
ext = strings.TrimPrefix(strings.TrimSpace(ext), ".")
if ext == "" {
ext = "txt"
}
return fmt.Sprintf("%s (%s) - %s.%s", date, model, sn, ext)
}
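A quick sketch of the resulting names (the model and serial values here are hypothetical):

// exportFilename(result, "csv") with CollectedAt 2026-02-04 and
// BoardInfo{ProductName: "NF5468M6", SerialNumber: "SN1234567"}
// yields: "2026-02-04 (NF5468M6) - SN1234567.csv"
//
// exportFilename(nil, "txt") falls back to the placeholders:
// "<today, UTC> (SERVER MODEL) - SERVER SN.txt"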
func sanitizeFilenamePart(v string) string {
v = strings.TrimSpace(v)
if v == "" {
return "-"
}
replacer := strings.NewReplacer(
"/", "_",
"\\", "_",
":", "_",
"*", "_",
"?", "_",
"\"", "_",
"<", "_",
">", "_",
"|", "_",
"\n", " ",
"\r", " ",
"\t", " ",
)
v = replacer.Replace(v)
v = strings.Join(strings.Fields(v), " ")
if v == "" {
return "-"
}
return v
}


@@ -0,0 +1,168 @@
package server
import (
"context"
"sync"
"time"
)
type JobManager struct {
mu sync.RWMutex
jobs map[string]*Job
}
func NewJobManager() *JobManager {
return &JobManager{
jobs: make(map[string]*Job),
}
}
func (m *JobManager) CreateJob(req CollectRequest) *Job {
now := time.Now().UTC()
job := &Job{
ID: generateJobID(),
Status: CollectStatusQueued,
Progress: 0,
Logs: []string{"Задача поставлена в очередь"},
CreatedAt: now,
UpdatedAt: now,
RequestMeta: CollectRequestMeta{
Host: req.Host,
Protocol: req.Protocol,
Port: req.Port,
Username: req.Username,
AuthType: req.AuthType,
TLSMode: req.TLSMode,
},
}
m.mu.Lock()
m.jobs[job.ID] = job
m.mu.Unlock()
return cloneJob(job)
}
func (m *JobManager) GetJob(id string) (*Job, bool) {
m.mu.RLock()
job, ok := m.jobs[id]
m.mu.RUnlock()
if !ok || job == nil {
return nil, false
}
return cloneJob(job), true
}
func (m *JobManager) CancelJob(id string) (*Job, bool) {
m.mu.Lock()
job, ok := m.jobs[id]
if !ok || job == nil {
m.mu.Unlock()
return nil, false
}
if !isTerminalCollectStatus(job.Status) {
job.Status = CollectStatusCanceled
job.Error = ""
job.UpdatedAt = time.Now().UTC()
job.Logs = append(job.Logs, "Сбор отменен пользователем")
}
cancelFn := job.cancel
job.cancel = nil
cloned := cloneJob(job)
m.mu.Unlock()
if cancelFn != nil {
cancelFn()
}
return cloned, true
}
func (m *JobManager) UpdateJobStatus(id, status string, progress int, errMsg string) (*Job, bool) {
m.mu.Lock()
job, ok := m.jobs[id]
if !ok || job == nil {
m.mu.Unlock()
return nil, false
}
if isTerminalCollectStatus(job.Status) {
cloned := cloneJob(job)
m.mu.Unlock()
return cloned, true
}
job.Status = status
job.Progress = normalizeProgress(progress)
job.Error = errMsg
job.UpdatedAt = time.Now().UTC()
if isTerminalCollectStatus(status) {
job.cancel = nil
}
cloned := cloneJob(job)
m.mu.Unlock()
return cloned, true
}
func (m *JobManager) AppendJobLog(id, message string) (*Job, bool) {
if message == "" {
return m.GetJob(id)
}
m.mu.Lock()
job, ok := m.jobs[id]
if !ok || job == nil {
m.mu.Unlock()
return nil, false
}
job.Logs = append(job.Logs, message)
job.UpdatedAt = time.Now().UTC()
cloned := cloneJob(job)
m.mu.Unlock()
return cloned, true
}
func (m *JobManager) AttachJobCancel(id string, cancelFn context.CancelFunc) bool {
m.mu.Lock()
defer m.mu.Unlock()
job, ok := m.jobs[id]
if !ok || job == nil || isTerminalCollectStatus(job.Status) {
return false
}
job.cancel = cancelFn
return true
}
func isTerminalCollectStatus(status string) bool {
switch status {
case CollectStatusSuccess, CollectStatusFailed, CollectStatusCanceled:
return true
default:
return false
}
}
func normalizeProgress(progress int) int {
if progress < 0 {
return 0
}
if progress > 100 {
return 100
}
return progress
}
func cloneJob(job *Job) *Job {
if job == nil {
return nil
}
cloned := *job
cloned.Logs = append([]string(nil), job.Logs...)
cloned.cancel = nil
return &cloned
}
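Every read path above returns cloneJob output, so callers hold snapshots: mutating a returned *Job (including its Logs slice, which is copied) never races with the manager's internal state. A minimal illustration, assuming a manager and request as in the tests below:

job := manager.CreateJob(req)                  // snapshot of the stored job
job.Logs = append(job.Logs, "local-only edit") // mutates the copy, not the manager
fresh, _ := manager.GetJob(job.ID)             // re-read: unaffected by the edit above
_ = fresh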


@@ -0,0 +1,77 @@
package server
import (
"strings"
"testing"
)
func TestJobManagerCreateGetUpdateCancel(t *testing.T) {
manager := NewJobManager()
req := CollectRequest{
Host: "bmc01.local",
Protocol: "redfish",
Port: 443,
Username: "admin",
AuthType: "password",
Password: "top-secret",
TLSMode: "strict",
}
job := manager.CreateJob(req)
if job == nil {
t.Fatalf("expected created job")
}
if job.Status != CollectStatusQueued {
t.Fatalf("expected queued status, got %q", job.Status)
}
if job.Progress != 0 {
t.Fatalf("expected progress 0, got %d", job.Progress)
}
if job.RequestMeta.Host != req.Host {
t.Fatalf("expected host in request meta")
}
if strings.Contains(strings.Join(job.Logs, " "), req.Password) {
t.Fatalf("password leaked in logs")
}
got, ok := manager.GetJob(job.ID)
if !ok {
t.Fatalf("expected job to exist")
}
if got.ID != job.ID {
t.Fatalf("wrong job id")
}
updated, ok := manager.UpdateJobStatus(job.ID, CollectStatusRunning, 42, "")
if !ok {
t.Fatalf("expected update to succeed")
}
if updated.Status != CollectStatusRunning || updated.Progress != 42 {
t.Fatalf("unexpected update snapshot: %+v", updated)
}
withLog, ok := manager.AppendJobLog(job.ID, "Сбор инвентаря...")
if !ok {
t.Fatalf("expected append to succeed")
}
if len(withLog.Logs) < 2 {
t.Fatalf("expected additional log, got %v", withLog.Logs)
}
canceled, ok := manager.CancelJob(job.ID)
if !ok {
t.Fatalf("expected cancel to succeed")
}
if canceled.Status != CollectStatusCanceled {
t.Fatalf("expected canceled status, got %q", canceled.Status)
}
canceledAgain, ok := manager.CancelJob(job.ID)
if !ok {
t.Fatalf("expected repeated cancel to succeed")
}
if canceledAgain.Status != CollectStatusCanceled {
t.Fatalf("expected canceled status after repeated cancel")
}
}


@@ -9,6 +9,7 @@ import (
"sync"
"time"
"git.mchus.pro/mchus/logpile/internal/collector"
"git.mchus.pro/mchus/logpile/internal/models"
)
@@ -28,12 +29,17 @@ type Server struct {
mu sync.RWMutex
result *models.AnalysisResult
detectedVendor string
jobManager *JobManager
collectors *collector.Registry
}
func New(cfg Config) *Server {
s := &Server{
config: cfg,
mux: http.NewServeMux(),
jobManager: NewJobManager(),
collectors: collector.NewDefaultRegistry(),
}
s.setupRoutes()
return s
@@ -64,6 +70,9 @@ func (s *Server) setupRoutes() {
s.mux.HandleFunc("GET /api/export/txt", s.handleExportTXT)
s.mux.HandleFunc("DELETE /api/clear", s.handleClear)
s.mux.HandleFunc("POST /api/shutdown", s.handleShutdown)
s.mux.HandleFunc("POST /api/collect", s.handleCollectStart)
s.mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
s.mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
}
func (s *Server) Run() error {


@@ -0,0 +1,136 @@
package server
import (
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
func TestApplyArchiveSourceMetadata(t *testing.T) {
result := &models.AnalysisResult{}
applyArchiveSourceMetadata(result)
if result.SourceType != models.SourceTypeArchive {
t.Fatalf("expected source type %q, got %q", models.SourceTypeArchive, result.SourceType)
}
if result.Protocol != "" {
t.Fatalf("expected empty protocol for archive, got %q", result.Protocol)
}
if result.TargetHost != "" {
t.Fatalf("expected empty target host for archive, got %q", result.TargetHost)
}
if result.CollectedAt.IsZero() {
t.Fatalf("expected collected_at to be set")
}
}
func TestApplyCollectSourceMetadata(t *testing.T) {
req := CollectRequest{
Host: "bmc-api.local",
Protocol: "redfish",
Port: 443,
Username: "admin",
AuthType: "password",
Password: "super-secret",
TLSMode: "strict",
}
result := &models.AnalysisResult{
Events: make([]models.Event, 0),
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
}
applyCollectSourceMetadata(result, req)
if result.SourceType != models.SourceTypeAPI {
t.Fatalf("expected source type %q, got %q", models.SourceTypeAPI, result.SourceType)
}
if result.Protocol != req.Protocol {
t.Fatalf("expected protocol %q, got %q", req.Protocol, result.Protocol)
}
if result.TargetHost != req.Host {
t.Fatalf("expected target host %q, got %q", req.Host, result.TargetHost)
}
if result.CollectedAt.IsZero() {
t.Fatalf("expected collected_at to be set")
}
if len(result.Events) != 0 || len(result.FRU) != 0 || len(result.Sensors) != 0 {
t.Fatalf("expected empty slices for api result")
}
raw, err := json.Marshal(result)
if err != nil {
t.Fatalf("marshal result: %v", err)
}
if string(raw) == "" {
t.Fatalf("expected non-empty json")
}
if strings.Contains(string(raw), req.Password) || (req.Token != "" && strings.Contains(string(raw), req.Token)) {
t.Fatalf("secrets should not be present in api result")
}
}
func TestStatusAndConfigExposeSourceMetadata(t *testing.T) {
s := &Server{}
s.SetDetectedVendor("nvidia")
s.SetResult(&models.AnalysisResult{
Filename: "archive.tar.gz",
SourceType: models.SourceTypeArchive,
Protocol: "",
TargetHost: "",
CollectedAt: time.Now().UTC(),
Events: []models.Event{{ID: "1"}},
Sensors: []models.SensorReading{{Name: "Temp1"}},
FRU: []models.FRUInfo{{Description: "Board"}},
})
statusReq := httptest.NewRequest(http.MethodGet, "/api/status", nil)
statusRec := httptest.NewRecorder()
s.handleGetStatus(statusRec, statusReq)
if statusRec.Code != http.StatusOK {
t.Fatalf("expected 200 from /api/status, got %d", statusRec.Code)
}
var statusPayload map[string]interface{}
if err := json.NewDecoder(statusRec.Body).Decode(&statusPayload); err != nil {
t.Fatalf("decode status payload: %v", err)
}
if loaded, _ := statusPayload["loaded"].(bool); !loaded {
t.Fatalf("expected loaded=true")
}
if statusPayload["source_type"] != models.SourceTypeArchive {
t.Fatalf("expected source_type in status payload")
}
if _, ok := statusPayload["stats"]; !ok {
t.Fatalf("expected legacy stats field to remain")
}
configReq := httptest.NewRequest(http.MethodGet, "/api/config", nil)
configRec := httptest.NewRecorder()
s.handleGetConfig(configRec, configReq)
if configRec.Code != http.StatusOK {
t.Fatalf("expected 200 from /api/config, got %d", configRec.Code)
}
var configPayload map[string]interface{}
if err := json.NewDecoder(configRec.Body).Decode(&configPayload); err != nil {
t.Fatalf("decode config payload: %v", err)
}
if configPayload["source_type"] != models.SourceTypeArchive {
t.Fatalf("expected source_type in config payload")
}
if _, ok := configPayload["hardware"]; !ok {
t.Fatalf("expected legacy hardware field in config payload")
}
if _, ok := configPayload["specification"]; !ok {
t.Fatalf("expected legacy specification field in config payload")
}
}


@@ -0,0 +1,332 @@
package server
import (
"archive/tar"
"bytes"
"encoding/json"
"mime/multipart"
"net/http"
"net/http/httptest"
"strings"
"testing"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors"
)
func newFlowTestServer() (*Server, *httptest.Server) {
s := &Server{
jobManager: NewJobManager(),
collectors: testCollectorRegistry(),
}
mux := http.NewServeMux()
mux.HandleFunc("POST /api/upload", s.handleUpload)
mux.HandleFunc("GET /api/status", s.handleGetStatus)
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
return s, httptest.NewServer(mux)
}
func TestUploadArchiveRegressionAndSourceMetadata(t *testing.T) {
_, ts := newFlowTestServer()
defer ts.Close()
archiveBody := buildTarArchive(t, "logs/plain.txt", "smoke archive content")
reqBody := &bytes.Buffer{}
writer := multipart.NewWriter(reqBody)
part, err := writer.CreateFormFile("archive", "smoke.tar")
if err != nil {
t.Fatalf("create form file: %v", err)
}
if _, err := part.Write(archiveBody); err != nil {
t.Fatalf("write archive body: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("close multipart writer: %v", err)
}
uploadReq, err := http.NewRequest(http.MethodPost, ts.URL+"/api/upload", reqBody)
if err != nil {
t.Fatalf("build upload request: %v", err)
}
uploadReq.Header.Set("Content-Type", writer.FormDataContentType())
uploadResp, err := http.DefaultClient.Do(uploadReq)
if err != nil {
t.Fatalf("upload request failed: %v", err)
}
defer uploadResp.Body.Close()
if uploadResp.StatusCode != http.StatusOK {
t.Fatalf("expected 200 from /api/upload, got %d", uploadResp.StatusCode)
}
var uploadPayload map[string]interface{}
if err := json.NewDecoder(uploadResp.Body).Decode(&uploadPayload); err != nil {
t.Fatalf("decode upload response: %v", err)
}
if uploadPayload["status"] != "ok" {
t.Fatalf("expected upload status ok, got %v", uploadPayload["status"])
}
if uploadPayload["filename"] != "smoke.tar" {
t.Fatalf("expected filename smoke.tar, got %v", uploadPayload["filename"])
}
stats, ok := uploadPayload["stats"].(map[string]interface{})
if !ok {
t.Fatalf("expected stats object in upload response")
}
if events, ok := stats["events"].(float64); !ok || events < 1 {
t.Fatalf("expected at least one parsed event, got %v", stats["events"])
}
statusResp, err := http.Get(ts.URL + "/api/status")
if err != nil {
t.Fatalf("status request failed: %v", err)
}
defer statusResp.Body.Close()
if statusResp.StatusCode != http.StatusOK {
t.Fatalf("expected 200 from /api/status, got %d", statusResp.StatusCode)
}
var statusPayload map[string]interface{}
if err := json.NewDecoder(statusResp.Body).Decode(&statusPayload); err != nil {
t.Fatalf("decode status response: %v", err)
}
if loaded, _ := statusPayload["loaded"].(bool); !loaded {
t.Fatalf("expected loaded=true after upload")
}
if statusPayload["source_type"] != "archive" {
t.Fatalf("expected source_type=archive, got %v", statusPayload["source_type"])
}
if protocol, _ := statusPayload["protocol"].(string); protocol != "" {
t.Fatalf("expected empty protocol for archive, got %q", protocol)
}
if targetHost, _ := statusPayload["target_host"].(string); targetHost != "" {
t.Fatalf("expected empty target_host for archive, got %q", targetHost)
}
if collectedAt, _ := statusPayload["collected_at"].(string); strings.TrimSpace(collectedAt) == "" {
t.Fatalf("expected non-empty collected_at for archive")
}
}
func TestUploadTXTFile(t *testing.T) {
_, ts := newFlowTestServer()
defer ts.Close()
txt := `Version:
--------
14.3.0.5
loader_brand="XigmaNAS"
`
reqBody := &bytes.Buffer{}
writer := multipart.NewWriter(reqBody)
part, err := writer.CreateFormFile("archive", "xigmanas.txt")
if err != nil {
t.Fatalf("create form file: %v", err)
}
if _, err := part.Write([]byte(txt)); err != nil {
t.Fatalf("write txt body: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("close multipart writer: %v", err)
}
uploadReq, err := http.NewRequest(http.MethodPost, ts.URL+"/api/upload", reqBody)
if err != nil {
t.Fatalf("build upload request: %v", err)
}
uploadReq.Header.Set("Content-Type", writer.FormDataContentType())
uploadResp, err := http.DefaultClient.Do(uploadReq)
if err != nil {
t.Fatalf("upload request failed: %v", err)
}
defer uploadResp.Body.Close()
if uploadResp.StatusCode != http.StatusOK {
t.Fatalf("expected 200 from /api/upload, got %d", uploadResp.StatusCode)
}
var uploadPayload map[string]interface{}
if err := json.NewDecoder(uploadResp.Body).Decode(&uploadPayload); err != nil {
t.Fatalf("decode upload response: %v", err)
}
if uploadPayload["status"] != "ok" {
t.Fatalf("expected upload status ok, got %v", uploadPayload["status"])
}
if uploadPayload["filename"] != "xigmanas.txt" {
t.Fatalf("expected filename xigmanas.txt, got %v", uploadPayload["filename"])
}
if uploadPayload["vendor"] != "XigmaNAS Parser" {
t.Fatalf("expected vendor XigmaNAS Parser, got %v", uploadPayload["vendor"])
}
}
func TestCollectSmokeErrorFormat(t *testing.T) {
_, ts := newFlowTestServer()
defer ts.Close()
invalidJSONResp, err := http.Post(ts.URL+"/api/collect", "application/json", strings.NewReader("{"))
if err != nil {
t.Fatalf("post collect invalid json failed: %v", err)
}
defer invalidJSONResp.Body.Close()
if invalidJSONResp.StatusCode != http.StatusBadRequest {
t.Fatalf("expected 400 for invalid json, got %d", invalidJSONResp.StatusCode)
}
assertJSONError(t, invalidJSONResp, "Invalid JSON body")
invalidFieldsBody := `{"host":"","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
invalidFieldsResp, err := http.Post(ts.URL+"/api/collect", "application/json", bytes.NewBufferString(invalidFieldsBody))
if err != nil {
t.Fatalf("post collect invalid fields failed: %v", err)
}
defer invalidFieldsResp.Body.Close()
if invalidFieldsResp.StatusCode != http.StatusUnprocessableEntity {
t.Fatalf("expected 422 for invalid fields, got %d", invalidFieldsResp.StatusCode)
}
assertJSONError(t, invalidFieldsResp, "field 'host' is required")
}
func TestCollectStatusNotFoundSmoke(t *testing.T) {
_, ts := newFlowTestServer()
defer ts.Close()
resp, err := http.Get(ts.URL + "/api/collect/job_notfound123456")
if err != nil {
t.Fatalf("get collect status failed: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusNotFound {
t.Fatalf("expected 404 for missing collect job, got %d", resp.StatusCode)
}
assertJSONError(t, resp, "Collect job not found")
}
func TestUploadRedfishSnapshotJSON(t *testing.T) {
_, ts := newFlowTestServer()
defer ts.Close()
snapshot := `{
"filename": "redfish://bmc01.local",
"source_type": "api",
"protocol": "redfish",
"target_host": "bmc01.local",
"hardware": {
"storage": [
{
"slot": "Drive1",
"type": "NVMe",
"model": "KIOXIA CD8",
"size_gb": 3840,
"serial_number": "SN-NVME-1",
"present": true
}
]
},
"raw_payloads": {
"redfish_tree": {
"/redfish/v1": {"Name": "ServiceRoot"}
}
}
}`
reqBody := &bytes.Buffer{}
writer := multipart.NewWriter(reqBody)
part, err := writer.CreateFormFile("archive", "snapshot.json")
if err != nil {
t.Fatalf("create form file: %v", err)
}
if _, err := part.Write([]byte(snapshot)); err != nil {
t.Fatalf("write snapshot body: %v", err)
}
if err := writer.Close(); err != nil {
t.Fatalf("close multipart writer: %v", err)
}
uploadReq, err := http.NewRequest(http.MethodPost, ts.URL+"/api/upload", reqBody)
if err != nil {
t.Fatalf("build upload request: %v", err)
}
uploadReq.Header.Set("Content-Type", writer.FormDataContentType())
uploadResp, err := http.DefaultClient.Do(uploadReq)
if err != nil {
t.Fatalf("upload request failed: %v", err)
}
defer uploadResp.Body.Close()
if uploadResp.StatusCode != http.StatusOK {
t.Fatalf("expected 200 from /api/upload, got %d", uploadResp.StatusCode)
}
var uploadPayload map[string]interface{}
if err := json.NewDecoder(uploadResp.Body).Decode(&uploadPayload); err != nil {
t.Fatalf("decode upload response: %v", err)
}
if uploadPayload["vendor"] != "redfish" {
t.Fatalf("expected vendor redfish, got %v", uploadPayload["vendor"])
}
statusResp, err := http.Get(ts.URL + "/api/status")
if err != nil {
t.Fatalf("status request failed: %v", err)
}
defer statusResp.Body.Close()
var statusPayload map[string]interface{}
if err := json.NewDecoder(statusResp.Body).Decode(&statusPayload); err != nil {
t.Fatalf("decode status response: %v", err)
}
if statusPayload["protocol"] != "redfish" {
t.Fatalf("expected protocol redfish, got %v", statusPayload["protocol"])
}
if statusPayload["filename"] != "redfish://bmc01.local" {
t.Fatalf("expected snapshot filename, got %v", statusPayload["filename"])
}
}
func buildTarArchive(t *testing.T, name, content string) []byte {
t.Helper()
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
if err := tw.WriteHeader(&tar.Header{
Name: name,
Mode: 0o600,
Size: int64(len(content)),
}); err != nil {
t.Fatalf("write tar header: %v", err)
}
if _, err := tw.Write([]byte(content)); err != nil {
t.Fatalf("write tar content: %v", err)
}
if err := tw.Close(); err != nil {
t.Fatalf("close tar writer: %v", err)
}
return buf.Bytes()
}
func assertJSONError(t *testing.T, resp *http.Response, expectedMessage string) {
t.Helper()
contentType := resp.Header.Get("Content-Type")
if !strings.Contains(contentType, "application/json") {
t.Fatalf("expected application/json error response, got %q", contentType)
}
var payload map[string]string
if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
t.Fatalf("decode error payload: %v", err)
}
if payload["error"] != expectedMessage {
t.Fatalf("expected error %q, got %q", expectedMessage, payload["error"])
}
}

logpile Executable file (binary file not shown)

quick_test.go Normal file

@@ -0,0 +1,35 @@
//go:build ignore
// +build ignore
package main
import (
"fmt"
"log"
"git.mchus.pro/mchus/logpile/internal/parser"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors"
)
func main() {
p := parser.NewBMCParser()
fmt.Println("Testing archive parsing...")
if err := p.ParseArchive("example/A514359X5A07900_logs-20260122-074208.tar"); err != nil {
log.Fatalf("ERROR: %v", err)
}
fmt.Println("✓ Archive parsed successfully!")
fmt.Printf("✓ Detected vendor: %s\n", p.DetectedVendor())
result := p.Result()
fmt.Printf("✓ GPUs found: %d\n", len(result.Hardware.GPUs))
fmt.Printf("✓ Events found: %d\n", len(result.Events))
fmt.Printf("✓ PCIe Devices found: %d\n", len(result.Hardware.PCIeDevices))
fmt.Println("\nBoard Info:")
fmt.Printf(" Manufacturer: %s\n", result.Hardware.BoardInfo.Manufacturer)
fmt.Printf(" Product Name: %s\n", result.Hardware.BoardInfo.ProductName)
fmt.Printf(" Serial Number: %s\n", result.Hardware.BoardInfo.SerialNumber)
fmt.Printf(" Part Number: %s\n", result.Hardware.BoardInfo.PartNumber)
}

test_nvidia_full Executable file (binary file not shown)

test_nvidia_full.go Normal file

@@ -0,0 +1,99 @@
//go:build ignore
// +build ignore
package main
import (
"fmt"
"log"
"git.mchus.pro/mchus/logpile/internal/parser"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors"
)
func main() {
p := parser.NewBMCParser()
fmt.Println("Testing NVIDIA Bug Report parser (full)...")
if err := p.ParseArchive("/Users/mchusavitin/Downloads/nvidia-bug-report-2KD501412.log.gz"); err != nil {
log.Fatalf("ERROR: %v", err)
}
fmt.Println("✓ Archive parsed successfully!")
fmt.Printf("✓ Detected vendor: %s\n", p.DetectedVendor())
result := p.Result()
fmt.Printf("✓ CPUs: %d\n", len(result.Hardware.CPUs))
fmt.Printf("✓ Memory: %d modules\n", len(result.Hardware.Memory))
fmt.Printf("✓ Power Supplies: %d\n", len(result.Hardware.PowerSupply))
fmt.Printf("✓ GPUs: %d\n", len(result.Hardware.GPUs))
fmt.Printf("✓ Network Adapters: %d\n", len(result.Hardware.NetworkAdapters))
fmt.Println("\nSystem Information:")
if result.Hardware.BoardInfo.SerialNumber != "" {
fmt.Printf(" Serial Number: %s\n", result.Hardware.BoardInfo.SerialNumber)
}
if result.Hardware.BoardInfo.UUID != "" {
fmt.Printf(" UUID: %s\n", result.Hardware.BoardInfo.UUID)
}
if result.Hardware.BoardInfo.Manufacturer != "" {
fmt.Printf(" Manufacturer: %s\n", result.Hardware.BoardInfo.Manufacturer)
}
if result.Hardware.BoardInfo.ProductName != "" {
fmt.Printf(" Product: %s\n", result.Hardware.BoardInfo.ProductName)
}
if result.Hardware.BoardInfo.Version != "" {
fmt.Printf(" Version: %s\n", result.Hardware.BoardInfo.Version)
}
fmt.Println("\nCPU Information:")
for _, cpu := range result.Hardware.CPUs {
fmt.Printf(" Socket %d: %s\n", cpu.Socket, cpu.Model)
fmt.Printf(" S/N: %s, Cores: %d, Threads: %d\n", cpu.SerialNumber, cpu.Cores, cpu.Threads)
}
fmt.Println("\nPower Supplies:")
for _, psu := range result.Hardware.PowerSupply {
fmt.Printf(" %s: %s (%s)\n", psu.Slot, psu.Model, psu.Vendor)
fmt.Printf(" S/N: %s\n", psu.SerialNumber)
fmt.Printf(" Power: %d W, Revision: %s\n", psu.WattageW, psu.Firmware)
fmt.Printf(" Status: %s\n", psu.Status)
}
totalMemGB := 0
for _, mem := range result.Hardware.Memory {
totalMemGB += mem.SizeMB / 1024
}
fmt.Printf("\nMemory: %d modules, %d GB total\n", len(result.Hardware.Memory), totalMemGB)
fmt.Printf("\nNetwork Adapters: %d devices\n", len(result.Hardware.NetworkAdapters))
for _, nic := range result.Hardware.NetworkAdapters {
fmt.Printf(" %s: %s\n", nic.Location, nic.Model)
if nic.Slot != "" {
fmt.Printf(" Slot: %s\n", nic.Slot)
}
if nic.PartNumber != "" {
fmt.Printf(" P/N: %s\n", nic.PartNumber)
}
if nic.SerialNumber != "" {
fmt.Printf(" S/N: %s\n", nic.SerialNumber)
}
if nic.PortCount > 0 {
fmt.Printf(" Ports: %d x %s\n", nic.PortCount, nic.PortType)
}
}
fmt.Printf("\nGPUs: %d devices\n", len(result.Hardware.GPUs))
for _, gpu := range result.Hardware.GPUs {
fmt.Printf(" %s: %s\n", gpu.BDF, gpu.Model)
if gpu.UUID != "" {
fmt.Printf(" UUID: %s\n", gpu.UUID)
}
if gpu.VideoBIOS != "" {
fmt.Printf(" Video BIOS: %s\n", gpu.VideoBIOS)
}
if gpu.IRQ > 0 {
fmt.Printf(" IRQ: %d\n", gpu.IRQ)
}
}
}


@@ -40,6 +40,35 @@ main {
}
/* Upload section */
.source-switch {
display: inline-flex;
gap: 0.25rem;
background: #e9ecef;
border-radius: 8px;
padding: 0.25rem;
margin-bottom: 1rem;
}
.source-switch-btn {
border: none;
background: transparent;
color: #495057;
padding: 0.45rem 0.9rem;
border-radius: 6px;
cursor: pointer;
font-size: 0.9rem;
font-weight: 500;
}
.source-switch-btn:hover {
background: #dee2e6;
}
.source-switch-btn.active {
background: #3498db;
color: #fff;
}
.upload-area {
border: 2px dashed #ccc;
border-radius: 8px;
@@ -74,6 +103,204 @@ main {
color: #888;
}
.api-placeholder {
background: #fff;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 2rem;
color: #555;
}
#api-connect-form h3 {
margin-bottom: 1rem;
color: #2c3e50;
}
.api-form-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
gap: 0.75rem 1rem;
}
.api-form-field {
display: flex;
flex-direction: column;
gap: 0.35rem;
font-size: 0.875rem;
color: #2c3e50;
}
.api-form-field input,
.api-form-field select {
border: 1px solid #d0d7de;
border-radius: 4px;
padding: 0.5rem 0.6rem;
font-size: 0.9rem;
}
.api-form-field.has-error input,
.api-form-field.has-error select {
border-color: #dc3545;
}
.field-error {
min-height: 1rem;
color: #dc3545;
font-size: 0.75rem;
}
.form-errors {
margin-bottom: 1rem;
border: 1px solid #f0b9bf;
background: #fff4f5;
color: #8e1f2b;
border-radius: 6px;
padding: 0.75rem 0.9rem;
font-size: 0.85rem;
}
.form-errors ul {
margin: 0.4rem 0 0;
padding-left: 1.1rem;
}
.api-form-actions {
margin-top: 0.9rem;
}
#api-connect-form.is-disabled {
opacity: 0.6;
pointer-events: none;
}
#api-connect-btn {
background: #3498db;
color: white;
border: none;
padding: 0.6rem 1.2rem;
border-radius: 4px;
cursor: pointer;
}
#api-connect-btn:hover {
background: #2980b9;
}
.api-connect-status {
margin-top: 0.75rem;
font-size: 0.85rem;
}
.api-connect-status.success {
color: #1f8f4c;
}
.api-connect-status.error {
color: #dc3545;
}
.job-status {
margin-top: 1rem;
border: 1px solid #d0d7de;
border-radius: 8px;
padding: 1rem;
background: #f8fafc;
}
.job-status-header {
display: flex;
justify-content: space-between;
align-items: center;
gap: 0.75rem;
margin-bottom: 0.75rem;
}
.job-status-header h4 {
margin: 0;
color: #2c3e50;
}
#cancel-job-btn {
background: #dc3545;
color: #fff;
border: none;
border-radius: 4px;
padding: 0.45rem 0.75rem;
cursor: pointer;
}
#cancel-job-btn:disabled {
background: #9ca3af;
cursor: default;
}
.job-status-meta {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(230px, 1fr));
gap: 0.5rem 0.75rem;
margin-bottom: 0.75rem;
font-size: 0.9rem;
}
.meta-label {
color: #64748b;
font-weight: 600;
}
.job-status-badge {
display: inline-flex;
align-items: center;
border-radius: 999px;
padding: 0.2rem 0.6rem;
font-size: 0.8rem;
font-weight: 600;
}
.job-status-badge.status-queued,
.job-status-badge.status-running {
background: #eff6ff;
color: #1d4ed8;
}
.job-status-badge.status-success {
background: #ecfdf3;
color: #15803d;
}
.job-status-badge.status-failed {
background: #fef2f2;
color: #b91c1c;
}
.job-status-badge.status-canceled {
background: #f1f5f9;
color: #334155;
}
.job-status-logs ul {
list-style: none;
margin-top: 0.35rem;
border-top: 1px solid #e5e7eb;
}
.job-status-logs li {
display: grid;
grid-template-columns: 90px 1fr;
gap: 0.5rem;
padding: 0.45rem 0;
border-bottom: 1px solid #eef2f7;
font-size: 0.85rem;
}
.log-time {
color: #64748b;
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace;
}
.log-message {
color: #334155;
}
#upload-status {
margin-top: 1rem;
text-align: center;
@@ -130,6 +357,46 @@ main {
border-radius: 3px;
}
/* File Info */
.file-info {
background: #fff;
border: 1px solid #e0e0e0;
border-radius: 8px;
padding: 1rem 1.5rem;
margin-bottom: 1.5rem;
display: flex;
gap: 2rem;
flex-wrap: wrap;
align-items: center;
}
.parser-badge, .file-name {
display: flex;
align-items: center;
gap: 0.5rem;
}
.badge-label {
font-size: 0.875rem;
color: #666;
font-weight: 500;
}
.badge-value {
font-size: 0.875rem;
color: #2c3e50;
font-weight: 600;
background: #e3f2fd;
padding: 0.25rem 0.75rem;
border-radius: 4px;
border: 1px solid #90caf9;
}
.parser-badge .badge-value {
background: #e8f5e9;
border-color: #81c784;
}
/* Tabs */
.tabs {
display: flex;
@@ -698,3 +965,14 @@ footer {
padding: 0.25rem 0.5rem;
}
}
/* PCIe degraded link highlighting */
.pcie-degraded {
color: #dc3545;
font-weight: 600;
}
.pcie-max {
color: #6c757d;
font-size: 0.9em;
}


@@ -1,12 +1,482 @@
// LOGPile Frontend Application
document.addEventListener('DOMContentLoaded', () => {
initSourceType();
initApiSource();
initUpload();
initTabs();
initFilters();
loadParsersInfo();
});
let sourceType = 'archive';
let apiConnectPayload = null;
let collectionJob = null;
let collectionJobPollTimer = null;
let collectionJobLogCounter = 0;
let apiPortTouchedByUser = false;
let isAutoUpdatingApiPort = false;
function initSourceType() {
const sourceButtons = document.querySelectorAll('.source-switch-btn');
sourceButtons.forEach(button => {
button.addEventListener('click', () => {
setSourceType(button.dataset.sourceType);
});
});
setSourceType(sourceType);
}
function setSourceType(nextType) {
sourceType = nextType === 'api' ? 'api' : 'archive';
document.querySelectorAll('.source-switch-btn').forEach(button => {
button.classList.toggle('active', button.dataset.sourceType === sourceType);
});
const archiveContent = document.getElementById('archive-source-content');
const apiSourceContent = document.getElementById('api-source-content');
archiveContent.classList.toggle('hidden', sourceType !== 'archive');
apiSourceContent.classList.toggle('hidden', sourceType !== 'api');
}
function initApiSource() {
const apiForm = document.getElementById('api-connect-form');
if (!apiForm) {
return;
}
const cancelJobButton = document.getElementById('cancel-job-btn');
const fieldNames = ['host', 'port', 'username', 'password'];
apiForm.addEventListener('submit', (event) => {
event.preventDefault();
const { isValid, payload, errors } = validateCollectForm();
renderFormErrors(errors);
if (!isValid) {
renderApiConnectStatus(false, null);
apiConnectPayload = null;
return;
}
apiConnectPayload = payload;
renderApiConnectStatus(true, payload);
startCollectionJob(payload);
});
if (cancelJobButton) {
cancelJobButton.addEventListener('click', () => {
cancelCollectionJob();
});
}
fieldNames.forEach((fieldName) => {
const field = apiForm.elements.namedItem(fieldName);
if (!field) {
return;
}
const eventName = field.tagName.toLowerCase() === 'select' ? 'change' : 'input';
field.addEventListener(eventName, () => {
if (fieldName === 'port') {
handleApiPortInput(field.value);
}
const { errors } = validateCollectForm();
renderFormErrors(errors);
clearApiConnectStatus();
if (collectionJob && isCollectionJobTerminal(collectionJob.status)) {
resetCollectionJobState();
}
});
});
applyRedfishDefaultPort();
renderCollectionJob();
}
function validateCollectForm() {
const host = getApiValue('host');
const portRaw = getApiValue('port');
const username = getApiValue('username');
const password = getApiValue('password');
const errors = {};
if (!host) {
errors.host = 'Укажите host.';
}
const port = Number(portRaw);
const isPortInteger = Number.isInteger(port);
if (!portRaw) {
errors.port = 'Укажите порт.';
} else if (!isPortInteger || port < 1 || port > 65535) {
errors.port = 'Порт должен быть от 1 до 65535.';
}
if (!username) {
errors.username = 'Укажите username.';
}
if (!password) {
errors.password = 'Введите пароль.';
}
if (Object.keys(errors).length > 0) {
return { isValid: false, errors, payload: null };
}
// TODO: bring back the protocol selector in the UI once the IPMI connector is exposed.
const payload = {
host,
protocol: 'redfish',
port,
username,
auth_type: 'password',
tls_mode: 'insecure',
password
};
return { isValid: true, errors: {}, payload };
}
function renderFormErrors(errors) {
const apiForm = document.getElementById('api-connect-form');
const summary = document.getElementById('api-form-errors');
if (!apiForm || !summary) {
return;
}
const errorFields = ['host', 'port', 'username', 'password'];
errorFields.forEach((fieldName) => {
const errorNode = apiForm.querySelector(`[data-error-for="${fieldName}"]`);
if (!errorNode) {
return;
}
const fieldWrapper = errorNode.closest('.api-form-field');
const message = errors[fieldName] || '';
errorNode.textContent = message;
if (fieldWrapper) {
fieldWrapper.classList.toggle('has-error', Boolean(message));
}
});
const messages = Object.values(errors);
if (messages.length === 0) {
summary.innerHTML = '';
summary.classList.add('hidden');
return;
}
summary.classList.remove('hidden');
summary.innerHTML = `<strong>Исправьте ошибки в форме:</strong><ul>${messages.map(msg => `<li>${escapeHtml(msg)}</li>`).join('')}</ul>`;
}
function renderApiConnectStatus(isValid, payload) {
const status = document.getElementById('api-connect-status');
if (!status) {
return;
}
if (!isValid) {
status.textContent = 'Форма не отправлена: есть ошибки.';
status.className = 'api-connect-status error';
return;
}
const payloadPreview = { ...payload };
if (payloadPreview.password) {
payloadPreview.password = '***';
}
if (payloadPreview.token) {
payloadPreview.token = '***';
}
status.textContent = `Payload сформирован: ${JSON.stringify(payloadPreview)}`;
status.className = 'api-connect-status success';
}
function clearApiConnectStatus() {
const status = document.getElementById('api-connect-status');
if (!status) {
return;
}
status.textContent = '';
status.className = 'api-connect-status';
}
function startCollectionJob(payload) {
resetCollectionJobState();
setApiFormBlocked(true);
fetch('/api/collect', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
})
.then(async (response) => {
const body = await response.json().catch(() => ({}));
if (!response.ok) {
throw new Error(body.error || 'Не удалось запустить задачу');
}
collectionJob = {
id: body.job_id,
status: normalizeJobStatus(body.status || 'queued'),
progress: 0,
logs: [],
payload
};
appendJobLog(body.message || 'Задача поставлена в очередь');
renderCollectionJob();
collectionJobPollTimer = window.setInterval(() => {
pollCollectionJobStatus();
}, 1200);
})
.catch((err) => {
setApiFormBlocked(false);
clearApiConnectStatus();
renderApiConnectStatus(false, null);
const status = document.getElementById('api-connect-status');
if (status) {
status.textContent = err.message || 'Ошибка запуска задачи';
status.className = 'api-connect-status error';
}
});
}
function pollCollectionJobStatus() {
if (!collectionJob || isCollectionJobTerminal(collectionJob.status)) {
clearCollectionJobPolling();
return;
}
fetch(`/api/collect/${encodeURIComponent(collectionJob.id)}`)
.then(async (response) => {
const body = await response.json().catch(() => ({}));
if (!response.ok) {
throw new Error(body.error || 'Не удалось получить статус задачи');
}
const prevStatus = collectionJob.status;
collectionJob.status = normalizeJobStatus(body.status || collectionJob.status);
collectionJob.progress = Number.isFinite(body.progress) ? body.progress : collectionJob.progress;
collectionJob.error = body.error || '';
syncServerLogs(body.logs);
renderCollectionJob();
if (isCollectionJobTerminal(collectionJob.status)) {
clearCollectionJobPolling();
if (collectionJob.status === 'success') {
loadDataFromStatus();
} else if (collectionJob.status === 'failed' && collectionJob.error) {
appendJobLog(`Ошибка: ${collectionJob.error}`);
renderCollectionJob();
}
} else if (prevStatus !== collectionJob.status && collectionJob.status === 'running') {
appendJobLog('Сбор выполняется...');
renderCollectionJob();
}
})
.catch((err) => {
appendJobLog(`Ошибка статуса: ${err.message}`);
renderCollectionJob();
clearCollectionJobPolling();
setApiFormBlocked(false);
});
}
function cancelCollectionJob() {
if (!collectionJob || isCollectionJobTerminal(collectionJob.status)) {
return;
}
fetch(`/api/collect/${encodeURIComponent(collectionJob.id)}/cancel`, {
method: 'POST'
})
.then(async (response) => {
const body = await response.json().catch(() => ({}));
if (!response.ok) {
throw new Error(body.error || 'Не удалось отменить задачу');
}
collectionJob.status = normalizeJobStatus(body.status || 'canceled');
collectionJob.progress = Number.isFinite(body.progress) ? body.progress : collectionJob.progress;
syncServerLogs(body.logs);
clearCollectionJobPolling();
renderCollectionJob();
})
.catch((err) => {
appendJobLog(`Ошибка отмены: ${err.message}`);
renderCollectionJob();
});
}
function appendJobLog(message) {
if (!collectionJob) {
return;
}
const time = new Date().toLocaleTimeString('ru-RU', { hour12: false });
collectionJob.logs.push({
id: ++collectionJobLogCounter,
time,
message
});
}
function renderCollectionJob() {
const jobStatusBlock = document.getElementById('api-job-status');
const jobIdValue = document.getElementById('job-id-value');
const statusValue = document.getElementById('job-status-value');
const progressValue = document.getElementById('job-progress-value');
const logsList = document.getElementById('job-logs-list');
const cancelButton = document.getElementById('cancel-job-btn');
if (!jobStatusBlock || !jobIdValue || !statusValue || !progressValue || !logsList || !cancelButton) {
return;
}
if (!collectionJob) {
jobStatusBlock.classList.add('hidden');
setApiFormBlocked(false);
return;
}
jobStatusBlock.classList.remove('hidden');
jobIdValue.textContent = collectionJob.id;
statusValue.textContent = collectionJob.status;
statusValue.className = `job-status-badge status-${collectionJob.status.toLowerCase()}`;
const isTerminal = isCollectionJobTerminal(collectionJob.status);
const terminalMessage = {
success: 'Сбор завершен',
failed: 'Сбор завершился ошибкой',
canceled: 'Сбор отменен'
}[collectionJob.status];
const progressLabel = isTerminal
? terminalMessage
: 'Сбор данных...';
progressValue.textContent = `${collectionJob.progress}% · ${progressLabel}`;
logsList.innerHTML = collectionJob.logs.map((log) => (
`<li><span class="log-time">${escapeHtml(log.time)}</span><span class="log-message">${escapeHtml(log.message)}</span></li>`
)).join('');
cancelButton.disabled = isTerminal;
setApiFormBlocked(!isTerminal);
}
function isCollectionJobTerminal(status) {
return ['success', 'failed', 'canceled'].includes(normalizeJobStatus(status));
}
function setApiFormBlocked(shouldBlock) {
const apiForm = document.getElementById('api-connect-form');
if (!apiForm) {
return;
}
apiForm.classList.toggle('is-disabled', shouldBlock);
Array.from(apiForm.elements).forEach((field) => {
field.disabled = shouldBlock;
});
}
function clearCollectionJobPolling() {
if (!collectionJobPollTimer) {
return;
}
window.clearInterval(collectionJobPollTimer);
collectionJobPollTimer = null;
}
function resetCollectionJobState() {
clearCollectionJobPolling();
collectionJob = null;
renderCollectionJob();
}
function syncServerLogs(logs) {
if (!collectionJob || !Array.isArray(logs)) {
return;
}
if (logs.length <= collectionJob.logs.length) {
return;
}
const from = collectionJob.logs.length;
for (let i = from; i < logs.length; i += 1) {
appendJobLog(logs[i]);
}
}
function normalizeJobStatus(status) {
return String(status || '').trim().toLowerCase();
}
async function loadDataFromStatus() {
try {
const response = await fetch('/api/status');
const payload = await response.json();
if (!payload.loaded) {
return;
}
const vendor = payload.vendor || payload.protocol || '';
const filename = payload.filename || (payload.protocol && payload.target_host
? `${payload.protocol}://${payload.target_host}`
: '');
await loadData(vendor, filename);
} catch (err) {
console.error('Failed to load data after collect:', err);
}
}
function applyRedfishDefaultPort() {
const apiForm = document.getElementById('api-connect-form');
if (!apiForm) {
return;
}
const portField = apiForm.elements.namedItem('port');
if (!portField || typeof portField.value !== 'string') {
return;
}
const currentValue = portField.value.trim();
if (apiPortTouchedByUser && currentValue !== '') {
return;
}
isAutoUpdatingApiPort = true;
portField.value = '443';
isAutoUpdatingApiPort = false;
}
function handleApiPortInput(value) {
if (isAutoUpdatingApiPort) {
return;
}
apiPortTouchedByUser = value.trim() !== '';
}
function getApiValue(fieldName) {
const apiForm = document.getElementById('api-connect-form');
if (!apiForm) {
return '';
}
const field = apiForm.elements.namedItem(fieldName);
if (!field || typeof field.value !== 'string') {
return '';
}
return field.value.trim();
}
// Load and display available parsers
async function loadParsersInfo() {
try {
@@ -80,7 +550,7 @@ async function uploadFile(file) {
status.innerHTML = `<strong>${escapeHtml(result.vendor)}</strong><br>` +
`${result.stats.sensors} сенсоров, ${result.stats.fru} компонентов, ${result.stats.events} событий`;
status.className = 'success';
loadData(result.vendor, result.filename);
} else {
status.textContent = result.error || 'Ошибка загрузки';
status.className = 'error';
@@ -124,13 +594,23 @@ let allSerials = [];
let currentVendor = '';
// Load data from API
async function loadData(vendor, filename) {
currentVendor = vendor || '';
document.getElementById('upload-section').classList.add('hidden');
document.getElementById('data-section').classList.remove('hidden');
document.getElementById('clear-btn').classList.remove('hidden');
// Update parser name and filename
const parserName = document.getElementById('parser-name');
const fileNameElem = document.getElementById('file-name');
if (parserName && currentVendor) {
parserName.textContent = currentVendor;
}
if (fileNameElem && filename) {
fileNameElem.textContent = filename;
}
// Update vendor badge if exists (legacy support)
const vendorBadge = document.getElementById('vendor-badge');
if (vendorBadge && currentVendor) {
vendorBadge.textContent = currentVendor;
@@ -326,20 +806,24 @@ function renderConfig(data) {
if (storNVMe > 0) typesSummary.push(`${storNVMe} NVMe`);
html += `<h3>Накопители</h3>
<div class="section-overview">
<div class="stat-box"><span class="stat-value">${storTotal}</span><span class="stat-label">Всего</span></div>
<div class="stat-box"><span class="stat-value">${totalTB} TB</span><span class="stat-label">Объём</span></div>
<div class="stat-box"><span class="stat-value">${storTotal}</span><span class="stat-label">Всего слотов</span></div>
<div class="stat-box"><span class="stat-value">${config.storage.filter(s => s.present).length}</span><span class="stat-label">Установлено</span></div>
<div class="stat-box"><span class="stat-value">${totalTB > 0 ? totalTB + ' TB' : '-'}</span><span class="stat-label">Объём</span></div>
<div class="stat-box model-box"><span class="stat-value">${typesSummary.join(', ') || '-'}</span><span class="stat-label">По типам</span></div>
</div>
<table class="config-table"><thead><tr><th>Слот</th><th>Тип</th><th>Интерфейс</th><th>Модель</th><th>Производитель</th><th>Размер</th><th>Серийный номер</th></tr></thead><tbody>`;
<table class="config-table"><thead><tr><th>NO.</th><th>Статус</th><th>Расположение</th><th>Backplane ID</th><th>Тип</th><th>Модель</th><th>Размер</th><th>Серийный номер</th></tr></thead><tbody>`;
config.storage.forEach(s => {
const presentIcon = s.present ? '<span style="color: #27ae60;">●</span>' : '<span style="color: #95a5a6;">○</span>';
const presentText = s.present ? 'Present' : 'Empty';
html += `<tr>
<td>${escapeHtml(s.slot || '-')}</td>
<td>${presentIcon} ${presentText}</td>
<td>${escapeHtml(s.location || '-')}</td>
<td>${s.backplane_id !== undefined ? s.backplane_id : '-'}</td>
<td>${escapeHtml(s.type || '-')}</td>
<td>${escapeHtml(s.interface || '-')}</td>
<td>${escapeHtml(s.model || '-')}</td>
<td>${escapeHtml(s.manufacturer || '-')}</td>
<td>${s.size_gb} GB</td>
<td><code>${escapeHtml(s.serial_number || '-')}</code></td>
<td>${s.size_gb > 0 ? s.size_gb + ' GB' : '-'}</td>
<td>${s.serial_number ? '<code>' + escapeHtml(s.serial_number) + '</code>' : '-'}</td>
</tr>`;
});
html += '</tbody></table>';
@@ -362,12 +846,18 @@ function renderConfig(data) {
</div>
<table class="config-table"><thead><tr><th>Слот</th><th>Модель</th><th>Производитель</th><th>BDF</th><th>PCIe</th><th>Серийный номер</th></tr></thead><tbody>`;
config.gpus.forEach(gpu => {
const pcieLink = formatPCIeLink(
gpu.current_link_width || gpu.link_width,
gpu.current_link_speed || gpu.link_speed,
gpu.max_link_width,
gpu.max_link_speed
);
html += `<tr>
<td>${escapeHtml(gpu.slot || '-')}</td>
<td>${escapeHtml(gpu.model || '-')}</td>
<td>${escapeHtml(gpu.manufacturer || '-')}</td>
<td><code>${escapeHtml(gpu.bdf || '-')}</code></td>
<td>x${gpu.link_width || '-'} ${escapeHtml(gpu.link_speed || '-')}</td>
<td>${pcieLink}</td>
<td><code>${escapeHtml(gpu.serial_number || '-')}</code></td>
</tr>`;
});
@@ -416,7 +906,12 @@ function renderConfig(data) {
if (config.pcie_devices && config.pcie_devices.length > 0) {
html += '<h3>PCIe устройства</h3><table class="config-table"><thead><tr><th>Слот</th><th>BDF</th><th>Тип</th><th>Производитель</th><th>Vendor:Device ID</th><th>PCIe Link</th></tr></thead><tbody>';
config.pcie_devices.forEach(p => {
const pcieLink = formatPCIeLink(
p.link_width,
p.link_speed,
p.max_link_width,
p.max_link_speed
);
html += `<tr>
<td>${escapeHtml(p.slot || '-')}</td>
<td><code>${escapeHtml(p.bdf || '-')}</code></td>
@@ -592,7 +1087,8 @@ function renderSerials(serials) {
};
serials.forEach(item => {
// Skip items without serial number or with N/A
if (!item.serial_number || item.serial_number === 'N/A') return;
const row = document.createElement('tr');
row.innerHTML = `
<td><span class="category-badge ${item.category.toLowerCase()}">${categoryNames[item.category] || item.category}</span></td>
@@ -711,23 +1207,59 @@ function escapeHtml(text) {
return div.innerHTML;
}
function formatPCIeLink(currentWidth, currentSpeed, maxWidth, maxSpeed) {
// Helper to convert speed to generation
function speedToGen(speed) {
if (!speed) return '';
const gtMatch = speed.match(/(\d+\.?\d*)\s*GT/i);
if (gtMatch) {
const gts = parseFloat(gtMatch[1]);
if (gts >= 32) gen = 'Gen5';
else if (gts >= 16) gen = 'Gen4';
else if (gts >= 8) gen = 'Gen3';
else if (gts >= 5) gen = 'Gen2';
else if (gts >= 2.5) gen = 'Gen1';
if (gts >= 32) return 'Gen5';
if (gts >= 16) return 'Gen4';
if (gts >= 8) return 'Gen3';
if (gts >= 5) return 'Gen2';
if (gts >= 2.5) return 'Gen1';
}
return '';
}
// Helper to extract GT/s value for comparison
function extractGTs(speed) {
if (!speed) return 0;
const gtMatch = speed.match(/(\d+\.?\d*)\s*GT/i);
return gtMatch ? parseFloat(gtMatch[1]) : 0;
}
// If no data, return dash
if (!currentWidth && !currentSpeed) return '-';
const curGen = speedToGen(currentSpeed);
const maxGen = speedToGen(maxSpeed);
// Check if current is lower than max
const widthDegraded = maxWidth && currentWidth && currentWidth < maxWidth;
const speedDegraded = maxSpeed && currentSpeed && extractGTs(currentSpeed) < extractGTs(maxSpeed);
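// Example: a current link of x8 @ 8 GT/s against a max of x16 @ 16 GT/s renders
// as "x8 Gen3 / x16 Gen4" with the pcie-degraded highlight applied below.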
// Build current link string
const curWidthStr = currentWidth ? `x${currentWidth}` : '';
const curLinkStr = curGen ? `${curWidthStr} ${curGen}` : `${curWidthStr} ${currentSpeed || ''}`;
// Build max link string (if available)
let maxLinkStr = '';
if (maxWidth || maxSpeed) {
const maxWidthStr = maxWidth ? `x${maxWidth}` : '';
maxLinkStr = maxGen ? `${maxWidthStr} ${maxGen}` : `${maxWidthStr} ${maxSpeed || ''}`;
}
// Apply degraded class if needed
const degradedClass = (widthDegraded || speedDegraded) ? ' class="pcie-degraded"' : '';
// Format output: show "current" or "current / max" if max differs
if (maxLinkStr && (widthDegraded || speedDegraded)) {
return `<span${degradedClass}>${curLinkStr}</span> <span class="pcie-max">/ ${maxLinkStr}</span>`;
} else if (maxLinkStr && maxLinkStr !== curLinkStr) {
return `${curLinkStr} <span class="pcie-max">/ ${maxLinkStr}</span>`;
} else {
return curLinkStr;
}
}


@@ -14,17 +14,92 @@
<main>
<section id="upload-section">
<div class="upload-area" id="drop-zone">
<p>Перетащите архив сюда или</p>
<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip" hidden>
<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
<p class="hint">Поддерживаемые форматы: tar.gz, zip</p>
<div class="source-switch" role="tablist" aria-label="Источник данных">
<button type="button" class="source-switch-btn active" data-source-type="archive">Архив</button>
<button type="button" class="source-switch-btn" data-source-type="api">API</button>
</div>
<div id="archive-source-content">
<div class="upload-area" id="drop-zone">
<p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.zip,.txt,.log" hidden>
<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
<p class="hint">Поддерживаемые форматы: tar.gz, zip, json, txt, log</p>
</div>
<div id="upload-status"></div>
<div id="parsers-info" class="parsers-info"></div>
</div>
<div id="api-source-content" class="api-placeholder hidden">
<form id="api-connect-form" novalidate>
<h3>Подключение к BMC API</h3>
<div id="api-form-errors" class="form-errors hidden"></div>
<div class="api-form-grid">
<label class="api-form-field" for="api-host">
<span>Host</span>
<input id="api-host" name="host" type="text" placeholder="10.0.0.10 или bmc.example.local">
<span class="field-error" data-error-for="host"></span>
</label>
<label class="api-form-field" for="api-port">
<span>Порт</span>
<input id="api-port" name="port" type="number" min="1" max="65535" value="443" placeholder="443">
<span class="field-error" data-error-for="port"></span>
</label>
<label class="api-form-field" for="api-username">
<span>Username</span>
<input id="api-username" name="username" type="text" placeholder="admin">
<span class="field-error" data-error-for="username"></span>
</label>
<label class="api-form-field" id="api-password-field" for="api-password">
<span>Пароль</span>
<input id="api-password" name="password" type="password" autocomplete="current-password">
<span class="field-error" data-error-for="password"></span>
</label>
</div>
<div class="api-form-actions">
<button id="api-connect-btn" type="submit">Подключиться</button>
</div>
<div id="api-connect-status" class="api-connect-status"></div>
</form>
<section id="api-job-status" class="job-status hidden" aria-live="polite">
<div class="job-status-header">
<h4>Статус задачи сбора</h4>
<button id="cancel-job-btn" type="button">Отменить</button>
</div>
<div class="job-status-meta">
<div><span class="meta-label">jobId:</span> <code id="job-id-value">-</code></div>
<div>
<span class="meta-label">Статус:</span>
<span id="job-status-value" class="job-status-badge">Queued</span>
</div>
<div><span class="meta-label">Прогресс:</span> <span id="job-progress-value">0% · Шаг 0 из 4</span></div>
</div>
<div class="job-status-logs">
<p class="meta-label">Журнал шагов:</p>
<ul id="job-logs-list"></ul>
</div>
</section>
</div>
<div id="upload-status"></div>
<div id="parsers-info" class="parsers-info"></div>
</section>
<section id="data-section" class="hidden">
<div class="file-info">
<div class="parser-badge">
<span class="badge-label">Парсер:</span>
<span id="parser-name" class="badge-value"></span>
</div>
<div class="file-name">
<span class="badge-label">Файл:</span>
<span id="file-name" class="badge-value"></span>
</div>
</div>
<nav class="tabs">
<button class="tab active" data-tab="config">Конфигурация</button>
<button class="tab" data-tab="firmware">Прошивки</button>