Compare commits
12 Commits
ae588ae75a ... v1.3.0

Commits:
- 5e49adaf05
- c7b2a7ab29
- 0af3cee9b6
- 8715fcace4
- 1b1bc74fc7
- 77e25ddc02
- bcce975fd6
- 8b065c6cca
- aa22034944
- 7d9135dc63
- 80e726d756
- 92134a6cc1
14
CLAUDE.md
@@ -49,14 +49,24 @@ Registry: `internal/collector/registry.go`

Endpoints:
- `/api/export/csv`
- `/api/export/json`
- `/api/export/txt`
- `/api/export/reanimator`

Filename pattern for all exports:
`YYYY-MM-DD (SERVER MODEL) - SERVER SN.<ext>`

Notes:
- JSON export contains full `AnalysisResult`, including `raw_payloads`.
- TXT export is tabular and mirrors UI sections (no raw JSON section).
- **Reanimator export** (`/api/export/reanimator`):
  - Exports hardware data in Reanimator format for integration with asset tracking systems.
  - Format specification: `example/docs/INTEGRATION_GUIDE.md`
  - Requires `hardware.board.serial_number` to be present.
  - Key features:
    - Infers CPU manufacturer from model name (Intel/AMD/ARM/Ampere).
    - Generates PCIe serial numbers if missing: `{board_serial}-PCIE-{slot}`.
    - Adds status fields (defaults to "OK").
    - RFC3339 timestamp format.
    - Includes GPUs and NetworkAdapters as PCIe devices.
    - Filters out storage devices and PSUs without serial numbers.
## CLI flags (`cmd/logpile/main.go`)
@@ -15,7 +15,7 @@ LOGPile — standalone Go application for analyzing diagn

- normalized data (CPU/RAM/Storage/GPU/PSU/NIC/PCIe/Firmware),
- raw `redfish_tree` for future analysis.
- Loading a JSON snapshot back via `/api/upload` for offline work.
- Export to CSV / JSON / TXT.
- Export to CSV / JSON.

## Requirements

@@ -98,7 +98,6 @@ POST /api/collect

- `GET /api/export/csv` — serial numbers
- `GET /api/export/json` — full `AnalysisResult` (including `raw_payloads`)
- `GET /api/export/txt` — tabular report mirroring the UI sections

Exported file names:

@@ -123,7 +122,6 @@ GET /api/serials
GET /api/firmware
GET /api/export/csv
GET /api/export/json
GET /api/export/txt
DELETE /api/clear
POST /api/shutdown
```
@@ -141,7 +139,7 @@ cmd/logpile/main.go # entrypoint
internal/collector/ # live collectors (redfish, ipmi mock)
internal/parser/ # archive parsers
internal/server/ # HTTP handlers
internal/exporter/ # CSV/JSON/TXT export
internal/exporter/ # CSV/JSON export
internal/models/ # data contracts
web/ # embedded templates/static
```
227
REANIMATOR_EXPORT.md
Normal file
@@ -0,0 +1,227 @@
# Reanimator Export - Implementation Summary

## Overview

A new LOGPile export format, Reanimator, has been implemented for integration with server component asset tracking systems.

## Implemented components

### 1. Data models (`internal/exporter/reanimator_models.go`)

Structures defined for the Reanimator format:
- `ReanimatorExport` - root export structure
- `ReanimatorHardware` - container for all hardware components
- `ReanimatorBoard` - motherboard/server
- `ReanimatorCPU` - processors
- `ReanimatorMemory` - memory modules (DIMMs)
- `ReanimatorStorage` - drives
- `ReanimatorPCIe` - PCIe devices
- `ReanimatorPSU` - power supplies
- `ReanimatorFirmware` - firmware

### 2. Conversion functions (`internal/exporter/reanimator_converter.go`)

Main function: `ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, error)`

Helper functions:
- `inferCPUManufacturer()` - infers the CPU manufacturer from the model (Intel/AMD/ARM/Ampere)
- `generatePCIeSerialNumber()` - generates serial numbers for PCIe devices
- `inferStorageStatus()` - infers drive status
- `convertBoard()`, `convertCPUs()`, `convertMemory()`, etc. - conversion of the individual sections

**Key conversion features:**
- Automatic inference of the CPU manufacturer from the model
- Serial number generation for PCIe devices: `{board_serial}-PCIE-{slot}`
- GPUs and NetworkAdapters merged into the pcie_devices section
- Filtering of components without serial numbers (storage, PSU)
- Status normalization to the allowed values (`OK`, `Warning`, `Critical`, `Unknown`; `Empty` only for memory)
- RFC3339 format for collected_at
- target_host derived from filename (`redfish://`, `ipmi://`) when absent from the source
- `target_host` is optional: if it cannot be determined, the field is omitted from the JSON
- Normalization of `board.manufacturer` and `board.product_name`: the string `"NULL"` is treated as a missing value
- Normalization/sanitizing of `source_type` and `protocol`: only values allowed by the guide are included in the export
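The two most mechanical helpers can be sketched as follows. This is not the converter's actual code, only a plausible shape of the documented behavior (the matching rules inside `inferCPUManufacturer` are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// inferCPUManufacturer guesses the vendor from the model string, covering the
// four vendors named in the summary (Intel/AMD/ARM/Ampere).
func inferCPUManufacturer(model string) string {
	m := strings.ToUpper(model)
	switch {
	case strings.Contains(m, "INTEL") || strings.Contains(m, "XEON"):
		return "Intel"
	case strings.Contains(m, "AMD") || strings.Contains(m, "EPYC"):
		return "AMD"
	case strings.Contains(m, "AMPERE"):
		return "Ampere"
	case strings.Contains(m, "ARM"):
		return "ARM"
	default:
		return ""
	}
}

// generatePCIeSerialNumber follows the documented {board_serial}-PCIE-{slot} pattern.
func generatePCIeSerialNumber(boardSerial, slot string) string {
	return fmt.Sprintf("%s-PCIE-%s", boardSerial, slot)
}

func main() {
	fmt.Println(inferCPUManufacturer("INTEL(R) XEON(R) GOLD 6530")) // Intel
	fmt.Println(generatePCIeSerialNumber("21D634101", "PCIeCard1")) // 21D634101-PCIE-PCIeCard1
}
```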
### 3. HTTP endpoint

**Route:** `GET /api/export/reanimator`

**Handler:** `handleExportReanimator()` in `internal/server/handlers.go`

**Functionality:**
- Checks that hardware data is present
- Converts to the Reanimator format
- Returns indented JSON for readability
- Sets the Content-Disposition header for download
### 4. Frontend integration

An "Export Reanimator" button was added to the web UI:
- Location: the "Configuration" tab
- Uses the existing `exportData('reanimator')` function

### 5. Tests

**Unit tests** (`reanimator_converter_test.go`):
- `TestConvertToReanimator` - main conversion function
- `TestInferCPUManufacturer` - CPU manufacturer inference
- `TestGeneratePCIeSerialNumber` - serial number generation
- `TestInferStorageStatus` - drive status inference
- `TestConvertCPUs`, `TestConvertMemory`, etc. - tests for each component type

**Integration tests** (`reanimator_integration_test.go`):
- `TestFullReanimatorExport` - full export with realistic data
- `TestReanimatorExportWithoutTargetHost` - target_host derivation test

**Results:** All tests pass ✓
### 6. Documentation

`CLAUDE.md` updated:
- Added the `/api/export/reanimator` endpoint to the "Export behavior" section
- Described the key export features
- Added a link to the format specification

### 7. Examples

Created an export example: `example/docs/export-example-logpile.json`

## Export format

### Required fields:
- `collected_at` (RFC3339)
- `hardware.board.serial_number`

(`target_host` is derived from `filename` when possible and omitted otherwise; the integration guide marks it optional.)

### Export structure:

```json
{
  "filename": "redfish://10.10.10.103",
  "source_type": "api",
  "protocol": "redfish",
  "target_host": "10.10.10.103",
  "collected_at": "2026-02-10T15:30:00Z",
  "hardware": {
    "board": {...},
    "firmware": [...],
    "cpus": [...],
    "memory": [...],
    "storage": [...],
    "pcie_devices": [...],
    "power_supplies": [...]
  }
}
```
## Reanimator specification compliance

The format fully matches the specification in `example/docs/INTEGRATION_GUIDE.md`:

✓ All required fields present
✓ Correct data types
✓ RFC3339 time format
✓ Serial number generation for PCIe
✓ CPU manufacturer inference
✓ Component statuses
✓ Empty memory slots included (present=false)
## Implementation details

### LOGPile → Reanimator model mapping

| LOGPile | Reanimator | Notes |
|---------|------------|-------|
| `BoardInfo` | `board` | Direct mapping |
| `CPU` | `cpus` | + manufacturer (inferred) + status=`Unknown` when no actual status is available |
| `MemoryDIMM` | `memory` | Direct mapping |
| `Storage` | `storage` | + status=`Unknown` (the source provides no status) |
| `PCIeDevice` | `pcie_devices` | + model + status=`Unknown` |
| `GPU` | `pcie_devices` | Merged with `device_class=DisplayController` |
| `NetworkAdapter` | `pcie_devices` | Merged with `device_class=NetworkController` |
| `PSU` | `power_supplies` | Direct mapping |
| `FirmwareInfo` | `firmware` | Direct mapping |

### Data filtering

**Excluded from the export:**
- Storage without serial_number
- PSUs without serial_number or with present=false
- NetworkAdapters with present=false

**Included in the export:**
- Memory with present=false (as empty slots)
- PCIe devices without serial_number (one is generated)
## Usage

### Via the Web UI:
1. Upload an archive or collect data via the API
2. Open the "Configuration" tab
3. Click "Export Reanimator"

### Via the API:
```bash
curl http://localhost:8082/api/export/reanimator > reanimator.json
```

### Programmatically:
```go
import "git.mchus.pro/mchus/logpile/internal/exporter"

result := &models.AnalysisResult{...}
reanimatorData, err := exporter.ConvertToReanimator(result)
if err != nil {
    // handle error
}

jsonData, _ := json.MarshalIndent(reanimatorData, "", "  ")
```
## Testing

Running the tests:
```bash
# All tests
go test ./internal/exporter/...

# Reanimator tests only
go test ./internal/exporter/... -v -run Reanimator

# With coverage
go test ./internal/exporter/... -cover
```
## Changed files

**New files:**
- `internal/exporter/reanimator_models.go` (4.6 KB)
- `internal/exporter/reanimator_converter.go` (10 KB)
- `internal/exporter/reanimator_converter_test.go` (8.0 KB)
- `internal/exporter/reanimator_integration_test.go` (7.4 KB)
- `internal/exporter/generate_example_test.go` (4.3 KB)
- `example/docs/export-example-logpile.json` (2.3 KB)

**Modified files:**
- `internal/server/handlers.go` - handleExportReanimator added
- `internal/server/server.go` - route added
- `web/templates/index.html` - export button added
- `CLAUDE.md` - documentation updated
## Compatibility

- ✓ Backward compatible: existing exports (JSON/CSV) are unaffected
- ✓ Data format: `AnalysisResult` is unchanged
- ✓ API contracts: the new endpoint does not affect existing ones

## Future improvements

1. Real status data (Warning/Critical) for Storage
2. Extended component telemetry
3. Export validation against the Reanimator JSON schema
4. Incremental update support

---

**Status:** ✅ Implementation complete and tested
**Version:** LOGPile v1.2.1+
**Date:** 2026-02-12
992
docs/INTEGRATION_GUIDE.md
Normal file
@@ -0,0 +1,992 @@
# Reanimator Integration Guide

## Importing servers via JSON (Redfish/API)

This guide describes the JSON format for importing server hardware information collected via the Redfish API or other monitoring sources.

---

## Import principles

1. **Data snapshot** - the JSON contains the server state at collection time, with no historical information
2. **Automatic LOT detection** - component classification is determined by the application from vendor/model/type
3. **Component status** - each component carries a health status (OK, Warning, Critical, Unknown)
4. **Idempotency** - re-importing the same snapshot does not create duplicates
5. **Event-driven updates** - an import creates timeline events (LOG_COLLECTED, INSTALLED, REMOVED, FIRMWARE_CHANGED)

---

## JSON import format

> Important: the endpoint uses a strict JSON decoder (`DisallowUnknownFields`).
> Any unknown field (including in nested objects) results in `400 Bad Request`.
> Use only the `snake_case` keys from this guide.
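The strict-decoder behavior can be reproduced with the standard library; a minimal sketch with a cut-down request struct (the real schema has many more fields):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// importRequest is a fragment of the top-level schema, for illustration only.
type importRequest struct {
	CollectedAt string `json:"collected_at"`
	TargetHost  string `json:"target_host,omitempty"`
}

// decodeStrict mirrors the endpoint's behavior: unknown keys are rejected
// instead of being silently dropped; the server maps the error to 400.
func decodeStrict(raw []byte) (*importRequest, error) {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	var req importRequest
	if err := dec.Decode(&req); err != nil {
		return nil, err
	}
	return &req, nil
}

func main() {
	_, err := decodeStrict([]byte(`{"collected_at":"2026-02-10T15:30:00Z","extra":1}`))
	fmt.Println(err != nil) // true: "extra" is not a known field
}
```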
### Top-level structure

```json
{
  "filename": "redfish://10.10.10.103",
  "source_type": "api",
  "protocol": "redfish",
  "target_host": "10.10.10.103",
  "collected_at": "2026-02-10T15:30:00Z",
  "hardware": {
    "board": {...},
    "cpus": [...],
    "memory": [...],
    "storage": [...],
    "pcie_devices": [...],
    "power_supplies": [...],
    "firmware": [...]
  }
}
```

### Top-level fields

- `collected_at` (string RFC3339, required) - collection time
- `target_host` (string, optional) - server IP or hostname
- `hardware.board.serial_number` (string, required) - server/board serial number
- `source_type` (string, optional) - source type: `api`, `logfile`, `manual`
- `protocol` (string, optional) - collection protocol: `redfish`, `ipmi`, `snmp`, `ssh`
- `filename` (string, optional) - data source identifier
- `hardware` (object, required) - hardware component structure

---
## The hardware section

### 1. Board (Motherboard / Server)

**Purpose:** Core information about the server/motherboard. It is used to create/update the Asset.

```json
{
  "board": {
    "manufacturer": "Supermicro",
    "product_name": "X12DPG-QT6",
    "serial_number": "21D634101",
    "part_number": "X12DPG-QT6-REV1.01",
    "uuid": "d7ef2fe5-2fd0-11f0-910a-346f11040868"
  }
}
```

**Fields:**
- `serial_number` (string, required) - motherboard/server serial number (used as the Asset's `vendor_serial`)
- `manufacturer` (string, optional) - manufacturer (used as the Asset's `vendor`)
- `product_name` (string, optional) - model (used as the Asset's `model`)
- `part_number` (string, optional) - part number
- `uuid` (string, optional) - system UUID

**Note:** If `manufacturer` or `product_name` is "NULL", it is interpreted as missing.

---
### 2. CPUs (Processors)

**Purpose:** Information about the installed processors.

```json
{
  "cpus": [
    {
      "socket": 0,
      "model": "INTEL(R) XEON(R) GOLD 6530",
      "cores": 32,
      "threads": 64,
      "frequency_mhz": 2100,
      "max_frequency_mhz": 4000,
      "manufacturer": "Intel",
      "status": "OK"
    },
    {
      "socket": 1,
      "model": "INTEL(R) XEON(R) GOLD 6530",
      "cores": 32,
      "threads": 64,
      "frequency_mhz": 2100,
      "max_frequency_mhz": 4000,
      "manufacturer": "Intel",
      "status": "OK"
    }
  ]
}
```

**Fields:**
- `socket` (int, required) - socket number (used to form a unique identifier)
- `model` (string, required) - processor model
- `cores` (int, optional) - core count
- `threads` (int, optional) - thread count
- `frequency_mhz` (int, optional) - current frequency in MHz
- `max_frequency_mhz` (int, optional) - maximum frequency in MHz
- `manufacturer` (string, optional) - manufacturer (Intel, AMD, etc.)
- `status` (string, optional) - status: `OK`, `Warning`, `Critical`, `Unknown`

**serial_number generation:**
- Format: `{board_serial}-CPU-{socket}`
- Example: `21D634101-CPU-0`, `21D634101-CPU-1`

**LOT auto-detection:**
- Format: `CPU_{NORMALIZED_MODEL}`
- Example: `CPU_XEON_GOLD_6530`, `CPU_EPYC_7763`
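The serial and LOT formats above can be sketched in Go. The exact model-normalization rules are not spelled out in the guide, so the noise-stripping in `cpuLOT` (drop vendor prefixes, collapse non-alphanumerics to `_`) is an assumption calibrated to the examples:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// cpuSerial follows the documented {board_serial}-CPU-{socket} pattern.
func cpuSerial(boardSerial string, socket int) string {
	return fmt.Sprintf("%s-CPU-%d", boardSerial, socket)
}

// cpuLOT normalizes a model string into CPU_{NORMALIZED_MODEL}.
func cpuLOT(model string) string {
	m := strings.ToUpper(model)
	for _, noise := range []string{"INTEL(R)", "AMD", "(R)", "(TM)"} {
		m = strings.ReplaceAll(m, noise, " ")
	}
	// Collapse every run of non-alphanumerics into a single underscore.
	m = regexp.MustCompile(`[^A-Z0-9]+`).ReplaceAllString(strings.TrimSpace(m), "_")
	return "CPU_" + strings.Trim(m, "_")
}

func main() {
	fmt.Println(cpuSerial("21D634101", 0))             // 21D634101-CPU-0
	fmt.Println(cpuLOT("INTEL(R) XEON(R) GOLD 6530"))  // CPU_XEON_GOLD_6530
	fmt.Println(cpuLOT("AMD EPYC 7763"))               // CPU_EPYC_7763
}
```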
---
### 3. Memory (Memory modules)

**Purpose:** Information about memory modules (DIMMs).

```json
{
  "memory": [
    {
      "slot": "CPU0_C0D0",
      "location": "CPU0_C0D0",
      "present": true,
      "size_mb": 32768,
      "type": "DDR5",
      "max_speed_mhz": 4800,
      "current_speed_mhz": 4800,
      "manufacturer": "Hynix",
      "serial_number": "80AD032419E17CEEC1",
      "part_number": "HMCG88AGBRA191N",
      "status": "OK"
    },
    {
      "slot": "CPU0_C1D0",
      "location": "CPU0_C1D0",
      "present": false,
      "size_mb": 0,
      "type": null,
      "manufacturer": null,
      "serial_number": null,
      "part_number": null,
      "status": "Empty"
    }
  ]
}
```

**Fields:**
- `slot` (string, required) - slot identifier
- `location` (string, optional) - physical location
- `present` (bool, required) - whether a module is in the slot
- `size_mb` (int, optional) - size in MB
- `type` (string, optional) - memory type: `DDR4`, `DDR5`, `DDR3`, etc.
- `max_speed_mhz` (int, optional) - maximum speed
- `current_speed_mhz` (int, optional) - current speed
- `manufacturer` (string, optional) - manufacturer
- `serial_number` (string, required when present=true) - serial number
- `part_number` (string, optional) - part number
- `status` (string, optional) - status: `OK`, `Warning`, `Critical`, `Unknown`, `Empty`

**Processing:**
- If `present = false` or `status = "Empty"`, no component is created or updated
- If a module was in the previous snapshot but is absent from the current one, a REMOVED event is created

**LOT auto-detection:**
- Format: `DIMM_{TYPE}_{SIZE_GB}GB`
- Example: `DIMM_DDR5_32GB`, `DIMM_DDR4_64GB`
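The skip-empty-slots rule and the DIMM LOT format can be sketched together; struct and function names here are illustrative:

```go
package main

import "fmt"

type memoryDIMM struct {
	Slot    string
	Present bool
	SizeMB  int
	Type    string
	Status  string
}

// dimmLOT follows the documented DIMM_{TYPE}_{SIZE_GB}GB format.
func dimmLOT(d memoryDIMM) string {
	return fmt.Sprintf("DIMM_%s_%dGB", d.Type, d.SizeMB/1024)
}

// importable mirrors the processing rule above: present=false or
// status=Empty slots are skipped when creating components.
func importable(d memoryDIMM) bool {
	return d.Present && d.Status != "Empty"
}

func main() {
	dimms := []memoryDIMM{
		{Slot: "CPU0_C0D0", Present: true, SizeMB: 32768, Type: "DDR5", Status: "OK"},
		{Slot: "CPU0_C1D0", Present: false, Status: "Empty"},
	}
	for _, d := range dimms {
		if importable(d) {
			fmt.Println(d.Slot, dimmLOT(d)) // CPU0_C0D0 DIMM_DDR5_32GB
		}
	}
}
```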
---
### 4. Storage (Drives)

**Purpose:** Information about drives (SSD, HDD, NVMe).

```json
{
  "storage": [
    {
      "slot": "OB01",
      "type": "NVMe",
      "model": "INTEL SSDPF2KX076T1",
      "size_gb": 7680,
      "serial_number": "BTAX41900GF87P6DGN",
      "manufacturer": "Intel",
      "firmware": "9CV10510",
      "interface": "NVMe",
      "present": true,
      "status": "OK"
    },
    {
      "slot": "FP00HDD00",
      "type": "HDD",
      "model": "ST12000NM0008",
      "size_gb": 12000,
      "serial_number": "ZJV01234",
      "manufacturer": "Seagate",
      "firmware": "SN03",
      "interface": "SATA",
      "present": true,
      "status": "OK"
    }
  ]
}
```

**Fields:**
- `slot` (string, required) - slot identifier
- `type` (string, optional) - type: `NVMe`, `SSD`, `HDD`
- `model` (string, required) - drive model
- `size_gb` (int, optional) - size in GB
- `serial_number` (string, required) - serial number
- `manufacturer` (string, optional) - manufacturer (may be a hex VID such as "8086")
- `firmware` (string, optional) - firmware version
- `interface` (string, optional) - interface: `NVMe`, `SATA`, `SAS`
- `present` (bool, required) - whether a drive is in the slot
- `status` (string, optional) - status: `OK`, `Warning`, `Critical`, `Unknown`

**Firmware handling:**
- If the firmware version changed relative to the previous observation, a FIRMWARE_CHANGED event is created

**LOT auto-detection:**
- Format: `{TYPE}_{INTERFACE}_{SIZE_TB}TB` or `{TYPE}_{INTERFACE}_{SIZE_GB}GB`
- Example: `SSD_NVME_07.68TB`, `HDD_SATA_12TB`
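A sketch of the storage LOT formatting. The TB/GB cutoff and the zero-padded `07.68` rendering are inferred from the two examples above, not stated explicitly in the guide:

```go
package main

import (
	"fmt"
	"strings"
)

// storageLOT builds {TYPE}_{INTERFACE}_{SIZE}: drives of 1 TB and larger get a
// TB suffix (zero-padded fractional TB, matching the SSD_NVME_07.68TB example),
// smaller drives a GB suffix.
func storageLOT(typ, iface string, sizeGB int) string {
	typ, iface = strings.ToUpper(typ), strings.ToUpper(iface)
	if sizeGB >= 1000 {
		tb := float64(sizeGB) / 1000
		if tb == float64(int(tb)) {
			return fmt.Sprintf("%s_%s_%dTB", typ, iface, int(tb))
		}
		// %05.2f zero-pads to width 5 including the decimal point: 7.68 -> 07.68
		return fmt.Sprintf("%s_%s_%05.2fTB", typ, iface, tb)
	}
	return fmt.Sprintf("%s_%s_%dGB", typ, iface, sizeGB)
}

func main() {
	fmt.Println(storageLOT("SSD", "NVMe", 7680))  // SSD_NVME_07.68TB
	fmt.Println(storageLOT("HDD", "SATA", 12000)) // HDD_SATA_12TB
	fmt.Println(storageLOT("SSD", "SATA", 960))   // SSD_SATA_960GB
}
```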
---
### 5. Power Supplies

**Purpose:** Information about power supplies.

```json
{
  "power_supplies": [
    {
      "slot": "0",
      "present": true,
      "model": "GW-CRPS3000LW",
      "vendor": "Great Wall",
      "wattage_w": 3000,
      "serial_number": "2P06C102610",
      "part_number": "V0310C9000000000",
      "firmware": "00.03.05",
      "status": "OK",
      "input_type": "ACWideRange",
      "input_power_w": 137,
      "output_power_w": 104,
      "input_voltage": 215.25
    }
  ]
}
```

**Fields:**
- `slot` (string, required) - slot identifier
- `present` (bool, required) - whether the PSU is present
- `model` (string, optional) - model
- `vendor` (string, optional) - manufacturer
- `wattage_w` (int, optional) - rated power in watts
- `serial_number` (string, required when present=true) - serial number
- `part_number` (string, optional) - part number
- `firmware` (string, optional) - firmware version
- `status` (string, optional) - status: `OK`, `Warning`, `Critical`, `Unknown`
- `input_type` (string, optional) - input type
- `input_power_w` (int, optional) - input power (telemetry)
- `output_power_w` (int, optional) - output power (telemetry)
- `input_voltage` (float, optional) - input voltage (telemetry)

**Note:** Telemetry fields (input_power_w, output_power_w, input_voltage) are stored in the observation but do not affect the Component.

**LOT auto-detection:**
- Format: `PSU_{WATTAGE}W`
- Example: `PSU_3000W`, `PSU_1600W`

---
### 6. PCIe Devices

**Purpose:** Information about PCIe devices (NICs, RAID controllers, GPUs, etc.).

```json
{
  "pcie_devices": [
    {
      "slot": "PCIeCard1",
      "vendor_id": 32902,
      "device_id": 2912,
      "bdf": "0000:18:00.0",
      "device_class": "MassStorageController",
      "manufacturer": "Intel",
      "model": "RAID Controller RSP3DD080F",
      "link_width": 8,
      "link_speed": "Gen3",
      "max_link_width": 8,
      "max_link_speed": "Gen3",
      "serial_number": "RAID-001-12345",
      "firmware": "50.9.1-4296",
      "status": "OK"
    },
    {
      "slot": "PCIeCard2",
      "vendor_id": 5555,
      "device_id": 4401,
      "bdf": "",
      "device_class": "NetworkController",
      "manufacturer": "Mellanox",
      "model": "ConnectX-5",
      "link_width": 16,
      "link_speed": "Gen3",
      "max_link_width": 16,
      "max_link_speed": "Gen3",
      "serial_number": "MT2892012345",
      "status": "OK"
    }
  ]
}
```

**Fields:**
- `slot` (string, required) - slot identifier
- `vendor_id` (int, optional) - PCI Vendor ID (hex value written as decimal)
- `device_id` (int, optional) - PCI Device ID (hex value written as decimal)
- `bdf` (string, optional) - Bus:Device.Function (e.g. "0000:18:00.0")
- `device_class` (string, optional) - device class: `NetworkController`, `MassStorageController`, `DisplayController`, etc.
- `manufacturer` (string, optional) - manufacturer
- `model` (string, optional) - device model
- `link_width` (int, optional) - current link width (x1, x4, x8, x16)
- `link_speed` (string, optional) - current link speed (Gen3, Gen4, Gen5)
- `max_link_width` (int, optional) - maximum link width
- `max_link_speed` (string, optional) - maximum link speed
- `serial_number` (string, optional) - serial number (if available, otherwise generated)
- `firmware` (string, optional) - firmware version
- `status` (string, optional) - status: `OK`, `Warning`, `Critical`, `Unknown`

**serial_number generation (when absent):**
- Format: `{board_serial}-PCIE-{slot}`
- Example: `21D634101-PCIE-PCIeCard1`

**LOT auto-detection:**
- Format: `PCIE_{DEVICE_CLASS}_{NORMALIZED_MODEL}` or `PCIE_{DEVICE_CLASS}_{VENDOR_ID}_{DEVICE_ID}`
- Example: `PCIE_NETWORK_CONNECTX5`, `PCIE_STORAGE_32902_2912`
---
### 7. Firmware (System component firmware)

**Purpose:** Firmware version information for system components (BIOS, BMC, etc.).

```json
{
  "firmware": [
    {
      "device_name": "BIOS",
      "version": "06.08.05 (2025-05-15 18:39:00)"
    },
    {
      "device_name": "BMC",
      "version": "5.17.00 (2025-04-22 12:06:31)"
    }
  ]
}
```

**Fields:**
- `device_name` (string, required) - device name: `BIOS`, `BMC`, `CPLD`, etc.
- `version` (string, required) - firmware version

**Processing:**
- Firmware data is stored in the observation
- Version changes for system component firmware create FIRMWARE_CHANGED events for the Asset

---
## Full JSON example

```json
{
  "filename": "redfish://10.10.10.103",
  "source_type": "api",
  "protocol": "redfish",
  "target_host": "10.10.10.103",
  "collected_at": "2026-02-10T15:30:00Z",
  "hardware": {
    "board": {
      "manufacturer": "Supermicro",
      "product_name": "X12DPG-QT6",
      "serial_number": "21D634101",
      "part_number": "X12DPG-QT6-REV1.01",
      "uuid": "d7ef2fe5-2fd0-11f0-910a-346f11040868"
    },
    "firmware": [
      {
        "device_name": "BIOS",
        "version": "06.08.05"
      },
      {
        "device_name": "BMC",
        "version": "5.17.00"
      }
    ],
    "cpus": [
      {
        "socket": 0,
        "model": "INTEL(R) XEON(R) GOLD 6530",
        "cores": 32,
        "threads": 64,
        "frequency_mhz": 2100,
        "max_frequency_mhz": 4000,
        "manufacturer": "Intel",
        "status": "OK"
      },
      {
        "socket": 1,
        "model": "INTEL(R) XEON(R) GOLD 6530",
        "cores": 32,
        "threads": 64,
        "frequency_mhz": 2100,
        "max_frequency_mhz": 4000,
        "manufacturer": "Intel",
        "status": "OK"
      }
    ],
    "memory": [
      {
        "slot": "CPU0_C0D0",
        "location": "CPU0_C0D0",
        "present": true,
        "size_mb": 32768,
        "type": "DDR5",
        "max_speed_mhz": 4800,
        "current_speed_mhz": 4800,
        "manufacturer": "Hynix",
        "serial_number": "80AD032419E17CEEC1",
        "part_number": "HMCG88AGBRA191N",
        "status": "OK"
      },
      {
        "slot": "CPU1_C0D0",
        "location": "CPU1_C0D0",
        "present": true,
        "size_mb": 32768,
        "type": "DDR5",
        "max_speed_mhz": 4800,
        "current_speed_mhz": 4800,
        "manufacturer": "Hynix",
        "serial_number": "80AD032419E17D6FBA",
        "part_number": "HMCG88AGBRA191N",
        "status": "OK"
      }
    ],
    "storage": [
      {
        "slot": "OB01",
        "type": "NVMe",
        "model": "INTEL SSDPF2KX076T1",
        "size_gb": 7680,
        "serial_number": "BTAX41900GF87P6DGN",
        "manufacturer": "Intel",
        "firmware": "9CV10510",
        "interface": "NVMe",
        "present": true,
        "status": "OK"
      },
      {
        "slot": "OB02",
        "type": "NVMe",
        "model": "INTEL SSDPF2KX076T1",
        "size_gb": 7680,
        "serial_number": "BTAX41900BEG7P6DGN",
        "manufacturer": "Intel",
        "firmware": "9CV10510",
        "interface": "NVMe",
        "present": true,
        "status": "OK"
      }
    ],
    "pcie_devices": [
      {
        "slot": "PCIeCard1",
        "vendor_id": 32902,
        "device_id": 2912,
        "bdf": "0000:18:00.0",
        "device_class": "MassStorageController",
        "manufacturer": "Intel",
        "model": "RAID Controller",
        "serial_number": "RAID-001-12345",
        "status": "OK"
      }
    ],
    "power_supplies": [
      {
        "slot": "0",
        "present": true,
        "model": "GW-CRPS3000LW",
        "vendor": "Great Wall",
        "wattage_w": 3000,
        "serial_number": "2P06C102610",
        "part_number": "V0310C9000000000",
        "firmware": "00.03.05",
        "status": "OK",
        "input_power_w": 137,
        "output_power_w": 104,
        "input_voltage": 215.25
      }
    ]
  }
}
```

---
## Import processing flow

### 1. Input validation

- Check required fields: `collected_at`, `hardware.board.serial_number`
- Check the `collected_at` format (RFC3339)
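The validation step above can be sketched with the standard `time` package; the struct fragment and error texts are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type board struct {
	SerialNumber string `json:"serial_number"`
}
type hardware struct {
	Board board `json:"board"`
}
type importRequest struct {
	CollectedAt string   `json:"collected_at"`
	Hardware    hardware `json:"hardware"`
}

// validate performs the two documented checks: RFC3339 timestamp and a
// non-empty board serial number.
func validate(req importRequest) error {
	if _, err := time.Parse(time.RFC3339, req.CollectedAt); err != nil {
		return fmt.Errorf("collected_at must be RFC3339: %w", err)
	}
	if req.Hardware.Board.SerialNumber == "" {
		return errors.New("hardware.board.serial_number is required")
	}
	return nil
}

func main() {
	ok := importRequest{
		CollectedAt: "2026-02-10T15:30:00Z",
		Hardware:    hardware{Board: board{SerialNumber: "21D634101"}},
	}
	fmt.Println(validate(ok))                                          // <nil>
	fmt.Println(validate(importRequest{CollectedAt: "yesterday"}) != nil) // true
}
```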
### 2. Finding or creating the Asset

**Lookup:** by `hardware.board.serial_number` (= `vendor_serial` in the assets table)

**Create/Update:**
```
vendor_serial = board.serial_number
vendor = board.manufacturer (if != "NULL")
model = board.product_name (if != "NULL")
name = target_host (if provided), otherwise `hardware.board.serial_number`
```

**Note:** An Asset must be linked to a Project. If project_id is not set, the default project for the import is used.
### 3. Component processing

For each component type (cpus, memory, storage, pcie_devices, power_supplies):

#### 3.1. Filtering

- Skip components with `present = false` (except for creating REMOVED events).
- Skip components without a serial_number (after generation, where applicable).

#### 3.2. LOT determination

- Automatically determine the LOT from vendor/model/type/size.
- Create the LOT if it does not exist yet.
- Link the component to `lot_id`.

#### 3.3. Finding or creating a Component

**Lookup:** by `vendor_serial`

**Create/Update:**
```
vendor_serial = {serial_number or generated}
vendor = {manufacturer}
model = {model}
lot_id = {auto-determined lot}
```

#### 3.4. Creating an Observation

An observation record is created for every component:
```
log_bundle_id = {created bundle}
asset_id = {asset.id}
component_id = {component.id}
observed_at = collected_at
```

Additional data is stored in the observation's JSON field (slot, status, firmware, telemetry, etc.).

#### 3.5. Updating Installations

**Logic:**
- If the component is already installed in this asset (`installations.removed_at IS NULL`), do nothing.
- If the component was previously in another asset, close the old installation (set `removed_at`).
- Create a new installation:
```
asset_id = {asset.id}
component_id = {component.id}
installed_at = collected_at (or first_seen_at if the component is new)
removed_at = NULL
```

#### 3.6. Detecting removed components

Compare the current snapshot with the previous observation for this asset:
- If a component was present in the previous observation but is missing from the current one, close its installation (`removed_at = collected_at`).
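The reconciliation in 3.5 and 3.6 boils down to a set difference over serial numbers. A minimal sketch (the function name and the plain-string representation of components are illustrative, not the real ingest API):

```go
package main

import "fmt"

// diffSnapshots compares the serial numbers seen in the previous and current
// snapshots of one asset and reports which components appeared (new
// installations + INSTALLED events) and which disappeared (close installation
// + REMOVED event).
func diffSnapshots(previous, current []string) (installed, removed []string) {
	prev := make(map[string]bool, len(previous))
	for _, sn := range previous {
		prev[sn] = true
	}
	curr := make(map[string]bool, len(current))
	for _, sn := range current {
		curr[sn] = true
		if !prev[sn] {
			installed = append(installed, sn) // create installation, INSTALLED event
		}
	}
	for _, sn := range previous {
		if !curr[sn] {
			removed = append(removed, sn) // set removed_at, REMOVED event
		}
	}
	return installed, removed
}

func main() {
	inst, rem := diffSnapshots(
		[]string{"OLD-DIMM-001", "BTAX41900GF87P6DGN"},
		[]string{"NEW-DIMM-002", "BTAX41900GF87P6DGN"},
	)
	fmt.Println(inst, rem) // [NEW-DIMM-002] [OLD-DIMM-001]
}
```

This is the same logic Example 3 below walks through for a DIMM replacement; components present in both snapshots fall through untouched.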
### 4. Creating Timeline Events

Events are created automatically based on detected changes:

**LOG_COLLECTED:**
```
subject_type = "asset"
subject_id = asset.id
event_type = "LOG_COLLECTED"
event_time = collected_at
```

**INSTALLED:** (when a new installation is created)
```
subject_type = "component"
subject_id = component.id
event_type = "INSTALLED"
event_time = installed_at
asset_id = asset.id
component_id = component.id
```

**REMOVED:** (when an installation is closed)
```
subject_type = "component"
subject_id = component.id
event_type = "REMOVED"
event_time = removed_at
asset_id = asset.id
component_id = component.id
```

**FIRMWARE_CHANGED:** (when a firmware version changes)
```
subject_type = "component" (or "asset" for BIOS/BMC)
subject_id = {id}
event_type = "FIRMWARE_CHANGED"
event_time = collected_at
firmware_version = {new version}
```

### 5. Component status handling

Statuses (`OK`, `Warning`, `Critical`, `Unknown`) are stored in the observation and can be used for:
- Degradation trend analysis
- Automatic creation of failure events (when `status = "Critical"`)
- Dashboard display of the current state

---
## Import API endpoint

```http
POST /ingest/hardware
Content-Type: application/json

{
  "collected_at": "2026-02-10T15:30:00Z",
  "target_host": "10.10.10.103",
  "hardware": {...}
}
```

### Success response (201 Created)

```json
{
  "status": "success",
  "bundle_id": "lb_01J...",
  "asset_id": "mach_01J...",
  "collected_at": "2026-02-10T15:30:00Z",
  "duplicate": false,
  "summary": {
    "parts_observed": 15,
    "parts_created": 2,
    "parts_updated": 13,
    "installations_created": 2,
    "installations_closed": 1,
    "timeline_events_created": 9,
    "failure_events_created": 1
  }
}
```

### Duplicate response (200 OK)

```json
{
  "status": "success",
  "bundle_id": "lb_01J...",
  "asset_id": "mach_01J...",
  "collected_at": "2026-02-10T15:30:00Z",
  "duplicate": true,
  "message": "LogBundle with this content hash already exists"
}
```

### Error response (400 Bad Request)

```json
{
  "status": "error",
  "error": "validation_failed",
  "details": {
    "field": "hardware.board.serial_number",
    "message": "serial_number is required"
  }
}
```

### Common causes of `400 Bad Request`

- Unknown extra fields in the JSON (even deep inside nested objects).
- A misspelled key name (for example, `targetHost` instead of `target_host`).
- A malformed date (`collected_at` must be RFC3339).
- An empty `hardware.board.serial_number`.
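Most of these causes can be caught client-side before posting. A minimal pre-flight check for the two field-level rules (the function is illustrative; the checks mirror the request schema above):

```go
package main

import (
	"fmt"
	"time"
)

// validateSnapshot checks the two fields that most often cause a 400 response.
func validateSnapshot(collectedAt, boardSerial string) error {
	// collected_at must be RFC3339, e.g. "2026-02-10T15:30:00Z".
	if _, err := time.Parse(time.RFC3339, collectedAt); err != nil {
		return fmt.Errorf("collected_at is not RFC3339: %w", err)
	}
	// hardware.board.serial_number is required and must be non-empty.
	if boardSerial == "" {
		return fmt.Errorf("hardware.board.serial_number is required")
	}
	return nil
}

func main() {
	fmt.Println(validateSnapshot("2026-02-10T15:30:00Z", "TEST-SERVER-001")) // <nil>
	fmt.Println(validateSnapshot("10.02.2026 15:30", "TEST-SERVER-001"))     // parse error
}
```

Unknown extra fields and misspelled keys are best caught by decoding the payload with a strict decoder (e.g. `json.Decoder.DisallowUnknownFields`) before sending.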
---

## LOT auto-determination rules

### CPU
```
Format: CPU_{VENDOR}_{MODEL_NORMALIZED}
Examples:
  "INTEL(R) XEON(R) GOLD 6530" -> "CPU_INTEL_XEON_GOLD_6530"
  "AMD EPYC 7763" -> "CPU_AMD_EPYC_7763"
```

### Memory (DIMM)
```
Format: DIMM_{TYPE}_{SIZE_GB}GB
Examples:
  DDR5 32GB -> "DIMM_DDR5_32GB"
  DDR4 64GB -> "DIMM_DDR4_64GB"
```

### Storage
```
Format: {TYPE}_{INTERFACE}_{SIZE_TB}TB (or GB for small drives)
Examples:
  NVMe 7.68TB -> "SSD_NVME_07.68TB"
  HDD 12TB -> "HDD_SATA_12TB"
  SSD 960GB -> "SSD_SATA_0.96TB" or "SSD_SATA_960GB"
```

### Power Supply
```
Format: PSU_{WATTAGE}W_{VENDOR_NORMALIZED}
Examples:
  3000W Great Wall -> "PSU_3000W_GREAT_WALL"
  1600W Delta -> "PSU_1600W_DELTA"
```

### PCIe Device
```
Format: PCIE_{DEVICE_CLASS}_{MODEL_NORMALIZED} or PCIE_{DEVICE_CLASS}_{VENDOR_ID}_{DEVICE_ID}
Examples:
  Network Mellanox ConnectX-5 -> "PCIE_NETWORK_CONNECTX5"
  Storage Intel RAID -> "PCIE_STORAGE_INTEL_RAID"
  Unknown 32902:2912 -> "PCIE_STORAGE_32902_2912"
```

### Normalization rules

1. **Strip special characters:** `(`, `)`, `-`, `®`, `™` are removed; spaces are replaced with `_`.
2. **Uppercase:** all characters are upper-cased.
3. **Collapse underscores:** runs of underscores become a single `_`.
4. **Strip prefixes:** common prefixes such as `MODEL:`, `PN:`, etc. are removed.
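A rough sketch of these normalization rules (assumptions: trademark tokens like `(R)`/`(TM)` are dropped as whole tokens so that `INTEL(R)` becomes `INTEL` rather than `INTELR`, and only the `MODEL:`/`PN:` prefixes from rule 4 are handled):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	prefix      = regexp.MustCompile(`(?i)^(MODEL|PN):\s*`)  // rule 4: common prefixes
	trademark   = regexp.MustCompile(`(?i)\((R|TM)\)|®|™`)   // rule 1: trademark tokens
	specials    = regexp.MustCompile(`[()\-]`)               // rule 1: leftover specials
	underscores = regexp.MustCompile(`_+`)                   // rule 3: collapse runs
)

// normalizeModel applies the normalization rules above to a model string.
func normalizeModel(s string) string {
	s = prefix.ReplaceAllString(s, "")
	s = trademark.ReplaceAllString(s, "")
	s = specials.ReplaceAllString(s, "")
	s = strings.ReplaceAll(strings.TrimSpace(s), " ", "_")
	s = underscores.ReplaceAllString(s, "_")
	return strings.ToUpper(s) // rule 2
}

func main() {
	fmt.Println("CPU_" + normalizeModel("INTEL(R) XEON(R) GOLD 6530")) // CPU_INTEL_XEON_GOLD_6530
	fmt.Println("CPU_" + normalizeModel("AMD EPYC 7763"))              // CPU_AMD_EPYC_7763
}
```

Multiple consecutive spaces are handled implicitly: each space becomes `_`, and rule 3 then collapses the run.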
---

## Component statuses

### Possible values

- `OK` - the component is healthy
- `Warning` - there are warnings (degraded, warning threshold)
- `Critical` - critical state (failed, error)
- `Unknown` - the status is unknown or unsupported
- `Empty` - the slot is empty (memory and pcie_devices only)

### Status handling

**OK:**
- Normal processing; no additional actions.

**Warning:**
- Create an observation with a warning flag.
- Optionally: create a `COMPONENT_WARNING` timeline event.

**Critical:**
- Create an observation with a critical flag.
- **Automatically create a failure_event** for this component.
- Create a `COMPONENT_FAILED` timeline event.

**Unknown:**
- Store as-is; treat the component as working.

**Empty:**
- Do not create a component/observation for this slot.
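The per-status branching above can be summarized as a pure mapping from status to side effects (the `StatusActions` type and field names are illustrative, not the real ingest API):

```go
package main

import "fmt"

// StatusActions lists the side effects the ingest pipeline performs for a
// given component status.
type StatusActions struct {
	CreateObservation bool
	FailureEvent      bool
	TimelineEvent     string // "" means no timeline event
}

func actionsFor(status string) StatusActions {
	switch status {
	case "OK", "Unknown", "": // a missing status is treated as Unknown
		return StatusActions{CreateObservation: true}
	case "Warning":
		return StatusActions{CreateObservation: true, TimelineEvent: "COMPONENT_WARNING"}
	case "Critical":
		return StatusActions{CreateObservation: true, FailureEvent: true, TimelineEvent: "COMPONENT_FAILED"}
	case "Empty":
		return StatusActions{} // skip the slot entirely
	}
	return StatusActions{CreateObservation: true}
}

func main() {
	fmt.Printf("%+v\n", actionsFor("Critical"))
}
```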
---

## Handling missing fields

### serial_number

**CPU:** generated as `{board_serial}-CPU-{socket}`

**PCIe Device:** generated as `{board_serial}-PCIE-{slot}` (when serial_number is "N/A" or empty)

**Other components:** if the serial_number is missing and cannot be generated, the component is skipped.
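Both generation rules share one shape, so a single helper suffices (the helper name is illustrative; the `{board_serial}-{KIND}-{slot}` pattern is the one stated above):

```go
package main

import "fmt"

// syntheticSerial builds a deterministic serial number for components that
// don't report one, following the {board_serial}-{KIND}-{slot} convention.
// Being derived from the board serial, it stays stable across snapshots of
// the same server.
func syntheticSerial(boardSerial, kind, slot string) string {
	return fmt.Sprintf("%s-%s-%s", boardSerial, kind, slot)
}

func main() {
	fmt.Println(syntheticSerial("21D634101", "PCIE", "PCIeCard1")) // 21D634101-PCIE-PCIeCard1
	fmt.Println(syntheticSerial("21D634101", "CPU", "0"))          // 21D634101-CPU-0
}
```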
### manufacturer

**If vendor_id is present (PCIe):** a PCI vendor lookup table is used:
- `8086` -> `Intel`
- `10de` -> `NVIDIA`
- `15b3` -> `Mellanox`
- etc.

**If "NULL" or empty:** stored as NULL in the database.
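The lookup is a plain map keyed by the 16-bit PCI vendor ID (only the vendors listed above are shown; a real table would carry many more entries):

```go
package main

import "fmt"

// pciVendors maps well-known PCI vendor IDs to manufacturer names.
var pciVendors = map[uint16]string{
	0x8086: "Intel",
	0x10de: "NVIDIA",
	0x15b3: "Mellanox",
}

// vendorName resolves a PCI vendor ID, returning "" (stored as NULL)
// when the ID is unknown.
func vendorName(id uint16) string {
	return pciVendors[id]
}

func main() {
	fmt.Println(vendorName(0x8086)) // Intel
	fmt.Println(vendorName(32902))  // Intel (32902 is decimal for 0x8086, as in the PCIe LOT example)
}
```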
### status

**If missing:** treated as `Unknown`.

### firmware

**If missing:** no FIRMWARE_CHANGED event is created.

---
## Usage examples

### Example 1: Minimal snapshot

```json
{
  "collected_at": "2026-02-10T15:30:00Z",
  "target_host": "192.168.1.100",
  "hardware": {
    "board": {
      "serial_number": "TEST-SERVER-001"
    },
    "cpus": [
      {
        "socket": 0,
        "model": "Intel Xeon Gold 6530"
      }
    ],
    "memory": [],
    "storage": [],
    "pcie_devices": [],
    "power_supplies": []
  }
}
```

### Example 2: Server with a failed disk

```json
{
  "collected_at": "2026-02-10T15:30:00Z",
  "target_host": "prod-db-01",
  "hardware": {
    "board": {
      "manufacturer": "Dell",
      "product_name": "PowerEdge R740",
      "serial_number": "CN7475162Q0123"
    },
    "storage": [
      {
        "slot": "Disk.Bay.0",
        "type": "SSD",
        "model": "Samsung PM1733",
        "serial_number": "S5GUNG0N123456",
        "firmware": "9CV10510",
        "interface": "NVMe",
        "present": true,
        "status": "Critical"
      },
      {
        "slot": "Disk.Bay.1",
        "type": "SSD",
        "model": "Samsung PM1733",
        "serial_number": "S5GUNG0N123457",
        "firmware": "9CV10510",
        "interface": "NVMe",
        "present": true,
        "status": "OK"
      }
    ]
  }
}
```

**Processing:**
- Disk.Bay.0 gets status Critical.
- A failure_event is created automatically for component S5GUNG0N123456.
- A COMPONENT_FAILED timeline event is created.
### Example 3: Memory replacement

**Snapshot 1 (before replacement):**
```json
{
  "collected_at": "2026-02-09T10:00:00Z",
  "target_host": "web-01",
  "hardware": {
    "board": {"serial_number": "SRV001"},
    "memory": [
      {
        "slot": "DIMM_A1",
        "serial_number": "OLD-DIMM-001",
        "present": true,
        "status": "OK"
      }
    ]
  }
}
```

**Snapshot 2 (after replacement):**
```json
{
  "collected_at": "2026-02-10T14:00:00Z",
  "target_host": "web-01",
  "hardware": {
    "board": {"serial_number": "SRV001"},
    "memory": [
      {
        "slot": "DIMM_A1",
        "serial_number": "NEW-DIMM-002",
        "present": true,
        "status": "OK"
      }
    ]
  }
}
```

**Processing:**
- Component OLD-DIMM-001: its installation is closed (`removed_at = 2026-02-10T14:00:00Z`) and a REMOVED event is created.
- Component NEW-DIMM-002: a new component is created, along with an installation and an INSTALLED event.

---
## Integration with existing code

The current `/ingest/logbundle` endpoint already implements part of this logic. The new format must:

1. **Be backward compatible** - the old format keeps working.
2. **Reuse the same infrastructure** - LogBundle, Observations, Installations.
3. **Extend the observation JSON** - add status, slot, and telemetry fields.
4. **Add LOT auto-determination** - a new function in `internal/ingest`.
5. **Add status handling** - automatic creation of failure events.

### Proposed changes

**New endpoint:** `POST /ingest/hardware` (accepts the new format)

**Old endpoint:** `POST /ingest/logbundle` (unchanged)

**Shared logic:** both endpoints use a common `ingest.Service` with the extended processing.

---

## Next steps

1. **Implement a parser for the new format** - `internal/ingest/parser_hardware.go`
2. **Add LOT auto-determination** - `internal/ingest/lot_classifier.go`
3. **Extend the observation model** - add JSON fields for status, slot, etc.
4. **Implement status handling** - automatic creation of failure events.
5. **Add the endpoint** - `POST /ingest/hardware`
6. **Write tests** - unit + integration tests for the new format.
7. **Document the API** - OpenAPI specification.
@@ -8,7 +8,6 @@ Release date: 2026-02-04
- Upload flow now accepts JSON snapshots in addition to archives, enabling offline re-open of live Redfish collections.
- Export UX improved:
  - Export filenames now follow `YYYY-MM-DD (SERVER MODEL) - SERVER SN`.
  - TXT export now outputs tabular sections matching web UI views (no raw JSON dump).
- Live API UI improvements: parser/file badges for Redfish sessions and clearer upload format messaging.
- Redfish progress logs are more informative (snapshot stage and active top-level roots).
- Build/distribution hardening:
@@ -3,9 +3,7 @@ package exporter
import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"io"
	"text/tabwriter"

	"git.mchus.pro/mchus/logpile/internal/models"
)
@@ -114,221 +112,3 @@ func (e *Exporter) ExportJSON(w io.Writer) error {
	encoder.SetIndent("", " ")
	return encoder.Encode(e.result)
}

// ExportTXT exports a human-readable text report
func (e *Exporter) ExportTXT(w io.Writer) error {
	fmt.Fprintln(w, "LOGPile Analysis Report - mchus.pro")
	fmt.Fprintln(w, "====================================")
	fmt.Fprintln(w)

	if e.result == nil {
		fmt.Fprintln(w, "No data loaded.")
		return nil
	}

	fmt.Fprintf(w, "File:\t%s\n", e.result.Filename)
	fmt.Fprintf(w, "Source:\t%s\n", e.result.SourceType)
	fmt.Fprintf(w, "Protocol:\t%s\n", e.result.Protocol)
	fmt.Fprintf(w, "Target:\t%s\n", e.result.TargetHost)
	fmt.Fprintln(w)

	// Server model and serial number
	if e.result.Hardware != nil && e.result.Hardware.BoardInfo.ProductName != "" {
		fmt.Fprintf(w, "Server Model:\t%s\n", e.result.Hardware.BoardInfo.ProductName)
		fmt.Fprintf(w, "Serial Number:\t%s\n", e.result.Hardware.BoardInfo.SerialNumber)
	}
	fmt.Fprintln(w)

	// Hardware summary
	if e.result.Hardware != nil {
		hw := e.result.Hardware

		// Firmware tab
		if len(hw.Firmware) > 0 {
			fmt.Fprintln(w, "FIRMWARE VERSIONS")
			fmt.Fprintln(w, "-----------------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Component\tVersion\tBuild Time")
			for _, fw := range hw.Firmware {
				fmt.Fprintf(tw, "%s\t%s\t%s\n", fw.DeviceName, fw.Version, fw.BuildTime)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// CPU tab
		if len(hw.CPUs) > 0 {
			fmt.Fprintln(w, "PROCESSORS")
			fmt.Fprintln(w, "----------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Socket\tModel\tCores\tThreads\tFreq MHz\tTurbo MHz\tTDP W\tPPIN/SN")
			for _, cpu := range hw.CPUs {
				id := cpu.SerialNumber
				if id == "" {
					id = cpu.PPIN
				}
				fmt.Fprintf(tw, "CPU%d\t%s\t%d\t%d\t%d\t%d\t%d\t%s\n",
					cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz, cpu.MaxFreqMHz, cpu.TDP, id)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// Memory tab
		if len(hw.Memory) > 0 {
			fmt.Fprintln(w, "MEMORY")
			fmt.Fprintln(w, "------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tPresent\tSize MB\tType\tSpeed MHz\tVendor\tModel/PN\tSerial\tStatus")
			for _, mem := range hw.Memory {
				location := mem.Location
				if location == "" {
					location = mem.Slot
				}
				fmt.Fprintf(tw, "%s\t%t\t%d\t%s\t%d\t%s\t%s\t%s\t%s\n",
					location, mem.Present, mem.SizeMB, mem.Type, mem.CurrentSpeedMHz, mem.Manufacturer, mem.PartNumber, mem.SerialNumber, mem.Status)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// Power tab
		if len(hw.PowerSupply) > 0 {
			fmt.Fprintln(w, "POWER SUPPLIES")
			fmt.Fprintln(w, "--------------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tPresent\tVendor\tModel\tWattage W\tInput W\tOutput W\tInput V\tTemp C\tStatus\tSerial")
			for _, psu := range hw.PowerSupply {
				fmt.Fprintf(tw, "%s\t%t\t%s\t%s\t%d\t%d\t%d\t%.0f\t%d\t%s\t%s\n",
					psu.Slot, psu.Present, psu.Vendor, psu.Model, psu.WattageW, psu.InputPowerW, psu.OutputPowerW, psu.InputVoltage, psu.TemperatureC, psu.Status, psu.SerialNumber)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// Storage tab
		if len(hw.Storage) > 0 {
			fmt.Fprintln(w, "STORAGE")
			fmt.Fprintln(w, "-------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tPresent\tType\tInterface\tModel\tSize GB\tVendor\tFirmware\tSerial")
			for _, stor := range hw.Storage {
				fmt.Fprintf(tw, "%s\t%t\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n",
					stor.Slot, stor.Present, stor.Type, stor.Interface, stor.Model, stor.SizeGB, stor.Manufacturer, stor.Firmware, stor.SerialNumber)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// GPU tab
		if len(hw.GPUs) > 0 {
			fmt.Fprintln(w, "GPUS")
			fmt.Fprintln(w, "----")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tModel\tVendor\tBDF\tPCIe\tSerial\tStatus")
			for _, gpu := range hw.GPUs {
				link := fmt.Sprintf("x%d %s", gpu.CurrentLinkWidth, gpu.CurrentLinkSpeed)
				if gpu.MaxLinkWidth > 0 || gpu.MaxLinkSpeed != "" {
					link = fmt.Sprintf("%s / x%d %s", link, gpu.MaxLinkWidth, gpu.MaxLinkSpeed)
				}
				fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\t%s\t%s\n",
					gpu.Slot, gpu.Model, gpu.Manufacturer, gpu.BDF, link, gpu.SerialNumber, gpu.Status)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// Network tab
		if len(hw.NetworkAdapters) > 0 {
			fmt.Fprintln(w, "NETWORK ADAPTERS")
			fmt.Fprintln(w, "----------------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tLocation\tModel\tVendor\tPorts\tType\tStatus\tSerial")
			for _, nic := range hw.NetworkAdapters {
				fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%d\t%s\t%s\t%s\n",
					nic.Slot, nic.Location, nic.Model, nic.Vendor, nic.PortCount, nic.PortType, nic.Status, nic.SerialNumber)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}

		// Device inventory tab
		if len(hw.PCIeDevices) > 0 {
			fmt.Fprintln(w, "PCIE DEVICES")
			fmt.Fprintln(w, "------------")
			tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
			fmt.Fprintln(tw, "Slot\tBDF\tClass\tVendor\tVID:DID\tLink\tSerial")
			for _, pcie := range hw.PCIeDevices {
				fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%04x:%04x\tx%d %s / x%d %s\t%s\n",
					pcie.Slot, pcie.BDF, pcie.DeviceClass, pcie.Manufacturer, pcie.VendorID, pcie.DeviceID,
					pcie.LinkWidth, pcie.LinkSpeed, pcie.MaxLinkWidth, pcie.MaxLinkSpeed, pcie.SerialNumber)
			}
			_ = tw.Flush()
			fmt.Fprintln(w)
		}
	}

	// Sensors tab
	if len(e.result.Sensors) > 0 {
		fmt.Fprintln(w, "SENSOR READINGS")
		fmt.Fprintln(w, "---------------")
		tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
		fmt.Fprintln(tw, "Type\tName\tValue\tUnit\tRaw\tStatus")
		for _, s := range e.result.Sensors {
			fmt.Fprintf(tw, "%s\t%s\t%.0f\t%s\t%s\t%s\n", s.Type, s.Name, s.Value, s.Unit, s.RawValue, s.Status)
		}
		_ = tw.Flush()
		fmt.Fprintln(w)
	}

	// Serials/FRU tab
	if len(e.result.FRU) > 0 {
		fmt.Fprintln(w, "FRU COMPONENTS")
		fmt.Fprintln(w, "--------------")
		tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
		fmt.Fprintln(tw, "Description\tManufacturer\tProduct\tSerial\tPart Number")
		for _, fru := range e.result.FRU {
			name := fru.ProductName
			if name == "" {
				name = fru.Description
			}
			fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\n", fru.Description, fru.Manufacturer, name, fru.SerialNumber, fru.PartNumber)
		}
		_ = tw.Flush()
		fmt.Fprintln(w)
	}

	// Events tab
	fmt.Fprintf(w, "EVENTS: %d total\n", len(e.result.Events))
	if len(e.result.Events) > 0 {
		tw := tabwriter.NewWriter(w, 0, 0, 2, ' ', 0)
		fmt.Fprintln(tw, "Time\tSeverity\tSource\tType\tName\tDescription")
		for _, ev := range e.result.Events {
			fmt.Fprintf(tw, "%s\t%s\t%s\t%s\t%s\t%s\n",
				ev.Timestamp.Format("2006-01-02 15:04:05"), ev.Severity, ev.Source, ev.SensorType, ev.SensorName, ev.Description)
		}
		_ = tw.Flush()
	}
	var critical, warning, info int
	for _, ev := range e.result.Events {
		switch ev.Severity {
		case models.SeverityCritical:
			critical++
		case models.SeverityWarning:
			warning++
		case models.SeverityInfo:
			info++
		}
	}
	fmt.Fprintf(w, "  Critical: %d\n", critical)
	fmt.Fprintf(w, "  Warning: %d\n", warning)
	fmt.Fprintf(w, "  Info: %d\n", info)

	// Footer
	fmt.Fprintln(w)
	fmt.Fprintln(w, "------------------------------------")
	fmt.Fprintln(w, "Generated by LOGPile - mchus.pro")
	fmt.Fprintln(w, "https://git.mchus.pro/mchus/logpile")

	return nil
}
164 internal/exporter/generate_example_test.go Normal file
@@ -0,0 +1,164 @@
package exporter

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

// TestGenerateReanimatorExample generates an example reanimator.json file
// This test is marked as skipped by default - run with: go test -v -run TestGenerateReanimatorExample
func TestGenerateReanimatorExample(t *testing.T) {
	t.Skip("Skip by default - run manually to generate example")

	// Create realistic test data matching import-example-full.json structure
	result := &models.AnalysisResult{
		Filename:    "redfish://10.10.10.103",
		SourceType:  "api",
		Protocol:    "redfish",
		TargetHost:  "10.10.10.103",
		CollectedAt: time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC),
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{
				Manufacturer: "Supermicro",
				ProductName:  "X12DPG-QT6",
				SerialNumber: "21D634101",
				PartNumber:   "X12DPG-QT6-REV1.01",
				UUID:         "d7ef2fe5-2fd0-11f0-910a-346f11040868",
			},
			Firmware: []models.FirmwareInfo{
				{DeviceName: "BIOS", Version: "06.08.05"},
				{DeviceName: "BMC", Version: "5.17.00"},
				{DeviceName: "CPLD", Version: "01.02.03"},
			},
			CPUs: []models.CPU{
				{
					Socket:       0,
					Model:        "INTEL(R) XEON(R) GOLD 6530",
					Cores:        32,
					Threads:      64,
					FrequencyMHz: 2100,
					MaxFreqMHz:   4000,
				},
				{
					Socket:       1,
					Model:        "INTEL(R) XEON(R) GOLD 6530",
					Cores:        32,
					Threads:      64,
					FrequencyMHz: 2100,
					MaxFreqMHz:   4000,
				},
			},
			Memory: []models.MemoryDIMM{
				{
					Slot:            "CPU0_C0D0",
					Location:        "CPU0_C0D0",
					Present:         true,
					SizeMB:          32768,
					Type:            "DDR5",
					MaxSpeedMHz:     4800,
					CurrentSpeedMHz: 4800,
					Manufacturer:    "Hynix",
					SerialNumber:    "80AD032419E17CEEC1",
					PartNumber:      "HMCG88AGBRA191N",
					Status:          "OK",
				},
				{
					Slot:            "CPU1_C0D0",
					Location:        "CPU1_C0D0",
					Present:         true,
					SizeMB:          32768,
					Type:            "DDR5",
					MaxSpeedMHz:     4800,
					CurrentSpeedMHz: 4800,
					Manufacturer:    "Hynix",
					SerialNumber:    "80AD032419E17D6FBA",
					PartNumber:      "HMCG88AGBRA191N",
					Status:          "OK",
				},
			},
			Storage: []models.Storage{
				{
					Slot:         "OB01",
					Type:         "NVMe",
					Model:        "INTEL SSDPF2KX076T1",
					SizeGB:       7680,
					SerialNumber: "BTAX41900GF87P6DGN",
					Manufacturer: "Intel",
					Firmware:     "9CV10510",
					Interface:    "NVMe",
					Present:      true,
				},
				{
					Slot:         "OB02",
					Type:         "NVMe",
					Model:        "INTEL SSDPF2KX076T1",
					SizeGB:       7680,
					SerialNumber: "BTAX41900BEG7P6DGN",
					Manufacturer: "Intel",
					Firmware:     "9CV10510",
					Interface:    "NVMe",
					Present:      true,
				},
			},
			PCIeDevices: []models.PCIeDevice{
				{
					Slot:         "PCIeCard1",
					VendorID:     32902,
					DeviceID:     2912,
					BDF:          "0000:18:00.0",
					DeviceClass:  "MassStorageController",
					Manufacturer: "Intel",
					PartNumber:   "RAID Controller",
					SerialNumber: "RAID-001-12345",
					LinkWidth:    8,
					LinkSpeed:    "Gen3",
					MaxLinkWidth: 8,
					MaxLinkSpeed: "Gen3",
				},
			},
			PowerSupply: []models.PSU{
				{
					Slot:         "0",
					Present:      true,
					Model:        "GW-CRPS3000LW",
					Vendor:       "Great Wall",
					WattageW:     3000,
					SerialNumber: "2P06C102610",
					PartNumber:   "V0310C9000000000",
					Firmware:     "00.03.05",
					Status:       "OK",
					InputType:    "ACWideRange",
					InputPowerW:  137,
					OutputPowerW: 104,
					InputVoltage: 215.25,
				},
			},
		},
	}

	// Convert to Reanimator format
	reanimator, err := ConvertToReanimator(result)
	if err != nil {
		t.Fatalf("ConvertToReanimator failed: %v", err)
	}

	// Marshal to JSON with indentation
	jsonData, err := json.MarshalIndent(reanimator, "", " ")
	if err != nil {
		t.Fatalf("Failed to marshal JSON: %v", err)
	}

	// Write to example file
	examplePath := filepath.Join("../../example/docs", "export-example-logpile.json")
	if err := os.WriteFile(examplePath, jsonData, 0644); err != nil {
		t.Fatalf("Failed to write example file: %v", err)
	}

	t.Logf("Generated example file: %s", examplePath)
	t.Logf("JSON length: %d bytes", len(jsonData))
}
427 internal/exporter/reanimator_converter.go Normal file
@@ -0,0 +1,427 @@
package exporter

import (
	"fmt"
	"net/url"
	"regexp"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

// ConvertToReanimator converts AnalysisResult to Reanimator export format
func ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, error) {
	if result == nil {
		return nil, fmt.Errorf("no data available for export")
	}

	if result.Hardware == nil {
		return nil, fmt.Errorf("no hardware data available for export")
	}

	if result.Hardware.BoardInfo.SerialNumber == "" {
		return nil, fmt.Errorf("board serial_number is required for Reanimator export")
	}

	// Determine target host (optional field)
	targetHost := inferTargetHost(result.TargetHost, result.Filename)

	export := &ReanimatorExport{
		Filename:    result.Filename,
		SourceType:  normalizeSourceType(result.SourceType),
		Protocol:    normalizeProtocol(result.Protocol),
		TargetHost:  targetHost,
		CollectedAt: formatRFC3339(result.CollectedAt),
		Hardware: ReanimatorHardware{
			Board:         convertBoard(result.Hardware.BoardInfo),
			Firmware:      convertFirmware(result.Hardware.Firmware),
			CPUs:          convertCPUs(result.Hardware.CPUs),
			Memory:        convertMemory(result.Hardware.Memory),
			Storage:       convertStorage(result.Hardware.Storage),
			PCIeDevices:   convertPCIeDevices(result.Hardware),
			PowerSupplies: convertPowerSupplies(result.Hardware.PowerSupply),
		},
	}

	return export, nil
}

// formatRFC3339 formats time in RFC3339 format, returns current time if zero
func formatRFC3339(t time.Time) string {
	if t.IsZero() {
		return time.Now().UTC().Format(time.RFC3339)
	}
	return t.UTC().Format(time.RFC3339)
}

// convertBoard converts BoardInfo to Reanimator format
func convertBoard(board models.BoardInfo) ReanimatorBoard {
	return ReanimatorBoard{
		Manufacturer: normalizeNullableString(board.Manufacturer),
		ProductName:  normalizeNullableString(board.ProductName),
		SerialNumber: board.SerialNumber,
		PartNumber:   board.PartNumber,
		UUID:         board.UUID,
	}
}

// convertFirmware converts firmware information to Reanimator format
func convertFirmware(firmware []models.FirmwareInfo) []ReanimatorFirmware {
	if len(firmware) == 0 {
		return nil
	}

	result := make([]ReanimatorFirmware, 0, len(firmware))
	for _, fw := range firmware {
		result = append(result, ReanimatorFirmware{
			DeviceName: fw.DeviceName,
			Version:    fw.Version,
		})
	}
	return result
}

// convertCPUs converts CPU information to Reanimator format
func convertCPUs(cpus []models.CPU) []ReanimatorCPU {
	if len(cpus) == 0 {
		return nil
	}

	result := make([]ReanimatorCPU, 0, len(cpus))
	for _, cpu := range cpus {
		manufacturer := inferCPUManufacturer(cpu.Model)

		result = append(result, ReanimatorCPU{
			Socket:          cpu.Socket,
			Model:           cpu.Model,
			Cores:           cpu.Cores,
			Threads:         cpu.Threads,
			FrequencyMHz:    cpu.FrequencyMHz,
			MaxFrequencyMHz: cpu.MaxFreqMHz,
			Manufacturer:    manufacturer,
			Status:          "Unknown",
		})
	}
	return result
}

// convertMemory converts memory modules to Reanimator format
func convertMemory(memory []models.MemoryDIMM) []ReanimatorMemory {
	if len(memory) == 0 {
		return nil
	}

	result := make([]ReanimatorMemory, 0, len(memory))
	for _, mem := range memory {
		status := normalizeStatus(mem.Status, true)
		if strings.TrimSpace(mem.Status) == "" {
			if mem.Present {
				status = "OK"
			} else {
				status = "Empty"
			}
		}

		result = append(result, ReanimatorMemory{
			Slot:            mem.Slot,
			Location:        mem.Location,
			Present:         mem.Present,
			SizeMB:          mem.SizeMB,
			Type:            mem.Type,
			MaxSpeedMHz:     mem.MaxSpeedMHz,
			CurrentSpeedMHz: mem.CurrentSpeedMHz,
			Manufacturer:    mem.Manufacturer,
			SerialNumber:    mem.SerialNumber,
			PartNumber:      mem.PartNumber,
			Status:          status,
		})
	}
	return result
}

// convertStorage converts storage devices to Reanimator format
func convertStorage(storage []models.Storage) []ReanimatorStorage {
	if len(storage) == 0 {
		return nil
	}

	result := make([]ReanimatorStorage, 0, len(storage))
	for _, stor := range storage {
		// Skip storage without serial number
		if stor.SerialNumber == "" {
			continue
		}

		status := inferStorageStatus(stor)

		result = append(result, ReanimatorStorage{
			Slot:         stor.Slot,
			Type:         stor.Type,
			Model:        stor.Model,
			SizeGB:       stor.SizeGB,
			SerialNumber: stor.SerialNumber,
			Manufacturer: stor.Manufacturer,
			Firmware:     stor.Firmware,
			Interface:    stor.Interface,
			Present:      stor.Present,
			Status:       status,
		})
	}
	return result
}

// convertPCIeDevices converts PCIe devices, GPUs, and network adapters to Reanimator format
func convertPCIeDevices(hw *models.HardwareConfig) []ReanimatorPCIe {
	result := make([]ReanimatorPCIe, 0)

	// Convert regular PCIe devices
	for _, pcie := range hw.PCIeDevices {
		serialNumber := normalizedSerial(pcie.SerialNumber)

		// Determine model (prefer PartNumber, fallback to DeviceClass)
		model := pcie.PartNumber
		if model == "" {
			model = pcie.DeviceClass
		}

		result = append(result, ReanimatorPCIe{
			Slot:         pcie.Slot,
			VendorID:     pcie.VendorID,
			DeviceID:     pcie.DeviceID,
			BDF:          pcie.BDF,
			DeviceClass:  pcie.DeviceClass,
			Manufacturer: pcie.Manufacturer,
			Model:        model,
			LinkWidth:    pcie.LinkWidth,
			LinkSpeed:    pcie.LinkSpeed,
			MaxLinkWidth: pcie.MaxLinkWidth,
			MaxLinkSpeed: pcie.MaxLinkSpeed,
			SerialNumber: serialNumber,
			Firmware:     "", // PCIeDevice doesn't have firmware in models
			Status:       "Unknown",
		})
	}

	// Convert GPUs as PCIe devices
	for _, gpu := range hw.GPUs {
		serialNumber := normalizedSerial(gpu.SerialNumber)

		// Determine device class
		deviceClass := "DisplayController"

		result = append(result, ReanimatorPCIe{
			Slot:         gpu.Slot,
			VendorID:     gpu.VendorID,
			DeviceID:     gpu.DeviceID,
			BDF:          gpu.BDF,
			DeviceClass:  deviceClass,
			Manufacturer: gpu.Manufacturer,
			Model:        gpu.Model,
			LinkWidth:    gpu.CurrentLinkWidth,
			LinkSpeed:    gpu.CurrentLinkSpeed,
			MaxLinkWidth: gpu.MaxLinkWidth,
			MaxLinkSpeed: gpu.MaxLinkSpeed,
			SerialNumber: serialNumber,
			Firmware:     gpu.Firmware,
			Status:       normalizeStatus(gpu.Status, false),
		})
	}

	// Convert network adapters as PCIe devices
	for _, nic := range hw.NetworkAdapters {
		if !nic.Present {
			continue
		}

		serialNumber := normalizedSerial(nic.SerialNumber)

		result = append(result, ReanimatorPCIe{
			Slot:         nic.Slot,
			VendorID:     nic.VendorID,
			DeviceID:     nic.DeviceID,
			BDF:          "",
			DeviceClass:  "NetworkController",
			Manufacturer: nic.Vendor,
			Model:        nic.Model,
			LinkWidth:    0,
			LinkSpeed:    "",
			MaxLinkWidth: 0,
			MaxLinkSpeed: "",
			SerialNumber: serialNumber,
			Firmware:     nic.Firmware,
			Status:       normalizeStatus(nic.Status, false),
		})
	}

	return result
}

// convertPowerSupplies converts power supplies to Reanimator format
func convertPowerSupplies(psus []models.PSU) []ReanimatorPSU {
	if len(psus) == 0 {
		return nil
	}

	result := make([]ReanimatorPSU, 0, len(psus))
	for _, psu := range psus {
		// Skip PSUs without serial number (if not present)
		if !psu.Present || psu.SerialNumber == "" {
			continue
		}

		status := normalizeStatus(psu.Status, false)

		result = append(result, ReanimatorPSU{
			Slot: psu.Slot,
|
||||
Present: psu.Present,
|
||||
Model: psu.Model,
|
||||
Vendor: psu.Vendor,
|
||||
WattageW: psu.WattageW,
|
||||
SerialNumber: psu.SerialNumber,
|
||||
PartNumber: psu.PartNumber,
|
||||
Firmware: psu.Firmware,
|
||||
Status: status,
|
||||
InputType: psu.InputType,
|
||||
InputPowerW: psu.InputPowerW,
|
||||
OutputPowerW: psu.OutputPowerW,
|
||||
InputVoltage: psu.InputVoltage,
|
||||
})
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// inferCPUManufacturer determines CPU manufacturer from model string
|
||||
func inferCPUManufacturer(model string) string {
|
||||
upper := strings.ToUpper(model)
|
||||
|
||||
// Intel patterns
|
||||
if strings.Contains(upper, "INTEL") ||
|
||||
strings.Contains(upper, "XEON") ||
|
||||
strings.Contains(upper, "CORE I") {
|
||||
return "Intel"
|
||||
}
|
||||
|
||||
// AMD patterns
|
||||
if strings.Contains(upper, "AMD") ||
|
||||
strings.Contains(upper, "EPYC") ||
|
||||
strings.Contains(upper, "RYZEN") ||
|
||||
strings.Contains(upper, "THREADRIPPER") {
|
||||
return "AMD"
|
||||
}
|
||||
|
||||
// ARM patterns
|
||||
if strings.Contains(upper, "ARM") ||
|
||||
strings.Contains(upper, "CORTEX") {
|
||||
return "ARM"
|
||||
}
|
||||
|
||||
// Ampere patterns
|
||||
if strings.Contains(upper, "AMPERE") ||
|
||||
strings.Contains(upper, "ALTRA") {
|
||||
return "Ampere"
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
func normalizedSerial(serial string) string {
|
||||
s := strings.TrimSpace(serial)
|
||||
if s == "" {
|
||||
return ""
|
||||
}
|
||||
switch strings.ToUpper(s) {
|
||||
case "N/A", "NA", "NONE", "NULL", "UNKNOWN", "-":
|
||||
return ""
|
||||
default:
|
||||
return s
|
||||
}
|
||||
}
|
||||
|
||||
// inferStorageStatus determines storage device status
|
||||
func inferStorageStatus(stor models.Storage) string {
|
||||
if !stor.Present {
|
||||
return "Unknown"
|
||||
}
|
||||
return "Unknown"
|
||||
}
|
||||
|
||||
func normalizeSourceType(sourceType string) string {
|
||||
normalized := strings.ToLower(strings.TrimSpace(sourceType))
|
||||
switch normalized {
|
||||
case "api", "logfile", "manual":
|
||||
return normalized
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
|
||||
func normalizeProtocol(protocol string) string {
|
||||
normalized := strings.ToLower(strings.TrimSpace(protocol))
|
||||
switch normalized {
|
||||
case "redfish", "ipmi", "snmp", "ssh":
|
||||
return normalized
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
}
|
||||
|
||||
func normalizeNullableString(v string) string {
|
||||
trimmed := strings.TrimSpace(v)
|
||||
if strings.EqualFold(trimmed, "NULL") {
|
||||
return ""
|
||||
}
|
||||
return trimmed
|
||||
}
|
||||
|
||||
func normalizeStatus(status string, allowEmpty bool) string {
|
||||
switch strings.ToLower(strings.TrimSpace(status)) {
|
||||
case "ok":
|
||||
return "OK"
|
||||
case "pass":
|
||||
return "OK"
|
||||
case "warning":
|
||||
return "Warning"
|
||||
case "critical":
|
||||
return "Critical"
|
||||
case "fail":
|
||||
return "Critical"
|
||||
case "unknown":
|
||||
return "Unknown"
|
||||
case "empty":
|
||||
if allowEmpty {
|
||||
return "Empty"
|
||||
}
|
||||
return "Unknown"
|
||||
default:
|
||||
if allowEmpty {
|
||||
return "Unknown"
|
||||
}
|
||||
return "Unknown"
|
||||
}
|
||||
}
|
||||
|
||||
var (
|
||||
ipv4Regex = regexp.MustCompile(`(?:^|[^0-9])((?:\d{1,3}\.){3}\d{1,3})(?:[^0-9]|$)`)
|
||||
)
|
||||
|
||||
func inferTargetHost(targetHost, filename string) string {
|
||||
if trimmed := strings.TrimSpace(targetHost); trimmed != "" {
|
||||
return trimmed
|
||||
}
|
||||
|
||||
candidate := strings.TrimSpace(filename)
|
||||
if candidate == "" {
|
||||
return ""
|
||||
}
|
||||
|
||||
if parsed, err := url.Parse(candidate); err == nil && parsed.Hostname() != "" {
|
||||
return parsed.Hostname()
|
||||
}
|
||||
|
||||
if submatches := ipv4Regex.FindStringSubmatch(candidate); len(submatches) > 1 {
|
||||
return submatches[1]
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
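The IPv4-extraction fallback in `inferTargetHost` above can be sanity-checked in isolation. Here is a standalone sketch (not part of the repository) that uses the same regular expression as `ipv4Regex`; the filenames are illustrative.

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as ipv4Regex above: a dotted quad that is not
// embedded in a longer digit run.
var ipv4 = regexp.MustCompile(`(?:^|[^0-9])((?:\d{1,3}\.){3}\d{1,3})(?:[^0-9]|$)`)

func main() {
	for _, name := range []string{
		"nvidia_bug_report_192.168.12.34.tar.gz", // IP embedded in an archive name
		"test.json",                              // no host information
	} {
		if m := ipv4.FindStringSubmatch(name); len(m) > 1 {
			fmt.Println(m[1])
		} else {
			fmt.Println("(no host)")
		}
	}
}
```

The leading `(?:^|[^0-9])` and trailing `(?:[^0-9]|$)` guards keep the match from starting or ending inside a longer run of digits, which is why a version string like `1.2.3.4567` would not yield a quad here.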
508 internal/exporter/reanimator_converter_test.go (Normal file)
@@ -0,0 +1,508 @@
package exporter

import (
	"encoding/json"
	"strings"
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestConvertToReanimator(t *testing.T) {
	tests := []struct {
		name    string
		input   *models.AnalysisResult
		wantErr bool
		errMsg  string
	}{
		{
			name:    "nil result",
			input:   nil,
			wantErr: true,
			errMsg:  "no data available",
		},
		{
			name: "no hardware",
			input: &models.AnalysisResult{
				Filename: "test.json",
			},
			wantErr: true,
			errMsg:  "no hardware data available",
		},
		{
			name: "no board serial",
			input: &models.AnalysisResult{
				Filename: "test.json",
				Hardware: &models.HardwareConfig{
					BoardInfo: models.BoardInfo{},
				},
			},
			wantErr: true,
			errMsg:  "board serial_number is required",
		},
		{
			name: "valid minimal data",
			input: &models.AnalysisResult{
				Filename:    "test.json",
				SourceType:  "api",
				Protocol:    "redfish",
				TargetHost:  "10.10.10.10",
				CollectedAt: time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC),
				Hardware: &models.HardwareConfig{
					BoardInfo: models.BoardInfo{
						Manufacturer: "Supermicro",
						ProductName:  "X12DPG-QT6",
						SerialNumber: "TEST123",
					},
				},
			},
			wantErr: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result, err := ConvertToReanimator(tt.input)
			if tt.wantErr {
				if err == nil {
					t.Errorf("expected error containing %q, got nil", tt.errMsg)
				}
				return
			}
			if err != nil {
				t.Errorf("unexpected error: %v", err)
				return
			}
			if result == nil {
				t.Error("expected non-nil result")
				return
			}
			if result.Hardware.Board.SerialNumber != tt.input.Hardware.BoardInfo.SerialNumber {
				t.Errorf("board serial mismatch: got %q, want %q",
					result.Hardware.Board.SerialNumber,
					tt.input.Hardware.BoardInfo.SerialNumber)
			}
		})
	}
}

func TestInferCPUManufacturer(t *testing.T) {
	tests := []struct {
		model string
		want  string
	}{
		{"INTEL(R) XEON(R) GOLD 6530", "Intel"},
		{"Intel Core i9-12900K", "Intel"},
		{"AMD EPYC 7763", "AMD"},
		{"AMD Ryzen 9 5950X", "AMD"},
		{"ARM Cortex-A78", "ARM"},
		{"Ampere Altra Max", "Ampere"},
		{"Unknown CPU Model", ""},
	}

	for _, tt := range tests {
		t.Run(tt.model, func(t *testing.T) {
			got := inferCPUManufacturer(tt.model)
			if got != tt.want {
				t.Errorf("inferCPUManufacturer(%q) = %q, want %q", tt.model, got, tt.want)
			}
		})
	}
}

func TestNormalizedSerial(t *testing.T) {
	tests := []struct {
		name string
		in   string
		want string
	}{
		{name: "empty", in: "", want: ""},
		{name: "n_a", in: "N/A", want: ""},
		{name: "unknown", in: "unknown", want: ""},
		{name: "normal", in: "SN123", want: "SN123"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := normalizedSerial(tt.in)
			if got != tt.want {
				t.Errorf("normalizedSerial() = %q, want %q", got, tt.want)
			}
		})
	}
}

func TestInferStorageStatus(t *testing.T) {
	tests := []struct {
		name string
		stor models.Storage
		want string
	}{
		{name: "present", stor: models.Storage{Present: true}, want: "Unknown"},
		{name: "not present", stor: models.Storage{Present: false}, want: "Unknown"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := inferStorageStatus(tt.stor)
			if got != tt.want {
				t.Errorf("inferStorageStatus() = %q, want %q", got, tt.want)
			}
		})
	}
}

func TestNormalizeStatus_PassFail(t *testing.T) {
	if got := normalizeStatus("PASS", false); got != "OK" {
		t.Fatalf("expected PASS -> OK, got %q", got)
	}
	if got := normalizeStatus("FAIL", false); got != "Critical" {
		t.Fatalf("expected FAIL -> Critical, got %q", got)
	}
}

func TestConvertCPUs(t *testing.T) {
	cpus := []models.CPU{
		{
			Socket:       0,
			Model:        "INTEL(R) XEON(R) GOLD 6530",
			Cores:        32,
			Threads:      64,
			FrequencyMHz: 2100,
			MaxFreqMHz:   4000,
		},
		{
			Socket:       1,
			Model:        "AMD EPYC 7763",
			Cores:        64,
			Threads:      128,
			FrequencyMHz: 2450,
			MaxFreqMHz:   3500,
		},
	}

	result := convertCPUs(cpus)

	if len(result) != 2 {
		t.Fatalf("expected 2 CPUs, got %d", len(result))
	}

	if result[0].Manufacturer != "Intel" {
		t.Errorf("expected Intel manufacturer for first CPU, got %q", result[0].Manufacturer)
	}

	if result[1].Manufacturer != "AMD" {
		t.Errorf("expected AMD manufacturer for second CPU, got %q", result[1].Manufacturer)
	}

	if result[0].Status != "Unknown" {
		t.Errorf("expected Unknown status, got %q", result[0].Status)
	}
}

func TestConvertMemory(t *testing.T) {
	memory := []models.MemoryDIMM{
		{
			Slot:         "CPU0_C0D0",
			Present:      true,
			SizeMB:       32768,
			Type:         "DDR5",
			SerialNumber: "TEST-MEM-001",
			Status:       "OK",
		},
		{
			Slot:    "CPU0_C1D0",
			Present: false,
		},
	}

	result := convertMemory(memory)

	if len(result) != 2 {
		t.Fatalf("expected 2 memory modules, got %d", len(result))
	}

	if result[0].Status != "OK" {
		t.Errorf("expected OK status for first module, got %q", result[0].Status)
	}

	if result[1].Status != "Empty" {
		t.Errorf("expected Empty status for second module, got %q", result[1].Status)
	}
}

func TestConvertStorage(t *testing.T) {
	storage := []models.Storage{
		{
			Slot:         "OB01",
			Type:         "NVMe",
			Model:        "INTEL SSDPF2KX076T1",
			SerialNumber: "BTAX41900GF87P6DGN",
			Present:      true,
		},
		{
			Slot:         "OB02",
			Type:         "NVMe",
			Model:        "INTEL SSDPF2KX076T1",
			SerialNumber: "", // No serial - should be skipped
			Present:      true,
		},
	}

	result := convertStorage(storage)

	if len(result) != 1 {
		t.Fatalf("expected 1 storage device (skipped one without serial), got %d", len(result))
	}

	if result[0].Status != "Unknown" {
		t.Errorf("expected Unknown status, got %q", result[0].Status)
	}
}

func TestConvertPCIeDevices(t *testing.T) {
	hw := &models.HardwareConfig{
		PCIeDevices: []models.PCIeDevice{
			{
				Slot:         "PCIeCard1",
				VendorID:     32902,
				DeviceID:     2912,
				BDF:          "0000:18:00.0",
				DeviceClass:  "MassStorageController",
				Manufacturer: "Intel",
				PartNumber:   "RSP3DD080F",
				SerialNumber: "RAID-001",
			},
			{
				Slot:         "PCIeCard2",
				DeviceClass:  "NetworkController",
				Manufacturer: "Mellanox",
				SerialNumber: "", // No serial - should remain empty
			},
		},
		GPUs: []models.GPU{
			{
				Slot:         "GPU1",
				Model:        "NVIDIA A100",
				Manufacturer: "NVIDIA",
				SerialNumber: "GPU-001",
				Status:       "OK",
			},
		},
		NetworkAdapters: []models.NetworkAdapter{
			{
				Slot:         "NIC1",
				Model:        "ConnectX-6",
				Vendor:       "Mellanox",
				Present:      true,
				SerialNumber: "NIC-001",
			},
		},
	}

	result := convertPCIeDevices(hw)

	// Should have: 2 PCIe devices + 1 GPU + 1 NIC = 4 total
	if len(result) != 4 {
		t.Fatalf("expected 4 PCIe devices total, got %d", len(result))
	}

	// Check that serial is empty for second PCIe device (no auto-generation)
	if result[1].SerialNumber != "" {
		t.Errorf("expected empty serial for missing device serial, got %q", result[1].SerialNumber)
	}

	// Check GPU was included
	foundGPU := false
	for _, dev := range result {
		if dev.SerialNumber == "GPU-001" {
			foundGPU = true
			if dev.DeviceClass != "DisplayController" {
				t.Errorf("expected GPU device_class DisplayController, got %q", dev.DeviceClass)
			}
			break
		}
	}
	if !foundGPU {
		t.Error("expected GPU to be included in PCIe devices")
	}
}

func TestConvertPCIeDevices_NVSwitchWithoutSerialRemainsEmpty(t *testing.T) {
	hw := &models.HardwareConfig{
		PCIeDevices: []models.PCIeDevice{
			{
				Slot:        "NVSWITCH1",
				DeviceClass: "NVSwitch",
				BDF:         "0000:06:00.0",
				// SerialNumber empty on purpose; should remain empty.
			},
		},
	}

	result := convertPCIeDevices(hw)

	if len(result) != 1 {
		t.Fatalf("expected 1 PCIe device, got %d", len(result))
	}

	if result[0].SerialNumber != "" {
		t.Fatalf("expected empty NVSwitch serial, got %q", result[0].SerialNumber)
	}
}

func TestConvertPowerSupplies(t *testing.T) {
	psus := []models.PSU{
		{
			Slot:         "0",
			Present:      true,
			Model:        "GW-CRPS3000LW",
			Vendor:       "Great Wall",
			WattageW:     3000,
			SerialNumber: "PSU-001",
			Status:       "OK",
		},
		{
			Slot:         "1",
			Present:      false,
			SerialNumber: "", // Not present, should be skipped
		},
	}

	result := convertPowerSupplies(psus)

	if len(result) != 1 {
		t.Fatalf("expected 1 PSU (skipped empty), got %d", len(result))
	}

	if result[0].Status != "OK" {
		t.Errorf("expected OK status, got %q", result[0].Status)
	}
}

func TestConvertBoardNormalizesNULL(t *testing.T) {
	board := convertBoard(models.BoardInfo{
		Manufacturer: " NULL ",
		ProductName:  "null",
		SerialNumber: "TEST123",
	})

	if board.Manufacturer != "" {
		t.Fatalf("expected empty manufacturer, got %q", board.Manufacturer)
	}
	if board.ProductName != "" {
		t.Fatalf("expected empty product_name, got %q", board.ProductName)
	}
}

func TestSourceTypeOmittedWhenInvalidOrEmpty(t *testing.T) {
	result, err := ConvertToReanimator(&models.AnalysisResult{
		Filename:   "redfish://10.0.0.1",
		SourceType: "archive",
		TargetHost: "10.0.0.1",
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "TEST123"},
		},
	})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	payload, err := json.Marshal(result)
	if err != nil {
		t.Fatalf("marshal failed: %v", err)
	}
	if strings.Contains(string(payload), `"source_type"`) {
		t.Fatalf("expected source_type to be omitted for invalid value, got %s", string(payload))
	}
}

func TestTargetHostOmittedWhenUnavailable(t *testing.T) {
	result, err := ConvertToReanimator(&models.AnalysisResult{
		Filename:   "test.json",
		SourceType: "api",
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "TEST123"},
		},
	})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}

	payload, err := json.Marshal(result)
	if err != nil {
		t.Fatalf("marshal failed: %v", err)
	}
	if strings.Contains(string(payload), `"target_host"`) {
		t.Fatalf("expected target_host to be omitted when unavailable, got %s", string(payload))
	}
}

func TestInferTargetHost(t *testing.T) {
	tests := []struct {
		name       string
		targetHost string
		filename   string
		want       string
	}{
		{
			name:       "explicit target host wins",
			targetHost: "10.0.0.10",
			filename:   "redfish://10.0.0.20",
			want:       "10.0.0.10",
		},
		{
			name:     "hostname from URL",
			filename: "redfish://10.10.10.103",
			want:     "10.10.10.103",
		},
		{
			name:     "ip extracted from archive name",
			filename: "nvidia_bug_report_192.168.12.34.tar.gz",
			want:     "192.168.12.34",
		},
		{
			name:     "no host available",
			filename: "test.json",
			want:     "",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := inferTargetHost(tt.targetHost, tt.filename)
			if got != tt.want {
				t.Fatalf("inferTargetHost() = %q, want %q", got, tt.want)
			}
		})
	}
}
293 internal/exporter/reanimator_integration_test.go (Normal file)
@@ -0,0 +1,293 @@
package exporter

import (
	"encoding/json"
	"strings"
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

// TestFullReanimatorExport tests a complete export with realistic data
func TestFullReanimatorExport(t *testing.T) {
	// Create a realistic AnalysisResult similar to import-example-full.json
	result := &models.AnalysisResult{
		Filename:    "redfish://10.10.10.103",
		SourceType:  "api",
		Protocol:    "redfish",
		TargetHost:  "10.10.10.103",
		CollectedAt: time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC),
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{
				Manufacturer: "Supermicro",
				ProductName:  "X12DPG-QT6",
				SerialNumber: "21D634101",
				PartNumber:   "X12DPG-QT6-REV1.01",
				UUID:         "d7ef2fe5-2fd0-11f0-910a-346f11040868",
			},
			Firmware: []models.FirmwareInfo{
				{DeviceName: "BIOS", Version: "06.08.05"},
				{DeviceName: "BMC", Version: "5.17.00"},
				{DeviceName: "CPLD", Version: "01.02.03"},
			},
			CPUs: []models.CPU{
				{
					Socket:       0,
					Model:        "INTEL(R) XEON(R) GOLD 6530",
					Cores:        32,
					Threads:      64,
					FrequencyMHz: 2100,
					MaxFreqMHz:   4000,
				},
				{
					Socket:       1,
					Model:        "INTEL(R) XEON(R) GOLD 6530",
					Cores:        32,
					Threads:      64,
					FrequencyMHz: 2100,
					MaxFreqMHz:   4000,
				},
			},
			Memory: []models.MemoryDIMM{
				{
					Slot:            "CPU0_C0D0",
					Location:        "CPU0_C0D0",
					Present:         true,
					SizeMB:          32768,
					Type:            "DDR5",
					MaxSpeedMHz:     4800,
					CurrentSpeedMHz: 4800,
					Manufacturer:    "Hynix",
					SerialNumber:    "80AD032419E17CEEC1",
					PartNumber:      "HMCG88AGBRA191N",
					Status:          "OK",
				},
				{
					Slot:            "CPU0_C1D0",
					Location:        "CPU0_C1D0",
					Present:         false,
					SizeMB:          0,
					Type:            "",
					MaxSpeedMHz:     0,
					CurrentSpeedMHz: 0,
					Status:          "Empty",
				},
			},
			Storage: []models.Storage{
				{
					Slot:         "OB01",
					Type:         "NVMe",
					Model:        "INTEL SSDPF2KX076T1",
					SizeGB:       7680,
					SerialNumber: "BTAX41900GF87P6DGN",
					Manufacturer: "Intel",
					Firmware:     "9CV10510",
					Interface:    "NVMe",
					Present:      true,
				},
				{
					Slot:         "FP00HDD00",
					Type:         "HDD",
					Model:        "ST12000NM0008",
					SizeGB:       12000,
					SerialNumber: "ZJV01234ABC",
					Manufacturer: "Seagate",
					Firmware:     "SN03",
					Interface:    "SATA",
					Present:      true,
				},
			},
			PCIeDevices: []models.PCIeDevice{
				{
					Slot:         "PCIeCard1",
					VendorID:     32902,
					DeviceID:     2912,
					BDF:          "0000:18:00.0",
					DeviceClass:  "MassStorageController",
					Manufacturer: "Intel",
					PartNumber:   "RAID Controller RSP3DD080F",
					LinkWidth:    8,
					LinkSpeed:    "Gen3",
					MaxLinkWidth: 8,
					MaxLinkSpeed: "Gen3",
					SerialNumber: "RAID-001-12345",
				},
				{
					Slot:         "PCIeCard2",
					VendorID:     5555,
					DeviceID:     4401,
					BDF:          "0000:3b:00.0",
					DeviceClass:  "NetworkController",
					Manufacturer: "Mellanox",
					PartNumber:   "ConnectX-5",
					LinkWidth:    16,
					LinkSpeed:    "Gen3",
					MaxLinkWidth: 16,
					MaxLinkSpeed: "Gen3",
					SerialNumber: "MT2892012345",
				},
			},
			PowerSupply: []models.PSU{
				{
					Slot:         "0",
					Present:      true,
					Model:        "GW-CRPS3000LW",
					Vendor:       "Great Wall",
					WattageW:     3000,
					SerialNumber: "2P06C102610",
					PartNumber:   "V0310C9000000000",
					Firmware:     "00.03.05",
					Status:       "OK",
					InputType:    "ACWideRange",
					InputPowerW:  137,
					OutputPowerW: 104,
					InputVoltage: 215.25,
				},
			},
		},
	}

	// Convert to Reanimator format
	reanimator, err := ConvertToReanimator(result)
	if err != nil {
		t.Fatalf("ConvertToReanimator failed: %v", err)
	}

	// Verify top-level fields
	if reanimator.Filename != "redfish://10.10.10.103" {
		t.Errorf("Filename mismatch: got %q", reanimator.Filename)
	}

	if reanimator.SourceType != "api" {
		t.Errorf("SourceType mismatch: got %q", reanimator.SourceType)
	}

	if reanimator.Protocol != "redfish" {
		t.Errorf("Protocol mismatch: got %q", reanimator.Protocol)
	}

	if reanimator.TargetHost != "10.10.10.103" {
		t.Errorf("TargetHost mismatch: got %q", reanimator.TargetHost)
	}

	if reanimator.CollectedAt != "2026-02-10T15:30:00Z" {
		t.Errorf("CollectedAt mismatch: got %q", reanimator.CollectedAt)
	}

	// Verify hardware sections
	hw := reanimator.Hardware

	// Board
	if hw.Board.SerialNumber != "21D634101" {
		t.Errorf("Board serial mismatch: got %q", hw.Board.SerialNumber)
	}

	// Firmware
	if len(hw.Firmware) != 3 {
		t.Errorf("Expected 3 firmware entries, got %d", len(hw.Firmware))
	}

	// CPUs
	if len(hw.CPUs) != 2 {
		t.Fatalf("Expected 2 CPUs, got %d", len(hw.CPUs))
	}

	if hw.CPUs[0].Manufacturer != "Intel" {
		t.Errorf("CPU manufacturer not inferred: got %q", hw.CPUs[0].Manufacturer)
	}

	if hw.CPUs[0].Status != "Unknown" {
		t.Errorf("CPU status mismatch: got %q", hw.CPUs[0].Status)
	}

	// Memory (should include empty slots)
	if len(hw.Memory) != 2 {
		t.Errorf("Expected 2 memory entries (including empty), got %d", len(hw.Memory))
	}

	if hw.Memory[1].Status != "Empty" {
		t.Errorf("Empty memory slot status mismatch: got %q", hw.Memory[1].Status)
	}

	// Storage
	if len(hw.Storage) != 2 {
		t.Errorf("Expected 2 storage devices, got %d", len(hw.Storage))
	}

	if hw.Storage[0].Status != "Unknown" {
		t.Errorf("Storage status mismatch: got %q", hw.Storage[0].Status)
	}

	// PCIe devices
	if len(hw.PCIeDevices) != 2 {
		t.Errorf("Expected 2 PCIe devices, got %d", len(hw.PCIeDevices))
	}

	if hw.PCIeDevices[0].Model == "" {
		t.Error("PCIe model should be populated from PartNumber")
	}

	// Power supplies
	if len(hw.PowerSupplies) != 1 {
		t.Errorf("Expected 1 PSU, got %d", len(hw.PowerSupplies))
	}

	// Verify JSON marshaling works
	jsonData, err := json.MarshalIndent(reanimator, "", "  ")
	if err != nil {
		t.Fatalf("Failed to marshal to JSON: %v", err)
	}

	// Check that JSON contains expected fields
	jsonStr := string(jsonData)
	expectedFields := []string{
		`"filename"`,
		`"source_type"`,
		`"protocol"`,
		`"target_host"`,
		`"collected_at"`,
		`"hardware"`,
		`"board"`,
		`"cpus"`,
		`"memory"`,
		`"storage"`,
		`"pcie_devices"`,
		`"power_supplies"`,
		`"firmware"`,
	}

	for _, field := range expectedFields {
		if !strings.Contains(jsonStr, field) {
			t.Errorf("JSON missing expected field: %s", field)
		}
	}

	// Optional: print JSON for manual inspection (commented out for normal test runs)
	// t.Logf("Generated Reanimator JSON:\n%s", string(jsonData))
}

// TestReanimatorExportWithoutTargetHost tests that target_host is inferred from the filename
func TestReanimatorExportWithoutTargetHost(t *testing.T) {
	result := &models.AnalysisResult{
		Filename:    "redfish://192.168.1.100",
		SourceType:  "api",
		Protocol:    "redfish",
		TargetHost:  "", // Empty - should be inferred
		CollectedAt: time.Now(),
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{
				SerialNumber: "TEST123",
			},
		},
	}

	reanimator, err := ConvertToReanimator(result)
	if err != nil {
		t.Fatalf("ConvertToReanimator failed: %v", err)
	}

	if reanimator.TargetHost != "192.168.1.100" {
		t.Errorf("Expected target_host to be inferred from filename, got %q", reanimator.TargetHost)
	}
}
113 internal/exporter/reanimator_models.go (Normal file)
@@ -0,0 +1,113 @@
package exporter

// ReanimatorExport represents the top-level structure for Reanimator format export
type ReanimatorExport struct {
	Filename    string             `json:"filename"`
	SourceType  string             `json:"source_type,omitempty"`
	Protocol    string             `json:"protocol,omitempty"`
	TargetHost  string             `json:"target_host,omitempty"`
	CollectedAt string             `json:"collected_at"` // RFC3339 format
	Hardware    ReanimatorHardware `json:"hardware"`
}

// ReanimatorHardware contains all hardware components
type ReanimatorHardware struct {
	Board         ReanimatorBoard      `json:"board"`
	Firmware      []ReanimatorFirmware `json:"firmware,omitempty"`
	CPUs          []ReanimatorCPU      `json:"cpus,omitempty"`
	Memory        []ReanimatorMemory   `json:"memory,omitempty"`
	Storage       []ReanimatorStorage  `json:"storage,omitempty"`
	PCIeDevices   []ReanimatorPCIe     `json:"pcie_devices,omitempty"`
	PowerSupplies []ReanimatorPSU      `json:"power_supplies,omitempty"`
}

// ReanimatorBoard represents motherboard/server information
type ReanimatorBoard struct {
	Manufacturer string `json:"manufacturer,omitempty"`
	ProductName  string `json:"product_name,omitempty"`
	SerialNumber string `json:"serial_number"`
	PartNumber   string `json:"part_number,omitempty"`
	UUID         string `json:"uuid,omitempty"`
}

// ReanimatorFirmware represents firmware version information
type ReanimatorFirmware struct {
	DeviceName string `json:"device_name"`
	Version    string `json:"version"`
}

// ReanimatorCPU represents processor information
type ReanimatorCPU struct {
	Socket          int    `json:"socket"`
	Model           string `json:"model"`
	Cores           int    `json:"cores,omitempty"`
	Threads         int    `json:"threads,omitempty"`
	FrequencyMHz    int    `json:"frequency_mhz,omitempty"`
	MaxFrequencyMHz int    `json:"max_frequency_mhz,omitempty"`
	Manufacturer    string `json:"manufacturer,omitempty"`
	Status          string `json:"status,omitempty"`
}

// ReanimatorMemory represents a memory module (DIMM)
type ReanimatorMemory struct {
	Slot            string `json:"slot"`
	Location        string `json:"location,omitempty"`
	Present         bool   `json:"present"`
	SizeMB          int    `json:"size_mb,omitempty"`
	Type            string `json:"type,omitempty"`
	MaxSpeedMHz     int    `json:"max_speed_mhz,omitempty"`
	CurrentSpeedMHz int    `json:"current_speed_mhz,omitempty"`
	Manufacturer    string `json:"manufacturer,omitempty"`
	SerialNumber    string `json:"serial_number,omitempty"`
	PartNumber      string `json:"part_number,omitempty"`
	Status          string `json:"status,omitempty"`
}

// ReanimatorStorage represents a storage device
type ReanimatorStorage struct {
	Slot         string `json:"slot"`
	Type         string `json:"type,omitempty"`
	Model        string `json:"model"`
	SizeGB       int    `json:"size_gb,omitempty"`
	SerialNumber string `json:"serial_number"`
	Manufacturer string `json:"manufacturer,omitempty"`
	Firmware     string `json:"firmware,omitempty"`
	Interface    string `json:"interface,omitempty"`
	Present      bool   `json:"present"`
	Status       string `json:"status,omitempty"`
}

// ReanimatorPCIe represents a PCIe device
type ReanimatorPCIe struct {
	Slot         string `json:"slot"`
	VendorID     int    `json:"vendor_id,omitempty"`
	DeviceID     int    `json:"device_id,omitempty"`
	BDF          string `json:"bdf,omitempty"`
	DeviceClass  string `json:"device_class,omitempty"`
	Manufacturer string `json:"manufacturer,omitempty"`
	Model        string `json:"model,omitempty"`
	LinkWidth    int    `json:"link_width,omitempty"`
	LinkSpeed    string `json:"link_speed,omitempty"`
	MaxLinkWidth int    `json:"max_link_width,omitempty"`
	MaxLinkSpeed string `json:"max_link_speed,omitempty"`
	SerialNumber string `json:"serial_number,omitempty"`
	Firmware     string `json:"firmware,omitempty"`
	Status       string `json:"status,omitempty"`
}

// ReanimatorPSU represents a power supply unit
type ReanimatorPSU struct {
	Slot         string  `json:"slot"`
	Present      bool    `json:"present"`
	Model        string  `json:"model,omitempty"`
	Vendor       string  `json:"vendor,omitempty"`
	WattageW     int     `json:"wattage_w,omitempty"`
	SerialNumber string  `json:"serial_number,omitempty"`
	PartNumber   string  `json:"part_number,omitempty"`
	Firmware     string  `json:"firmware,omitempty"`
	Status       string  `json:"status,omitempty"`
	InputType    string  `json:"input_type,omitempty"`
	InputPowerW  int     `json:"input_power_w,omitempty"`
	OutputPowerW int     `json:"output_power_w,omitempty"`
	InputVoltage float64 `json:"input_voltage,omitempty"`
}
@@ -12,10 +12,16 @@ import (
	"strings"
)

+const maxSingleFileSize = 10 * 1024 * 1024
+const maxZipArchiveSize = 50 * 1024 * 1024
+const maxGzipDecompressedSize = 50 * 1024 * 1024

// ExtractedFile represents a file extracted from archive
type ExtractedFile struct {
-	Path    string
-	Content []byte
+	Path             string
+	Content          []byte
+	Truncated        bool
+	TruncatedMessage string
}

// ExtractArchive extracts tar.gz or zip archive and returns file contents
@@ -29,6 +35,8 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
		return extractTar(archivePath)
	case ".zip":
		return extractZip(archivePath)
+	case ".txt", ".log":
+		return extractSingleFile(archivePath)
	default:
		return nil, fmt.Errorf("unsupported archive format: %s", ext)
	}
@@ -43,6 +51,10 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
		return extractTarGzFromReader(r, filename)
+	case ".tar":
+		return extractTarFromReader(r)
	case ".zip":
		return extractZipFromReader(r)
+	case ".txt", ".log":
+		return extractSingleFileFromReader(r, filename)
	default:
		return nil, fmt.Errorf("unsupported archive format: %s", ext)
	}
@@ -112,12 +124,16 @@ func extractTarGzFromReader(r io.Reader, filename string) ([]ExtractedFile, erro
	}
	defer gzr.Close()

-	// Read all decompressed content into buffer
-	// Limit to 50MB for plain gzip files, 10MB per file for tar.gz
-	decompressed, err := io.ReadAll(io.LimitReader(gzr, 50*1024*1024))
+	// Read decompressed content with a hard cap.
+	// When the payload exceeds the cap, keep the first chunk and mark it as truncated.
+	decompressed, err := io.ReadAll(io.LimitReader(gzr, maxGzipDecompressedSize+1))
	if err != nil {
		return nil, fmt.Errorf("read gzip content: %w", err)
	}
+	gzipTruncated := len(decompressed) > maxGzipDecompressedSize
+	if gzipTruncated {
+		decompressed = decompressed[:maxGzipDecompressedSize]
+	}

	// Try to read as tar archive
	tr := tar.NewReader(bytes.NewReader(decompressed))
@@ -133,12 +149,19 @@ func extractTarGzFromReader(r io.Reader, filename string) ([]ExtractedFile, erro
			baseName = gzr.Name
		}

-		return []ExtractedFile{
-			{
-				Path:    baseName,
-				Content: decompressed,
-			},
-		}, nil
+		file := ExtractedFile{
+			Path:    baseName,
+			Content: decompressed,
+		}
+		if gzipTruncated {
+			file.Truncated = true
+			file.TruncatedMessage = fmt.Sprintf(
+				"decompressed gzip content exceeded %d bytes and was truncated",
+				maxGzipDecompressedSize,
+			)
+		}
+
+		return []ExtractedFile{file}, nil
	}
	return nil, fmt.Errorf("tar read: %w", err)
}
@@ -213,6 +236,92 @@ func extractZip(archivePath string) ([]ExtractedFile, error) {
	return files, nil
}

+func extractZipFromReader(r io.Reader) ([]ExtractedFile, error) {
+	// Read all data into memory with a hard cap
+	data, err := io.ReadAll(io.LimitReader(r, maxZipArchiveSize+1))
+	if err != nil {
+		return nil, fmt.Errorf("read zip data: %w", err)
+	}
+	if len(data) > maxZipArchiveSize {
+		return nil, fmt.Errorf("zip too large: max %d bytes", maxZipArchiveSize)
+	}
+
+	// Create a ReaderAt from the byte slice
+	readerAt := bytes.NewReader(data)
+
+	// Open the zip archive
+	zipReader, err := zip.NewReader(readerAt, int64(len(data)))
+	if err != nil {
+		return nil, fmt.Errorf("open zip: %w", err)
+	}
+
+	var files []ExtractedFile
+
+	for _, f := range zipReader.File {
+		if f.FileInfo().IsDir() {
+			continue
+		}
+
+		// Skip large files (>10MB)
+		if f.FileInfo().Size() > 10*1024*1024 {
+			continue
+		}
+
+		rc, err := f.Open()
+		if err != nil {
+			return nil, fmt.Errorf("open file %s: %w", f.Name, err)
+		}
+
+		content, err := io.ReadAll(rc)
+		rc.Close()
+		if err != nil {
+			return nil, fmt.Errorf("read file %s: %w", f.Name, err)
+		}
+
+		files = append(files, ExtractedFile{
+			Path:    f.Name,
+			Content: content,
+		})
+	}
+
+	return files, nil
+}
+
+func extractSingleFile(path string) ([]ExtractedFile, error) {
+	f, err := os.Open(path)
+	if err != nil {
+		return nil, fmt.Errorf("open file: %w", err)
+	}
+	defer f.Close()
+
+	return extractSingleFileFromReader(f, filepath.Base(path))
+}
+
+func extractSingleFileFromReader(r io.Reader, filename string) ([]ExtractedFile, error) {
+	content, err := io.ReadAll(io.LimitReader(r, maxSingleFileSize+1))
+	if err != nil {
+		return nil, fmt.Errorf("read file content: %w", err)
+	}
+	truncated := len(content) > maxSingleFileSize
+	if truncated {
+		content = content[:maxSingleFileSize]
+	}
+
+	file := ExtractedFile{
+		Path:    filepath.Base(filename),
+		Content: content,
+	}
+	if truncated {
+		file.Truncated = true
+		file.TruncatedMessage = fmt.Sprintf(
+			"file exceeded %d bytes and was truncated",
+			maxSingleFileSize,
+		)
+	}
+
+	return []ExtractedFile{file}, nil
+}

// FindFileByPattern finds files matching pattern in extracted files
func FindFileByPattern(files []ExtractedFile, patterns ...string) []ExtractedFile {
	var result []ExtractedFile
internal/parser/archive_test.go (new file, 71 lines)
@@ -0,0 +1,71 @@
package parser

import (
	"bytes"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

func TestExtractArchiveFromReaderTXT(t *testing.T) {
	content := "loader_brand=\"XigmaNAS\"\nSystem uptime:\n"
	files, err := ExtractArchiveFromReader(strings.NewReader(content), "xigmanas.txt")
	if err != nil {
		t.Fatalf("extract txt from reader: %v", err)
	}
	if len(files) != 1 {
		t.Fatalf("expected 1 file, got %d", len(files))
	}
	if files[0].Path != "xigmanas.txt" {
		t.Fatalf("expected filename xigmanas.txt, got %q", files[0].Path)
	}
	if string(files[0].Content) != content {
		t.Fatalf("content mismatch")
	}
}

func TestExtractArchiveTXT(t *testing.T) {
	dir := t.TempDir()
	path := filepath.Join(dir, "sample.txt")
	want := "plain text log"
	if err := os.WriteFile(path, []byte(want), 0o600); err != nil {
		t.Fatalf("write sample txt: %v", err)
	}

	files, err := ExtractArchive(path)
	if err != nil {
		t.Fatalf("extract txt file: %v", err)
	}
	if len(files) != 1 {
		t.Fatalf("expected 1 file, got %d", len(files))
	}
	if files[0].Path != "sample.txt" {
		t.Fatalf("expected sample.txt, got %q", files[0].Path)
	}
	if string(files[0].Content) != want {
		t.Fatalf("content mismatch")
	}
}

func TestExtractArchiveFromReaderTXT_TruncatedWhenTooLarge(t *testing.T) {
	large := bytes.Repeat([]byte("a"), maxSingleFileSize+1024)
	files, err := ExtractArchiveFromReader(bytes.NewReader(large), "huge.log")
	if err != nil {
		t.Fatalf("extract huge txt from reader: %v", err)
	}
	if len(files) != 1 {
		t.Fatalf("expected 1 file, got %d", len(files))
	}

	f := files[0]
	if !f.Truncated {
		t.Fatalf("expected file to be marked as truncated")
	}
	if got := len(f.Content); got != maxSingleFileSize {
		t.Fatalf("expected truncated size %d, got %d", maxSingleFileSize, got)
	}
	if f.TruncatedMessage == "" {
		t.Fatalf("expected truncation message")
	}
}
@@ -3,6 +3,8 @@ package parser

import (
+	"fmt"
	"io"
	"strings"
+	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)
@@ -62,11 +64,44 @@ func (p *BMCParser) parseFiles() error {

	// Preserve filename
	result.Filename = p.result.Filename

+	appendExtractionWarnings(result, p.files)
	p.result = result

	return nil
}

+func appendExtractionWarnings(result *models.AnalysisResult, files []ExtractedFile) {
+	if result == nil {
+		return
+	}
+
+	truncated := make([]string, 0)
+	for _, f := range files {
+		if !f.Truncated {
+			continue
+		}
+		if f.TruncatedMessage != "" {
+			truncated = append(truncated, fmt.Sprintf("%s: %s", f.Path, f.TruncatedMessage))
+			continue
+		}
+		truncated = append(truncated, fmt.Sprintf("%s: content was truncated due to size limit", f.Path))
+	}
+
+	if len(truncated) == 0 {
+		return
+	}
+
+	result.Events = append(result.Events, models.Event{
+		Timestamp:   time.Now(),
+		Source:      "LOGPile",
+		EventType:   "Analysis Warning",
+		Severity:    models.SeverityWarning,
+		Description: "Input data was too large; analysis is partial and may be incomplete",
+		RawData:     strings.Join(truncated, "; "),
+	})
+}

// Result returns the analysis result
func (p *BMCParser) Result() *models.AnalysisResult {
	return p.result
internal/parser/parser_test.go (new file, 34 lines)
@@ -0,0 +1,34 @@
package parser

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestAppendExtractionWarnings(t *testing.T) {
	result := &models.AnalysisResult{
		Events: make([]models.Event, 0),
	}

	files := []ExtractedFile{
		{Path: "ok.log", Content: []byte("ok")},
		{Path: "big.log", Truncated: true, TruncatedMessage: "file exceeded size limit and was truncated"},
	}

	appendExtractionWarnings(result, files)

	if len(result.Events) != 1 {
		t.Fatalf("expected 1 warning event, got %d", len(result.Events))
	}
	ev := result.Events[0]
	if ev.Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", ev.Severity)
	}
	if ev.EventType != "Analysis Warning" {
		t.Fatalf("unexpected event type: %q", ev.EventType)
	}
	if ev.RawData == "" {
		t.Fatalf("expected warning details in RawData")
	}
}
internal/parser/vendors/inspur/fru.go (vendored, 39 lines changed)
@@ -103,8 +103,9 @@ func extractBoardInfo(fruList []models.FRUInfo, hw *models.HardwareConfig) {
		return
	}

-	// Look for the main board/chassis FRU entry
-	// Usually it's the first entry or one with "Builtin FRU" or containing board info
+	// Look for the main board/chassis FRU entry.
+	// Keep the first non-empty serial as the server serial and avoid overwriting it
+	// with module-specific serials (e.g., SCM_FRU).
	for _, fru := range fruList {
		// Skip empty entries
		if fru.ProductName == "" && fru.SerialNumber == "" {
@@ -118,25 +119,23 @@ func extractBoardInfo(fruList []models.FRUInfo, hw *models.HardwareConfig) {
			strings.Contains(desc, "chassis") ||
			strings.Contains(desc, "board")

-		// If we haven't set board info yet, or this is a main board entry
-		if hw.BoardInfo.ProductName == "" || isMainBoard {
-			if fru.ProductName != "" {
-				hw.BoardInfo.ProductName = fru.ProductName
-			}
-			if fru.SerialNumber != "" {
-				hw.BoardInfo.SerialNumber = fru.SerialNumber
-			}
-			if fru.Manufacturer != "" {
-				hw.BoardInfo.Manufacturer = fru.Manufacturer
-			}
-			if fru.PartNumber != "" {
-				hw.BoardInfo.PartNumber = fru.PartNumber
-			}
-		}
+		if fru.SerialNumber != "" && hw.BoardInfo.SerialNumber == "" {
+			hw.BoardInfo.SerialNumber = fru.SerialNumber
+		}
+		if fru.ProductName != "" && (hw.BoardInfo.ProductName == "" || isMainBoard) {
+			hw.BoardInfo.ProductName = fru.ProductName
+		}
+		// Manufacturer from non-main FRU entries (e.g. PSU vendor) should not become server vendor.
+		if fru.Manufacturer != "" && isMainBoard && hw.BoardInfo.Manufacturer == "" {
+			hw.BoardInfo.Manufacturer = fru.Manufacturer
+		}
+		if fru.PartNumber != "" && (hw.BoardInfo.PartNumber == "" || isMainBoard) {
+			hw.BoardInfo.PartNumber = fru.PartNumber
+		}

-		// If we found a main board entry, stop searching
-		if isMainBoard && fru.ProductName != "" && fru.SerialNumber != "" {
-			break
-		}
+		// Main board entry with complete data is good enough to stop.
+		if isMainBoard && hw.BoardInfo.ProductName != "" && hw.BoardInfo.SerialNumber != "" {
+			break
+		}
	}
}
internal/parser/vendors/inspur/fru_test.go (vendored, new file, 59 lines)
@@ -0,0 +1,59 @@
package inspur

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestExtractBoardInfo_PreservesBuiltinSerial(t *testing.T) {
	hw := &models.HardwareConfig{}
	fruList := []models.FRUInfo{
		{
			Description:  "Builtin FRU Device (ID 0)",
			SerialNumber: "21D634101",
		},
		{
			Description:  "SCM_FRU (ID 8)",
			SerialNumber: "CAR509K10613C10",
			ProductName:  "CA",
			Manufacturer: "inagile",
			PartNumber:   "YZCA-02758-105",
		},
	}

	extractBoardInfo(fruList, hw)

	if hw.BoardInfo.SerialNumber != "21D634101" {
		t.Fatalf("expected board serial 21D634101, got %q", hw.BoardInfo.SerialNumber)
	}
	if hw.BoardInfo.ProductName != "CA" {
		t.Fatalf("expected product name CA, got %q", hw.BoardInfo.ProductName)
	}
}

func TestExtractBoardInfo_DoesNotUsePSUVendorAsBoardManufacturer(t *testing.T) {
	hw := &models.HardwareConfig{}
	fruList := []models.FRUInfo{
		{
			Description:  "Builtin FRU Device (ID 0)",
			SerialNumber: "2KD605238",
		},
		{
			Description:  "PSU0_FRU (ID 30)",
			SerialNumber: "PMR315HS10F1A",
			ProductName:  "AP-CR3000F12BY",
			Manufacturer: "APLUSPOWER",
			PartNumber:   "18XA1M43400C2",
		},
	}

	extractBoardInfo(fruList, hw)

	if hw.BoardInfo.SerialNumber != "2KD605238" {
		t.Fatalf("expected board serial 2KD605238, got %q", hw.BoardInfo.SerialNumber)
	}
	if hw.BoardInfo.Manufacturer != "" {
		t.Fatalf("expected empty board manufacturer, got %q", hw.BoardInfo.Manufacturer)
	}
}
internal/parser/vendors/inspur/gpu_status.go (vendored, new file, 56 lines)
@@ -0,0 +1,56 @@
package inspur

import (
	"regexp"
	"strconv"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
)

var reFaultGPU = regexp.MustCompile(`\bF_GPU(\d+)\b`)

func applyGPUStatusFromEvents(hw *models.HardwareConfig, events []models.Event) {
	if hw == nil || len(hw.GPUs) == 0 {
		return
	}

	faulty := make(map[int]bool)
	for _, e := range events {
		if !isGPUFaultEvent(e) {
			continue
		}

		matches := reFaultGPU.FindAllStringSubmatch(e.Description, -1)
		for _, m := range matches {
			if len(m) < 2 {
				continue
			}
			idx, err := strconv.Atoi(m[1])
			if err == nil && idx >= 0 {
				faulty[idx] = true
			}
		}
	}

	for i := range hw.GPUs {
		gpu := &hw.GPUs[i]
		idx, ok := extractLogicalGPUIndex(gpu.Slot)
		if ok && faulty[idx] {
			gpu.Status = "Critical"
			continue
		}

		if strings.TrimSpace(gpu.Status) == "" {
			gpu.Status = "OK"
		}
	}
}

func isGPUFaultEvent(e models.Event) bool {
	desc := strings.ToLower(e.Description)
	if strings.Contains(desc, "bios miss f_gpu") {
		return true
	}
	return strings.EqualFold(strings.TrimSpace(e.ID), "17FFB002")
}
internal/parser/vendors/inspur/hgx_gpu_status_test.go (vendored, new file, 120 lines)
@@ -0,0 +1,120 @@
package inspur

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestEnrichGPUsFromHGXHWInfo_UsesHGXLogicalMapping(t *testing.T) {
	hw := &models.HardwareConfig{
		GPUs: []models.GPU{
			{Slot: "#GPU6"},
			{Slot: "#GPU7"},
			{Slot: "#GPU0"},
			{Slot: "#CPU0_PE1_E_BMC", Model: "AST2500 VGA"},
		},
	}

	content := []byte(`
# curl -X GET http://127.0.0.1/redfish/v1/Chassis/HGX_GPU_SXM_1/Assembly
{"Name":"GPU Board Assembly","Model":"B200 180GB HBM3e","PartNumber":"PN1","SerialNumber":"SXM1SN"}
# curl -X GET http://127.0.0.1/redfish/v1/Chassis/HGX_GPU_SXM_3/Assembly
{"Name":"GPU Board Assembly","Model":"B200 180GB HBM3e","PartNumber":"PN3","SerialNumber":"SXM3SN"}
# curl -X GET http://127.0.0.1/redfish/v1/Chassis/HGX_GPU_SXM_5/Assembly
{"Name":"GPU Board Assembly","Model":"B200 180GB HBM3e","PartNumber":"PN5","SerialNumber":"SXM5SN"}
`)

	enrichGPUsFromHGXHWInfo(content, hw)

	if hw.GPUs[0].SerialNumber != "SXM3SN" {
		t.Fatalf("expected #GPU6 to map to SXM3 serial, got %q", hw.GPUs[0].SerialNumber)
	}
	if hw.GPUs[1].SerialNumber != "SXM1SN" {
		t.Fatalf("expected #GPU7 to map to SXM1 serial, got %q", hw.GPUs[1].SerialNumber)
	}
	if hw.GPUs[2].SerialNumber != "SXM5SN" {
		t.Fatalf("expected #GPU0 to map to SXM5 serial, got %q", hw.GPUs[2].SerialNumber)
	}
	for _, g := range hw.GPUs {
		if g.Slot == "#CPU0_PE1_E_BMC" {
			t.Fatalf("expected non-HGX BMC VGA entry to be filtered out")
		}
	}
}

func TestEnrichGPUsFromHGXHWInfo_AddsMissingLogicalGPU(t *testing.T) {
	hw := &models.HardwareConfig{
		GPUs: []models.GPU{
			{Slot: "#GPU0"},
			{Slot: "#GPU1"},
			{Slot: "#GPU2"},
			{Slot: "#GPU3"},
			{Slot: "#GPU4"},
			{Slot: "#GPU5"},
			{Slot: "#GPU7"},
		},
	}

	content := []byte(`
# curl -X GET http://127.0.0.1/redfish/v1/Chassis/HGX_GPU_SXM_3/Assembly
{"Name":"GPU Board Assembly","Model":"B200 180GB HBM3e","PartNumber":"PN3","SerialNumber":"SXM3SN"}
`)

	enrichGPUsFromHGXHWInfo(content, hw)

	found := false
	for _, g := range hw.GPUs {
		if g.Slot == "#GPU6" {
			found = true
			if g.SerialNumber != "SXM3SN" {
				t.Fatalf("expected synthesized #GPU6 serial SXM3SN, got %q", g.SerialNumber)
			}
		}
	}
	if !found {
		t.Fatalf("expected synthesized #GPU6 entry")
	}
}

func TestApplyGPUStatusFromEvents_MarksFaultedGPU(t *testing.T) {
	hw := &models.HardwareConfig{
		GPUs: []models.GPU{
			{Slot: "#GPU6"},
			{Slot: "#GPU5"},
		},
	}

	events := []models.Event{
		{
			ID:          "17FFB002",
			Timestamp:   time.Now(),
			Description: "PCIe Present mismatch BIOS miss F_GPU6",
		},
	}

	applyGPUStatusFromEvents(hw, events)

	if hw.GPUs[0].Status != "Critical" {
		t.Fatalf("expected #GPU6 status Critical, got %q", hw.GPUs[0].Status)
	}
	if hw.GPUs[1].Status != "OK" {
		t.Fatalf("expected healthy GPU status OK, got %q", hw.GPUs[1].Status)
	}
}

func TestParseIDLLog_ParsesStructuredJSONLine(t *testing.T) {
	content := []byte(`{ "MESSAGE": "|2026-01-12T23:05:18+08:00|PCIE|Assert|Critical|17FFB002|PCIe Present mismatch BIOS miss F_GPU6 - Assert|" }`)

	events := ParseIDLLog(content)
	if len(events) != 1 {
		t.Fatalf("expected 1 event from JSON line, got %d", len(events))
	}
	if events[0].ID != "17FFB002" {
		t.Fatalf("expected event ID 17FFB002, got %q", events[0].ID)
	}
	if events[0].Source != "PCIE" {
		t.Fatalf("expected source PCIE, got %q", events[0].Source)
	}
}
internal/parser/vendors/inspur/hgx_hwinfo.go (vendored, new file, 175 lines)
@@ -0,0 +1,175 @@
package inspur

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
)

type hgxGPUAssemblyInfo struct {
	Model  string
	Part   string
	Serial string
}

// Logical GPU index mapping used by HGX B200 UI ordering.
// Example from real logs/UI:
// GPU0->SXM5, GPU1->SXM7, GPU2->SXM6, GPU3->SXM8, GPU4->SXM2, GPU5->SXM4, GPU6->SXM3, GPU7->SXM1.
var hgxLogicalToSXM = map[int]int{
	0: 5,
	1: 7,
	2: 6,
	3: 8,
	4: 2,
	5: 4,
	6: 3,
	7: 1,
}

var (
	reHGXGPUBlock = regexp.MustCompile(`(?s)/redfish/v1/Chassis/HGX_GPU_SXM_(\d+)/Assembly.*?"Name":\s*"GPU Board Assembly".*?"Model":\s*"([^"]+)".*?"PartNumber":\s*"([^"]+)".*?"SerialNumber":\s*"([^"]+)"`)
	reSlotGPU     = regexp.MustCompile(`(?i)gpu\s*#?\s*(\d+)`)
)

func enrichGPUsFromHGXHWInfo(content []byte, hw *models.HardwareConfig) {
	if hw == nil || len(hw.GPUs) == 0 || len(content) == 0 {
		return
	}

	bySXM := parseHGXGPUAssembly(content)
	if len(bySXM) == 0 {
		return
	}

	normalizeHGXGPUInventory(hw, bySXM)

	for i := range hw.GPUs {
		gpu := &hw.GPUs[i]
		logicalIdx, ok := extractLogicalGPUIndex(gpu.Slot)
		if !ok {
			// Keep existing info if slot index cannot be determined.
			continue
		}

		sxm := resolveSXMIndex(logicalIdx, bySXM)
		info, found := bySXM[sxm]
		if !found {
			continue
		}

		if strings.TrimSpace(gpu.SerialNumber) == "" {
			gpu.SerialNumber = info.Serial
		}
		if shouldReplaceGPUModel(gpu.Model) {
			gpu.Model = info.Model
		}
		if strings.TrimSpace(gpu.PartNumber) == "" {
			gpu.PartNumber = info.Part
		}
		if strings.TrimSpace(gpu.Manufacturer) == "" {
			gpu.Manufacturer = "NVIDIA"
		}
	}
}

func parseHGXGPUAssembly(content []byte) map[int]hgxGPUAssemblyInfo {
	result := make(map[int]hgxGPUAssemblyInfo)
	matches := reHGXGPUBlock.FindAllSubmatch(content, -1)
	for _, m := range matches {
		if len(m) != 5 {
			continue
		}

		sxmIdx, err := strconv.Atoi(string(m[1]))
		if err != nil || sxmIdx <= 0 {
			continue
		}

		result[sxmIdx] = hgxGPUAssemblyInfo{
			Model:  strings.TrimSpace(string(m[2])),
			Part:   strings.TrimSpace(string(m[3])),
			Serial: strings.TrimSpace(string(m[4])),
		}
	}
	return result
}

func extractLogicalGPUIndex(slot string) (int, bool) {
	m := reSlotGPU.FindStringSubmatch(slot)
	if len(m) < 2 {
		return 0, false
	}

	idx, err := strconv.Atoi(m[1])
	if err != nil || idx < 0 {
		return 0, false
	}
	return idx, true
}

func resolveSXMIndex(logicalIdx int, bySXM map[int]hgxGPUAssemblyInfo) int {
	if sxm, ok := hgxLogicalToSXM[logicalIdx]; ok {
		if _, exists := bySXM[sxm]; exists {
			return sxm
		}
	}

	identity := logicalIdx + 1
	if _, exists := bySXM[identity]; exists {
		return identity
	}

	return identity
}

func shouldReplaceGPUModel(model string) bool {
	trimmed := strings.TrimSpace(model)
	if trimmed == "" {
		return true
	}
	switch strings.ToLower(trimmed) {
	case "vga", "3d controller", "display controller", "unknown":
		return true
	default:
		return false
	}
}

func normalizeHGXGPUInventory(hw *models.HardwareConfig, bySXM map[int]hgxGPUAssemblyInfo) {
	// Keep only logical HGX GPUs (#GPU0..#GPU7) and remove BMC VGA entries.
	filtered := make([]models.GPU, 0, len(hw.GPUs))
	present := make(map[int]bool)
	for _, gpu := range hw.GPUs {
		idx, ok := extractLogicalGPUIndex(gpu.Slot)
		if !ok || idx < 0 || idx > 7 {
			continue
		}
		present[idx] = true
		filtered = append(filtered, gpu)
	}

	// If some logical GPUs are missing in asset.json, add placeholders from HGX Redfish assembly.
	for logicalIdx := 0; logicalIdx <= 7; logicalIdx++ {
		if present[logicalIdx] {
			continue
		}
		sxm := resolveSXMIndex(logicalIdx, bySXM)
		info, ok := bySXM[sxm]
		if !ok {
			continue
		}

		filtered = append(filtered, models.GPU{
			Slot:         fmt.Sprintf("#GPU%d", logicalIdx),
			Model:        info.Model,
			Manufacturer: "NVIDIA",
			SerialNumber: info.Serial,
			PartNumber:   info.Part,
		})
	}

	hw.GPUs = filtered
}
internal/parser/vendors/inspur/idl.go (vendored, 10 lines changed)
@@ -8,8 +8,10 @@ import (
	"git.mchus.pro/mchus/logpile/internal/models"
)

-// ParseIDLLog parses the IDL (Inspur Diagnostic Log) file for BMC alarms
-// Format: |timestamp|component|type|severity|eventID|description|
+// ParseIDLLog parses IDL-style entries for BMC alarms.
+// Works for both plain idl.log lines and JSON structured logs (idl_json/run_json)
+// where MESSAGE/LOG2_FMTMSG contains:
+// |timestamp|component|type|severity|eventID|description|
func ParseIDLLog(content []byte) []models.Event {
	var events []models.Event

@@ -21,10 +23,6 @@ func ParseIDLLog(content []byte) []models.Event {
	seenEvents := make(map[string]bool) // Deduplicate events

	for _, line := range lines {
-		if !strings.Contains(line, "CommerDiagnose") {
-			continue
-		}
-
		matches := re.FindStringSubmatch(line)
		if matches == nil {
			continue
internal/parser/vendors/inspur/parser.go (vendored, 30 lines changed)
@@ -15,7 +15,7 @@ import (

// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
-const parserVersion = "1.0.0"
+const parserVersion = "1.1.0"

func init() {
	parser.Register(&Parser{})
@@ -125,8 +125,9 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
		result.Events = append(result.Events, componentEvents...)
	}

-	// Parse IDL log (BMC alarms/diagnose events)
-	if f := parser.FindFileByName(files, "idl.log"); f != nil {
+	// Parse IDL-like logs (plain and structured JSON logs with embedded IDL messages)
+	idlFiles := parser.FindFileByPattern(files, "/idl.log", "idl_json.log", "run_json.log")
+	for _, f := range idlFiles {
		idlEvents := ParseIDLLog(f.Content)
		result.Events = append(result.Events, idlEvents...)
	}
@@ -144,6 +145,29 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
		result.Events = append(result.Events, events...)
	}

+	// Fallback for archives where board serial is missing in parsed FRU/asset data:
+	// recover it from log content, never from archive filename.
+	if strings.TrimSpace(result.Hardware.BoardInfo.SerialNumber) == "" {
+		if serial := inferBoardSerialFromFallbackLogs(files); serial != "" {
+			result.Hardware.BoardInfo.SerialNumber = serial
+		}
+	}
+	if strings.TrimSpace(result.Hardware.BoardInfo.ProductName) == "" {
+		if model := inferBoardModelFromFallbackLogs(files); model != "" {
+			result.Hardware.BoardInfo.ProductName = model
+		}
+	}
+
+	// Enrich GPU inventory from HGX Redfish snapshot (serial/model/part mapping).
+	if f := parser.FindFileByName(files, "HGX_HWInfo_FWVersion.log"); f != nil && result.Hardware != nil {
+		enrichGPUsFromHGXHWInfo(f.Content, result.Hardware)
+	}
+
+	// Mark problematic GPUs from IDL errors like "BIOS miss F_GPU6".
+	if result.Hardware != nil {
+		applyGPUStatusFromEvents(result.Hardware, result.Events)
+	}
+
	return result, nil
}
internal/parser/vendors/inspur/serial_fallback.go (vendored, new file, 92 lines)
@@ -0,0 +1,92 @@
package inspur

import (
	"regexp"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

var (
	hostnameJSONRegex = regexp.MustCompile(`"_HOSTNAME"\s*:\s*"([^"]+)"`)
)

func inferBoardSerialFromFallbackLogs(files []parser.ExtractedFile) string {
	// Prefer FRU dump when present.
	if f := parser.FindFileByName(files, "fru.txt"); f != nil {
		fruList := ParseFRU(f.Content)
		for _, fru := range fruList {
			serial := strings.TrimSpace(fru.SerialNumber)
			if serial == "" || serial == "0" {
				continue
			}
			desc := strings.ToLower(strings.TrimSpace(fru.Description))
			if strings.Contains(desc, "builtin") || strings.Contains(desc, "fru device") {
				return serial
			}
		}
	}

	// Fallback to explicit hostname file.
	if f := parser.FindFileByName(files, "hostname"); f != nil {
		if serial := sanitizeCandidateSerial(firstNonEmptyLine(string(f.Content))); serial != "" {
			return serial
		}
	}

	// Last-resort fallback from structured journal logs.
	if f := parser.FindFileByName(files, "maintenance_json.log"); f != nil {
		if m := hostnameJSONRegex.FindSubmatch(f.Content); len(m) == 2 {
			if serial := sanitizeCandidateSerial(string(m[1])); serial != "" {
				return serial
			}
		}
	}

	return ""
}

func inferBoardModelFromFallbackLogs(files []parser.ExtractedFile) string {
	// Prefer FRU dump when present.
	if f := parser.FindFileByName(files, "fru.txt"); f != nil {
		fruList := ParseFRU(f.Content)
		for _, fru := range fruList {
			model := sanitizeCandidateModel(fru.ProductName)
			if model == "" {
				continue
			}
			desc := strings.ToLower(strings.TrimSpace(fru.Description))
			if strings.Contains(desc, "builtin") || strings.Contains(desc, "fru device") {
				return model
			}
		}
	}

	return ""
}

func firstNonEmptyLine(s string) string {
	for _, line := range strings.Split(s, "\n") {
		line = strings.TrimSpace(line)
		if line != "" {
			return line
		}
	}
	return ""
}

func sanitizeCandidateSerial(s string) string {
	s = strings.TrimSpace(s)
	if s == "" || strings.EqualFold(s, "localhost") || strings.ContainsAny(s, " \t") {
		return ""
	}
	return s
}

func sanitizeCandidateModel(s string) string {
	s = strings.TrimSpace(s)
	if s == "" || strings.EqualFold(s, "null") || s == "0" {
		return ""
	}
	return s
}
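The last-resort journal fallback above hinges on the `_HOSTNAME` capture regex. A minimal standalone sketch of that extraction (the pattern is copied here so it runs outside the repo; the repo's `hostnameJSONRegex` is the authoritative copy):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as hostnameJSONRegex in serial_fallback.go:
// captures the value of the "_HOSTNAME" key from a journald JSON record.
var hostnameJSONRegex = regexp.MustCompile(`"_HOSTNAME"\s*:\s*"([^"]+)"`)

func extractHostname(journal []byte) string {
	// FindSubmatch returns [full match, capture group] on success.
	if m := hostnameJSONRegex.FindSubmatch(journal); len(m) == 2 {
		return string(m[1])
	}
	return ""
}

func main() {
	line := []byte(`{ "_HOSTNAME": "23DB01639", "MESSAGE": "ok" }`)
	fmt.Println(extractHostname(line)) // 23DB01639
	fmt.Println(extractHostname([]byte(`{}`)) == "") // true: no key, no match
}
```

On Inspur BMCs the hostname is sometimes set to the board serial, which is why this counts as a serial candidate only after `sanitizeCandidateSerial` rejects `localhost` and values with whitespace.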
76	internal/parser/vendors/inspur/serial_fallback_test.go	vendored	Normal file
@@ -0,0 +1,76 @@
package inspur

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestInferBoardSerialFromFallbackLogs_PrefersFRU(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path: "component/fru.txt",
			Content: []byte(`FRU Device Description : Builtin FRU Device (ID 0)
Product Serial : 23DB01639
`),
		},
		{
			Path:    "runningdata/RTOSDump/hostname",
			Content: []byte("HOSTNAME-FALLBACK\n"),
		},
		{
			Path:    "log/bmc/struct-log/maintenance_json.log",
			Content: []byte(`{ "_HOSTNAME": "JSON-FALLBACK" }`),
		},
	}

	got := inferBoardSerialFromFallbackLogs(files)
	if got != "23DB01639" {
		t.Fatalf("expected FRU serial 23DB01639, got %q", got)
	}
}

func TestInferBoardSerialFromFallbackLogs_UsesHostnameFile(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path:    "runningdata/RTOSDump/hostname",
			Content: []byte("23DB01639\n"),
		},
	}

	got := inferBoardSerialFromFallbackLogs(files)
	if got != "23DB01639" {
		t.Fatalf("expected hostname serial 23DB01639, got %q", got)
	}
}

func TestInferBoardSerialFromFallbackLogs_UsesMaintenanceJSON(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path:    "log/bmc/struct-log/maintenance_json.log",
			Content: []byte(`{ "_HOSTNAME": "23DB01639", "MESSAGE": "ok" }`),
		},
	}

	got := inferBoardSerialFromFallbackLogs(files)
	if got != "23DB01639" {
		t.Fatalf("expected JSON hostname serial 23DB01639, got %q", got)
	}
}

func TestInferBoardModelFromFallbackLogs_PrefersFRU(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path: "component/fru.txt",
			Content: []byte(`FRU Device Description : Builtin FRU Device (ID 0)
Board Product : KR9288-X3-A0-F0-00
Product Name : KR9288-X3-A0-F0-00
`),
		},
	}

	got := inferBoardModelFromFallbackLogs(files)
	if got != "KR9288-X3-A0-F0-00" {
		t.Fatalf("expected board model KR9288-X3-A0-F0-00, got %q", got)
	}
}
178	internal/parser/vendors/nvidia/gpu_model.go	vendored	Normal file
@@ -0,0 +1,178 @@
package nvidia

import (
	"encoding/json"
	"fmt"
	"regexp"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

var (
	gpuNameWithSerialRegex = regexp.MustCompile(`^SXM(\d+)_SN_(.+)$`)
	gpuNameSlotOnlyRegex   = regexp.MustCompile(`^SXM(\d+)$`)
	skuModelRegex          = regexp.MustCompile(`sku_hgx-([a-z0-9]+)-\d+-gpu`)
	skuCodeRegex           = regexp.MustCompile(`^(G\d{3})[.-](\d{4})`)
)

type testSpecData struct {
	Actions []struct {
		VirtualID string `json:"virtual_id"`
		Args      struct {
			SKUToFile map[string]string `json:"sku_to_sku_json_file_map"`
		} `json:"args"`
	} `json:"actions"`
}

type inventoryFieldDiagSummary struct {
	ModsRuns []struct {
		ModsHeader []struct {
			GPUName   string `json:"GpuName"`
			BoardInfo string `json:"BoardInfo"`
		} `json:"ModsHeader"`
	} `json:"ModsRuns"`
}

// ApplyGPUModelsFromSKU updates GPU model names using SKU mapping from testspec.json.
// Mapping source:
//   - inventory/fieldiag_summary.json: GPUName -> BoardInfo(SKU)
//   - testspec.json: SKU -> sku_hgx-... filename
func ApplyGPUModelsFromSKU(files []parser.ExtractedFile, result *models.AnalysisResult) {
	if result == nil || result.Hardware == nil || len(result.Hardware.GPUs) == 0 {
		return
	}

	skuToFile := parseSKUToFileMap(files)
	if len(skuToFile) == 0 {
		return
	}

	serialToSKU, slotToSKU := parseGPUSKUMapping(files)
	if len(serialToSKU) == 0 && len(slotToSKU) == 0 {
		return
	}

	for i := range result.Hardware.GPUs {
		gpu := &result.Hardware.GPUs[i]
		sku := ""

		if serial := strings.TrimSpace(gpu.SerialNumber); serial != "" {
			sku = serialToSKU[serial]
		}
		if sku == "" {
			sku = slotToSKU[strings.TrimSpace(gpu.Slot)]
		}
		if sku == "" {
			continue
		}

		model := resolveModelFromSKU(sku, skuToFile)
		if model == "" {
			continue
		}

		gpu.Model = model
	}
}

func parseSKUToFileMap(files []parser.ExtractedFile) map[string]string {
	specFile := parser.FindFileByName(files, "testspec.json")
	if specFile == nil {
		return nil
	}

	var spec testSpecData
	if err := json.Unmarshal(specFile.Content, &spec); err != nil {
		return nil
	}

	result := make(map[string]string)
	for _, action := range spec.Actions {
		for sku, file := range action.Args.SKUToFile {
			normSKU := normalizeSKUCode(sku)
			if normSKU == "" {
				continue
			}
			result[normSKU] = strings.TrimSpace(file)
		}
	}
	return result
}

func parseGPUSKUMapping(files []parser.ExtractedFile) (map[string]string, map[string]string) {
	var summaryFile *parser.ExtractedFile
	for _, f := range files {
		path := strings.ToLower(f.Path)
		if strings.Contains(path, "inventory/fieldiag_summary.json") ||
			strings.Contains(path, "inventory\\fieldiag_summary.json") {
			summaryFile = &f
			break
		}
	}
	if summaryFile == nil {
		return nil, nil
	}

	var summary inventoryFieldDiagSummary
	if err := json.Unmarshal(summaryFile.Content, &summary); err != nil {
		return nil, nil
	}

	serialToSKU := make(map[string]string)
	slotToSKU := make(map[string]string)

	for _, run := range summary.ModsRuns {
		for _, h := range run.ModsHeader {
			sku := normalizeSKUCode(h.BoardInfo)
			if sku == "" {
				continue
			}

			gpuName := strings.TrimSpace(h.GPUName)
			if matches := gpuNameWithSerialRegex.FindStringSubmatch(gpuName); len(matches) == 3 {
				slotToSKU["GPUSXM"+matches[1]] = sku
				serialToSKU[strings.TrimSpace(matches[2])] = sku
				continue
			}
			if matches := gpuNameSlotOnlyRegex.FindStringSubmatch(gpuName); len(matches) == 2 {
				slotToSKU["GPUSXM"+matches[1]] = sku
			}
		}
	}

	return serialToSKU, slotToSKU
}

func resolveModelFromSKU(sku string, skuToFile map[string]string) string {
	file := strings.ToLower(strings.TrimSpace(skuToFile[normalizeSKUCode(sku)]))
	if file == "" {
		return ""
	}

	m := skuModelRegex.FindStringSubmatch(file)
	if len(m) != 2 {
		return ""
	}

	gpuFamily := strings.ToUpper(strings.TrimSpace(m[1]))
	if gpuFamily == "" {
		return ""
	}

	return fmt.Sprintf("NVIDIA %s SXM", gpuFamily)
}

func normalizeSKUCode(v string) string {
	s := strings.TrimSpace(strings.ToUpper(v))
	if s == "" {
		return ""
	}

	if m := skuCodeRegex.FindStringSubmatch(s); len(m) == 3 {
		return m[1] + "-" + m[2]
	}

	return s
}
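The SKU codes seen in `BoardInfo` and `testspec.json` mix dot and dash separators (e.g. `G520.0280` vs `G520-0280`), which is why both maps are keyed through `normalizeSKUCode`. A standalone sketch of that normalization (same regex as `skuCodeRegex` above):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as skuCodeRegex in gpu_model.go: "G" plus three digits,
// a dot or dash separator, then four digits; anything after is dropped.
var skuCodeRegex = regexp.MustCompile(`^(G\d{3})[.-](\d{4})`)

func normalizeSKUCode(v string) string {
	s := strings.TrimSpace(strings.ToUpper(v))
	if s == "" {
		return ""
	}
	if m := skuCodeRegex.FindStringSubmatch(s); len(m) == 3 {
		return m[1] + "-" + m[2] // canonical dash form
	}
	return s // non-SKU strings pass through upper-cased
}

func main() {
	fmt.Println(normalizeSKUCode("g520.0280"))      // G520-0280
	fmt.Println(normalizeSKUCode("G520-0280.0001")) // G520-0280 (revision suffix dropped)
	fmt.Println(normalizeSKUCode("other"))          // OTHER
}
```

Normalizing on both the write side (`parseSKUToFileMap`) and the read side (`resolveModelFromSKU`) means the two data sources never have to agree on a separator.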
56	internal/parser/vendors/nvidia/gpu_model_test.go	vendored	Normal file
@@ -0,0 +1,56 @@
package nvidia

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestApplyGPUModelsFromSKU(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path: "inventory/fieldiag_summary.json",
			Content: []byte(`{
  "ModsRuns":[
    {"ModsHeader":[
      {"GpuName":"SXM5_SN_1653925025497","BoardInfo":"G520-0280"}
    ]}
  ]
}`),
		},
		{
			Path: "testspec.json",
			Content: []byte(`{
  "actions":[
    {
      "virtual_id":"inventory",
      "args":{
        "sku_to_sku_json_file_map":{
          "G520-0280":"sku_hgx-h200-8-gpu_141g_aircooled_field.json"
        }
      }
    }
  ]
}`),
		},
	}

	result := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{
			GPUs: []models.GPU{
				{
					Slot:         "GPUSXM5",
					SerialNumber: "1653925025497",
					Model:        "NVIDIA Device 2335",
				},
			},
		},
	}

	ApplyGPUModelsFromSKU(files, result)

	if got := result.Hardware.GPUs[0].Model; got != "NVIDIA H200 SXM" {
		t.Fatalf("expected model NVIDIA H200 SXM, got %q", got)
	}
}
92	internal/parser/vendors/nvidia/inventory_log.go	vendored	Normal file
@@ -0,0 +1,92 @@
package nvidia

import (
	"bufio"
	"regexp"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

var (
	// Regex to extract devname mappings from the fieldiag command line.
	// Example: "devname=0000:ba:00.0,SXM5_SN_1653925027099"
	devnameRegex = regexp.MustCompile(`devname=([\da-fA-F:\.]+),(\w+)`)
)

// ParseInventoryLog parses inventory/output.log to extract GPU serial numbers
// from fieldiag devname parameters (e.g., "SXM5_SN_1653925027099").
func ParseInventoryLog(content []byte, result *models.AnalysisResult) error {
	if result.Hardware == nil || len(result.Hardware.GPUs) == 0 {
		// No GPUs to update.
		return nil
	}

	scanner := bufio.NewScanner(strings.NewReader(string(content)))

	// First pass: build mappings of PCI BDF -> slot name and serial number
	// from the fieldiag command line.
	pciToSlot := make(map[string]string)
	pciToSerial := make(map[string]string)
	for scanner.Scan() {
		line := scanner.Text()
		// Look for a fieldiag command with devname parameters.
		if strings.Contains(line, "devname=") && strings.Contains(line, "fieldiag") {
			matches := devnameRegex.FindAllStringSubmatch(line, -1)
			for _, match := range matches {
				if len(match) == 3 {
					pciBDF := match[1]
					slotName := match[2]
					// Extract slot number and serial from a name like "SXM5_SN_1653925027099".
					if strings.HasPrefix(slotName, "SXM") {
						parts := strings.Split(slotName, "_")
						if len(parts) >= 1 {
							// Convert "SXM5" to "GPUSXM5".
							slot := "GPU" + parts[0]
							pciToSlot[pciBDF] = slot
						}
						// Extract the serial number from "SXM5_SN_1653925027099".
						if len(parts) == 3 && parts[1] == "SN" {
							serial := parts[2]
							pciToSerial[pciBDF] = serial
						}
					}
				}
			}
		}
	}

	// Second pass: assign serial numbers to GPUs based on the slot mapping.
	for i := range result.Hardware.GPUs {
		slot := result.Hardware.GPUs[i].Slot
		// Find the PCI BDF for this slot.
		var foundSerial string
		for pciBDF, mappedSlot := range pciToSlot {
			if mappedSlot == slot {
				// Found matching slot; get the serial number.
				if serial, ok := pciToSerial[pciBDF]; ok {
					foundSerial = serial
					break
				}
			}
		}
		if foundSerial != "" {
			result.Hardware.GPUs[i].SerialNumber = foundSerial
		}
	}

	return scanner.Err()
}

// findInventoryOutputLog finds the inventory/output.log file.
func findInventoryOutputLog(files []parser.ExtractedFile) *parser.ExtractedFile {
	for _, f := range files {
		path := strings.ToLower(f.Path)
		if strings.Contains(path, "inventory/output.log") ||
			strings.Contains(path, "inventory\\output.log") {
			return &f
		}
	}
	return nil
}
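The two-pass logic above is driven entirely by `devnameRegex`, which pulls BDF/name pairs out of a single fieldiag invocation line. A self-contained sketch of that extraction (pattern copied from `inventory_log.go`; the command line is a shortened illustrative sample, not a literal log excerpt):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as devnameRegex in inventory_log.go.
var devnameRegex = regexp.MustCompile(`devname=([\da-fA-F:\.]+),(\w+)`)

func main() {
	// One fieldiag command line can carry several devname pairs;
	// a name may or may not carry an "_SN_<serial>" suffix.
	line := "./fieldiag --devname=0000:ba:00.0,SXM5_SN_1653925027099 --devname=0000:cc:00.0,SXM6"
	for _, m := range devnameRegex.FindAllStringSubmatch(line, -1) {
		bdf, name := m[1], m[2]
		parts := strings.Split(name, "_")
		slot := "GPU" + parts[0] // "SXM5" -> "GPUSXM5"
		serial := ""
		if len(parts) == 3 && parts[1] == "SN" {
			serial = parts[2]
		}
		fmt.Printf("%s -> slot=%s serial=%q\n", bdf, slot, serial)
	}
}
```

Note that `\w+` matches underscores, so the whole `SXM5_SN_…` token is captured as one group and split afterwards; a name without the `_SN_` suffix still yields a slot mapping, just no serial.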
83	internal/parser/vendors/nvidia/inventory_log_test.go	vendored	Normal file
@@ -0,0 +1,83 @@
package nvidia

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestParseInventoryLog(t *testing.T) {
	// Test with the real archive.
	archivePath := filepath.Join("../../../../example", "A514359X5A09844_logs-20260115-151707.tar")

	// Skip when the fixture is absent.
	if _, err := os.Stat(archivePath); os.IsNotExist(err) {
		t.Skip("Test archive not found, skipping test")
	}

	// Extract files from the archive.
	files, err := parser.ExtractArchive(archivePath)
	if err != nil {
		t.Fatalf("Failed to extract archive: %v", err)
	}

	// Find inventory/output.log.
	var inventoryLog *parser.ExtractedFile
	for _, f := range files {
		if strings.Contains(f.Path, "inventory/output.log") {
			inventoryLog = &f
			break
		}
	}

	if inventoryLog == nil {
		t.Fatal("inventory/output.log not found")
	}

	content := string(inventoryLog.Content)

	// Test the devname regex - this extracts both slot mapping and serial numbers.
	t.Log("Testing devname extraction:")
	lines := strings.Split(content, "\n")
	serialCount := 0
	for i, line := range lines {
		if strings.Contains(line, "devname=") && strings.Contains(line, "fieldiag") {
			t.Logf("Line %d: Found fieldiag command", i)
			matches := devnameRegex.FindAllStringSubmatch(line, -1)
			t.Logf("  Found %d devname matches", len(matches))
			for _, match := range matches {
				if len(match) == 3 {
					pciBDF := match[1]
					slotName := match[2]
					t.Logf("  PCI: %s -> Slot: %s", pciBDF, slotName)

					// Extract the serial number from the slot name.
					if strings.HasPrefix(slotName, "SXM") {
						parts := strings.Split(slotName, "_")
						if len(parts) == 3 && parts[1] == "SN" {
							serial := parts[2]
							t.Logf("    Serial: %s", serial)
							serialCount++
						}
					}
				}
			}
			break
		}
	}
	t.Logf("\nTotal GPU serials extracted: %d", serialCount)

	if serialCount == 0 {
		t.Error("Expected to find GPU serial numbers, but found none")
	}
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}
24	internal/parser/vendors/nvidia/parser.go	vendored
@@ -14,7 +14,7 @@ import (
// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
-const parserVersion = "1.1.0"
+const parserVersion = "1.2.4"

func init() {
	parser.Register(&Parser{})
@@ -105,6 +105,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
	result.Hardware = &models.HardwareConfig{
		GPUs: make([]models.GPU, 0),
	}
	gpuStatuses := make(map[string]string)

	// Parse output.log first (contains dmidecode system info)
	// Find the output.log file that contains dmidecode output
@@ -124,18 +125,39 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
		}
	}

	// Parse inventory/output.log (contains GPU serial numbers from fieldiag devname parameters)
	inventoryLogFile := findInventoryOutputLog(files)
	if inventoryLogFile != nil {
		if err := ParseInventoryLog(inventoryLogFile.Content, result); err != nil {
			// Non-fatal: continue parsing the remaining files.
			_ = err
		}
	}

	// Enhance GPU model names using SKU mapping from testspec + inventory summary.
	ApplyGPUModelsFromSKU(files, result)

	// Parse summary.json (test results summary)
	if f := parser.FindFileByName(files, "summary.json"); f != nil {
		events := ParseSummaryJSON(f.Content)
		result.Events = append(result.Events, events...)
		for componentID, status := range CollectGPUStatusesFromSummaryJSON(f.Content) {
			gpuStatuses[componentID] = mergeGPUStatus(gpuStatuses[componentID], status)
		}
	}

	// Parse summary.csv (alternative format)
	if f := parser.FindFileByName(files, "summary.csv"); f != nil {
		csvEvents := ParseSummaryCSV(f.Content)
		result.Events = append(result.Events, csvEvents...)
		for componentID, status := range CollectGPUStatusesFromSummaryCSV(f.Content) {
			gpuStatuses[componentID] = mergeGPUStatus(gpuStatuses[componentID], status)
		}
	}

	// Apply per-GPU PASS/FAIL status derived from summary files.
	ApplyGPUStatuses(result, gpuStatuses)

	// Parse GPU field diagnostics logs
	gpuFieldiagFiles := parser.FindFileByPattern(files, "gpu_fieldiag/", ".log")
	for _, f := range gpuFieldiagFiles {
196	internal/parser/vendors/nvidia/parser_test.go	vendored	Normal file
@@ -0,0 +1,196 @@
package nvidia

import (
	"os"
	"path/filepath"
	"strings"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestNVIDIAParser_RealArchive(t *testing.T) {
	// Test with the real archive that was reported as problematic.
	archivePath := filepath.Join("../../../../example", "A514359X5A09844_logs-20260115-151707.tar")

	// Skip when the fixture is absent.
	if _, err := os.Stat(archivePath); os.IsNotExist(err) {
		t.Skip("Test archive not found, skipping test")
	}

	// Extract files from the archive.
	files, err := parser.ExtractArchive(archivePath)
	if err != nil {
		t.Fatalf("Failed to extract archive: %v", err)
	}

	// Check that inventory/output.log exists.
	hasInventoryLog := false
	for _, f := range files {
		if filepath.Base(f.Path) == "output.log" {
			t.Logf("Found file: %s", f.Path)
		}
		if f.Path == "./inventory/output.log" || f.Path == "inventory/output.log" {
			hasInventoryLog = true
			t.Logf("Found inventory/output.log with %d bytes", len(f.Content))
		}
	}
	if !hasInventoryLog {
		t.Error("inventory/output.log not found in extracted files")
	}

	// Create the parser and parse.
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Failed to parse archive: %v", err)
	}

	// Verify basic system info.
	if result.Hardware.BoardInfo.Manufacturer == "" {
		t.Error("Expected Manufacturer to be set")
	}
	if result.Hardware.BoardInfo.ProductName == "" {
		t.Error("Expected ProductName to be set")
	}
	if result.Hardware.BoardInfo.SerialNumber == "" {
		t.Error("Expected SerialNumber to be set")
	}

	t.Logf("System Info:")
	t.Logf("  Manufacturer: %s", result.Hardware.BoardInfo.Manufacturer)
	t.Logf("  Product: %s", result.Hardware.BoardInfo.ProductName)
	t.Logf("  Serial: %s", result.Hardware.BoardInfo.SerialNumber)

	// Verify GPUs were found.
	if len(result.Hardware.GPUs) == 0 {
		t.Error("Expected to find GPUs")
	}

	t.Logf("\nFound %d GPUs:", len(result.Hardware.GPUs))

	gpusWithSerials := 0
	for _, gpu := range result.Hardware.GPUs {
		t.Logf("  %s: %s (Firmware: %s, Serial: %s, BDF: %s)",
			gpu.Slot, gpu.Model, gpu.Firmware, gpu.SerialNumber, gpu.BDF)

		if gpu.SerialNumber != "" {
			gpusWithSerials++
		}
	}

	// Verify that GPU serial numbers were extracted.
	if gpusWithSerials == 0 {
		t.Error("Expected at least some GPUs to have serial numbers")
	}

	t.Logf("\nGPUs with serial numbers: %d/%d", gpusWithSerials, len(result.Hardware.GPUs))

	// Check events for SXM2 failures.
	t.Logf("\nTotal events: %d", len(result.Events))

	// Look for the specific serial or SXM2.
	sxm2Events := 0
	for _, event := range result.Events {
		desc := event.Description + " " + event.RawData + " " + event.EventType
		if contains(desc, "SXM2") || contains(desc, "1653925025827") {
			t.Logf("  SXM2 Event: [%s] %s (Severity: %s)", event.EventType, event.Description, event.Severity)
			sxm2Events++
		}
	}

	if sxm2Events == 0 {
		t.Error("Expected to find events for SXM2 (faulty GPU 1653925025827)")
	}
	t.Logf("\nSXM2 failure events: %d", sxm2Events)
}

func TestNVIDIAParser_GPUStatusFromSummary_RealArchive07900(t *testing.T) {
	archivePath := filepath.Join("../../../../example", "A514359X5A07900_logs-20260122-074208.tar")
	if _, err := os.Stat(archivePath); os.IsNotExist(err) {
		t.Skip("Test archive not found, skipping test")
	}

	files, err := parser.ExtractArchive(archivePath)
	if err != nil {
		t.Fatalf("Failed to extract archive: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Failed to parse archive: %v", err)
	}

	if result.Hardware == nil || len(result.Hardware.GPUs) == 0 {
		t.Fatalf("expected GPUs in parsed result")
	}

	statusBySerial := make(map[string]string, len(result.Hardware.GPUs))
	for _, gpu := range result.Hardware.GPUs {
		if gpu.SerialNumber != "" {
			statusBySerial[gpu.SerialNumber] = gpu.Status
		}
	}

	if got := statusBySerial["1653925025497"]; got != "FAIL" {
		t.Fatalf("expected GPU serial 1653925025497 status FAIL, got %q", got)
	}

	for serial, st := range statusBySerial {
		if serial == "1653925025497" {
			continue
		}
		if st != "PASS" {
			t.Fatalf("expected non-failing GPU serial %s status PASS, got %q", serial, st)
		}
	}
}

func TestNVIDIAParser_GPUModelFromSKU_RealArchive07900(t *testing.T) {
	archivePath := filepath.Join("../../../../example", "A514359X5A07900_logs-20260122-074208.tar")
	if _, err := os.Stat(archivePath); os.IsNotExist(err) {
		t.Skip("Test archive not found, skipping test")
	}

	files, err := parser.ExtractArchive(archivePath)
	if err != nil {
		t.Fatalf("Failed to extract archive: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Failed to parse archive: %v", err)
	}

	if result.Hardware == nil || len(result.Hardware.GPUs) == 0 {
		t.Fatalf("expected GPUs in parsed result")
	}

	found := false
	for _, gpu := range result.Hardware.GPUs {
		if gpu.Model == "NVIDIA H200 SXM" {
			found = true
			break
		}
	}

	if !found {
		t.Fatalf("expected at least one GPU model NVIDIA H200 SXM")
	}
}

// contains reports whether substr occurs anywhere in s.
func contains(s, substr string) bool {
	return strings.Contains(s, substr)
}
121	internal/parser/vendors/nvidia/summary.go	vendored
@@ -4,6 +4,7 @@ import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"regexp"
	"strings"
	"time"

@@ -20,6 +21,8 @@ type SummaryEntry struct {
	IgnoreError string `json:"Ignore Error"`
}

var gpuComponentIDRegex = regexp.MustCompile(`^SXM(\d+)_SN_(.+)$`)

// ParseSummaryJSON parses summary.json file and returns events
func ParseSummaryJSON(content []byte) []models.Event {
	var entries []SummaryEntry
@@ -92,6 +95,124 @@ func ParseSummaryCSV(content []byte) []models.Event {
	return events
}

// CollectGPUStatusesFromSummaryJSON extracts per-GPU PASS/FAIL status from summary.json.
// Key format in returned map is component ID from summary (e.g. "SXM5_SN_1653925025497").
func CollectGPUStatusesFromSummaryJSON(content []byte) map[string]string {
	var entries []SummaryEntry
	if err := json.Unmarshal(content, &entries); err != nil {
		return nil
	}

	statuses := make(map[string]string)
	for _, entry := range entries {
		component := strings.TrimSpace(entry.ComponentID)
		if component == "" || !gpuComponentIDRegex.MatchString(component) {
			continue
		}

		current := statuses[component]
		next := "PASS"
		if !isSummaryJSONRecordPassing(entry.ErrorCode, entry.Notes) {
			next = "FAIL"
		}
		statuses[component] = mergeGPUStatus(current, next)
	}

	return statuses
}

// CollectGPUStatusesFromSummaryCSV extracts per-GPU PASS/FAIL status from summary.csv.
// Key format in returned map is component ID from summary (e.g. "SXM5_SN_1653925025497").
func CollectGPUStatusesFromSummaryCSV(content []byte) map[string]string {
	reader := csv.NewReader(strings.NewReader(string(content)))
	records, err := reader.ReadAll()
	if err != nil {
		return nil
	}

	statuses := make(map[string]string)
	for i, record := range records {
		if i == 0 || len(record) < 7 {
			continue
		}

		component := strings.TrimSpace(record[5])
		if component == "" || !gpuComponentIDRegex.MatchString(component) {
			continue
		}

		errorCode := strings.TrimSpace(record[0])
		notes := strings.TrimSpace(record[6])

		current := statuses[component]
		next := "PASS"
		if !isSummaryCSVRecordPassing(errorCode, notes) {
			next = "FAIL"
		}
		statuses[component] = mergeGPUStatus(current, next)
	}

	return statuses
}

func isSummaryJSONRecordPassing(errorCode, notes string) bool {
	_ = errorCode
	return strings.TrimSpace(notes) == "OK"
}

func isSummaryCSVRecordPassing(errorCode, notes string) bool {
	_ = errorCode
	return strings.TrimSpace(notes) == "OK"
}

func mergeGPUStatus(current, next string) string {
	// FAIL has highest priority.
	if current == "FAIL" || next == "FAIL" {
		return "FAIL"
	}
	if current == "PASS" || next == "PASS" {
		return "PASS"
	}
	return ""
}

// ApplyGPUStatuses applies aggregated PASS/FAIL statuses from summary components to parsed GPUs.
func ApplyGPUStatuses(result *models.AnalysisResult, componentStatuses map[string]string) {
	if result == nil || result.Hardware == nil || len(result.Hardware.GPUs) == 0 || len(componentStatuses) == 0 {
		return
	}

	slotStatus := make(map[string]string)   // key: GPUSXM<idx>
	serialStatus := make(map[string]string) // key: GPU serial

	for componentID, status := range componentStatuses {
		matches := gpuComponentIDRegex.FindStringSubmatch(strings.TrimSpace(componentID))
		if len(matches) != 3 {
			continue
		}
		slotKey := "GPUSXM" + matches[1]
		serialKey := strings.TrimSpace(matches[2])
		slotStatus[slotKey] = mergeGPUStatus(slotStatus[slotKey], status)
		if serialKey != "" {
			serialStatus[serialKey] = mergeGPUStatus(serialStatus[serialKey], status)
		}
	}

	for i := range result.Hardware.GPUs {
		gpu := &result.Hardware.GPUs[i]
		next := ""
		if serial := strings.TrimSpace(gpu.SerialNumber); serial != "" {
			next = serialStatus[serial]
		}
		if next == "" {
			next = slotStatus[strings.TrimSpace(gpu.Slot)]
		}
		if next != "" {
			gpu.Status = next
		}
	}
}

// formatSummaryDescription creates a human-readable description from summary entry
func formatSummaryDescription(entry SummaryEntry) string {
	component := entry.ComponentID
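The fold in `mergeGPUStatus` is a small precedence lattice (FAIL dominates PASS, which dominates unknown), which makes the aggregation order-independent across summary records. Restated standalone:

```go
package main

import "fmt"

// Same precedence as mergeGPUStatus in summary.go: FAIL > PASS > "".
func mergeGPUStatus(current, next string) string {
	if current == "FAIL" || next == "FAIL" {
		return "FAIL"
	}
	if current == "PASS" || next == "PASS" {
		return "PASS"
	}
	return ""
}

func main() {
	// A GPU that passed one test but failed another ends up FAIL,
	// regardless of the order the records are folded in.
	fmt.Println(mergeGPUStatus(mergeGPUStatus("", "PASS"), "FAIL")) // FAIL
	fmt.Println(mergeGPUStatus(mergeGPUStatus("FAIL", ""), "PASS")) // FAIL
	fmt.Println(mergeGPUStatus("", "") == "")                       // true
}
```

Because the operation is commutative and associative, `CollectGPUStatusesFromSummaryJSON` and `CollectGPUStatusesFromSummaryCSV` can feed the same map in any order without changing the final per-GPU verdict.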
46	internal/parser/vendors/nvidia/summary_status_test.go	vendored	Normal file
@@ -0,0 +1,46 @@
package nvidia

import (
	"strings"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestApplyGPUStatuses_FromSummaryCSV_FailAndPass(t *testing.T) {
	csvData := strings.Join([]string{
		"ErrorCode,Test,VirtualID,SubTest,Type,ComponentID,Notes,Level,,,IgnoreError",
		"0,gpumem,gpumem,,GPU,SXM1_SN_111,OK,1,,,False",
		"363,gpumem,gpumem,,GPU,SXM5_SN_1653925025497,Row remapping failed,1,,,False",
		"0,gpu_fieldiag,gpu_fieldiag,,GPU,SXM1_SN_111,OK,1,,,False",
		"0,gpu_fieldiag,gpu_fieldiag,,GPU,SXM2_SN_222,OK,1,,,False",
	}, "\n")

	result := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{
			GPUs: []models.GPU{
				{Slot: "GPUSXM1", SerialNumber: "111"},
				{Slot: "GPUSXM2", SerialNumber: "222"},
				{Slot: "GPUSXM5", SerialNumber: "1653925025497"},
			},
		},
	}

	statuses := CollectGPUStatusesFromSummaryCSV([]byte(csvData))
	ApplyGPUStatuses(result, statuses)

	bySerial := map[string]string{}
	for _, gpu := range result.Hardware.GPUs {
		bySerial[gpu.SerialNumber] = gpu.Status
	}

	if bySerial["1653925025497"] != "FAIL" {
		t.Fatalf("expected serial 1653925025497 status FAIL, got %q", bySerial["1653925025497"])
	}
	if bySerial["111"] != "PASS" {
		t.Fatalf("expected serial 111 status PASS, got %q", bySerial["111"])
	}
	if bySerial["222"] != "PASS" {
		t.Fatalf("expected serial 222 status PASS, got %q", bySerial["222"])
	}
}
@@ -3,6 +3,7 @@ package nvidia
 import (
 	"encoding/json"
 	"fmt"
+	"regexp"
 	"strings"

 	"git.mchus.pro/mchus/logpile/internal/models"
@@ -53,6 +54,8 @@ type Property struct {
 	Value interface{} `json:"value"` // Can be string or number
 }

+var nvswitchComponentIDRegex = regexp.MustCompile(`^(NVSWITCH\d+|NVSWITCHNVSWITCH\d+)$`)
+
 // GetValueAsString returns the value as a string
 func (p *Property) GetValueAsString() string {
 	switch v := p.Value.(type) {
@@ -107,7 +110,7 @@ func parseInventoryComponents(components []Component, result *models.AnalysisRes
 	}

 	// Parse NVSwitch components
-	if strings.HasPrefix(comp.ComponentID, "NVSWITCHNVSWITCH") {
+	if isNVSwitchComponentID(comp.ComponentID) {
 		nvswitch := parseNVSwitchComponent(comp)
 		if nvswitch != nil {
 			// Add as PCIe device for now
@@ -217,7 +220,7 @@ func parseGPUComponent(comp Component) *models.GPU {
 // parseNVSwitchComponent parses NVSwitch component information
 func parseNVSwitchComponent(comp Component) *models.PCIeDevice {
 	device := &models.PCIeDevice{
-		Slot: comp.ComponentID, // e.g., "NVSWITCHNVSWITCH0"
+		Slot: normalizeNVSwitchSlot(comp.ComponentID),
 	}

 	var vendorIDStr, deviceIDStr, vbios, pciID string
@@ -279,3 +282,15 @@ func parseNVSwitchComponent(comp Component) *models.PCIeDevice {

 	return device
 }
+
+func normalizeNVSwitchSlot(componentID string) string {
+	slot := strings.TrimSpace(componentID)
+	if strings.HasPrefix(slot, "NVSWITCHNVSWITCH") {
+		return strings.Replace(slot, "NVSWITCHNVSWITCH", "NVSWITCH", 1)
+	}
+	return slot
+}
+
+func isNVSwitchComponentID(componentID string) bool {
+	return nvswitchComponentIDRegex.MatchString(strings.TrimSpace(componentID))
+}
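The slot normalization added in this hunk can be tried in isolation. This is a hedged standalone sketch: the function body is copied from the diff, but the `main` wrapper is illustrative and is not part of the package.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeNVSwitchSlot collapses the duplicated "NVSWITCHNVSWITCH" prefix
// seen in some inventory dumps down to a single "NVSWITCH".
func normalizeNVSwitchSlot(componentID string) string {
	slot := strings.TrimSpace(componentID)
	if strings.HasPrefix(slot, "NVSWITCHNVSWITCH") {
		return strings.Replace(slot, "NVSWITCHNVSWITCH", "NVSWITCH", 1)
	}
	return slot
}

func main() {
	fmt.Println(normalizeNVSwitchSlot("NVSWITCHNVSWITCH1")) // NVSWITCH1
	fmt.Println(normalizeNVSwitchSlot("NVSWITCH3"))         // NVSWITCH3 (unchanged)
}
```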
46  internal/parser/vendors/nvidia/unified_summary_filter_test.go  (vendored, new file)
@@ -0,0 +1,46 @@
package nvidia

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestParseInventoryComponents_IgnoresNVSwitchPropertyChecks(t *testing.T) {
	result := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{},
	}

	components := []Component{
		{
			ComponentID: "NVSWITCHNVSWITCH1",
			Properties: []Property{
				{ID: "VendorID", Value: "10de"},
				{ID: "DeviceID", Value: "22a3"},
				{ID: "PCIID", Value: "0000:06:00.0"},
			},
		},
		{
			ComponentID: "NVSWITCHNum",
			Properties: []Property{
				{ID: "NVSWITCHNum", Value: 4},
			},
		},
		{
			ComponentID: "NVSWITCH_NVSWITCH1_VendorID",
			Properties: []Property{
				{ID: "NVSWITCH_NVSWITCH1_VendorID", Value: "10de"},
			},
		},
	}

	parseInventoryComponents(components, result)

	if got := len(result.Hardware.PCIeDevices); got != 1 {
		t.Fatalf("expected exactly 1 parsed NVSwitch device, got %d", got)
	}

	if result.Hardware.PCIeDevices[0].Slot != "NVSWITCH1" {
		t.Fatalf("expected slot NVSWITCH1, got %q", result.Hardware.PCIeDevices[0].Slot)
	}
}
35  internal/parser/vendors/nvidia/unified_summary_test.go  (vendored, new file)
@@ -0,0 +1,35 @@
package nvidia

import "testing"

func TestParseNVSwitchComponent_NormalizesDuplicatedPrefixInSlot(t *testing.T) {
	comp := Component{
		ComponentID: "NVSWITCHNVSWITCH1",
		Properties: []Property{
			{ID: "VendorID", Value: "10de"},
			{ID: "DeviceID", Value: "22a3"},
			{ID: "Vendor", Value: "NVIDIA Corporation"},
			{ID: "PCIID", Value: "0000:06:00.0"},
			{ID: "PCISpeed", Value: "16GT/s"},
			{ID: "PCIWidth", Value: "x2"},
			{ID: "VBIOS_version", Value: "96.10.6D.00.01"},
		},
	}

	device := parseNVSwitchComponent(comp)
	if device == nil {
		t.Fatal("expected non-nil NVSwitch device")
	}

	if device.Slot != "NVSWITCH1" {
		t.Fatalf("expected normalized slot NVSWITCH1, got %q", device.Slot)
	}

	if device.BDF != "0000:06:00.0" {
		t.Fatalf("expected BDF 0000:06:00.0, got %q", device.BDF)
	}

	if device.DeviceClass != "NVSwitch" {
		t.Fatalf("expected device class NVSwitch, got %q", device.DeviceClass)
	}
}
137  internal/parser/vendors/nvidia_bug_report/gpu.go  (vendored)
@@ -106,6 +108,8 @@ func parseGPUInfo(content string, result *models.AnalysisResult) {
 		result.Hardware.GPUs = append(result.Hardware.GPUs, *currentGPU)
 	}

+	applyGPUSerialNumbers(content, result.Hardware.GPUs)
+
 	// Create event for GPU summary
 	if len(result.Hardware.GPUs) > 0 {
 		result.Events = append(result.Events, models.Event{
@@ -168,3 +170,138 @@ func formatGPUSummary(gpus []models.GPU) string {

	return summary.String()
}

func applyGPUSerialNumbers(content string, gpus []models.GPU) {
	if len(gpus) == 0 {
		return
	}

	serialByBDF := parseGPUSerialsFromNvidiaSMI(content)
	if len(serialByBDF) == 0 {
		serialByBDF = parseGPUSerialsFromSummary(content)
	}

	if len(serialByBDF) == 0 {
		return
	}

	for i := range gpus {
		bdf := normalizeGPUAddress(gpus[i].BDF)
		if bdf == "" {
			continue
		}
		if serial, ok := serialByBDF[bdf]; ok && serial != "" {
			gpus[i].SerialNumber = serial
		}
	}
}

func parseGPUSerialsFromNvidiaSMI(content string) map[string]string {
	scanner := bufio.NewScanner(strings.NewReader(content))
	reGPU := regexp.MustCompile(`^GPU\s+([0-9A-F]{8}:[0-9A-F]{2}:[0-9A-F]{2}\.[0-9A-F])$`)

	serialByBDF := make(map[string]string)
	currentBDF := ""

	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}

		if matches := reGPU.FindStringSubmatch(line); len(matches) == 2 {
			currentBDF = normalizeGPUAddress(matches[1])
			continue
		}

		if currentBDF == "" {
			continue
		}

		if strings.HasPrefix(line, "Serial Number") {
			parts := strings.SplitN(line, ":", 2)
			if len(parts) != 2 {
				continue
			}
			serial := strings.TrimSpace(parts[1])
			if serial != "" && !strings.EqualFold(serial, "N/A") {
				serialByBDF[currentBDF] = serial
			}
		}
	}

	return serialByBDF
}

func parseGPUSerialsFromSummary(content string) map[string]string {
	scanner := bufio.NewScanner(strings.NewReader(content))

	serialByBDF := make(map[string]string)
	inGPUDetails := false

	for scanner.Scan() {
		line := scanner.Text()
		trimmed := strings.TrimSpace(line)

		if strings.HasPrefix(trimmed, "NVIDIA GPU Details") {
			inGPUDetails = true
		}
		if !inGPUDetails {
			continue
		}
		if strings.HasPrefix(trimmed, "NVIDIA Switch Details") {
			break
		}

		parts := strings.Split(line, "|")
		if len(parts) < 2 {
			continue
		}
		payload := strings.TrimSpace(parts[len(parts)-1])
		if payload == "" {
			continue
		}

		fields := strings.Split(payload, ",")
		if len(fields) < 6 {
			continue
		}

		bdf := normalizeGPUAddress(strings.TrimSpace(fields[4]))
		serial := strings.TrimSpace(fields[5])
		if bdf == "" || serial == "" || strings.EqualFold(serial, "N/A") {
			continue
		}
		serialByBDF[bdf] = serial
	}

	return serialByBDF
}

func normalizeGPUAddress(addr string) string {
	addr = strings.TrimSpace(addr)
	if addr == "" {
		return ""
	}
	parts := strings.Split(addr, ":")
	if len(parts) != 3 {
		return strings.ToLower(addr)
	}

	domain := parts[0]
	bus := parts[1]
	devFn := parts[2]

	devFnParts := strings.Split(devFn, ".")
	if len(devFnParts) != 2 {
		return strings.ToLower(addr)
	}
	device := devFnParts[0]
	fn := devFnParts[1]

	if len(domain) == 8 {
		domain = domain[4:]
	}

	return strings.ToLower(domain + ":" + bus + ":" + device + "." + fn)
}
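The BDF normalization at the end of this file is the key to matching nvidia-smi output (8-digit PCI domain, upper-case hex) against the parser's short-form addresses. A standalone sketch, with the function body copied from the diff and a hypothetical `main` added for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGPUAddress reduces an nvidia-smi style PCI address
// ("00000000:2A:00.0") to the short lspci form ("0000:2a:00.0"),
// lower-cased so map lookups are case-insensitive.
func normalizeGPUAddress(addr string) string {
	addr = strings.TrimSpace(addr)
	if addr == "" {
		return ""
	}
	parts := strings.Split(addr, ":")
	if len(parts) != 3 {
		return strings.ToLower(addr)
	}
	domain, bus, devFn := parts[0], parts[1], parts[2]
	devFnParts := strings.Split(devFn, ".")
	if len(devFnParts) != 2 {
		return strings.ToLower(addr)
	}
	if len(domain) == 8 {
		domain = domain[4:] // keep only the low 16 bits of the domain
	}
	return strings.ToLower(domain + ":" + bus + ":" + devFnParts[0] + "." + devFnParts[1])
}

func main() {
	fmt.Println(normalizeGPUAddress("00000000:2A:00.0")) // 0000:2a:00.0
	fmt.Println(normalizeGPUAddress("0000:18:00.0"))     // 0000:18:00.0
}
```

Both long and short forms normalize to the same key, which is why `applyGPUSerialNumbers` can join `gpus[i].BDF` against serials harvested from either source.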
54  internal/parser/vendors/nvidia_bug_report/gpu_test.go  (vendored, new file)
@@ -0,0 +1,54 @@
package nvidia_bug_report

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestApplyGPUSerialNumbers_FromNvidiaSMI(t *testing.T) {
	content := `
/usr/bin/nvidia-smi --query
GPU 00000000:18:00.0
Serial Number : 1653925025827
GPU 00000000:2A:00.0
Serial Number : 1653925050608
`

	gpus := []models.GPU{
		{BDF: "0000:18:00.0"},
		{BDF: "0000:2a:00.0"},
	}

	applyGPUSerialNumbers(content, gpus)

	if gpus[0].SerialNumber != "1653925025827" {
		t.Fatalf("unexpected serial for gpu0: %q", gpus[0].SerialNumber)
	}
	if gpus[1].SerialNumber != "1653925050608" {
		t.Fatalf("unexpected serial for gpu1: %q", gpus[1].SerialNumber)
	}
}

func TestApplyGPUSerialNumbers_FromSummaryFallback(t *testing.T) {
	content := `
NVIDIA GPU Details | NVIDIA H200, 570.172.08, 143771 MiB, 96.00.D0.00.03, 00000000:18:00.0, 1653925025827
 | NVIDIA H200, 570.172.08, 143771 MiB, 96.00.D0.00.03, 00000000:2A:00.0, 1653925050608
NVIDIA Switch Details | No devices matching query 'Quantum'
`

	gpus := []models.GPU{
		{BDF: "0000:18:00.0"},
		{BDF: "0000:2a:00.0"},
	}

	applyGPUSerialNumbers(content, gpus)

	if gpus[0].SerialNumber != "1653925025827" {
		t.Fatalf("unexpected serial for gpu0: %q", gpus[0].SerialNumber)
	}
	if gpus[1].SerialNumber != "1653925050608" {
		t.Fatalf("unexpected serial for gpu1: %q", gpus[1].SerialNumber)
	}
}
606  internal/parser/vendors/unraid/parser.go  (vendored, new file)
@@ -0,0 +1,606 @@
// Package unraid provides parser for Unraid diagnostics archives.
package unraid

import (
	"bufio"
	"regexp"
	"strconv"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

// parserVersion - increment when parsing logic changes.
const parserVersion = "1.0.0"

func init() {
	parser.Register(&Parser{})
}

// Parser implements VendorParser for Unraid diagnostics.
type Parser struct{}

func (p *Parser) Name() string    { return "Unraid Parser" }
func (p *Parser) Vendor() string  { return "unraid" }
func (p *Parser) Version() string { return parserVersion }

// Detect checks if files contain typical Unraid markers.
func (p *Parser) Detect(files []parser.ExtractedFile) int {
	confidence := 0
	hasUnraidVersion := false
	hasDiagnosticsDir := false
	hasVarsParity := false

	for _, f := range files {
		path := strings.ToLower(f.Path)
		content := string(f.Content)

		// Check for unraid version file
		if strings.Contains(path, "unraid-") && strings.HasSuffix(path, ".txt") {
			hasUnraidVersion = true
			confidence += 40
		}

		// Check for Unraid-specific directories
		if strings.Contains(path, "diagnostics-") &&
			(strings.Contains(path, "/system/") ||
				strings.Contains(path, "/smart/") ||
				strings.Contains(path, "/config/")) {
			hasDiagnosticsDir = true
			if confidence < 60 {
				confidence += 20
			}
		}

		// Check file content for Unraid markers
		if strings.Contains(content, "Unraid kernel build") {
			confidence += 50
		}

		// Check for vars.txt with disk array info
		if strings.Contains(path, "vars.txt") && strings.Contains(content, "[parity]") {
			hasVarsParity = true
			confidence += 30
		}

		if confidence >= 100 {
			return 100
		}
	}

	// Boost confidence if we see multiple key indicators together
	if hasUnraidVersion && (hasDiagnosticsDir || hasVarsParity) {
		confidence += 20
	}

	if confidence > 100 {
		return 100
	}
	return confidence
}

// Parse parses Unraid diagnostics and returns normalized data.
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	result := &models.AnalysisResult{
		Events:  make([]models.Event, 0),
		FRU:     make([]models.FRUInfo, 0),
		Sensors: make([]models.SensorReading, 0),
		Hardware: &models.HardwareConfig{
			Firmware: make([]models.FirmwareInfo, 0),
			CPUs:     make([]models.CPU, 0),
			Memory:   make([]models.MemoryDIMM, 0),
			Storage:  make([]models.Storage, 0),
		},
	}

	// Track storage by slot to avoid duplicates
	storageBySlot := make(map[string]*models.Storage)

	// Parse different file types
	for _, f := range files {
		path := strings.ToLower(f.Path)
		content := string(f.Content)

		switch {
		case strings.Contains(path, "unraid-") && strings.HasSuffix(path, ".txt"):
			parseVersionFile(content, result)

		case strings.HasSuffix(path, "/system/lscpu.txt") || strings.HasSuffix(path, "\\system\\lscpu.txt"):
			parseLsCPU(content, result)

		case strings.HasSuffix(path, "/system/motherboard.txt") || strings.HasSuffix(path, "\\system\\motherboard.txt"):
			parseMotherboard(content, result)

		case strings.HasSuffix(path, "/system/memory.txt") || strings.HasSuffix(path, "\\system\\memory.txt"):
			parseMemory(content, result)

		case strings.HasSuffix(path, "/system/vars.txt") || strings.HasSuffix(path, "\\system\\vars.txt"):
			parseVarsToMap(content, storageBySlot, result)

		case strings.Contains(path, "/smart/") && strings.HasSuffix(path, ".txt"):
			parseSMARTFileToMap(content, f.Path, storageBySlot, result)

		case strings.HasSuffix(path, "/logs/syslog.txt") || strings.HasSuffix(path, "\\logs\\syslog.txt"):
			parseSyslog(content, result)
		}
	}

	// Convert storage map to slice
	for _, disk := range storageBySlot {
		result.Hardware.Storage = append(result.Hardware.Storage, *disk)
	}

	return result, nil
}

func parseVersionFile(content string, result *models.AnalysisResult) {
	lines := strings.Split(content, "\n")
	if len(lines) > 0 {
		version := strings.TrimSpace(lines[0])
		if version != "" {
			result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
				DeviceName: "Unraid OS",
				Version:    version,
			})
		}
	}
}

func parseLsCPU(content string, result *models.AnalysisResult) {
	// Normalize line endings
	content = strings.ReplaceAll(content, "\r\n", "\n")

	var cpu models.CPU
	cpu.Socket = 0 // Default to socket 0

	// Parse CPU model - handle multiple spaces
	if m := regexp.MustCompile(`(?m)^Model name:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		cpu.Model = strings.TrimSpace(m[1])
	}

	// Parse CPU(s) - total thread count
	if m := regexp.MustCompile(`(?m)^CPU\(s\):\s+(\d+)$`).FindStringSubmatch(content); len(m) == 2 {
		cpu.Threads = parseInt(m[1])
	}

	// Parse cores per socket
	if m := regexp.MustCompile(`(?m)^Core\(s\) per socket:\s+(\d+)$`).FindStringSubmatch(content); len(m) == 2 {
		cpu.Cores = parseInt(m[1])
	}

	// Parse CPU max MHz
	if m := regexp.MustCompile(`(?m)^CPU max MHz:\s+([\d.]+)$`).FindStringSubmatch(content); len(m) == 2 {
		cpu.FrequencyMHz = int(parseFloat(m[1]))
	}

	// If no max MHz, try current MHz
	if cpu.FrequencyMHz == 0 {
		if m := regexp.MustCompile(`(?m)^CPU MHz:\s+([\d.]+)$`).FindStringSubmatch(content); len(m) == 2 {
			cpu.FrequencyMHz = int(parseFloat(m[1]))
		}
	}

	// Only add if we got at least the model
	if cpu.Model != "" {
		result.Hardware.CPUs = append(result.Hardware.CPUs, cpu)
	}
}

func parseMotherboard(content string, result *models.AnalysisResult) {
	var board models.BoardInfo

	// Parse manufacturer from dmidecode output
	lines := strings.Split(content, "\n")
	inBIOSSection := false

	for _, line := range lines {
		trimmed := strings.TrimSpace(line)

		if strings.Contains(trimmed, "BIOS Information") {
			inBIOSSection = true
			continue
		}

		if inBIOSSection {
			if strings.HasPrefix(trimmed, "Vendor:") {
				parts := strings.SplitN(trimmed, ":", 2)
				if len(parts) == 2 {
					board.Manufacturer = strings.TrimSpace(parts[1])
				}
			} else if strings.HasPrefix(trimmed, "Version:") {
				parts := strings.SplitN(trimmed, ":", 2)
				if len(parts) == 2 {
					biosVersion := strings.TrimSpace(parts[1])
					result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
						DeviceName: "System BIOS",
						Version:    biosVersion,
					})
				}
			} else if strings.HasPrefix(trimmed, "Release Date:") {
				// Could extract BIOS date if needed
			}
		}
	}

	// Extract product name from first line
	if len(lines) > 0 {
		firstLine := strings.TrimSpace(lines[0])
		if firstLine != "" {
			board.ProductName = firstLine
		}
	}

	result.Hardware.BoardInfo = board
}

func parseMemory(content string, result *models.AnalysisResult) {
	// Parse memory from free output
	// Example: Mem: 50Gi 11Gi 1.4Gi 565Mi 39Gi 39Gi
	if m := regexp.MustCompile(`(?m)^Mem:\s+(\d+(?:\.\d+)?)(Ki|Mi|Gi|Ti)`).FindStringSubmatch(content); len(m) >= 3 {
		size := parseFloat(m[1])
		unit := m[2]

		var sizeMB int
		switch unit {
		case "Ki":
			sizeMB = int(size / 1024)
		case "Mi":
			sizeMB = int(size)
		case "Gi":
			sizeMB = int(size * 1024)
		case "Ti":
			sizeMB = int(size * 1024 * 1024)
		}

		if sizeMB > 0 {
			result.Hardware.Memory = append(result.Hardware.Memory, models.MemoryDIMM{
				Slot:    "system",
				Present: true,
				SizeMB:  sizeMB,
				Type:    "DRAM",
				Status:  "ok",
			})
		}
	}
}

func parseVarsToMap(content string, storageBySlot map[string]*models.Storage, result *models.AnalysisResult) {
	// Normalize line endings
	content = strings.ReplaceAll(content, "\r\n", "\n")

	// Parse PHP-style array from vars.txt
	// Extract only the first "disks" section to avoid duplicates
	disksStart := strings.Index(content, "disks\n(")
	if disksStart == -1 {
		return
	}

	// Find the end of this disks array (look for next top-level key or end)
	remaining := content[disksStart:]
	endPattern := regexp.MustCompile(`(?m)^[a-z_]+\n\(`)
	endMatches := endPattern.FindAllStringIndex(remaining, -1)

	var disksSection string
	if len(endMatches) > 1 {
		// Use second match as end (first match is "disks" itself)
		disksSection = remaining[:endMatches[1][0]]
	} else {
		disksSection = remaining
	}

	// Look for disk entries within this section only
	diskRe := regexp.MustCompile(`(?m)^\s+\[(disk\d+|parity|cache\d*)\]\s+=>\s+Array`)
	matches := diskRe.FindAllStringSubmatch(disksSection, -1)

	seen := make(map[string]bool)
	for _, match := range matches {
		if len(match) < 2 {
			continue
		}
		diskName := match[1]

		// Skip if already processed
		if seen[diskName] {
			continue
		}
		seen[diskName] = true

		// Find the section for this disk
		diskSection := extractDiskSection(disksSection, diskName)
		if diskSection == "" {
			continue
		}

		var disk models.Storage
		disk.Slot = diskName

		// Parse disk properties
		if m := regexp.MustCompile(`\[device\]\s*=>\s*(\w+)`).FindStringSubmatch(diskSection); len(m) == 2 {
			disk.Interface = "SATA (" + m[1] + ")"
		}

		if m := regexp.MustCompile(`\[id\]\s*=>\s*([^\n]+)`).FindStringSubmatch(diskSection); len(m) == 2 {
			idValue := strings.TrimSpace(m[1])
			// Only use if it's not empty or a placeholder
			if idValue != "" && !strings.Contains(idValue, "=>") {
				disk.Model = idValue
			}
		}

		if m := regexp.MustCompile(`\[size\]\s*=>\s*(\d+)`).FindStringSubmatch(diskSection); len(m) == 2 {
			sizeKB := parseInt(m[1])
			if sizeKB > 0 {
				disk.SizeGB = sizeKB / (1024 * 1024) // Convert KB to GB
			}
		}

		if m := regexp.MustCompile(`\[temp\]\s*=>\s*(\d+)`).FindStringSubmatch(diskSection); len(m) == 2 {
			temp := parseInt(m[1])
			if temp > 0 {
				result.Sensors = append(result.Sensors, models.SensorReading{
					Name:     diskName + "_temp",
					Type:     "temperature",
					Value:    float64(temp),
					Unit:     "C",
					Status:   getTempStatus(temp),
					RawValue: strconv.Itoa(temp),
				})
			}
		}

		if m := regexp.MustCompile(`\[fsType\]\s*=>\s*(\w+)`).FindStringSubmatch(diskSection); len(m) == 2 {
			fsType := m[1]
			if fsType != "" && fsType != "auto" {
				disk.Type = fsType
			}
		}

		disk.Present = true

		// Only add/merge disks with meaningful data
		if disk.Model != "" && disk.SizeGB > 0 {
			// Check if we already have this disk from SMART files
			if existing, ok := storageBySlot[diskName]; ok {
				// Merge vars.txt data into existing entry, preferring SMART data
				if existing.Model == "" && disk.Model != "" {
					existing.Model = disk.Model
				}
				if existing.SizeGB == 0 && disk.SizeGB > 0 {
					existing.SizeGB = disk.SizeGB
				}
				if existing.Type == "" && disk.Type != "" {
					existing.Type = disk.Type
				}
				if existing.Interface == "" && disk.Interface != "" {
					existing.Interface = disk.Interface
				}
				// vars.txt doesn't have serial/firmware, so don't overwrite from SMART
			} else {
				// New disk not in SMART data
				storageBySlot[diskName] = &disk
			}
		}
	}
}

func extractDiskSection(content, diskName string) string {
	// Find the start of this disk's array section
	startPattern := regexp.MustCompile(`(?m)^\s+\[` + regexp.QuoteMeta(diskName) + `\]\s+=>\s+Array\s*\n\s+\(`)
	startIdx := startPattern.FindStringIndex(content)
	if startIdx == nil {
		return ""
	}

	// Find the end (next disk or end of disks array)
	endPattern := regexp.MustCompile(`(?m)^\s+\)`)
	remainingContent := content[startIdx[1]:]
	endIdx := endPattern.FindStringIndex(remainingContent)

	if endIdx == nil {
		return remainingContent
	}

	return remainingContent[:endIdx[0]]
}

func parseSMARTFileToMap(content, filePath string, storageBySlot map[string]*models.Storage, result *models.AnalysisResult) {
	// Extract disk name from filename
	// Example: ST4000NM000B-2TF100_WX103EC9-20260205-2333 disk1 (sdi).txt
	diskName := ""
	if m := regexp.MustCompile(`(disk\d+|parity|cache\d*)`).FindStringSubmatch(filePath); len(m) > 0 {
		diskName = m[1]
	}
	if diskName == "" {
		return
	}

	var disk models.Storage
	disk.Slot = diskName

	// Parse device model
	if m := regexp.MustCompile(`(?m)^Device Model:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		disk.Model = strings.TrimSpace(m[1])
	}

	// Parse serial number
	if m := regexp.MustCompile(`(?m)^Serial Number:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		disk.SerialNumber = strings.TrimSpace(m[1])
	}

	// Parse firmware version
	if m := regexp.MustCompile(`(?m)^Firmware Version:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		disk.Firmware = strings.TrimSpace(m[1])
	}

	// Parse capacity
	if m := regexp.MustCompile(`(?m)^User Capacity:\s+([\d,]+)\s+bytes`).FindStringSubmatch(content); len(m) == 2 {
		capacityStr := strings.ReplaceAll(m[1], ",", "")
		if capacity, err := strconv.ParseInt(capacityStr, 10, 64); err == nil {
			disk.SizeGB = int(capacity / 1_000_000_000)
		}
	}

	// Parse rotation rate
	if m := regexp.MustCompile(`(?m)^Rotation Rate:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		rateStr := strings.TrimSpace(m[1])
		if strings.Contains(strings.ToLower(rateStr), "solid state") {
			disk.Type = "ssd"
		} else {
			disk.Type = "hdd"
		}
	}

	// Parse SATA version for interface
	if m := regexp.MustCompile(`(?m)^SATA Version is:\s+(.+?)(?:,|$)`).FindStringSubmatch(content); len(m) == 2 {
		disk.Interface = strings.TrimSpace(m[1])
	}

	// Parse SMART health
	if m := regexp.MustCompile(`(?m)^SMART overall-health self-assessment test result:\s+(.+)$`).FindStringSubmatch(content); len(m) == 2 {
		health := strings.TrimSpace(m[1])
		if !strings.EqualFold(health, "PASSED") {
			result.Events = append(result.Events, models.Event{
				Timestamp:   time.Now(),
				Source:      "SMART",
				EventType:   "Disk Health",
				Severity:    models.SeverityWarning,
				Description: "SMART health check failed for " + diskName,
				RawData:     health,
			})
		}
	}

	disk.Present = true

	// Only add/merge if we got meaningful data
	if disk.Model != "" || disk.SerialNumber != "" {
		// Check if we already have this disk from vars.txt
		if existing, ok := storageBySlot[diskName]; ok {
			// Merge SMART data into existing entry
			if existing.Model == "" && disk.Model != "" {
				existing.Model = disk.Model
			}
			if existing.SerialNumber == "" && disk.SerialNumber != "" {
				existing.SerialNumber = disk.SerialNumber
			}
			if existing.Firmware == "" && disk.Firmware != "" {
				existing.Firmware = disk.Firmware
			}
			if existing.SizeGB == 0 && disk.SizeGB > 0 {
				existing.SizeGB = disk.SizeGB
			}
			if existing.Type == "" && disk.Type != "" {
				existing.Type = disk.Type
			}
			if existing.Interface == "" && disk.Interface != "" {
				existing.Interface = disk.Interface
			}
		} else {
			// New disk not in vars.txt
			storageBySlot[diskName] = &disk
		}
	}
}

func parseSyslog(content string, result *models.AnalysisResult) {
	scanner := bufio.NewScanner(strings.NewReader(content))
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	lineCount := 0
	maxLines := 100 // Limit parsing to avoid too many events

	for scanner.Scan() && lineCount < maxLines {
		line := scanner.Text()
		if strings.TrimSpace(line) == "" {
			continue
		}

		// Parse syslog line
		// Example: Feb 5 23:33:01 box3 kernel: Linux version 6.12.54-Unraid
		timestamp, message, severity := parseSyslogLine(line)

		result.Events = append(result.Events, models.Event{
			Timestamp:   timestamp,
			Source:      "syslog",
			EventType:   "System Log",
			Severity:    severity,
			Description: message,
			RawData:     line,
		})

		lineCount++
	}

	if err := scanner.Err(); err != nil {
		result.Events = append(result.Events, models.Event{
			Timestamp:   time.Now(),
			Source:      "syslog",
			EventType:   "System Log",
			Severity:    models.SeverityWarning,
			Description: "syslog scan error",
			RawData:     err.Error(),
		})
	}
}

func parseSyslogLine(line string) (time.Time, string, models.Severity) {
	// Simple syslog parser
	// Format: Feb 5 23:33:01 hostname process[pid]: message
	timestamp := time.Now()
	message := line
	severity := models.SeverityInfo

	// Try to parse timestamp
	syslogRe := regexp.MustCompile(`^(\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2})\s+\S+\s+(.+)$`)
	if m := syslogRe.FindStringSubmatch(line); len(m) == 3 {
		timeStr := m[1]
		message = m[2]

		// Parse timestamp (add current year)
		year := time.Now().Year()
		if ts, err := time.Parse("Jan 2 15:04:05 2006", timeStr+" "+strconv.Itoa(year)); err == nil {
			timestamp = ts
		}
	}

	// Classify severity
	lowerMsg := strings.ToLower(message)
	switch {
	case strings.Contains(lowerMsg, "panic"),
		strings.Contains(lowerMsg, "fatal"),
		strings.Contains(lowerMsg, "critical"):
		severity = models.SeverityCritical

	case strings.Contains(lowerMsg, "error"),
		strings.Contains(lowerMsg, "warning"),
		strings.Contains(lowerMsg, "failed"):
		severity = models.SeverityWarning

	default:
		severity = models.SeverityInfo
	}

	return timestamp, message, severity
}

func getTempStatus(temp int) string {
	switch {
	case temp >= 60:
		return "critical"
	case temp >= 50:
		return "warning"
	default:
		return "ok"
	}
}

func parseInt(s string) int {
	v, _ := strconv.Atoi(strings.TrimSpace(s))
	return v
}

func parseFloat(s string) float64 {
	v, _ := strconv.ParseFloat(strings.TrimSpace(s), 64)
	return v
}
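The keyword triage inside `parseSyslogLine` can be exercised on its own. A hedged sketch: the classification logic is copied from the file above, but the string return values stand in for the `models.Severity*` constants, and the `main` wrapper is illustrative only.

```go
package main

import (
	"fmt"
	"strings"
)

// classifySeverity mirrors the keyword-based triage in parseSyslogLine:
// panic/fatal/critical outrank error/warning/failed; everything else is info.
func classifySeverity(message string) string {
	m := strings.ToLower(message)
	switch {
	case strings.Contains(m, "panic"),
		strings.Contains(m, "fatal"),
		strings.Contains(m, "critical"):
		return "critical"
	case strings.Contains(m, "error"),
		strings.Contains(m, "warning"),
		strings.Contains(m, "failed"):
		return "warning"
	default:
		return "info"
	}
}

func main() {
	fmt.Println(classifySeverity("Kernel panic - not syncing"))       // critical
	fmt.Println(classifySeverity("md: recovery failed"))              // warning
	fmt.Println(classifySeverity("Linux version 6.12.54-Unraid"))     // info
}
```

Note that the critical cases are checked first, so a line containing both "panic" and "failed" lands in the higher bucket.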
277	internal/parser/vendors/unraid/parser_test.go (vendored, new file)
@@ -0,0 +1,277 @@
package unraid

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetect(t *testing.T) {
	tests := []struct {
		name       string
		files      []parser.ExtractedFile
		wantMin    int
		wantMax    int
		shouldFind bool
	}{
		{
			name: "typical unraid diagnostics",
			files: []parser.ExtractedFile{
				{
					Path:    "box3-diagnostics-20260205-2333/unraid-7.2.0.txt",
					Content: []byte("7.2.0\n"),
				},
				{
					Path:    "box3-diagnostics-20260205-2333/system/vars.txt",
					Content: []byte("[parity] => Array\n[disk1] => Array\n"),
				},
			},
			wantMin:    50,
			wantMax:    100,
			shouldFind: true,
		},
		{
			name: "unraid with kernel marker",
			files: []parser.ExtractedFile{
				{
					Path:    "diagnostics/system/lscpu.txt",
					Content: []byte("Unraid kernel build 6.12.54"),
				},
			},
			wantMin:    50,
			wantMax:    100,
			shouldFind: true,
		},
		{
			name: "not unraid",
			files: []parser.ExtractedFile{
				{
					Path:    "some/random/file.txt",
					Content: []byte("just some random content"),
				},
			},
			wantMin:    0,
			wantMax:    0,
			shouldFind: false,
		},
	}

	p := &Parser{}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := p.Detect(tt.files)

			if tt.shouldFind && got < tt.wantMin {
				t.Errorf("Detect() = %v, want at least %v", got, tt.wantMin)
			}

			if got > tt.wantMax {
				t.Errorf("Detect() = %v, want at most %v", got, tt.wantMax)
			}

			if !tt.shouldFind && got > 0 {
				t.Errorf("Detect() = %v, want 0 (should not detect)", got)
			}
		})
	}
}

func TestParse_Version(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path:    "unraid-7.2.0.txt",
			Content: []byte("7.2.0\n"),
		},
	}

	p := &Parser{}
	result, err := p.Parse(files)

	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Fatal("expected firmware info")
	}

	fw := result.Hardware.Firmware[0]
	if fw.DeviceName != "Unraid OS" {
		t.Errorf("DeviceName = %v, want 'Unraid OS'", fw.DeviceName)
	}

	if fw.Version != "7.2.0" {
		t.Errorf("Version = %v, want '7.2.0'", fw.Version)
	}
}

func TestParse_CPU(t *testing.T) {
	lscpuContent := `Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 16
Model name: Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
Core(s) per socket: 8
Socket(s): 1
CPU max MHz: 3400.0000
`

	files := []parser.ExtractedFile{
		{
			Path:    "diagnostics/system/lscpu.txt",
			Content: []byte(lscpuContent),
		},
	}

	p := &Parser{}
	result, err := p.Parse(files)

	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if len(result.Hardware.CPUs) == 0 {
		t.Fatal("expected CPU info")
	}

	cpu := result.Hardware.CPUs[0]
	if cpu.Model != "Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz" {
		t.Errorf("Model = %v", cpu.Model)
	}

	if cpu.Cores != 8 {
		t.Errorf("Cores = %v, want 8", cpu.Cores)
	}

	if cpu.Threads != 16 {
		t.Errorf("Threads = %v, want 16", cpu.Threads)
	}

	if cpu.FrequencyMHz != 3400 {
		t.Errorf("FrequencyMHz = %v, want 3400", cpu.FrequencyMHz)
	}
}

func TestParse_Memory(t *testing.T) {
	memContent := ` total used free shared buff/cache available
Mem: 50Gi 11Gi 1.4Gi 565Mi 39Gi 39Gi
Swap: 0B 0B 0B
Total: 50Gi 11Gi 1.4Gi
`

	files := []parser.ExtractedFile{
		{
			Path:    "diagnostics/system/memory.txt",
			Content: []byte(memContent),
		},
	}

	p := &Parser{}
	result, err := p.Parse(files)

	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if len(result.Hardware.Memory) == 0 {
		t.Fatal("expected memory info")
	}

	mem := result.Hardware.Memory[0]
	expectedSizeMB := 50 * 1024 // 50 GiB in MB

	if mem.SizeMB != expectedSizeMB {
		t.Errorf("SizeMB = %v, want %v", mem.SizeMB, expectedSizeMB)
	}

	if mem.Type != "DRAM" {
		t.Errorf("Type = %v, want 'DRAM'", mem.Type)
	}
}

func TestParse_SMART(t *testing.T) {
	smartContent := `smartctl 7.5 2025-04-30 r5714 [x86_64-linux-6.12.54-Unraid] (local build)
Copyright (C) 2002-25, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model: ST4000NM000B-2TF100
Serial Number: WX103EC9
LU WWN Device Id: 5 000c50 0ed59db60
Firmware Version: TNA1
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 7200 rpm
Form Factor: 3.5 inches
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
`

	files := []parser.ExtractedFile{
		{
			Path:    "diagnostics/smart/ST4000NM000B-2TF100_WX103EC9-20260205-2333 disk1 (sdi).txt",
			Content: []byte(smartContent),
		},
	}

	p := &Parser{}
	result, err := p.Parse(files)

	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if len(result.Hardware.Storage) == 0 {
		t.Fatal("expected storage info")
	}

	disk := result.Hardware.Storage[0]

	if disk.Model != "ST4000NM000B-2TF100" {
		t.Errorf("Model = %v, want 'ST4000NM000B-2TF100'", disk.Model)
	}

	if disk.SerialNumber != "WX103EC9" {
		t.Errorf("SerialNumber = %v, want 'WX103EC9'", disk.SerialNumber)
	}

	if disk.Firmware != "TNA1" {
		t.Errorf("Firmware = %v, want 'TNA1'", disk.Firmware)
	}

	if disk.SizeGB != 4000 {
		t.Errorf("SizeGB = %v, want 4000", disk.SizeGB)
	}

	if disk.Type != "hdd" {
		t.Errorf("Type = %v, want 'hdd'", disk.Type)
	}

	// Check that no health warnings were generated (PASSED health)
	healthWarnings := 0
	for _, event := range result.Events {
		if event.EventType == "Disk Health" && event.Severity == "warning" {
			healthWarnings++
		}
	}
	if healthWarnings != 0 {
		t.Errorf("Expected no health warnings for PASSED disk, got %v", healthWarnings)
	}
}

func TestParser_Metadata(t *testing.T) {
	p := &Parser{}

	if p.Name() != "Unraid Parser" {
		t.Errorf("Name() = %v, want 'Unraid Parser'", p.Name())
	}

	if p.Vendor() != "unraid" {
		t.Errorf("Vendor() = %v, want 'unraid'", p.Vendor())
	}

	if p.Version() == "" {
		t.Error("Version() should not be empty")
	}
}
1	internal/parser/vendors/vendors.go (vendored)
@@ -8,6 +8,7 @@ import (
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/supermicro"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"

	// Generic fallback parser (must be last for lowest priority)
135	internal/parser/vendors/xigmanas/parser.go (vendored)
@@ -12,7 +12,7 @@ import (
)

// parserVersion - increment when parsing logic changes.
-const parserVersion = "2.0.0"
+const parserVersion = "2.1.0"

func init() {
	parser.Register(&Parser{})
@@ -86,6 +86,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
	parseUptime(content, result)
	parseZFSState(content, result)
	parseStorageAndSMART(content, result)
	parseJournalLogSections(content, result)

	return result, nil
}
@@ -337,6 +338,138 @@ func parseStorageAndSMART(content string, result *models.AnalysisResult) {
	}
}

func parseJournalLogSections(content string, result *models.AnalysisResult) {
	sections := []struct {
		heading   string
		eventType string
		source    string
	}{
		{heading: "Last 275 System log entries:", eventType: "System Log", source: "system.log"},
		{heading: "Last 275 SMARTD log entries:", eventType: "SMARTD Log", source: "smartd.log"},
		{heading: "Last 275 Daemon log entries:", eventType: "Daemon Log", source: "daemon.log"},
	}

	for _, sec := range sections {
		body := extractLogSection(content, sec.heading)
		if body == "" {
			continue
		}

		for _, line := range strings.Split(body, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}

			msg := extractSyslogMessage(line)
			if msg == "" {
				msg = line
			}

			result.Events = append(result.Events, models.Event{
				Timestamp:   parseEventTimestamp(line),
				Source:      sec.source,
				EventType:   sec.eventType,
				Severity:    classifyEventSeverity(line),
				Description: msg,
				RawData:     line,
			})
		}
	}
}

func extractLogSection(content, heading string) string {
	start := strings.Index(content, heading)
	if start == -1 {
		return ""
	}

	tail := content[start+len(heading):]
	lines := strings.Split(tail, "\n")
	i := 0
	for i < len(lines) && strings.TrimSpace(lines[i]) == "" {
		i++
	}
	if i < len(lines) && isDashLine(lines[i]) {
		i++
	}

	out := make([]string, 0, 64)
	for ; i < len(lines); i++ {
		line := lines[i]
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "Last 275 ") && strings.HasSuffix(trimmed, " log entries:") {
			break
		}
		out = append(out, line)
	}

	return strings.TrimSpace(strings.Join(out, "\n"))
}
func isDashLine(s string) bool {
	s = strings.TrimSpace(s)
	if s == "" {
		return false
	}
	for _, r := range s {
		if r != '-' {
			return false
		}
	}
	return true
}

func parseEventTimestamp(line string) time.Time {
	isoRe := regexp.MustCompile(`\b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?[+-]\d{2}:\d{2}\b`)
	if iso := isoRe.FindString(line); iso != "" {
		if ts, err := time.Parse(time.RFC3339Nano, iso); err == nil {
			return ts
		}
	}

	prefixRe := regexp.MustCompile(`^[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}`)
	if prefix := prefixRe.FindString(line); prefix != "" {
		year := time.Now().Year()
		if ts, err := time.Parse("Jan 2 15:04:05 2006", prefix+" "+strconv.Itoa(year)); err == nil {
			return ts
		}
	}

	return time.Now()
}

func classifyEventSeverity(line string) models.Severity {
	lower := strings.ToLower(line)
	switch {
	case strings.Contains(lower, "panic"), strings.Contains(lower, "fatal"), strings.Contains(lower, "critical"):
		return models.SeverityCritical
	case strings.Contains(lower, "warning"),
		strings.Contains(lower, "error"),
		strings.Contains(lower, "failed"),
		strings.Contains(lower, "failure"),
		strings.Contains(lower, "login failure"),
		strings.Contains(lower, "limiting open port"):
		return models.SeverityWarning
	default:
		return models.SeverityInfo
	}
}

func extractSyslogMessage(line string) string {
	if idx := strings.Index(line, ": "); idx != -1 && idx+2 < len(line) {
		return strings.TrimSpace(line[idx+2:])
	}

	// RFC5424-like segment in XigmaNAS dumps: "... <host> <proc> <pid> - - <message>"
	fields := strings.Fields(line)
	if len(fields) > 10 {
		return strings.TrimSpace(strings.Join(fields[10:], " "))
	}

	return strings.TrimSpace(line)
}

func splitModelAndFirmware(raw string) (string, string) {
	fields := strings.Fields(raw)
	if len(fields) < 2 {
22	internal/parser/vendors/xigmanas/parser_test.go (vendored)
@@ -91,4 +91,26 @@ func TestParserParseExample(t *testing.T) {
	if len(result.Events) == 0 {
		t.Fatal("expected events from uptime/zfs sections")
	}

	var hasSystemLog, hasSmartdLog, hasDaemonLog, hasLoginFailure bool
	for _, ev := range result.Events {
		if ev.EventType == "System Log" {
			hasSystemLog = true
		}
		if ev.EventType == "SMARTD Log" {
			hasSmartdLog = true
		}
		if ev.EventType == "Daemon Log" {
			hasDaemonLog = true
		}
		if strings.Contains(strings.ToLower(ev.Description), "login failure") {
			hasLoginFailure = true
		}
	}
	if !hasSystemLog || !hasSmartdLog || !hasDaemonLog {
		t.Fatalf("expected events from System/SMARTD/Daemon sections, got system=%v smartd=%v daemon=%v", hasSystemLog, hasSmartdLog, hasDaemonLog)
	}
	if !hasLoginFailure {
		t.Fatal("expected to parse login failure event from system log section")
	}
}
@@ -312,7 +312,7 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {

	// From FRU
	for _, fru := range result.FRU {
-		if fru.SerialNumber == "" {
+		if !hasUsableSerial(fru.SerialNumber) {
			continue
		}
		name := fru.ProductName
@@ -321,7 +321,7 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
		}
		serials = append(serials, SerialEntry{
			Component:    name,
-			SerialNumber: fru.SerialNumber,
+			SerialNumber: strings.TrimSpace(fru.SerialNumber),
			Manufacturer: fru.Manufacturer,
			PartNumber:   fru.PartNumber,
			Category:     "FRU",
@@ -331,10 +331,10 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
	// From Hardware
	if result.Hardware != nil {
		// Board
-		if result.Hardware.BoardInfo.SerialNumber != "" {
+		if hasUsableSerial(result.Hardware.BoardInfo.SerialNumber) {
			serials = append(serials, SerialEntry{
				Component:    result.Hardware.BoardInfo.ProductName,
-				SerialNumber: result.Hardware.BoardInfo.SerialNumber,
+				SerialNumber: strings.TrimSpace(result.Hardware.BoardInfo.SerialNumber),
				Manufacturer: result.Hardware.BoardInfo.Manufacturer,
				PartNumber:   result.Hardware.BoardInfo.PartNumber,
				Category:     "Board",
@@ -343,24 +343,20 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {

		// CPUs
		for _, cpu := range result.Hardware.CPUs {
-			sn := cpu.SerialNumber
-			if sn == "" {
-				sn = cpu.PPIN // Use PPIN as fallback identifier
-			}
-			if sn == "" {
+			if !hasUsableSerial(cpu.SerialNumber) {
				continue
			}
			serials = append(serials, SerialEntry{
				Component:    cpu.Model,
				Location:     fmt.Sprintf("CPU%d", cpu.Socket),
-				SerialNumber: sn,
+				SerialNumber: strings.TrimSpace(cpu.SerialNumber),
				Category:     "CPU",
			})
		}

		// Memory DIMMs
		for _, mem := range result.Hardware.Memory {
-			if mem.SerialNumber == "" {
+			if !hasUsableSerial(mem.SerialNumber) {
				continue
			}
			location := mem.Location
@@ -370,7 +366,7 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
			serials = append(serials, SerialEntry{
				Component:    mem.PartNumber,
				Location:     location,
-				SerialNumber: mem.SerialNumber,
+				SerialNumber: strings.TrimSpace(mem.SerialNumber),
				Manufacturer: mem.Manufacturer,
				PartNumber:   mem.PartNumber,
				Category:     "Memory",
@@ -379,27 +375,45 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {

		// Storage
		for _, stor := range result.Hardware.Storage {
-			if stor.SerialNumber == "" {
+			if !hasUsableSerial(stor.SerialNumber) {
				continue
			}
			serials = append(serials, SerialEntry{
				Component:    stor.Model,
				Location:     stor.Slot,
-				SerialNumber: stor.SerialNumber,
+				SerialNumber: strings.TrimSpace(stor.SerialNumber),
				Manufacturer: stor.Manufacturer,
				Category:     "Storage",
			})
		}

		// GPUs
		for _, gpu := range result.Hardware.GPUs {
			if !hasUsableSerial(gpu.SerialNumber) {
				continue
			}
			model := gpu.Model
			if model == "" {
				model = "GPU"
			}
			serials = append(serials, SerialEntry{
				Component:    model,
				Location:     gpu.Slot,
				SerialNumber: strings.TrimSpace(gpu.SerialNumber),
				Manufacturer: gpu.Manufacturer,
				Category:     "GPU",
			})
		}

		// PCIe devices
		for _, pcie := range result.Hardware.PCIeDevices {
-			if pcie.SerialNumber == "" {
+			if !hasUsableSerial(pcie.SerialNumber) {
				continue
			}
			serials = append(serials, SerialEntry{
				Component:    pcie.DeviceClass,
				Location:     pcie.Slot,
-				SerialNumber: pcie.SerialNumber,
+				SerialNumber: strings.TrimSpace(pcie.SerialNumber),
				Manufacturer: pcie.Manufacturer,
				PartNumber:   pcie.PartNumber,
				Category:     "PCIe",
@@ -408,43 +422,47 @@ func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {

		// Network cards
		for _, nic := range result.Hardware.NetworkCards {
-			if nic.SerialNumber == "" {
+			if !hasUsableSerial(nic.SerialNumber) {
				continue
			}
			serials = append(serials, SerialEntry{
				Component:    nic.Model,
-				SerialNumber: nic.SerialNumber,
+				SerialNumber: strings.TrimSpace(nic.SerialNumber),
				Category:     "Network",
			})
		}

		// Power supplies
		for _, psu := range result.Hardware.PowerSupply {
-			if psu.SerialNumber == "" {
+			if !hasUsableSerial(psu.SerialNumber) {
				continue
			}
			serials = append(serials, SerialEntry{
				Component:    psu.Model,
				Location:     psu.Slot,
-				SerialNumber: psu.SerialNumber,
+				SerialNumber: strings.TrimSpace(psu.SerialNumber),
				Manufacturer: psu.Vendor,
				Category:     "PSU",
			})
		}

		// Firmware (using version as "serial number" for display)
		for _, fw := range result.Hardware.Firmware {
			serials = append(serials, SerialEntry{
				Component:    fw.DeviceName,
				SerialNumber: fw.Version,
				Category:     "Firmware",
			})
		}
	}

	jsonResponse(w, serials)
}

func hasUsableSerial(serial string) bool {
	s := strings.TrimSpace(serial)
	if s == "" {
		return false
	}
	switch strings.ToUpper(s) {
	case "N/A", "NA", "NONE", "NULL", "UNKNOWN", "-":
		return false
	default:
		return true
	}
}
func (s *Server) handleGetFirmware(w http.ResponseWriter, r *http.Request) {
	result := s.GetResult()
	if result == nil || result.Hardware == nil {
@@ -573,14 +591,32 @@ func (s *Server) handleExportJSON(w http.ResponseWriter, r *http.Request) {
	exp.ExportJSON(w)
}

-func (s *Server) handleExportTXT(w http.ResponseWriter, r *http.Request) {
+func (s *Server) handleExportReanimator(w http.ResponseWriter, r *http.Request) {
	result := s.GetResult()
	if result == nil || result.Hardware == nil {
		jsonError(w, "No hardware data available for export", http.StatusBadRequest)
		return
	}

-	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
-	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", exportFilename(result, "txt")))
-	exp := exporter.New(result)
-	exp.ExportTXT(w)
+	reanimatorData, err := exporter.ConvertToReanimator(result)
+	if err != nil {
+		statusCode := http.StatusInternalServerError
+		if strings.Contains(err.Error(), "required for Reanimator export") {
+			statusCode = http.StatusBadRequest
+		}
+		jsonError(w, fmt.Sprintf("Export failed: %v", err), statusCode)
+		return
+	}
+
+	w.Header().Set("Content-Type", "application/json")
+	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", exportFilename(result, "reanimator.json")))
+
+	encoder := json.NewEncoder(w)
+	encoder.SetIndent("", "  ")
+	if err := encoder.Encode(reanimatorData); err != nil {
+		// Log error, but likely too late to send error response
+		return
+	}
}

func (s *Server) handleClear(w http.ResponseWriter, r *http.Request) {
@@ -896,7 +932,7 @@ func exportFilename(result *models.AnalysisResult, ext string) string {
	sn = sanitizeFilenamePart(sn)
	ext = strings.TrimPrefix(strings.TrimSpace(ext), ".")
	if ext == "" {
-		ext = "txt"
+		ext = "json"
	}
	return fmt.Sprintf("%s (%s) - %s.%s", date, model, sn, ext)
}
132	internal/server/handlers_gpu_test.go (new file)
@@ -0,0 +1,132 @@
package server

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestHandleGetSerials_WithGPUs(t *testing.T) {
	// Create test server with GPU data
	srv := &Server{}

	testResult := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{
			GPUs: []models.GPU{
				{
					Slot:         "GPUSXM1",
					Model:        "NVIDIA Device 2335",
					Manufacturer: "NVIDIA Corporation",
					SerialNumber: "48:B0:2D:BB:8E:51:9E:E5",
					Firmware:     "96.00.D0.00.03",
					BDF:          "0000:3a:00.0",
				},
				{
					Slot:         "GPUSXM2",
					Model:        "NVIDIA Device 2335",
					Manufacturer: "NVIDIA Corporation",
					SerialNumber: "48:B0:2D:EE:DA:27:CF:78",
					Firmware:     "96.00.D0.00.03",
					BDF:          "0000:18:00.0",
				},
			},
		},
	}

	srv.SetResult(testResult)

	// Create request
	req := httptest.NewRequest("GET", "/api/serials", nil)
	w := httptest.NewRecorder()

	// Call handler
	srv.handleGetSerials(w, req)

	// Check response
	if w.Code != http.StatusOK {
		t.Errorf("Expected status 200, got %d", w.Code)
	}

	// Parse response
	var serials []struct {
		Component    string `json:"component"`
		Location     string `json:"location,omitempty"`
		SerialNumber string `json:"serial_number"`
		Manufacturer string `json:"manufacturer,omitempty"`
		Category     string `json:"category"`
	}

	if err := json.NewDecoder(w.Body).Decode(&serials); err != nil {
		t.Fatalf("Failed to decode response: %v", err)
	}

	// Check that we have GPU entries
	gpuCount := 0
	for _, s := range serials {
		if s.Category == "GPU" {
			gpuCount++
			t.Logf("Found GPU: %s (%s) S/N: %s", s.Component, s.Location, s.SerialNumber)

			// Verify fields are set
			if s.SerialNumber == "" {
				t.Errorf("GPU serial number is empty")
			}
			if s.Location == "" {
				t.Errorf("GPU location is empty")
			}
			if s.Manufacturer == "" {
				t.Errorf("GPU manufacturer is empty")
			}
		}
	}

	if gpuCount != 2 {
		t.Errorf("Expected 2 GPUs in serials, got %d", gpuCount)
	}
}

func TestHandleGetSerials_WithoutGPUSerials(t *testing.T) {
	// Create test server with GPUs but no serial numbers
	srv := &Server{}

	testResult := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{
			GPUs: []models.GPU{
				{
					Slot:         "GPU0",
					Model:        "Some GPU",
					Manufacturer: "Vendor",
					SerialNumber: "", // No serial number
				},
			},
		},
	}

	srv.SetResult(testResult)

	// Create request
	req := httptest.NewRequest("GET", "/api/serials", nil)
	w := httptest.NewRecorder()

	// Call handler
	srv.handleGetSerials(w, req)

	// Parse response
	var serials []struct {
		Category string `json:"category"`
	}

	if err := json.NewDecoder(w.Body).Decode(&serials); err != nil {
		t.Fatalf("Failed to decode response: %v", err)
	}

	// Check that GPUs without serial numbers are not included
	for _, s := range serials {
		if s.Category == "GPU" {
			t.Error("GPU without serial number should not be included in serials list")
		}
	}
}
@@ -30,7 +30,7 @@ type Server struct {
	result         *models.AnalysisResult
	detectedVendor string

	jobManager *JobManager
	collectors *collector.Registry
}

@@ -67,7 +67,7 @@ func (s *Server) setupRoutes() {
	s.mux.HandleFunc("GET /api/firmware", s.handleGetFirmware)
	s.mux.HandleFunc("GET /api/export/csv", s.handleExportCSV)
	s.mux.HandleFunc("GET /api/export/json", s.handleExportJSON)
-	s.mux.HandleFunc("GET /api/export/txt", s.handleExportTXT)
+	s.mux.HandleFunc("GET /api/export/reanimator", s.handleExportReanimator)
	s.mux.HandleFunc("DELETE /api/clear", s.handleClear)
	s.mux.HandleFunc("POST /api/shutdown", s.handleShutdown)
	s.mux.HandleFunc("POST /api/collect", s.handleCollectStart)
@@ -15,7 +15,7 @@ import (

func newFlowTestServer() (*Server, *httptest.Server) {
	s := &Server{
		jobManager: NewJobManager(),
		collectors: testCollectorRegistry(),
	}
	mux := http.NewServeMux()
@@ -110,6 +110,61 @@ func TestUploadArchiveRegressionAndSourceMetadata(t *testing.T) {
	}
}

func TestUploadTXTFile(t *testing.T) {
	_, ts := newFlowTestServer()
	defer ts.Close()

	txt := `Version:
--------
14.3.0.5

loader_brand="XigmaNAS"
`

	reqBody := &bytes.Buffer{}
	writer := multipart.NewWriter(reqBody)
	part, err := writer.CreateFormFile("archive", "xigmanas.txt")
	if err != nil {
		t.Fatalf("create form file: %v", err)
	}
	if _, err := part.Write([]byte(txt)); err != nil {
		t.Fatalf("write txt body: %v", err)
	}
	if err := writer.Close(); err != nil {
		t.Fatalf("close multipart writer: %v", err)
	}

	uploadReq, err := http.NewRequest(http.MethodPost, ts.URL+"/api/upload", reqBody)
	if err != nil {
		t.Fatalf("build upload request: %v", err)
	}
	uploadReq.Header.Set("Content-Type", writer.FormDataContentType())

	uploadResp, err := http.DefaultClient.Do(uploadReq)
	if err != nil {
		t.Fatalf("upload request failed: %v", err)
	}
	defer uploadResp.Body.Close()

	if uploadResp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 from /api/upload, got %d", uploadResp.StatusCode)
	}

	var uploadPayload map[string]interface{}
	if err := json.NewDecoder(uploadResp.Body).Decode(&uploadPayload); err != nil {
		t.Fatalf("decode upload response: %v", err)
	}
	if uploadPayload["status"] != "ok" {
		t.Fatalf("expected upload status ok, got %v", uploadPayload["status"])
	}
	if uploadPayload["filename"] != "xigmanas.txt" {
		t.Fatalf("expected filename xigmanas.txt, got %v", uploadPayload["filename"])
	}
	if uploadPayload["vendor"] != "XigmaNAS Parser" {
		t.Fatalf("expected vendor XigmaNAS Parser, got %v", uploadPayload["vendor"])
	}
}

func TestCollectSmokeErrorFormat(t *testing.T) {
	_, ts := newFlowTestServer()
	defer ts.Close()
@@ -1,35 +0,0 @@
//go:build ignore
// +build ignore

package main

import (
	"fmt"
	"log"

	"git.mchus.pro/mchus/logpile/internal/parser"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors"
)

func main() {
	p := parser.NewBMCParser()

	fmt.Println("Testing archive parsing...")
	if err := p.ParseArchive("example/A514359X5A07900_logs-20260122-074208.tar"); err != nil {
		log.Fatalf("ERROR: %v", err)
	}

	fmt.Println("✓ Archive parsed successfully!")
	fmt.Printf("✓ Detected vendor: %s\n", p.DetectedVendor())

	result := p.Result()
	fmt.Printf("✓ GPUs found: %d\n", len(result.Hardware.GPUs))
	fmt.Printf("✓ Events found: %d\n", len(result.Events))
	fmt.Printf("✓ PCIe Devices found: %d\n", len(result.Hardware.PCIeDevices))

	fmt.Println("\nBoard Info:")
	fmt.Printf("  Manufacturer: %s\n", result.Hardware.BoardInfo.Manufacturer)
	fmt.Printf("  Product Name: %s\n", result.Hardware.BoardInfo.ProductName)
	fmt.Printf("  Serial Number: %s\n", result.Hardware.BoardInfo.SerialNumber)
	fmt.Printf("  Part Number: %s\n", result.Hardware.BoardInfo.PartNumber)
}
BIN	test_nvidia_full
Binary file not shown.
@@ -1,99 +0,0 @@
//go:build ignore
// +build ignore

package main

import (
	"fmt"
	"log"

	"git.mchus.pro/mchus/logpile/internal/parser"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors"
)

func main() {
	p := parser.NewBMCParser()

	fmt.Println("Testing NVIDIA Bug Report parser (full)...")
	if err := p.ParseArchive("/Users/mchusavitin/Downloads/nvidia-bug-report-2KD501412.log.gz"); err != nil {
		log.Fatalf("ERROR: %v", err)
	}

	fmt.Println("✓ Archive parsed successfully!")
	fmt.Printf("✓ Detected vendor: %s\n", p.DetectedVendor())

	result := p.Result()
	fmt.Printf("✓ CPUs: %d\n", len(result.Hardware.CPUs))
	fmt.Printf("✓ Memory: %d modules\n", len(result.Hardware.Memory))
	fmt.Printf("✓ Power Supplies: %d\n", len(result.Hardware.PowerSupply))
	fmt.Printf("✓ GPUs: %d\n", len(result.Hardware.GPUs))
	fmt.Printf("✓ Network Adapters: %d\n", len(result.Hardware.NetworkAdapters))

	fmt.Println("\nSystem Information:")
	if result.Hardware.BoardInfo.SerialNumber != "" {
		fmt.Printf(" Serial Number: %s\n", result.Hardware.BoardInfo.SerialNumber)
	}
	if result.Hardware.BoardInfo.UUID != "" {
		fmt.Printf(" UUID: %s\n", result.Hardware.BoardInfo.UUID)
	}
	if result.Hardware.BoardInfo.Manufacturer != "" {
		fmt.Printf(" Manufacturer: %s\n", result.Hardware.BoardInfo.Manufacturer)
	}
	if result.Hardware.BoardInfo.ProductName != "" {
		fmt.Printf(" Product: %s\n", result.Hardware.BoardInfo.ProductName)
	}
	if result.Hardware.BoardInfo.Version != "" {
		fmt.Printf(" Version: %s\n", result.Hardware.BoardInfo.Version)
	}

	fmt.Println("\nCPU Information:")
	for _, cpu := range result.Hardware.CPUs {
		fmt.Printf(" Socket %d: %s\n", cpu.Socket, cpu.Model)
		fmt.Printf(" S/N: %s, Cores: %d, Threads: %d\n", cpu.SerialNumber, cpu.Cores, cpu.Threads)
	}

	fmt.Println("\nPower Supplies:")
	for _, psu := range result.Hardware.PowerSupply {
		fmt.Printf(" %s: %s (%s)\n", psu.Slot, psu.Model, psu.Vendor)
		fmt.Printf(" S/N: %s\n", psu.SerialNumber)
		fmt.Printf(" Power: %d W, Revision: %s\n", psu.WattageW, psu.Firmware)
		fmt.Printf(" Status: %s\n", psu.Status)
	}

	totalMemGB := 0
	for _, mem := range result.Hardware.Memory {
		totalMemGB += mem.SizeMB / 1024
	}
	fmt.Printf("\nMemory: %d modules, %d GB total\n", len(result.Hardware.Memory), totalMemGB)

	fmt.Printf("\nNetwork Adapters: %d devices\n", len(result.Hardware.NetworkAdapters))
	for _, nic := range result.Hardware.NetworkAdapters {
		fmt.Printf(" %s: %s\n", nic.Location, nic.Model)
		if nic.Slot != "" {
			fmt.Printf(" Slot: %s\n", nic.Slot)
		}
		if nic.PartNumber != "" {
			fmt.Printf(" P/N: %s\n", nic.PartNumber)
		}
		if nic.SerialNumber != "" {
			fmt.Printf(" S/N: %s\n", nic.SerialNumber)
		}
		if nic.PortCount > 0 {
			fmt.Printf(" Ports: %d x %s\n", nic.PortCount, nic.PortType)
		}
	}

	fmt.Printf("\nGPUs: %d devices\n", len(result.Hardware.GPUs))
	for _, gpu := range result.Hardware.GPUs {
		fmt.Printf(" %s: %s\n", gpu.BDF, gpu.Model)
		if gpu.UUID != "" {
			fmt.Printf(" UUID: %s\n", gpu.UUID)
		}
		if gpu.VideoBIOS != "" {
			fmt.Printf(" Video BIOS: %s\n", gpu.VideoBIOS)
		}
		if gpu.IRQ > 0 {
			fmt.Printf(" IRQ: %d\n", gpu.IRQ)
		}
	}
}
@@ -1079,6 +1079,7 @@ function renderSerials(serials) {
         'CPU': 'Процессор',
         'Memory': 'Память',
         'Storage': 'Накопитель',
         'GPU': 'Видеокарта',
         'PCIe': 'PCIe',
         'Network': 'Сеть',
         'PSU': 'БП',
@@ -21,10 +21,10 @@

 <div id="archive-source-content">
     <div class="upload-area" id="drop-zone">
-        <p>Перетащите архив или JSON snapshot сюда</p>
-        <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,.json,.tar,.tar.gz,.tgz,.zip" hidden>
+        <p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
+        <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.zip,.txt,.log" hidden>
         <button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
-        <p class="hint">Поддерживаемые форматы: tar.gz, zip, json</p>
+        <p class="hint">Поддерживаемые форматы: tar.gz, zip, json, txt, log</p>
     </div>
     <div id="upload-status"></div>
     <div id="parsers-info" class="parsers-info"></div>
@@ -111,7 +111,7 @@

 <div class="tab-content active" id="config">
     <div class="toolbar">
         <button onclick="exportData('json')">Экспорт JSON</button>
-        <button onclick="exportData('txt')">Экспорт TXT</button>
+        <button onclick="exportData('reanimator')">Экспорт Reanimator</button>
     </div>
     <div id="config-content"></div>
 </div>