Compare commits
26 Commits
| SHA1 |
|---|
| 8d80048117 |
| 21ea129933 |
| 9c5512d238 |
| 206496efae |
| 7d1a02cb72 |
| 070971685f |
| 78806f9fa0 |
| 4940cd9645 |
| 736b77f055 |
| 0252264ddc |
| 25e3b8bb42 |
| bb4505a249 |
| 2fa4a1235a |
| fe5da1dbd7 |
| 612058ed16 |
| e0146adfff |
| 9a30705c9a |
| 8dbbec3610 |
| 4c60ebbf1d |
| c52fea2fec |
| dae4744eb3 |
| b6ff47fea8 |
| 1d282c4196 |
| f35cabac48 |
| a2c9e9a57f |
| b918363252 |
3 .gitmodules (vendored)
@@ -1,3 +1,6 @@
[submodule "third_party/pciids"]
    path = third_party/pciids
    url = https://github.com/pciutils/pciids.git
[submodule "bible"]
    path = bible
    url = https://git.mchus.pro/mchus/bible.git
11 AGENTS.md (Normal file)
@@ -0,0 +1,11 @@
# LOGPile — Instructions for Codex

## Shared Engineering Rules
Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
Start with `bible/rules/patterns/` for specific contracts.

## Project Architecture
Read `bible-local/` — LOGPile-specific architecture.
Read order: `bible-local/README.md` → `01-overview.md` → relevant files for the task.

Every architectural decision specific to this project must be recorded in `bible-local/10-decisions.md`.
12 CLAUDE.md
@@ -1 +1,11 @@
-Read and follow [`docs/bible/README.md`](docs/bible/README.md) as the single source of truth, and do not contradict the Bible.
# LOGPile — Instructions for Claude

## Shared Engineering Rules
Read `bible/` — shared rules for all projects (CSV, logging, DB, tables, background tasks, code style).
Start with `bible/rules/patterns/` for specific contracts.

## Project Architecture
Read `bible-local/` — LOGPile-specific architecture.
Read order: `bible-local/README.md` → `01-overview.md` → relevant files for the task.

Every architectural decision specific to this project must be recorded in `bible-local/10-decisions.md`.
1 bible (Submodule)
Submodule bible added at 0c829182a1
@@ -19,7 +19,7 @@ through the same API and UI.
## Key capabilities

- Single self-contained binary with embedded HTML/JS/CSS (no static file serving required).
-- Vendor archive parsing: Inspur/Kaytus, Supermicro, NVIDIA HGX Field Diagnostics,
- Vendor archive parsing: Inspur/Kaytus, Dell TSR, NVIDIA HGX Field Diagnostics,
  NVIDIA Bug Report, Unraid, XigmaNAS, Generic text fallback.
- Live Redfish collection with async progress tracking.
- Normalized hardware inventory: CPU / RAM / Storage / GPU / PSU / NIC / PCIe / Firmware.
@@ -29,8 +29,8 @@ internal/
    interface.go    # VendorParser interface
    vendors/        # Vendor-specific parser modules
        vendors.go  # Import-side-effect registrations
        dell/
        inspur/
-       supermicro/
        nvidia/
        nvidia_bug_report/
        unraid/
@@ -10,7 +10,7 @@ All registrations are collected in `internal/parser/vendors/vendors.go`:
```go
import (
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
-   _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/supermicro"
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
    // etc.
)
```
@@ -53,6 +53,67 @@ version produced a given result.

---

## Parser data quality rules

### FirmwareInfo — system-level only

`Hardware.Firmware` must contain **only system-level firmware**: BIOS, BMC/iDRAC,
Lifecycle Controller, CPLD, storage controllers, BOSS adapters.

**Device-bound firmware** (NIC, GPU, PSU, disk, backplane) **must NOT be added to
`Hardware.Firmware`**. It belongs to the device's own `Firmware` field and is already
present there. Duplicating it in `Hardware.Firmware` causes double entries in Reanimator.

The Reanimator exporter filters by `FirmwareInfo.DeviceName` prefix and by
`FirmwareInfo.Description` (FQDD prefix). Parsers must cooperate:

- Store the device's FQDD (or equivalent slot identifier) in `FirmwareInfo.Description`
  for all firmware entries that come from a per-device inventory source (e.g. Dell
  `DCIM_SoftwareIdentity`).
- FQDD prefixes that are device-bound: `NIC.`, `PSU.`, `Disk.`, `RAID.Backplane.`, `GPU.`
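For illustration, the FQDD-prefix contract above can be sketched as a standalone Go check. The helper name `isDeviceBoundFQDD` and the hardcoded prefix slice are hypothetical here; the real filter is the exporter's `isDeviceBoundFirmwareFQDD()` described in ADL-016.

```go
package main

import (
	"fmt"
	"strings"
)

// deviceBoundFQDDPrefixes mirrors the device-bound prefix list above.
var deviceBoundFQDDPrefixes = []string{"NIC.", "PSU.", "Disk.", "RAID.Backplane.", "GPU."}

// isDeviceBoundFQDD reports whether an FQDD stored in FirmwareInfo.Description
// belongs to a device-bound firmware entry (which must stay out of Hardware.Firmware).
func isDeviceBoundFQDD(fqdd string) bool {
	fqdd = strings.TrimSpace(fqdd)
	for _, prefix := range deviceBoundFQDDPrefixes {
		if strings.HasPrefix(fqdd, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isDeviceBoundFQDD("NIC.Slot.1-1-1")) // true: device-bound, keep out of Hardware.Firmware
	fmt.Println(isDeviceBoundFQDD("BIOS.Setup.1-1")) // false: system-level, belongs in Hardware.Firmware
}
```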
### NIC/device model names — strip embedded MAC addresses

Some vendors (confirmed: Dell TSR) embed the MAC address in the device model name field,
e.g. `ProductName = "NVIDIA ConnectX-6 Lx 2x 25G SFP28 OCP3.0 SFF - C4:70:BD:DB:56:08"`.

**Rule:** Strip any ` - XX:XX:XX:XX:XX:XX` suffix from model/name strings before storing
them in `FirmwareInfo.DeviceName`, `NetworkAdapter.Model`, or any other model field.

Use `nicMACInModelRE` (defined in the Dell parser) or an equivalent regex:
```
\s+-\s+([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$
```

This applies to **all** string fields used as device names or model identifiers.
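As a minimal sketch of the stripping rule (only the `nicMACInModelRE` pattern is from the doc; the `stripMACSuffix` helper is a hypothetical name):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as the Dell parser's nicMACInModelRE described above.
var nicMACInModelRE = regexp.MustCompile(`\s+-\s+([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$`)

// stripMACSuffix removes a trailing " - XX:XX:XX:XX:XX:XX" from a model/name string.
// The $ anchor means only a trailing MAC is removed; hyphens inside the model
// name (e.g. "ConnectX-6") are untouched.
func stripMACSuffix(s string) string {
	return nicMACInModelRE.ReplaceAllString(s, "")
}

func main() {
	fmt.Println(stripMACSuffix("NVIDIA ConnectX-6 Lx 2x 25G SFP28 OCP3.0 SFF - C4:70:BD:DB:56:08"))
	// → NVIDIA ConnectX-6 Lx 2x 25G SFP28 OCP3.0 SFF
}
```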
### PCI device name enrichment via pci.ids

If a PCIe device, GPU, NIC, or any hardware component has a `vendor_id` + `device_id`
but its model/name field is **empty or generic** (e.g. blank, equals the description,
or is just a raw hex ID), the parser **must** attempt to resolve the human-readable
model name from the embedded `pci.ids` database before storing the result.

**Rule:** When `Model` (or equivalent name field) is empty and both `VendorID` and
`DeviceID` are non-zero, call the pciids lookup and use the result as the model name.

```go
// Example pattern — use in any parser that handles PCIe/GPU/NIC devices:
if strings.TrimSpace(device.Model) == "" && device.VendorID != 0 && device.DeviceID != 0 {
    if name := pciids.Lookup(device.VendorID, device.DeviceID); name != "" {
        device.Model = name
    }
}
```

This rule applies to all vendor parsers. The pciids package is available at
`internal/parser/vendors/pciids`. See ADL-005 for the rationale.

**Do not hardcode model name strings.** If a device is unknown today, it will be
resolved automatically once `pci.ids` is updated.

---
## Vendor parsers

### Inspur / Kaytus (`inspur`)
@@ -86,29 +147,26 @@ inspur/

---

-### Supermicro (`supermicro`)
### Dell TSR (`dell`)

-**Status:** Ready (v1.0.0). Tested on SYS-821GE-TNHR crash dumps.
**Status:** Ready (v3.0). Tested on nested TSR archives with embedded `*.pl.zip`.

-**Archive format:** `.tgz` / `.tar.gz` / `.tar`
**Archive format:** `.zip` (outer archive + nested `*.pl.zip`)

-**Primary source file:** `CDump.txt` — JSON crashdump file
-
-**Confidence:** +80 when `CDump.txt` contains `crash_data`, `METADATA`, `bmc_fw_ver` markers.
**Primary source files:**
- `tsr/metadata.json`
- `tsr/hardware/sysinfo/inventory/sysinfo_DCIM_View.xml`
- `tsr/hardware/sysinfo/inventory/sysinfo_DCIM_SoftwareIdentity.xml`
- `tsr/hardware/sysinfo/inventory/sysinfo_CIM_Sensor.xml`
- `tsr/hardware/sysinfo/lcfiles/curr_lclog.xml`

**Extracted data:**
-- CPUs: CPUID, core count, manufacturer (Intel), microcode version (as firmware field)
-- FRU: BMC firmware version, BIOS version, ME firmware version, CPU PPIN
-- Events: crashdump collection event + MCA errors
-
-**MCA error detection:**
-- Bit 63 (Valid), Bit 61 (UC — uncorrected), Bit 60 (EN — enabled)
-- Corrected MCA errors → `Warning` severity
-- Uncorrected MCA errors → `Critical` severity
-
-**Known limitations:**
-- TOR dump and extended MCA register data not yet parsed.
-- No CPU model name (only CPUID hex code available in crashdump format).
- Board/system identity and BIOS/iDRAC firmware
- CPU, memory, physical disks, virtual disks, PSU, NIC, PCIe
- GPU inventory (`DCIM_VideoView`) + GPU sensor enrichment (`DCIM_GPUSensor`)
- Controller/backplane inventory (`DCIM_ControllerView`, `DCIM_EnclosureView`)
- Sensor readings (temperature/voltage/current/power/fan/utilization)
- Lifecycle events (`curr_lclog.xml`)

---
@@ -214,6 +272,46 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

---

### H3C SDS G5 (`h3c_g5`)

**Status:** Ready (v1.0.0). Tested on H3C UniServer R4900 G5 SDS archives.

**Archive format:** `.sds` (tar archive)

**Detection:** `hardware_info.ini`, `hardware.info`, `firmware_version.ini`, `user/test*.csv`, plus H3C markers.

**Extracted data (current):**
- Board/FRU inventory (`FRUInfo.ini`, `board_info.ini`)
- Firmware list (`firmware_version.ini`)
- CPU inventory (`hardware_info.ini`)
- Memory DIMM inventory (`hardware_info.ini`)
- Storage inventory (`hardware.info`, `storage_disk.ini`, `NVMe_info.txt`, RAID text enrichments)
- Logical RAID volumes (`raid.json`, `Storage_RAID-*.txt`)
- Sensor snapshot (`sensor_info.ini`)
- SEL events (`user/test.csv`, `user/test1.csv`, fallback `Sel.json` / `sel_list.txt`)

---

### H3C SDS G6 (`h3c_g6`)

**Status:** Ready (v1.0.0). Tested on H3C UniServer R4700 G6 SDS archives.

**Archive format:** `.sds` (tar archive)

**Detection:** `CPUDetailInfo.xml`, `MemoryDetailInfo.xml`, `firmware_version.json`, `Sel.json`, plus H3C markers.

**Extracted data (current):**
- Board/FRU inventory (`FRUInfo.ini`, `board_info.ini`)
- Firmware list (`firmware_version.json`)
- CPU inventory (`CPUDetailInfo.xml`)
- Memory DIMM inventory (`MemoryDetailInfo.xml`)
- Storage inventory + capacity/model/interface (`storage_disk.ini`, `Storage_RAID-*.txt`, `NVMe_info.txt`)
- Logical RAID volumes (`raid.json`, fallback from `Storage_RAID-*.txt` when available)
- Sensor snapshot (`sensor_info.ini`)
- SEL events (`user/Sel.json`, fallback `user/sel_list.txt`)

---
### Generic text fallback (`generic`)

**Status:** Ready (v1.0.0).
@@ -232,10 +330,12 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

| Vendor | ID | Status | Tested on |
|--------|----|--------|-----------|
| Dell TSR | `dell` | Ready | TSR nested zip archives |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
-| Supermicro | `supermicro` | Ready | SYS-821GE-TNHR crashdump |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |
| XigmaNAS | `xigmanas` | Ready | FreeBSD NAS logs |
| H3C SDS G5 | `h3c_g5` | Ready | H3C UniServer R4900 G5 SDS archives |
| H3C SDS G6 | `h3c_g6` | Ready | H3C UniServer R4700 G6 SDS archives |
| Generic fallback | `generic` | Ready | Any text file |
@@ -201,4 +201,56 @@ the server; frontend only renders status/flags returned by the API.

---

## ADL-015 — Supermicro crashdump archive parser removed from active registry

**Date:** 2026-03-01
**Context:** The Supermicro crashdump parser (`SMC Crash Dump Parser`) produced low-value
results for current workflows and was explicitly rejected as a supported archive path.
**Decision:** Remove the `supermicro` vendor parser from active registration and project source.
Do not include it in the `/api/parsers` output or the parser documentation matrix.
**Consequences:**
- Supermicro crashdump archives (`CDump.txt` format) are no longer parsed by a dedicated vendor parser.
- Such archives fall back to other matching parsers (typically `generic`) unless a replacement parser is added.
- Reintroduction requires a new parser package and an explicit registry import in `vendors/vendors.go`.

---

## ADL-016 — Device-bound firmware must not appear in hardware.firmware

**Date:** 2026-03-01
**Context:** Dell TSR `DCIM_SoftwareIdentity` lists firmware for every component (NICs,
PSUs, disks, backplanes) in addition to system-level firmware. Naively importing all entries
into `Hardware.Firmware` caused device firmware to appear twice in Reanimator: once in the
device's own record and again in the top-level firmware list.
**Decision:**
- `Hardware.Firmware` contains only system-level firmware (BIOS, BMC/iDRAC, CPLD,
  Lifecycle Controller, storage controllers, BOSS).
- Device-bound entries (NIC, PSU, Disk, Backplane, GPU) must not be added to
  `Hardware.Firmware`.
- Parsers must store the FQDD (or equivalent slot identifier) in `FirmwareInfo.Description`
  so the Reanimator exporter can filter by FQDD prefix.
- The exporter's `isDeviceBoundFirmwareFQDD()` function performs this filtering.
**Consequences:**
- Any new parser that ingests a per-device firmware inventory must follow the same rule.
- Device firmware is accessible only via the device's own record, not the firmware list.

---

## ADL-017 — Vendor-embedded MAC addresses must be stripped from model name fields

**Date:** 2026-03-01
**Context:** Dell TSR embeds MAC addresses directly in `ProductName` and `ElementName`
fields (e.g. `"NVIDIA ConnectX-6 Lx 2x 25G SFP28 OCP3.0 SFF - C4:70:BD:DB:56:08"`).
This caused model names to contain MAC addresses in the NIC model, the NIC firmware device
name, and potentially other fields.
**Decision:** Strip any ` - XX:XX:XX:XX:XX:XX` suffix from all model/name string fields
at parse time, before storing them in any model struct. Use the regex
`\s+-\s+([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$`.
**Consequences:**
- Model names are clean and consistent across all devices.
- All parsers must apply this stripping to any field used as a device name or model.
- Confirmed affected fields in Dell: `DCIM_NICView.ProductName`, `DCIM_SoftwareIdentity.ElementName`.

---

<!-- Add new decisions below this line using the format above -->
28 docs/test_server_collection_memory.md (Normal file)
@@ -0,0 +1,28 @@
# Test Server Collection Memory

Keep this table updated after each test-server run.

Definitions:
- `Collection Time` = total Redfish collection duration from `collect.log`.
- `Speed` = `Documents / seconds`.
- `Metrics Collected` = sum of the `Counts` fields (`cpus + memory + storage + pcie + gpus + nics + psus + firmware`).
- `n/a` means the log does not contain enough timestamp metadata to calculate duration/speed.

## Server Model: `NF5688M7`

| Date (UTC) | App Version | Collection Time | Documents | Speed | Metrics Collected | Notes |
|---|---|---:|---:|---:|---:|---|
| 2026-02-28 | `v1.7.1-12-g612058e` (`612058e`) | 10m10s (610s) | 228 | 0.37 docs/s | 98 | 2026-02-28 (SERVER MODEL) - 23E100043.zip |
| 2026-02-28 | `v1.7.1-11-ge0146ad` (`e0146ad`) | 9m36s (576s) | 138 | 0.24 docs/s | 110 | 2026-02-28 (SERVER MODEL) - 23E100042.zip |
| 2026-02-28 | `v1.7.1-10-g9a30705` (`9a30705`) | 20m47s (1247s) | 106 | 0.09 docs/s | 97 | 2026-02-28 (SERVER MODEL) - 23E100053.zip |
| 2026-02-28 | `v1.7.1` (`6c19a58`) | 15m08s (908s) | 184 | 0.20 docs/s | 96 | 2026-02-28 (DDR5 DIMM) - 23E100051.zip |
| 2026-02-28 | `v1.7.0` (`ddab93a`) | n/a | 193 | n/a | 61 | 2026-02-28 (NULL) - 23E100051.zip |
| 2026-02-28 | `v1.7.0` (`ddab93a`) | n/a | 291 | n/a | 61 | 2026-02-28 (NULL) - 23E100206.zip |

## Server Model: `KR1280-X2-A0-R0-00`

| Date (UTC) | App Version | Collection Time | Documents | Speed | Metrics Collected | Notes |
|---|---|---:|---:|---:|---:|---|
| 2026-02-28 | `v1.7.1-12-g612058e` (`612058e`) | 6m15s (375s) | 185 | 0.49 docs/s | 46 | 2026-02-28 (KR1280-X2-A0-R0-00) - 23D401657.zip |
| 2026-02-28 | `v1.7.1-9-g8dbbec3-dirty` (`8dbbec3`) | 6m16s (376s) | 165 | 0.44 docs/s | 46 | 2026-02-28 (KR1280-X2-A0-R0-00) - 23D401657-2.zip |
| 2026-02-28 | `v1.7.1-7-gc52fea2` (`c52fea2`) | 10m51s (651s) | 227 | 0.35 docs/s | 40 | 2026-02-28 (KR1280-X2-A0-R0-00) - 23D401657 copy.zip |
File diff suppressed because it is too large
@@ -56,7 +56,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
    if len(fruDoc) == 0 {
        fruDoc = chassisFRUDoc
    }
-   boardFallbackDocs := r.collectBoardFallbackDocs(chassisPaths)
    boardFallbackDocs := r.collectBoardFallbackDocs(systemPaths, chassisPaths)

    if emit != nil {
        emit(Progress{Status: "running", Progress: 55, Message: "Redfish snapshot: replay CPU/RAM/Storage..."})
@@ -72,22 +72,31 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
    psus := r.collectPSUs(chassisPaths)
    pcieDevices := r.collectPCIeDevices(systemPaths, chassisPaths)
    gpus := r.collectGPUs(systemPaths, chassisPaths)
    gpus = r.collectGPUsFromProcessors(systemPaths, chassisPaths, gpus)
    nics := r.collectNICs(chassisPaths)
    r.enrichNICsFromNetworkInterfaces(&nics, systemPaths)
    thresholdSensors := r.collectThresholdSensors(chassisPaths)
    thermalSensors := r.collectThermalSensors(chassisPaths)
    powerSensors := r.collectPowerSensors(chassisPaths)
    discreteEvents := r.collectDiscreteSensorEvents(chassisPaths)
    healthEvents := r.collectHealthSummaryEvents(chassisPaths)
    driveFetchWarningEvents := buildDriveFetchWarningEvents(rawPayloads)
    managerDoc, _ := r.getJSON(primaryManager)
    networkProtocolDoc, _ := r.getJSON(joinPath(primaryManager, "/NetworkProtocol"))
    firmware := parseFirmware(systemDoc, biosDoc, managerDoc, secureBootDoc, networkProtocolDoc)
    firmware = dedupeFirmwareInfo(append(firmware, r.collectFirmwareInventory()...))
    boardInfo := parseBoardInfoWithFallback(systemDoc, chassisDoc, fruDoc)
    applyBoardInfoFallbackFromDocs(&boardInfo, boardFallbackDocs)
    boardInfo.BMCMACAddress = r.collectBMCMAC(managerPaths)
    assemblyFRU := r.collectAssemblyFRU(chassisPaths)
    collectedAt, sourceTimezone := inferRedfishCollectionTime(managerDoc, rawPayloads)

    result := &models.AnalysisResult{
-       Events:  append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+1), healthEvents...), discreteEvents...),
-       FRU:     make([]models.FRUInfo, 0),
-       Sensors: thresholdSensors,
        CollectedAt:    collectedAt,
        SourceTimezone: sourceTimezone,
        Events:         append(append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+len(driveFetchWarningEvents)+1), healthEvents...), discreteEvents...), driveFetchWarningEvents...),
        FRU:            assemblyFRU,
        Sensors:        dedupeSensorReadings(append(append(thresholdSensors, thermalSensors...), powerSensors...)),
        RawPayloads:    cloneRawPayloads(rawPayloads),
        Hardware: &models.HardwareConfig{
            BoardInfo: boardInfo,
@@ -102,10 +111,41 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
            Firmware: firmware,
        },
    }
    if strings.TrimSpace(sourceTimezone) != "" {
        if result.RawPayloads == nil {
            result.RawPayloads = map[string]any{}
        }
        result.RawPayloads["source_timezone"] = sourceTimezone
    }
    appendMissingServerModelWarning(result, systemDoc, joinPath(primarySystem, "/Oem/Public/FRU"), joinPath(primaryChassis, "/Oem/Public/FRU"))
    return result, nil
}

func inferRedfishCollectionTime(managerDoc map[string]interface{}, rawPayloads map[string]any) (time.Time, string) {
    dateTime := strings.TrimSpace(asString(managerDoc["DateTime"]))
    offset := strings.TrimSpace(asString(managerDoc["DateTimeLocalOffset"]))
    if dateTime != "" {
        if ts, err := time.Parse(time.RFC3339Nano, dateTime); err == nil {
            if offset == "" {
                offset = ts.Format("-07:00")
            }
            return ts.UTC(), offset
        }
        if ts, err := time.Parse(time.RFC3339, dateTime); err == nil {
            if offset == "" {
                offset = ts.Format("-07:00")
            }
            return ts.UTC(), offset
        }
    }
    if offset == "" && len(rawPayloads) > 0 {
        if tz, ok := rawPayloads["source_timezone"].(string); ok {
            offset = strings.TrimSpace(tz)
        }
    }
    return time.Time{}, offset
}

func appendMissingServerModelWarning(result *models.AnalysisResult, systemDoc map[string]interface{}, systemFRUPath, chassisFRUPath string) {
    if result == nil || result.Hardware == nil {
        return
@@ -168,6 +208,55 @@ func redfishFetchErrorsFromRawPayloads(rawPayloads map[string]any) map[string]st
    }
}

func buildDriveFetchWarningEvents(rawPayloads map[string]any) []models.Event {
    errs := redfishFetchErrorsFromRawPayloads(rawPayloads)
    if len(errs) == 0 {
        return nil
    }

    paths := make([]string, 0, len(errs))
    timeoutCount := 0
    for path, msg := range errs {
        normalizedPath := normalizeRedfishPath(path)
        if !strings.Contains(strings.ToLower(normalizedPath), "/drives/") {
            continue
        }
        paths = append(paths, normalizedPath)
        low := strings.ToLower(msg)
        if strings.Contains(low, "timeout") || strings.Contains(low, "deadline exceeded") {
            timeoutCount++
        }
    }
    if len(paths) == 0 {
        return nil
    }

    sort.Strings(paths)
    preview := paths
    const maxPreview = 8
    if len(preview) > maxPreview {
        preview = preview[:maxPreview]
    }
    rawData := strings.Join(preview, ", ")
    if len(paths) > len(preview) {
        rawData = fmt.Sprintf("%s (+%d more)", rawData, len(paths)-len(preview))
    }
    if timeoutCount > 0 {
        rawData = fmt.Sprintf("timeouts=%d; paths=%s", timeoutCount, rawData)
    }

    return []models.Event{
        {
            Timestamp:   time.Now(),
            Source:      "Redfish",
            EventType:   "Collection Warning",
            Severity:    models.SeverityWarning,
            Description: fmt.Sprintf("%d drive documents were unavailable; storage details may be incomplete", len(paths)),
            RawData:     rawData,
        },
    }
}

func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo {
    docs, err := r.getCollectionMembers("/redfish/v1/UpdateService/FirmwareInventory")
    if err != nil || len(docs) == 0 {
@@ -188,7 +277,12 @@ func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo
            asString(doc["Name"]),
            asString(doc["Id"]),
        )
-       if strings.TrimSpace(name) == "" {
        name = strings.TrimSpace(name)
        if name == "" {
            continue
        }
        // BMCImageN entries are redundant backup slot labels; skip them.
        if strings.HasPrefix(strings.ToLower(name), "bmcimage") {
            continue
        }
        out = append(out, models.FirmwareInfo{DeviceName: name, Version: version})
@@ -219,9 +313,12 @@ func (r redfishSnapshotReader) collectThresholdSensors(chassisPaths []string) []
    out := make([]models.SensorReading, 0)
    seen := make(map[string]struct{})
    for _, chassisPath := range chassisPaths {
-       docs, err := r.getCollectionMembers(joinPath(chassisPath, "/ThresholdSensors"))
-       if err != nil || len(docs) == 0 {
-           continue
        thresholdPath := joinPath(chassisPath, "/ThresholdSensors")
        docs, _ := r.getCollectionMembers(thresholdPath)
        if len(docs) == 0 {
            if thresholdDoc, err := r.getJSON(thresholdPath); err == nil {
                docs = append(docs, redfishInlineSensors(thresholdDoc)...)
            }
        }
        for _, doc := range docs {
            sensor, ok := parseThresholdSensor(doc)
@@ -293,37 +390,235 @@ func parseThresholdSensor(doc map[string]interface{}) (models.SensorReading, boo
func (r redfishSnapshotReader) collectDiscreteSensorEvents(chassisPaths []string) []models.Event {
    out := make([]models.Event, 0)
    for _, chassisPath := range chassisPaths {
-       docs, err := r.getCollectionMembers(joinPath(chassisPath, "/DiscreteSensors"))
-       if err != nil || len(docs) == 0 {
-           continue
        discretePath := joinPath(chassisPath, "/DiscreteSensors")
        docs, _ := r.getCollectionMembers(discretePath)
        if len(docs) == 0 {
            if discreteDoc, err := r.getJSON(discretePath); err == nil {
                docs = append(docs, redfishInlineSensors(discreteDoc)...)
            }
        }
        for _, doc := range docs {
-           name := firstNonEmpty(asString(doc["Name"]), asString(doc["Id"]))
-           status := mapStatus(doc["Status"])
-           if status == "" {
-               status = firstNonEmpty(asString(doc["Health"]), asString(doc["State"]))
-           }
-           if name == "" || status == "" {
            ev, ok := parseDiscreteSensorEvent(doc)
            if !ok {
                continue
            }
-           normalized := strings.ToLower(strings.TrimSpace(status))
-           if normalized == "ok" || normalized == "enabled" || normalized == "normal" || normalized == "present" {
-               continue
-           }
-           out = append(out, models.Event{
-               Timestamp:   time.Now(),
-               Source:      "Redfish",
-               SensorName:  name,
-               EventType:   "Discrete Sensor Status",
-               Severity:    models.SeverityWarning,
-               Description: fmt.Sprintf("%s reports %s", name, status),
-               RawData:     firstNonEmpty(asString(doc["Description"]), status),
-           })
            out = append(out, ev)
        }
    }
    return out
}

func parseDiscreteSensorEvent(doc map[string]interface{}) (models.Event, bool) {
    name := firstNonEmpty(asString(doc["Name"]), asString(doc["Id"]))
    status := mapStatus(doc["Status"])
    if status == "" {
        status = firstNonEmpty(asString(doc["Health"]), asString(doc["State"]))
    }
    if name == "" || status == "" {
        return models.Event{}, false
    }
    normalized := strings.ToLower(strings.TrimSpace(status))
    if normalized == "ok" || normalized == "enabled" || normalized == "normal" || normalized == "present" {
        return models.Event{}, false
    }
    return models.Event{
        Timestamp:   time.Now(),
        Source:      "Redfish",
        SensorName:  name,
        EventType:   "Discrete Sensor Status",
        Severity:    models.SeverityWarning,
        Description: fmt.Sprintf("%s reports %s", name, status),
        RawData:     firstNonEmpty(asString(doc["Description"]), status),
    }, true
}

func (r redfishSnapshotReader) collectThermalSensors(chassisPaths []string) []models.SensorReading {
    out := make([]models.SensorReading, 0)
    for _, chassisPath := range chassisPaths {
        doc, err := r.getJSON(joinPath(chassisPath, "/Thermal"))
        if err != nil || len(doc) == 0 {
            continue
        }
        for _, fanDoc := range redfishArrayObjects(doc["Fans"]) {
            out = append(out, parseThermalFanSensor(fanDoc))
        }
        for _, tempDoc := range redfishArrayObjects(doc["Temperatures"]) {
            out = append(out, parseThermalTemperatureSensor(tempDoc))
        }
    }
    return out
}

func (r redfishSnapshotReader) collectPowerSensors(chassisPaths []string) []models.SensorReading {
    out := make([]models.SensorReading, 0)
    for _, chassisPath := range chassisPaths {
        doc, err := r.getJSON(joinPath(chassisPath, "/Power"))
        if err != nil || len(doc) == 0 {
            continue
        }
        out = append(out, parsePowerOemPublicSensors(doc)...)
        for _, controlDoc := range redfishArrayObjects(doc["PowerControl"]) {
            if sensor, ok := parsePowerControlSensor(controlDoc); ok {
                out = append(out, sensor)
            }
        }
        for _, psuDoc := range redfishArrayObjects(doc["PowerSupplies"]) {
            out = append(out, parsePowerSupplySensors(psuDoc)...)
        }
    }
    return out
}

func parseThermalFanSensor(doc map[string]interface{}) models.SensorReading {
    name := firstNonEmpty(asString(doc["Name"]), asString(doc["MemberId"]), "Fan")
    unit := firstNonEmpty(asString(doc["ReadingUnits"]), "RPM")
    value := asFloat(doc["Reading"])
    raw := firstNonEmpty(asString(doc["Reading"]), asString(doc["Name"]))
    status := firstNonEmpty(mapStatus(doc["Status"]), asString(doc["State"]), asString(doc["Health"]), "unknown")
    return models.SensorReading{
        Name:     name,
        Type:     "fan_speed",
        Value:    value,
        Unit:     unit,
        RawValue: raw,
        Status:   status,
    }
}

func parseThermalTemperatureSensor(doc map[string]interface{}) models.SensorReading {
    name := firstNonEmpty(asString(doc["Name"]), asString(doc["MemberId"]), "Temperature")
    reading := asFloat(doc["ReadingCelsius"])
    raw := asString(doc["ReadingCelsius"])
    if raw == "" {
        reading = asFloat(doc["Reading"])
        raw = asString(doc["Reading"])
    }
    status := firstNonEmpty(mapStatus(doc["Status"]), asString(doc["State"]), asString(doc["Health"]), "unknown")
    return models.SensorReading{
        Name:     name,
        Type:     "temperature",
        Value:    reading,
        Unit:     "C",
        RawValue: raw,
        Status:   status,
    }
}

func parsePowerOemPublicSensors(doc map[string]interface{}) []models.SensorReading {
    oem, ok := doc["Oem"].(map[string]interface{})
    if !ok {
        return nil
    }
    public, ok := oem["Public"].(map[string]interface{})
    if !ok {
        return nil
    }
    var out []models.SensorReading
    add := func(name, key string) {
        raw := asString(public[key])
        if strings.TrimSpace(raw) == "" {
            return
        }
        out = append(out, models.SensorReading{
            Name:     name,
            Type:     "power",
            Value:    asFloat(public[key]),
            Unit:     "W",
            RawValue: raw,
            Status:   "OK",
        })
    }
    add("Total_Power", "TotalPower")
    add("CPU_Power", "CurrentCPUPowerWatts")
    add("Memory_Power", "CurrentMemoryPowerWatts")
    add("Fan_Power", "CurrentFANPowerWatts")
    return out
}

func parsePowerControlSensor(doc map[string]interface{}) (models.SensorReading, bool) {
    raw := asString(doc["PowerConsumedWatts"])
    if strings.TrimSpace(raw) == "" {
        return models.SensorReading{}, false
    }
    name := firstNonEmpty(asString(doc["Name"]), asString(doc["MemberId"]), "PowerControl")
    status := firstNonEmpty(mapStatus(doc["Status"]), asString(doc["State"]), asString(doc["Health"]), "unknown")
    return models.SensorReading{
        Name:     name + "_Consumed",
        Type:     "power",
        Value:    asFloat(doc["PowerConsumedWatts"]),
        Unit:     "W",
        RawValue: raw,
        Status:   status,
    }, true
}

func parsePowerSupplySensors(doc map[string]interface{}) []models.SensorReading {
    name := firstNonEmpty(asString(doc["Name"]), "PSU")
    status := firstNonEmpty(mapStatus(doc["Status"]), asString(doc["State"]), asString(doc["Health"]), "unknown")
    var out []models.SensorReading
    add := func(suffix, key, unit string) {
        raw := asString(doc[key])
        if strings.TrimSpace(raw) == "" {
            return
        }
        out = append(out, models.SensorReading{
            Name:     fmt.Sprintf("%s_%s", name, suffix),
            Type:     strings.ToLower(suffix),
            Value:    asFloat(doc[key]),
            Unit:     unit,
            RawValue: raw,
            Status:   status,
        })
    }
    add("InputPower", "PowerInputWatts", "W")
    add("OutputPower", "LastPowerOutputWatts", "W")
    add("InputVoltage", "LineInputVoltage", "V")
    return out
}

func redfishArrayObjects(v any) []map[string]interface{} {
|
||||
list, ok := v.([]interface{})
|
||||
if !ok || len(list) == 0 {
|
||||
return nil
|
||||
}
|
||||
out := make([]map[string]interface{}, 0, len(list))
|
||||
for _, item := range list {
|
||||
m, ok := item.(map[string]interface{})
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
out = append(out, m)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func redfishInlineSensors(doc map[string]interface{}) []map[string]interface{} {
|
||||
return redfishArrayObjects(doc["Sensors"])
|
||||
}
|
||||
|
||||
func dedupeSensorReadings(items []models.SensorReading) []models.SensorReading {
	if len(items) <= 1 {
		return items
	}
	out := make([]models.SensorReading, 0, len(items))
	seen := make(map[string]struct{}, len(items))
	for _, s := range items {
		key := strings.ToLower(strings.TrimSpace(s.Name) + "|" + strings.TrimSpace(s.Type))
		if strings.TrimSpace(key) == "|" {
			key = strings.ToLower(strings.TrimSpace(s.RawValue))
		}
		if strings.TrimSpace(key) == "" {
			continue
		}
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, s)
	}
	return out
}

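The `name|type` composite-key dedupe in `dedupeSensorReadings` can be sketched in isolation. This is a minimal stand-alone version: `reading` is a pared-down stand-in for `models.SensorReading`, and the helper name `dedupe` is illustrative, not from the codebase.

```go
package main

import (
	"fmt"
	"strings"
)

// reading is a pared-down stand-in for models.SensorReading.
type reading struct {
	Name, Type, RawValue string
}

// dedupe keeps the first reading per case-insensitive "name|type" key,
// falling back to RawValue when both name and type are empty.
func dedupe(items []reading) []reading {
	out := make([]reading, 0, len(items))
	seen := make(map[string]struct{}, len(items))
	for _, s := range items {
		key := strings.ToLower(strings.TrimSpace(s.Name) + "|" + strings.TrimSpace(s.Type))
		if key == "|" {
			key = strings.ToLower(strings.TrimSpace(s.RawValue))
		}
		if key == "" {
			continue // nothing usable to key on
		}
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, s)
	}
	return out
}

func main() {
	in := []reading{
		{Name: "CPU1 Temp", Type: "temperature"},
		{Name: "cpu1 temp ", Type: "Temperature"}, // duplicate after normalization
		{Name: "", Type: "", RawValue: "42"},      // keyed by RawValue
	}
	fmt.Println(len(dedupe(in))) // prints 2
}
```

The RawValue fallback matters for BMCs that emit unnamed sensor entries: without it, every nameless reading would share the empty key and all but one would be dropped.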
func (r redfishSnapshotReader) collectHealthSummaryEvents(chassisPaths []string) []models.Event {
	out := make([]models.Event, 0)
	for _, chassisPath := range chassisPaths {
@@ -400,7 +695,7 @@ func (r redfishSnapshotReader) enrichNICsFromNetworkInterfaces(nics *[]models.Ne
			macs = append(macs, collectNetworkPortMACs(p)...)
		}
		(*nics)[idx].MACAddresses = dedupeStrings(macs)
		if (*nics)[idx].PortCount == 0 {
		if sanitizeNetworkPortCount((*nics)[idx].PortCount) == 0 {
			(*nics)[idx].PortCount = len(portDocs)
		}
	}
@@ -445,10 +740,10 @@ func dedupeStrings(items []string) []string {
	return out
}

func (r redfishSnapshotReader) collectBoardFallbackDocs(chassisPaths []string) []map[string]interface{} {
func (r redfishSnapshotReader) collectBoardFallbackDocs(systemPaths, chassisPaths []string) []map[string]interface{} {
	out := make([]map[string]interface{}, 0)
	for _, chassisPath := range chassisPaths {
		for _, suffix := range []string{"/Boards", "/Backplanes", "/Assembly"} {
		for _, suffix := range []string{"/Boards", "/Backplanes"} {
			path := joinPath(chassisPath, suffix)
			if docs, err := r.getCollectionMembers(path); err == nil && len(docs) > 0 {
				out = append(out, docs...)
@@ -459,6 +754,14 @@ func (r redfishSnapshotReader) collectBoardFallbackDocs(chassisPaths []string) [
			}
		}
	}
	for _, path := range append(append([]string{}, systemPaths...), chassisPaths...) {
		for _, suffix := range []string{"/Oem/Public", "/Oem/Public/ThermalConfig", "/ThermalConfig"} {
			docPath := joinPath(path, suffix)
			if doc, err := r.getJSON(docPath); err == nil && len(doc) > 0 {
				out = append(out, doc)
			}
		}
	}
	return out
}

@@ -468,6 +771,9 @@ func applyBoardInfoFallbackFromDocs(board *models.BoardInfo, docs []map[string]i
	}
	for _, doc := range docs {
		candidate := parseBoardInfoFromFRUDoc(doc)
		if !isLikelyServerProductName(candidate.ProductName) {
			continue
		}
		if board.Manufacturer == "" {
			board.Manufacturer = candidate.Manufacturer
		}
@@ -486,6 +792,27 @@ func applyBoardInfoFallbackFromDocs(board *models.BoardInfo, docs []map[string]i
	}
}

func isLikelyServerProductName(v string) bool {
	v = strings.TrimSpace(v)
	if v == "" {
		return false
	}
	n := strings.ToUpper(v)
	if strings.Contains(n, "NULL") {
		return false
	}
	componentTokens := []string{
		"DIMM", "DDR", "NVME", "SSD", "HDD", "GPU", "NIC", "RAID",
		"PSU", "FAN", "BACKPLANE", "FRU",
	}
	for _, token := range componentTokens {
		if strings.Contains(n, strings.ToUpper(token)) {
			return false
		}
	}
	return true
}
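The token filter in `isLikelyServerProductName` is a plain substring check, which is deliberately aggressive. A self-contained sketch (the helper name `looksLikeServerName` and the sample names are illustrative, not from the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeServerName mirrors the filter above: a candidate product name is
// rejected when it is empty, contains "NULL", or names a component
// (DIMM, PSU, ...) rather than the server itself.
func looksLikeServerName(v string) bool {
	v = strings.TrimSpace(v)
	if v == "" {
		return false
	}
	n := strings.ToUpper(v)
	if strings.Contains(n, "NULL") {
		return false
	}
	for _, token := range []string{
		"DIMM", "DDR", "NVME", "SSD", "HDD", "GPU", "NIC", "RAID",
		"PSU", "FAN", "BACKPLANE", "FRU",
	} {
		if strings.Contains(n, token) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeServerName("ThinkSystem SR650")) // true
	fmt.Println(looksLikeServerName("DIMM Slot A1"))      // false
	fmt.Println(looksLikeServerName("(NULL)"))            // false
}
```

Because the match is substring-based rather than word-based, a product name that merely contains a token (say, "FANTASTIC-1U") would also be rejected; that trade-off favors false negatives over polluting the board info with FRU entries for components.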

type redfishSnapshotReader struct {
	tree map[string]interface{}
}
@@ -517,24 +844,19 @@ func (r redfishSnapshotReader) getCollectionMembers(collectionPath string) ([]ma
	if err != nil {
		return r.fallbackCollectionMembers(collectionPath, err)
	}
	refs, ok := collection["Members"].([]interface{})
	if !ok || len(refs) == 0 {
	memberPaths := redfishCollectionMemberRefs(collection)
	if len(memberPaths) == 0 {
		return r.fallbackCollectionMembers(collectionPath, nil)
	}
	out := make([]map[string]interface{}, 0, len(refs))
	for _, refAny := range refs {
		ref, ok := refAny.(map[string]interface{})
		if !ok {
			continue
		}
		memberPath := asString(ref["@odata.id"])
		if memberPath == "" {
			continue
		}
	out := make([]map[string]interface{}, 0, len(memberPaths))
	for _, memberPath := range memberPaths {
		doc, err := r.getJSON(memberPath)
		if err != nil {
			continue
		}
		if strings.TrimSpace(asString(doc["@odata.id"])) == "" {
			doc["@odata.id"] = normalizeRedfishPath(memberPath)
		}
		out = append(out, doc)
	}
	if len(out) == 0 {
@@ -576,6 +898,9 @@ func (r redfishSnapshotReader) fallbackCollectionMembers(collectionPath string,
		if err != nil {
			continue
		}
		if strings.TrimSpace(asString(doc["@odata.id"])) == "" {
			doc["@odata.id"] = normalizeRedfishPath(p)
		}
		out = append(out, doc)
	}
	return out, nil
@@ -660,7 +985,9 @@ func (r redfishSnapshotReader) collectStorage(systemPath string) []models.Storag
	driveDocs, err := r.getCollectionMembers(driveCollectionPath)
	if err == nil {
		for _, driveDoc := range driveDocs {
			out = append(out, parseDrive(driveDoc))
			if !isVirtualStorageDrive(driveDoc) {
				out = append(out, parseDrive(driveDoc))
			}
		}
		if len(driveDocs) == 0 {
			for _, driveDoc := range r.probeDirectDiskBayChildren(driveCollectionPath) {
@@ -685,7 +1012,9 @@ func (r redfishSnapshotReader) collectStorage(systemPath string) []models.Storag
			if err != nil {
				continue
			}
			out = append(out, parseDrive(driveDoc))
			if !isVirtualStorageDrive(driveDoc) {
				out = append(out, parseDrive(driveDoc))
			}
		}
		continue
	}
@@ -822,7 +1151,6 @@ func (r redfishSnapshotReader) probeDirectDiskBayChildren(drivesCollectionPath s

func (r redfishSnapshotReader) collectNICs(chassisPaths []string) []models.NetworkAdapter {
	var nics []models.NetworkAdapter
	seen := make(map[string]struct{})
	for _, chassisPath := range chassisPaths {
		adapterDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/NetworkAdapters"))
		if err != nil {
@@ -838,23 +1166,19 @@ func (r redfishSnapshotReader) collectNICs(chassisPaths []string) []models.Netwo
				functionDocs := r.getLinkedPCIeFunctions(pcieDoc)
				enrichNICFromPCIe(&nic, pcieDoc, functionDocs)
			}
			key := firstNonEmpty(nic.SerialNumber, nic.Slot+"|"+nic.Model)
			if key == "" {
				continue
			// Collect MACs from NetworkDeviceFunctions when not found via PCIe path.
			if len(nic.MACAddresses) == 0 {
				r.enrichNICMACsFromNetworkDeviceFunctions(&nic, doc)
			}
			if _, ok := seen[key]; ok {
				continue
			}
			seen[key] = struct{}{}
			nics = append(nics, nic)
		}
	}
	return nics
	return dedupeNetworkAdapters(nics)
}

func (r redfishSnapshotReader) collectPSUs(chassisPaths []string) []models.PSU {
	var out []models.PSU
	seen := make(map[string]struct{})
	seen := make(map[string]int)
	idx := 1
	for _, chassisPath := range chassisPaths {
		if memberDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/PowerSubsystem/PowerSupplies")); err == nil && len(memberDocs) > 0 {
@@ -904,7 +1228,10 @@ func (r redfishSnapshotReader) collectGPUs(systemPaths, chassisPaths []string) [
			}
			gpu := parseGPU(doc, functionDocs, idx)
			idx++
			key := gpuDedupKey(gpu)
			if shouldSkipGenericGPUDuplicate(out, gpu) {
				continue
			}
			key := gpuDocDedupKey(doc, gpu)
			if key == "" {
				continue
			}
@@ -915,7 +1242,7 @@ func (r redfishSnapshotReader) collectGPUs(systemPaths, chassisPaths []string) [
			out = append(out, gpu)
		}
	}
	return out
	return dropModelOnlyGPUPlaceholders(out)
}

func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []string) []models.PCIeDevice {
@@ -927,7 +1254,6 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
		collections = append(collections, joinPath(chassisPath, "/PCIeDevices"))
	}
	var out []models.PCIeDevice
	seen := make(map[string]struct{})
	for _, collectionPath := range collections {
		memberDocs, err := r.getCollectionMembers(collectionPath)
		if err != nil || len(memberDocs) == 0 {
@@ -936,14 +1262,6 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
		for _, doc := range memberDocs {
			functionDocs := r.getLinkedPCIeFunctions(doc)
			dev := parsePCIeDevice(doc, functionDocs)
			key := pcieDeviceDedupKey(dev)
			if key == "" {
				continue
			}
			if _, ok := seen[key]; ok {
				continue
			}
			seen[key] = struct{}{}
			out = append(out, dev)
		}
	}
@@ -954,18 +1272,10 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
	}
	for idx, fn := range functionDocs {
		dev := parsePCIeFunction(fn, idx+1)
		key := pcieDeviceDedupKey(dev)
		if key == "" {
			continue
		}
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, dev)
	}
	}
	return out
	return dedupePCIeDevices(out)
}

func stringsTrimTrailingSlash(s string) string {
@@ -974,3 +1284,237 @@ func stringsTrimTrailingSlash(s string) string {
	}
	return s
}

// collectBMCMAC returns the MAC address of the first active BMC management
// interface found in Managers/*/EthernetInterfaces. Returns empty string if
// no MAC is available.
func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
	for _, managerPath := range managerPaths {
		members, err := r.getCollectionMembers(joinPath(managerPath, "/EthernetInterfaces"))
		if err != nil || len(members) == 0 {
			continue
		}
		for _, doc := range members {
			mac := strings.TrimSpace(firstNonEmpty(
				asString(doc["PermanentMACAddress"]),
				asString(doc["MACAddress"]),
			))
			if mac == "" || strings.EqualFold(mac, "00:00:00:00:00:00") {
				continue
			}
			return strings.ToUpper(mac)
		}
	}
	return ""
}
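The candidate filtering inside `collectBMCMAC` (skip blanks and the all-zeros placeholder some BMCs report, then normalize case) can be shown on its own. A minimal sketch; the helper name `firstUsableMAC` is illustrative, not from the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// firstUsableMAC returns the first candidate that is non-empty and not the
// all-zeros placeholder, normalized to upper case — the same selection rule
// collectBMCMAC applies across Managers/*/EthernetInterfaces members.
func firstUsableMAC(candidates []string) string {
	for _, mac := range candidates {
		mac = strings.TrimSpace(mac)
		if mac == "" || strings.EqualFold(mac, "00:00:00:00:00:00") {
			continue
		}
		return strings.ToUpper(mac)
	}
	return ""
}

func main() {
	got := firstUsableMAC([]string{"", "00:00:00:00:00:00", "aa:bb:cc:dd:ee:ff"})
	fmt.Println(got) // prints AA:BB:CC:DD:EE:FF
}
```

Upper-casing at the point of extraction keeps later serial/MAC dedup keys case-insensitive without each caller having to remember to normalize.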

// collectAssemblyFRU reads Chassis/*/Assembly documents and returns FRU entries
// for subcomponents (backplanes, PSUs, DIMMs, etc.) that carry meaningful
// serial or part numbers. Entries already present in dedicated collections
// (PSUs, DIMMs) are included here as well so that all FRU data is available
// in one place; deduplication by serial is performed.
func (r redfishSnapshotReader) collectAssemblyFRU(chassisPaths []string) []models.FRUInfo {
	seen := make(map[string]struct{})
	var out []models.FRUInfo

	add := func(fru models.FRUInfo) {
		key := strings.ToUpper(strings.TrimSpace(fru.SerialNumber))
		if key == "" {
			key = strings.ToUpper(strings.TrimSpace(fru.Description + "|" + fru.PartNumber))
		}
		if key == "" || key == "|" {
			return
		}
		if _, ok := seen[key]; ok {
			return
		}
		seen[key] = struct{}{}
		out = append(out, fru)
	}

	for _, chassisPath := range chassisPaths {
		doc, err := r.getJSON(joinPath(chassisPath, "/Assembly"))
		if err != nil || len(doc) == 0 {
			continue
		}
		assemblies, _ := doc["Assemblies"].([]interface{})
		for _, aAny := range assemblies {
			a, ok := aAny.(map[string]interface{})
			if !ok {
				continue
			}
			name := strings.TrimSpace(firstNonEmpty(asString(a["Name"]), asString(a["Description"])))
			model := strings.TrimSpace(asString(a["Model"]))
			partNumber := strings.TrimSpace(asString(a["PartNumber"]))
			serial := extractAssemblySerial(a)

			if serial == "" && partNumber == "" {
				continue
			}
			add(models.FRUInfo{
				Description:  name,
				ProductName:  model,
				SerialNumber: serial,
				PartNumber:   partNumber,
			})
		}
	}
	return out
}

// extractAssemblySerial tries to find a serial number in an Assembly entry.
// Standard Redfish Assembly has no top-level SerialNumber; vendors put it in Oem.
func extractAssemblySerial(a map[string]interface{}) string {
	// Some implementations expose it at top level.
	if s := strings.TrimSpace(asString(a["SerialNumber"])); s != "" {
		return s
	}
	// Dig into Oem for vendor-specific structures (e.g. Huawei COMMONb).
	oem, _ := a["Oem"].(map[string]interface{})
	for _, v := range oem {
		subtree, ok := v.(map[string]interface{})
		if !ok {
			continue
		}
		for _, v2 := range subtree {
			node, ok := v2.(map[string]interface{})
			if !ok {
				continue
			}
			if s := strings.TrimSpace(asString(node["SerialNumber"])); s != "" {
				return s
			}
		}
	}
	return ""
}

// enrichNICMACsFromNetworkDeviceFunctions reads the NetworkDeviceFunctions
// collection linked from a NetworkAdapter document and populates the NIC's
// MACAddresses from each function's Ethernet.PermanentMACAddress / MACAddress.
// Called when PCIe-path enrichment does not produce any MACs.
func (r redfishSnapshotReader) enrichNICMACsFromNetworkDeviceFunctions(nic *models.NetworkAdapter, adapterDoc map[string]interface{}) {
	ndfCol, ok := adapterDoc["NetworkDeviceFunctions"].(map[string]interface{})
	if !ok {
		return
	}
	colPath := asString(ndfCol["@odata.id"])
	if colPath == "" {
		return
	}
	funcDocs, err := r.getCollectionMembers(colPath)
	if err != nil || len(funcDocs) == 0 {
		return
	}
	for _, fn := range funcDocs {
		eth, _ := fn["Ethernet"].(map[string]interface{})
		if eth == nil {
			continue
		}
		mac := strings.TrimSpace(firstNonEmpty(
			asString(eth["PermanentMACAddress"]),
			asString(eth["MACAddress"]),
		))
		if mac == "" {
			continue
		}
		nic.MACAddresses = dedupeStrings(append(nic.MACAddresses, strings.ToUpper(mac)))
	}
	if len(funcDocs) > 0 && nic.PortCount == 0 {
		nic.PortCount = sanitizeNetworkPortCount(len(funcDocs))
	}
}

// collectGPUsFromProcessors finds GPUs that some BMCs (e.g. MSI) expose as
// Processor entries with ProcessorType=GPU rather than as PCIe devices.
// It supplements the existing gpus slice (already found via PCIe path),
// skipping entries already present by UUID or SerialNumber.
// Serial numbers are looked up from Chassis members named after each GPU Id.
func (r redfishSnapshotReader) collectGPUsFromProcessors(systemPaths, chassisPaths []string, existing []models.GPU) []models.GPU {
	// Build a lookup: chassis member ID → chassis doc (for serial numbers).
	chassisByID := make(map[string]map[string]interface{})
	for _, cp := range chassisPaths {
		doc, err := r.getJSON(cp)
		if err != nil || len(doc) == 0 {
			continue
		}
		id := strings.TrimSpace(asString(doc["Id"]))
		if id != "" {
			chassisByID[strings.ToUpper(id)] = doc
		}
	}

	// Build dedup sets from existing GPUs.
	seenUUID := make(map[string]struct{})
	seenSerial := make(map[string]struct{})
	for _, g := range existing {
		if u := strings.ToUpper(strings.TrimSpace(g.UUID)); u != "" {
			seenUUID[u] = struct{}{}
		}
		if s := strings.ToUpper(strings.TrimSpace(g.SerialNumber)); s != "" {
			seenSerial[s] = struct{}{}
		}
	}

	out := append([]models.GPU{}, existing...)
	idx := len(existing) + 1

	for _, systemPath := range systemPaths {
		procDocs, err := r.getCollectionMembers(joinPath(systemPath, "/Processors"))
		if err != nil {
			continue
		}
		for _, doc := range procDocs {
			if !strings.EqualFold(strings.TrimSpace(asString(doc["ProcessorType"])), "GPU") {
				continue
			}

			// Resolve serial from Chassis/<Id>.
			gpuID := strings.TrimSpace(asString(doc["Id"]))
			serial := ""
			if chassisDoc, ok := chassisByID[strings.ToUpper(gpuID)]; ok {
				serial = strings.TrimSpace(asString(chassisDoc["SerialNumber"]))
			}

			uuid := strings.TrimSpace(asString(doc["UUID"]))
			uuidKey := strings.ToUpper(uuid)
			serialKey := strings.ToUpper(serial)

			if uuidKey != "" {
				if _, dup := seenUUID[uuidKey]; dup {
					continue
				}
				seenUUID[uuidKey] = struct{}{}
			}
			if serialKey != "" {
				if _, dup := seenSerial[serialKey]; dup {
					continue
				}
				seenSerial[serialKey] = struct{}{}
			}

			slotLabel := firstNonEmpty(
				redfishLocationLabel(doc["Location"]),
				redfishLocationLabel(doc["PhysicalLocation"]),
			)
			if slotLabel == "" && gpuID != "" {
				slotLabel = gpuID
			}
			if slotLabel == "" {
				slotLabel = fmt.Sprintf("GPU%d", idx)
			}

			out = append(out, models.GPU{
				Slot:         slotLabel,
				Model:        firstNonEmpty(asString(doc["Model"]), asString(doc["Name"])),
				Manufacturer: asString(doc["Manufacturer"]),
				PartNumber:   asString(doc["PartNumber"]),
				SerialNumber: serial,
				UUID:         uuid,
				Status:       mapStatus(doc["Status"]),
			})
			idx++
		}
	}
	return out
}

(File diff suppressed because it is too large.)
@@ -193,6 +193,7 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
		appendDevice(models.HardwareDevice{
			Kind:        models.DeviceKindGPU,
			Slot:        gpu.Slot,
			Location:    gpu.Location,
			BDF:         gpu.BDF,
			DeviceClass: "DisplayController",
			VendorID:    gpu.VendorID,
@@ -206,12 +207,27 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
			LinkSpeed:        gpu.CurrentLinkSpeed,
			MaxLinkWidth:     gpu.MaxLinkWidth,
			MaxLinkSpeed:     gpu.MaxLinkSpeed,
			TemperatureC:     gpu.Temperature,
			Status:           gpu.Status,
			StatusCheckedAt:  gpu.StatusCheckedAt,
			StatusChangedAt:  gpu.StatusChangedAt,
			StatusAtCollect:  gpu.StatusAtCollect,
			StatusHistory:    gpu.StatusHistory,
			ErrorDescription: gpu.ErrorDescription,
			Details: map[string]any{
				"uuid":            gpu.UUID,
				"video_bios":      gpu.VideoBIOS,
				"irq":             gpu.IRQ,
				"bus_type":        gpu.BusType,
				"dma_size":        gpu.DMASize,
				"dma_mask":        gpu.DMAMask,
				"device_minor":    gpu.DeviceMinor,
				"temperature":     gpu.Temperature,
				"mem_temperature": gpu.MemTemperature,
				"power":           gpu.Power,
				"max_power":       gpu.MaxPower,
				"clock_speed":     gpu.ClockSpeed,
			},
		})
	}
	for _, nic := range hw.NetworkAdapters {
@@ -292,8 +308,14 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
			continue
		}
		if curr.score > prev.score {
			curr.item = mergeCanonicalDevice(curr.item, prev.item)
			curr.score = canonicalScore(curr.item)
			byKey[key] = curr
			continue
		}
		prev.item = mergeCanonicalDevice(prev.item, curr.item)
		prev.score = canonicalScore(prev.item)
		byKey[key] = prev
	}
	out := make([]models.HardwareDevice, 0, len(order)+len(noKey))
	for _, key := range order {
@@ -306,6 +328,95 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
	return out
}

func mergeCanonicalDevice(primary, secondary models.HardwareDevice) models.HardwareDevice {
	fillString := func(dst *string, src string) {
		if strings.TrimSpace(*dst) == "" && strings.TrimSpace(src) != "" {
			*dst = src
		}
	}
	fillInt := func(dst *int, src int) {
		if *dst == 0 && src != 0 {
			*dst = src
		}
	}
	fillFloat := func(dst *float64, src float64) {
		if *dst == 0 && src != 0 {
			*dst = src
		}
	}

	fillString(&primary.Kind, secondary.Kind)
	fillString(&primary.Source, secondary.Source)
	fillString(&primary.Slot, secondary.Slot)
	fillString(&primary.Location, secondary.Location)
	fillString(&primary.BDF, secondary.BDF)
	fillString(&primary.DeviceClass, secondary.DeviceClass)
	fillInt(&primary.VendorID, secondary.VendorID)
	fillInt(&primary.DeviceID, secondary.DeviceID)
	fillString(&primary.Model, secondary.Model)
	fillString(&primary.PartNumber, secondary.PartNumber)
	fillString(&primary.Manufacturer, secondary.Manufacturer)
	fillString(&primary.SerialNumber, secondary.SerialNumber)
	fillString(&primary.Firmware, secondary.Firmware)
	fillString(&primary.Type, secondary.Type)
	fillString(&primary.Interface, secondary.Interface)
	if primary.Present == nil && secondary.Present != nil {
		primary.Present = secondary.Present
	}
	fillInt(&primary.SizeMB, secondary.SizeMB)
	fillInt(&primary.SizeGB, secondary.SizeGB)
	fillInt(&primary.Cores, secondary.Cores)
	fillInt(&primary.Threads, secondary.Threads)
	fillInt(&primary.FrequencyMHz, secondary.FrequencyMHz)
	fillInt(&primary.MaxFreqMHz, secondary.MaxFreqMHz)
	fillInt(&primary.PortCount, secondary.PortCount)
	fillString(&primary.PortType, secondary.PortType)
	if len(primary.MACAddresses) == 0 && len(secondary.MACAddresses) > 0 {
		primary.MACAddresses = secondary.MACAddresses
	}
	fillInt(&primary.LinkWidth, secondary.LinkWidth)
	fillString(&primary.LinkSpeed, secondary.LinkSpeed)
	fillInt(&primary.MaxLinkWidth, secondary.MaxLinkWidth)
	fillString(&primary.MaxLinkSpeed, secondary.MaxLinkSpeed)
	fillInt(&primary.WattageW, secondary.WattageW)
	fillString(&primary.InputType, secondary.InputType)
	fillInt(&primary.InputPowerW, secondary.InputPowerW)
	fillInt(&primary.OutputPowerW, secondary.OutputPowerW)
	fillFloat(&primary.InputVoltage, secondary.InputVoltage)
	fillInt(&primary.TemperatureC, secondary.TemperatureC)
	fillString(&primary.Status, secondary.Status)
	if primary.StatusCheckedAt == nil && secondary.StatusCheckedAt != nil {
		primary.StatusCheckedAt = secondary.StatusCheckedAt
	}
	if primary.StatusChangedAt == nil && secondary.StatusChangedAt != nil {
		primary.StatusChangedAt = secondary.StatusChangedAt
	}
	if primary.StatusAtCollect == nil && secondary.StatusAtCollect != nil {
		primary.StatusAtCollect = secondary.StatusAtCollect
	}
	if len(primary.StatusHistory) == 0 && len(secondary.StatusHistory) > 0 {
		primary.StatusHistory = secondary.StatusHistory
	}
	fillString(&primary.ErrorDescription, secondary.ErrorDescription)
	primary.Details = mergeDetailMaps(primary.Details, secondary.Details)
	return primary
}

func mergeDetailMaps(primary, secondary map[string]any) map[string]any {
	if len(secondary) == 0 {
		return primary
	}
	if primary == nil {
		primary = make(map[string]any, len(secondary))
	}
	for k, v := range secondary {
		if _, exists := primary[k]; !exists {
			primary[k] = v
		}
	}
	return primary
}
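The "primary wins, secondary fills gaps" rule that `mergeCanonicalDevice` applies field by field, and the key-wise map merge in `mergeDetailMaps`, reduce to two tiny helpers. A self-contained sketch (`fillString` matches the closure above; `mergeDetails` mirrors `mergeDetailMaps`; the sample values are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// fillString copies src into dst only when dst is blank — the same
// gap-filling rule mergeCanonicalDevice applies to every string field.
func fillString(dst *string, src string) {
	if strings.TrimSpace(*dst) == "" && strings.TrimSpace(src) != "" {
		*dst = src
	}
}

// mergeDetails copies into primary only the keys it lacks, so the
// higher-scored device's detail values are never overwritten.
func mergeDetails(primary, secondary map[string]any) map[string]any {
	if len(secondary) == 0 {
		return primary
	}
	if primary == nil {
		primary = make(map[string]any, len(secondary))
	}
	for k, v := range secondary {
		if _, exists := primary[k]; !exists {
			primary[k] = v
		}
	}
	return primary
}

func main() {
	model := "DL380"
	fillString(&model, "Other") // primary already set: unchanged
	serial := "  "
	fillString(&serial, "SN42") // blank primary: filled
	fmt.Println(model, serial)  // prints DL380 SN42

	d := mergeDetails(map[string]any{"irq": 16}, map[string]any{"irq": 99, "power": 75})
	fmt.Println(d["irq"], d["power"]) // prints 16 75
}
```

Because both helpers only ever fill empties, merging is asymmetric: `dedupeCanonicalDevices` first decides which record is primary via `canonicalScore`, and the merge can never degrade the winner's data.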

func canonicalKey(item models.HardwareDevice) string {
	if sn := normalizedSerial(item.SerialNumber); sn != "" {
		return "sn:" + strings.ToLower(sn)
@@ -344,7 +455,7 @@ func convertFirmware(firmware []models.FirmwareInfo) []ReanimatorFirmware {

	result := make([]ReanimatorFirmware, 0, len(firmware))
	for _, fw := range firmware {
		if isDeviceBoundFirmwareName(fw.DeviceName) {
		if isDeviceBoundFirmwareName(fw.DeviceName) || isDeviceBoundFirmwareFQDD(fw.Description) {
			continue
		}
		result = append(result, ReanimatorFirmware{
@@ -483,6 +594,23 @@ func convertPCIeFromDevices(devices []models.HardwareDevice, collectedAt string)
		if model == "" {
			model = d.PartNumber
		}
		temperatureC := d.TemperatureC
		if temperatureC == 0 {
			temperatureC = firstNonZeroInt(
				intFromDetailMap(d.Details, "temperature_c"),
				intFromDetailMap(d.Details, "temperature"),
			)
		}
		powerW := firstNonZeroInt(
			intFromDetailMap(d.Details, "power_w"),
			intFromDetailMap(d.Details, "power"),
		)
		voltageV := firstNonZeroFloat(
			floatFromDetailMap(d.Details, "voltage_v"),
			floatFromDetailMap(d.Details, "voltage"),
			floatFromDetailMap(d.Details, "input_voltage"),
			d.InputVoltage,
		)
		status := normalizeStatus(d.Status, false)
		meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusAtCollect, d.StatusHistory, d.ErrorDescription, collectedAt)
		result = append(result, ReanimatorPCIe{
@@ -499,6 +627,9 @@ func convertPCIeFromDevices(devices []models.HardwareDevice, collectedAt string)
			MaxLinkSpeed:    d.MaxLinkSpeed,
			SerialNumber:    normalizedSerial(d.SerialNumber),
			Firmware:        d.Firmware,
			TemperatureC:    temperatureC,
			PowerW:          powerW,
			VoltageV:        voltageV,
			Status:          status,
			StatusCheckedAt: meta.StatusCheckedAt,
			StatusChangedAt: meta.StatusChangedAt,
@@ -536,6 +667,7 @@ func convertPSUsFromDevices(devices []models.HardwareDevice, collectedAt string)
			InputPowerW:     d.InputPowerW,
			OutputPowerW:    d.OutputPowerW,
			InputVoltage:    d.InputVoltage,
			TemperatureC:    d.TemperatureC,
			StatusCheckedAt: meta.StatusCheckedAt,
			StatusChangedAt: meta.StatusChangedAt,
			StatusAtCollect: meta.StatusAtCollection,
@@ -558,13 +690,34 @@ func isDeviceBoundFirmwareName(name string) bool {
		strings.HasPrefix(n, "hdd ") ||
		strings.HasPrefix(n, "ssd ") ||
		strings.HasPrefix(n, "nvme ") ||
		strings.HasPrefix(n, "psu") {
		strings.HasPrefix(n, "psu") ||
		// HGX baseboard firmware inventory IDs for device-bound components
		strings.Contains(n, "_fw_gpu_") ||
		strings.Contains(n, "_fw_nvswitch_") ||
		strings.Contains(n, "_inforom_gpu_") {
		return true
	}

	return cpuMicrocodeFirmwareRegex.MatchString(strings.TrimSpace(name))
}

// isDeviceBoundFirmwareFQDD returns true if the description looks like a device-bound FQDD
// (e.g. NIC.Integrated.1-1-1, PSU.Slot.1, Disk.Bay.0:..., RAID.Backplane.Firmware.0).
// These firmware entries are already embedded in the device itself and must not appear
// in hardware.firmware.
func isDeviceBoundFirmwareFQDD(desc string) bool {
	d := strings.ToLower(strings.TrimSpace(desc))
	if d == "" {
		return false
	}
	for _, prefix := range []string{"nic.", "psu.", "disk.", "raid.backplane.", "gpu."} {
		if strings.HasPrefix(d, prefix) {
			return true
		}
	}
	return false
}
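The FQDD prefix filter above can be demonstrated standalone. A minimal sketch; the helper name `isDeviceBoundFQDD` and the sample FQDD strings are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// isDeviceBoundFQDD mirrors the prefix check above: Dell-style FQDDs such as
// "NIC.Integrated.1-1-1" or "PSU.Slot.1" identify firmware that belongs to a
// specific device, so it is filtered out of the flat firmware list rather
// than reported twice.
func isDeviceBoundFQDD(desc string) bool {
	d := strings.ToLower(strings.TrimSpace(desc))
	if d == "" {
		return false
	}
	for _, prefix := range []string{"nic.", "psu.", "disk.", "raid.backplane.", "gpu."} {
		if strings.HasPrefix(d, prefix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isDeviceBoundFQDD("NIC.Integrated.1-1-1")) // prints true
	fmt.Println(isDeviceBoundFQDD("BIOS.Setup.1-1"))       // prints false
}
```

Matching on the lowercased prefix (rather than an exact list of FQDDs) keeps the filter stable across slot numbering differences between servers.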
|
||||
// convertCPUs converts CPU information to Reanimator format
|
||||
func convertCPUs(cpus []models.CPU, collectedAt string) []ReanimatorCPU {
|
||||
if len(cpus) == 0 {
|
||||
@@ -804,6 +957,8 @@ func convertPCIeDevices(hw *models.HardwareConfig, collectedAt string) []Reanima
|
||||
MaxLinkSpeed: gpu.MaxLinkSpeed,
|
||||
SerialNumber: serialNumber,
|
||||
Firmware: gpu.Firmware,
|
||||
		TemperatureC: gpu.Temperature,
		PowerW: gpu.Power,
		Status: status,
		StatusCheckedAt: meta.StatusCheckedAt,
		StatusChangedAt: meta.StatusChangedAt,
@@ -954,6 +1109,7 @@ func convertPowerSupplies(psus []models.PSU, collectedAt string) []ReanimatorPSU
		InputPowerW: psu.InputPowerW,
		OutputPowerW: psu.OutputPowerW,
		InputVoltage: psu.InputVoltage,
+		TemperatureC: psu.TemperatureC,
		StatusCheckedAt: meta.StatusCheckedAt,
		StatusChangedAt: meta.StatusChangedAt,
		StatusAtCollect: meta.StatusAtCollection,
@@ -974,8 +1130,8 @@ type convertedStatusMeta struct {

func buildStatusMeta(
	currentStatus string,
-	checkedAt time.Time,
-	changedAt time.Time,
+	checkedAt *time.Time,
+	changedAt *time.Time,
	statusAtCollection *models.StatusAtCollection,
	history []models.StatusHistoryEntry,
	errorDescription string,
@@ -989,7 +1145,7 @@ func buildStatusMeta(

	convertedHistory := make([]ReanimatorStatusHistoryEntry, 0, len(history))
	for _, h := range history {
-		changed := formatOptionalRFC3339(h.ChangedAt)
+		changed := formatOptionalRFC3339(&h.ChangedAt)
		if changed == "" {
			continue
		}
@@ -1010,7 +1166,7 @@ func buildStatusMeta(
	}

	if statusAtCollection != nil {
-		at := formatOptionalRFC3339(statusAtCollection.At)
+		at := formatOptionalRFC3339(&statusAtCollection.At)
		if at != "" && strings.TrimSpace(statusAtCollection.Status) != "" {
			meta.StatusAtCollection = &ReanimatorStatusAtCollection{
				Status: normalizeStatus(statusAtCollection.Status, true),
@@ -1035,8 +1191,8 @@ func buildStatusMeta(
	return meta
}

-func formatOptionalRFC3339(t time.Time) string {
-	if t.IsZero() {
+func formatOptionalRFC3339(t *time.Time) string {
+	if t == nil || t.IsZero() {
		return ""
	}
	return t.UTC().Format(time.RFC3339)
@@ -1286,13 +1442,73 @@ func intFromDetailMap(details map[string]any, key string) int {
	switch n := v.(type) {
	case int:
		return n
	case int64:
		return int(n)
	case int32:
		return int(n)
	case float64:
		return int(n)
	case float32:
		return int(n)
	case string:
		i, err := strconv.Atoi(strings.TrimSpace(n))
		if err == nil {
			return i
		}
		return 0
	default:
		return 0
	}
}

func floatFromDetailMap(details map[string]any, key string) float64 {
	if details == nil {
		return 0
	}
	v, ok := details[key]
	if !ok {
		return 0
	}
	switch n := v.(type) {
	case float64:
		return n
	case float32:
		return float64(n)
	case int:
		return float64(n)
	case int64:
		return float64(n)
	case int32:
		return float64(n)
	case string:
		f, err := strconv.ParseFloat(strings.TrimSpace(n), 64)
		if err == nil {
			return f
		}
		return 0
	default:
		return 0
	}
}

func firstNonZeroInt(values ...int) int {
	for _, v := range values {
		if v != 0 {
			return v
		}
	}
	return 0
}

func firstNonZeroFloat(values ...float64) float64 {
	for _, v := range values {
		if v != 0 {
			return v
		}
	}
	return 0
}

// inferStorageStatus determines storage device status
func inferStorageStatus(stor models.Storage) string {
	if !stor.Present {

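The detail-map helpers above exist because BMC payloads carry the same vital (temperature, power) as `int`, `float64`, or a quoted string depending on vendor and protocol. A compact standalone sketch of the same coercion idea (name `intFromDetails` and the sample map are illustrative, not the project's API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// intFromDetails mirrors the tolerant coercion in the diff: accept int,
// float, or numeric string, and collapse anything unparseable to zero.
func intFromDetails(details map[string]any, key string) int {
	if details == nil {
		return 0
	}
	switch n := details[key].(type) {
	case int:
		return n
	case int64:
		return int(n)
	case float64:
		return int(n)
	case string:
		if i, err := strconv.Atoi(strings.TrimSpace(n)); err == nil {
			return i
		}
	}
	return 0
}

func main() {
	details := map[string]any{"temperature": 71, "power": "350", "voltage": 12.2}
	fmt.Println(intFromDetails(details, "temperature")) // 71
	fmt.Println(intFromDetails(details, "power"))       // 350
	fmt.Println(intFromDetails(details, "voltage"))     // 12
	fmt.Println(intFromDetails(details, "missing"))     // 0
}
```

Treating zero as "absent" is what makes the `firstNonZeroInt`/`firstNonZeroFloat` fallbacks above compose cleanly with these extractors.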
@@ -656,6 +656,43 @@ func TestConvertToReanimator_DeduplicatesAllSections(t *testing.T) {
	}
}

func TestConvertToReanimator_StatusFallbackUsesCollectedAt(t *testing.T) {
	collectedAt := time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC)
	input := &models.AnalysisResult{
		Filename: "status-fallback.json",
		CollectedAt: collectedAt,
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
			Storage: []models.Storage{
				{
					Slot: "U.2-1",
					Model: "PM9A3",
					SerialNumber: "SSD-001",
					Present: true,
					Status: "OK",
				},
			},
		},
	}

	out, err := ConvertToReanimator(input)
	if err != nil {
		t.Fatalf("ConvertToReanimator() failed: %v", err)
	}
	if len(out.Hardware.Storage) != 1 {
		t.Fatalf("expected 1 storage entry, got %d", len(out.Hardware.Storage))
	}

	wantTs := collectedAt.UTC().Format(time.RFC3339)
	got := out.Hardware.Storage[0]
	if got.StatusCheckedAt != wantTs {
		t.Fatalf("expected status_checked_at=%q, got %q", wantTs, got.StatusCheckedAt)
	}
	if got.StatusAtCollect == nil || got.StatusAtCollect.At != wantTs {
		t.Fatalf("expected status_at_collection.at=%q, got %#v", wantTs, got.StatusAtCollect)
	}
}

func TestConvertToReanimator_FirmwareExcludesDeviceBoundEntries(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "fw-filter-test.json",
@@ -737,4 +774,110 @@ func TestConvertToReanimator_UsesCanonicalDevices(t *testing.T) {
	}
}

func TestConvertToReanimator_BindsDeviceVitals(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "vitals.json",
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
			Devices: []models.HardwareDevice{
				{
					Kind: models.DeviceKindGPU,
					Slot: "#GPU0",
					Model: "B200 180GB HBM3e",
					SerialNumber: "GPU-001",
					BDF: "0000:17:00.0",
					Details: map[string]any{
						"temperature": 71,
						"power": 350,
						"voltage": 12.2,
					},
				},
				{
					Kind: models.DeviceKindPSU,
					Slot: "PSU0",
					SerialNumber: "PSU-001",
					Present: boolPtr(true),
					InputPowerW: 1400,
					OutputPowerW: 1300,
					InputVoltage: 229.5,
					TemperatureC: 44,
				},
			},
		},
	}

	out, err := ConvertToReanimator(input)
	if err != nil {
		t.Fatalf("ConvertToReanimator() failed: %v", err)
	}

	if len(out.Hardware.PCIeDevices) != 1 {
		t.Fatalf("expected one pcie device, got %d", len(out.Hardware.PCIeDevices))
	}
	pcie := out.Hardware.PCIeDevices[0]
	if pcie.TemperatureC != 71 {
		t.Fatalf("expected GPU temperature 71C, got %d", pcie.TemperatureC)
	}
	if pcie.PowerW != 350 {
		t.Fatalf("expected GPU power 350W, got %d", pcie.PowerW)
	}
	if pcie.VoltageV != 12.2 {
		t.Fatalf("expected device voltage 12.2V, got %.2f", pcie.VoltageV)
	}

	if len(out.Hardware.PowerSupplies) != 1 {
		t.Fatalf("expected one PSU, got %d", len(out.Hardware.PowerSupplies))
	}
	psu := out.Hardware.PowerSupplies[0]
	if psu.TemperatureC != 44 {
		t.Fatalf("expected PSU temperature 44C, got %d", psu.TemperatureC)
	}
}

func TestConvertToReanimator_PreservesVitalsAcrossCanonicalDedup(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "dedup-vitals.json",
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
			PCIeDevices: []models.PCIeDevice{
				{
					Slot: "#GPU0",
					BDF: "0000:17:00.0",
					DeviceClass: "3D Controller",
					PartNumber: "Generic Display",
					Manufacturer: "NVIDIA",
					SerialNumber: "GPU-SN-001",
				},
			},
			GPUs: []models.GPU{
				{
					Slot: "#GPU0",
					BDF: "0000:17:00.0",
					Model: "B200 180GB HBM3e",
					Manufacturer: "NVIDIA",
					SerialNumber: "GPU-SN-001",
					Temperature: 67,
					Power: 330,
					Status: "OK",
				},
			},
		},
	}

	out, err := ConvertToReanimator(input)
	if err != nil {
		t.Fatalf("ConvertToReanimator() failed: %v", err)
	}
	if len(out.Hardware.PCIeDevices) != 1 {
		t.Fatalf("expected deduped one pcie entry, got %d", len(out.Hardware.PCIeDevices))
	}
	got := out.Hardware.PCIeDevices[0]
	if got.TemperatureC != 67 {
		t.Fatalf("expected deduped GPU temperature 67C, got %d", got.TemperatureC)
	}
	if got.PowerW != 330 {
		t.Fatalf("expected deduped GPU power 330W, got %d", got.PowerW)
	}
}

func boolPtr(v bool) *bool { return &v }

@@ -118,6 +118,9 @@ type ReanimatorPCIe struct {
	MaxLinkSpeed string `json:"max_link_speed,omitempty"`
	SerialNumber string `json:"serial_number,omitempty"`
	Firmware string `json:"firmware,omitempty"`
+	TemperatureC int `json:"temperature_c,omitempty"`
+	PowerW int `json:"power_w,omitempty"`
+	VoltageV float64 `json:"voltage_v,omitempty"`
	Status string `json:"status,omitempty"`
	StatusCheckedAt string `json:"status_checked_at,omitempty"`
	StatusChangedAt string `json:"status_changed_at,omitempty"`
@@ -141,6 +144,7 @@ type ReanimatorPSU struct {
	InputPowerW int `json:"input_power_w,omitempty"`
	OutputPowerW int `json:"output_power_w,omitempty"`
	InputVoltage float64 `json:"input_voltage,omitempty"`
+	TemperatureC int `json:"temperature_c,omitempty"`
	StatusCheckedAt string `json:"status_checked_at,omitempty"`
	StatusChangedAt string `json:"status_changed_at,omitempty"`
	StatusAtCollect *ReanimatorStatusAtCollection `json:"status_at_collection,omitempty"`

@@ -13,6 +13,7 @@ type AnalysisResult struct {
	SourceType string `json:"source_type,omitempty"` // archive | api
	Protocol string `json:"protocol,omitempty"` // redfish | ipmi
	TargetHost string `json:"target_host,omitempty"` // BMC host for live collect
+	SourceTimezone string `json:"source_timezone,omitempty"` // Source timezone/offset used during collection (e.g. +08:00)
	CollectedAt time.Time `json:"collected_at,omitempty"` // Collection/upload timestamp
	RawPayloads map[string]any `json:"raw_payloads,omitempty"` // Additional source payloads (e.g. Redfish tree)
	Events []Event `json:"events"`
@@ -147,8 +148,8 @@ type HardwareDevice struct {
	TemperatureC int `json:"temperature_c,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -166,13 +167,14 @@ type FirmwareInfo struct {

// BoardInfo represents motherboard/system information
type BoardInfo struct {
-	Manufacturer string `json:"manufacturer,omitempty"`
-	ProductName string `json:"product_name,omitempty"`
-	Description string `json:"description,omitempty"`
-	SerialNumber string `json:"serial_number,omitempty"`
-	PartNumber string `json:"part_number,omitempty"`
-	Version string `json:"version,omitempty"`
-	UUID string `json:"uuid,omitempty"`
+	Manufacturer string `json:"manufacturer,omitempty"`
+	ProductName string `json:"product_name,omitempty"`
+	Description string `json:"description,omitempty"`
+	SerialNumber string `json:"serial_number,omitempty"`
+	PartNumber string `json:"part_number,omitempty"`
+	Version string `json:"version,omitempty"`
+	UUID string `json:"uuid,omitempty"`
+	BMCMACAddress string `json:"bmc_mac_address,omitempty"`
}

// CPU represents processor information
@@ -192,8 +194,8 @@ type CPU struct {
	SerialNumber string `json:"serial_number,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -216,8 +218,8 @@ type MemoryDIMM struct {
	Status string `json:"status,omitempty"`
	Ranks int `json:"ranks,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -239,8 +241,8 @@ type Storage struct {
	BackplaneID int `json:"backplane_id,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -277,8 +279,8 @@ type PCIeDevice struct {
	MACAddresses []string `json:"mac_addresses,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -313,8 +315,8 @@ type PSU struct {
	OutputVoltage float64 `json:"output_voltage,omitempty"`
	TemperatureC int `json:"temperature_c,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -351,8 +353,8 @@ type GPU struct {
	CurrentLinkSpeed string `json:"current_link_speed,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`
@@ -376,8 +378,8 @@ type NetworkAdapter struct {
	MACAddresses []string `json:"mac_addresses,omitempty"`
	Status string `json:"status,omitempty"`

-	StatusCheckedAt time.Time `json:"status_checked_at,omitempty"`
-	StatusChangedAt time.Time `json:"status_changed_at,omitempty"`
+	StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
+	StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
	StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`
	StatusHistory []StatusHistoryEntry `json:"status_history,omitempty"`
	ErrorDescription string `json:"error_description,omitempty"`

@@ -9,29 +9,45 @@ import (
	"io"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"
)

const maxSingleFileSize = 10 * 1024 * 1024
const maxZipArchiveSize = 50 * 1024 * 1024
const maxGzipDecompressedSize = 50 * 1024 * 1024

var supportedArchiveExt = map[string]struct{}{
	".gz": {},
	".tgz": {},
	".tar": {},
	".sds": {},
	".zip": {},
	".txt": {},
	".log": {},
}

// ExtractedFile represents a file extracted from archive
type ExtractedFile struct {
	Path string
	Content []byte
	ModTime time.Time
	Truncated bool
	TruncatedMessage string
}

// ExtractArchive extracts tar.gz or zip archive and returns file contents
func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
	if !IsSupportedArchiveFilename(archivePath) {
		return nil, fmt.Errorf("unsupported archive format: %s", strings.ToLower(filepath.Ext(archivePath)))
	}
	ext := strings.ToLower(filepath.Ext(archivePath))

	switch ext {
	case ".gz", ".tgz":
		return extractTarGz(archivePath)
-	case ".tar":
+	case ".tar", ".sds":
		return extractTar(archivePath)
	case ".zip":
		return extractZip(archivePath)
@@ -44,12 +60,15 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {

// ExtractArchiveFromReader extracts archive from reader
func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, error) {
	if !IsSupportedArchiveFilename(filename) {
		return nil, fmt.Errorf("unsupported archive format: %s", strings.ToLower(filepath.Ext(filename)))
	}
	ext := strings.ToLower(filepath.Ext(filename))

	switch ext {
	case ".gz", ".tgz":
		return extractTarGzFromReader(r, filename)
-	case ".tar":
+	case ".tar", ".sds":
		return extractTarFromReader(r)
	case ".zip":
		return extractZipFromReader(r)
@@ -60,6 +79,27 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
	}
}

// IsSupportedArchiveFilename reports whether filename extension is supported by archive extractor.
func IsSupportedArchiveFilename(filename string) bool {
	ext := strings.ToLower(strings.TrimSpace(filepath.Ext(filename)))
	if ext == "" {
		return false
	}
	_, ok := supportedArchiveExt[ext]
	return ok
}

// SupportedArchiveExtensions returns sorted list of archive/file extensions
// accepted by archive extractor.
func SupportedArchiveExtensions() []string {
	out := make([]string, 0, len(supportedArchiveExt))
	for ext := range supportedArchiveExt {
		out = append(out, ext)
	}
	sort.Strings(out)
	return out
}

func extractTarGz(archivePath string) ([]ExtractedFile, error) {
	f, err := os.Open(archivePath)
	if err != nil {
@@ -111,6 +151,7 @@ func extractTarFromReader(r io.Reader) ([]ExtractedFile, error) {
		files = append(files, ExtractedFile{
			Path: header.Name,
			Content: content,
+			ModTime: header.ModTime,
		})
	}

@@ -152,6 +193,7 @@ func extractTarGzFromReader(r io.Reader, filename string) ([]ExtractedFile, erro
		file := ExtractedFile{
			Path: baseName,
			Content: decompressed,
+			ModTime: gzr.ModTime,
		}
		if gzipTruncated {
			file.Truncated = true
@@ -180,6 +222,7 @@ func extractTarGzFromReader(r io.Reader, filename string) ([]ExtractedFile, erro
		files = append(files, ExtractedFile{
			Path: header.Name,
			Content: content,
+			ModTime: header.ModTime,
		})
	}
}
@@ -230,6 +273,7 @@ func extractZip(archivePath string) ([]ExtractedFile, error) {
		files = append(files, ExtractedFile{
			Path: f.Name,
			Content: content,
+			ModTime: f.Modified,
		})
	}

@@ -281,6 +325,7 @@ func extractZipFromReader(r io.Reader) ([]ExtractedFile, error) {
		files = append(files, ExtractedFile{
			Path: f.Name,
			Content: content,
+			ModTime: f.Modified,
		})
	}

@@ -288,13 +333,24 @@ func extractZipFromReader(r io.Reader) ([]ExtractedFile, error) {
}

func extractSingleFile(path string) ([]ExtractedFile, error) {
+	info, err := os.Stat(path)
+	if err != nil {
+		return nil, fmt.Errorf("stat file: %w", err)
+	}
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open file: %w", err)
	}
	defer f.Close()

-	return extractSingleFileFromReader(f, filepath.Base(path))
+	files, err := extractSingleFileFromReader(f, filepath.Base(path))
+	if err != nil {
+		return nil, err
+	}
+	if len(files) > 0 {
+		files[0].ModTime = info.ModTime()
+	}
+	return files, nil
}

func extractSingleFileFromReader(r io.Reader, filename string) ([]ExtractedFile, error) {

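The extension allow-list above only consults the final extension, so `dump.tar.gz` is accepted via `.gz`. A standalone sketch of that check (the map and helper name mirror the diff; this is illustrative, not the package itself):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// supportedExt mirrors the allow-list from the diff, including the
// newly added H3C ".sds" bundles.
var supportedExt = map[string]struct{}{
	".gz": {}, ".tgz": {}, ".tar": {}, ".sds": {}, ".zip": {}, ".txt": {}, ".log": {},
}

// isSupported reports whether the filename's final extension is accepted.
func isSupported(filename string) bool {
	ext := strings.ToLower(strings.TrimSpace(filepath.Ext(filename)))
	if ext == "" {
		return false
	}
	_, ok := supportedExt[ext]
	return ok
}

func main() {
	fmt.Println(isSupported("h3c_20250819.sds")) // true
	fmt.Println(isSupported("dump.tar.gz"))      // true (via ".gz")
	fmt.Println(isSupported("raw_export.json"))  // false
}
```

Checking the filename before sniffing content lets both `ExtractArchive` and `ExtractArchiveFromReader` reject unsupported uploads with the same error message.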
@@ -1,6 +1,7 @@
package parser

import (
+	"archive/tar"
	"bytes"
	"os"
	"path/filepath"
@@ -69,3 +70,57 @@ func TestExtractArchiveFromReaderTXT_TruncatedWhenTooLarge(t *testing.T) {
		t.Fatalf("expected truncation message")
	}
}

func TestIsSupportedArchiveFilename(t *testing.T) {
	cases := []struct {
		name string
		want bool
	}{
		{name: "dump.tar.gz", want: true},
		{name: "nvidia-bug-report-1651124000923.log.gz", want: true},
		{name: "snapshot.zip", want: true},
		{name: "h3c_20250819.sds", want: true},
		{name: "report.log", want: true},
		{name: "xigmanas.txt", want: true},
		{name: "raw_export.json", want: false},
		{name: "archive.bin", want: false},
	}

	for _, tc := range cases {
		got := IsSupportedArchiveFilename(tc.name)
		if got != tc.want {
			t.Fatalf("IsSupportedArchiveFilename(%q)=%v, want %v", tc.name, got, tc.want)
		}
	}
}

func TestExtractArchiveFromReaderSDS(t *testing.T) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)

	payload := []byte("STARTTIME:0\nENDTIME:0\n")
	if err := tw.WriteHeader(&tar.Header{
		Name: "bmc/pack.info",
		Mode: 0o600,
		Size: int64(len(payload)),
	}); err != nil {
		t.Fatalf("write tar header: %v", err)
	}
	if _, err := tw.Write(payload); err != nil {
		t.Fatalf("write tar payload: %v", err)
	}
	if err := tw.Close(); err != nil {
		t.Fatalf("close tar writer: %v", err)
	}

	files, err := ExtractArchiveFromReader(bytes.NewReader(buf.Bytes()), "sample.sds")
	if err != nil {
		t.Fatalf("extract sds from reader: %v", err)
	}
	if len(files) != 1 {
		t.Fatalf("expected 1 extracted file, got %d", len(files))
	}
	if files[0].Path != "bmc/pack.info" {
		t.Fatalf("expected bmc/pack.info, got %q", files[0].Path)
	}
}

@@ -9,7 +9,7 @@ type VendorParser interface {
	// Name returns human-readable parser name
	Name() string

-	// Vendor returns vendor identifier (e.g., "inspur", "supermicro", "dell")
+	// Vendor returns vendor identifier (e.g., "inspur", "dell", "h3c_g6")
	Vendor() string

	// Version returns parser version string

@@ -66,11 +66,41 @@ func (p *BMCParser) parseFiles() error {
	result.Filename = p.result.Filename

	appendExtractionWarnings(result, p.files)
+	if result.CollectedAt.IsZero() {
+		if ts := inferCollectedAtFromExtractedFiles(p.files); !ts.IsZero() {
+			result.CollectedAt = ts.UTC()
+		}
+	}
	p.result = result

	return nil
}

+func inferCollectedAtFromExtractedFiles(files []ExtractedFile) time.Time {
+	var latestReliable time.Time
+	var latestAny time.Time
+	for _, f := range files {
+		ts := f.ModTime
+		if ts.IsZero() {
+			continue
+		}
+		if latestAny.IsZero() || ts.After(latestAny) {
+			latestAny = ts
+		}
+		// Ignore placeholder archive mtimes like 1980-01-01.
+		if ts.Year() < 2000 {
+			continue
+		}
+		if latestReliable.IsZero() || ts.After(latestReliable) {
+			latestReliable = ts
+		}
+	}
+	if !latestReliable.IsZero() {
+		return latestReliable
+	}
+	return latestAny
+}

func appendExtractionWarnings(result *models.AnalysisResult, files []ExtractedFile) {
	if result == nil {
		return

@@ -2,6 +2,7 @@ package parser

import (
	"testing"
+	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)
@@ -32,3 +33,30 @@ func TestAppendExtractionWarnings(t *testing.T) {
		t.Fatalf("expected warning details in RawData")
	}
}

func TestInferCollectedAtFromExtractedFiles_PrefersReliableMTime(t *testing.T) {
	files := []ExtractedFile{
		{Path: "a.log", ModTime: time.Date(1980, 1, 1, 0, 0, 0, 0, time.UTC)},
		{Path: "b.log", ModTime: time.Date(2025, 12, 12, 10, 14, 49, 0, time.FixedZone("EST", -5*3600))},
		{Path: "c.log", ModTime: time.Date(2026, 2, 28, 4, 18, 18, 0, time.FixedZone("UTC+8", 8*3600))},
	}

	got := inferCollectedAtFromExtractedFiles(files)
	want := files[2].ModTime
	if !got.Equal(want) {
		t.Fatalf("expected %s, got %s", want, got)
	}
}

func TestInferCollectedAtFromExtractedFiles_FallsBackToAnyMTime(t *testing.T) {
	files := []ExtractedFile{
		{Path: "a.log", ModTime: time.Date(1980, 1, 1, 0, 0, 0, 0, time.UTC)},
		{Path: "b.log", ModTime: time.Date(1970, 1, 2, 0, 0, 0, 0, time.UTC)},
	}

	got := inferCollectedAtFromExtractedFiles(files)
	want := files[0].ModTime
	if !got.Equal(want) {
		t.Fatalf("expected fallback %s, got %s", want, got)
	}
}

33 internal/parser/timezone.go (Normal file)
@@ -0,0 +1,33 @@
package parser

import (
	"sync"
	"time"
)

const fallbackTimezoneName = "Europe/Moscow"

var (
	fallbackTimezoneOnce sync.Once
	fallbackTimezone *time.Location
)

// DefaultArchiveLocation returns the timezone used for source timestamps
// that do not contain an explicit offset.
func DefaultArchiveLocation() *time.Location {
	fallbackTimezoneOnce.Do(func() {
		loc, err := time.LoadLocation(fallbackTimezoneName)
		if err != nil {
			fallbackTimezone = time.FixedZone("MSK", 3*60*60)
			return
		}
		fallbackTimezone = loc
	})
	return fallbackTimezone
}

// ParseInDefaultArchiveLocation parses timestamps without timezone information
// using Europe/Moscow as the assumed source timezone.
func ParseInDefaultArchiveLocation(layout, value string) (time.Time, error) {
	return time.ParseInLocation(layout, value, DefaultArchiveLocation())
}
1493 internal/parser/vendors/dell/parser.go (vendored, Normal file)
File diff suppressed because it is too large
224 internal/parser/vendors/dell/parser_test.go (vendored, Normal file)
@@ -0,0 +1,224 @@
package dell

import (
	"archive/zip"
	"bytes"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectNestedTSRZip(t *testing.T) {
	inner := makeZipArchive(t, map[string][]byte{
		"tsr/metadata.json": []byte(`{"Make":"Dell Inc.","Model":"PowerEdge R750","ServiceTag":"G37Q064"}`),
		"tsr/hardware/sysinfo/inventory/sysinfo_DCIM_View.xml": []byte(`<CIM><MESSAGE><SIMPLEREQ/></MESSAGE></CIM>`),
	})

	p := &Parser{}
	score := p.Detect([]parser.ExtractedFile{
		{Path: "signature", Content: []byte("ok")},
		{Path: "TSR20241119143901_G37Q064.pl.zip", Content: inner},
	})
	if score < 80 {
		t.Fatalf("expected high detect score for nested TSR zip, got %d", score)
	}
}

func TestParseNestedTSRZip(t *testing.T) {
	const viewXML = `<CIM><MESSAGE><SIMPLEREQ>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_SystemView">
<PROPERTY NAME="Manufacturer"><VALUE>Dell Inc.</VALUE></PROPERTY>
<PROPERTY NAME="Model"><VALUE>PowerEdge R750</VALUE></PROPERTY>
<PROPERTY NAME="ServiceTag"><VALUE>G37Q064</VALUE></PROPERTY>
<PROPERTY NAME="BIOSVersionString"><VALUE>2.19.1</VALUE></PROPERTY>
<PROPERTY NAME="LifecycleControllerVersion"><VALUE>7.00.30.00</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_CPUView">
<PROPERTY NAME="FQDD"><VALUE>CPU.Socket.1</VALUE></PROPERTY>
<PROPERTY NAME="Model"><VALUE>Intel(R) Xeon(R) Gold 6330</VALUE></PROPERTY>
<PROPERTY NAME="Manufacturer"><VALUE>Intel</VALUE></PROPERTY>
<PROPERTY NAME="NumberOfEnabledCores"><VALUE>28</VALUE></PROPERTY>
<PROPERTY NAME="NumberOfEnabledThreads"><VALUE>56</VALUE></PROPERTY>
<PROPERTY NAME="CurrentClockSpeed"><VALUE>2000</VALUE></PROPERTY>
<PROPERTY NAME="MaxClockSpeed"><VALUE>3100</VALUE></PROPERTY>
<PROPERTY NAME="PPIN"><VALUE>ABCD</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_NICView">
<PROPERTY NAME="FQDD"><VALUE>NIC.Slot.1-1-1</VALUE></PROPERTY>
<PROPERTY NAME="ProductName"><VALUE>Broadcom 57414 Dual Port 10/25GbE SFP28 Adapter</VALUE></PROPERTY>
<PROPERTY NAME="VendorName"><VALUE>Broadcom</VALUE></PROPERTY>
<PROPERTY NAME="CurrentMACAddress"><VALUE>00:11:22:33:44:55</VALUE></PROPERTY>
<PROPERTY NAME="SerialNumber"><VALUE>NICSERIAL1</VALUE></PROPERTY>
<PROPERTY NAME="FamilyVersion"><VALUE>22.80.17</VALUE></PROPERTY>
<PROPERTY NAME="PCIVendorID"><VALUE>0x14e4</VALUE></PROPERTY>
<PROPERTY NAME="PCIDeviceID"><VALUE>0x16d7</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_PowerSupplyView">
<PROPERTY NAME="FQDD"><VALUE>PSU.Slot.1</VALUE></PROPERTY>
<PROPERTY NAME="Model"><VALUE>D1400E-S0</VALUE></PROPERTY>
<PROPERTY NAME="Manufacturer"><VALUE>Dell</VALUE></PROPERTY>
<PROPERTY NAME="SerialNumber"><VALUE>PSUSERIAL1</VALUE></PROPERTY>
<PROPERTY NAME="FirmwareVersion"><VALUE>00.1A</VALUE></PROPERTY>
<PROPERTY NAME="TotalOutputPower"><VALUE>1400</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_VideoView">
<PROPERTY NAME="FQDD"><VALUE>Video.Slot.38-1</VALUE></PROPERTY>
<PROPERTY NAME="MarketingName"><VALUE>NVIDIA H100 PCIe</VALUE></PROPERTY>
<PROPERTY NAME="Description"><VALUE>GH100 [H100 PCIe]</VALUE></PROPERTY>
<PROPERTY NAME="Manufacturer"><VALUE>NVIDIA Corporation</VALUE></PROPERTY>
<PROPERTY NAME="PCIVendorID"><VALUE>10DE</VALUE></PROPERTY>
<PROPERTY NAME="PCIDeviceID"><VALUE>2331</VALUE></PROPERTY>
<PROPERTY NAME="BusNumber"><VALUE>74</VALUE></PROPERTY>
<PROPERTY NAME="DeviceNumber"><VALUE>0</VALUE></PROPERTY>
<PROPERTY NAME="FunctionNumber"><VALUE>0</VALUE></PROPERTY>
<PROPERTY NAME="SerialNumber"><VALUE>1793924039808</VALUE></PROPERTY>
<PROPERTY NAME="FirmwareVersion"><VALUE>96.00.AF.00.01</VALUE></PROPERTY>
<PROPERTY NAME="GPUGUID"><VALUE>bc681a6d4785dde08c21f49c46c05cc3</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
</SIMPLEREQ></MESSAGE></CIM>`

	const swXML = `<CIM><MESSAGE><SIMPLEREQ>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_SoftwareIdentity">
<PROPERTY NAME="ElementName"><VALUE>NIC.Slot.1-1-1</VALUE></PROPERTY>
<PROPERTY NAME="VersionString"><VALUE>22.80.17</VALUE></PROPERTY>
<PROPERTY NAME="ComponentType"><VALUE>Network</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
</SIMPLEREQ></MESSAGE></CIM>`

	const eventsXML = `<Log>
<Event AgentID="Lifecycle Controller" Category="System Health" Severity="Warning" Timestamp="2024-11-19T14:39:01-0800">
<MessageID>SYS1001</MessageID>
<Message>Link is down</Message>
<FQDD>NIC.Slot.1-1-1</FQDD>
</Event>
</Log>`

	const cimSensorXML = `<CIM><MESSAGE><SIMPLEREQ>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="DCIM_GPUSensor">
<PROPERTY NAME="DeviceID"><VALUE>Video.Slot.38-1</VALUE></PROPERTY>
<PROPERTY NAME="PrimaryGPUTemperature"><VALUE>290</VALUE></PROPERTY>
<PROPERTY NAME="MemoryTemperature"><VALUE>440</VALUE></PROPERTY>
<PROPERTY NAME="PowerConsumption"><VALUE>295</VALUE></PROPERTY>
<PROPERTY NAME="ThermalAlertStatus"><VALUE>5</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
<VALUE.NAMEDINSTANCE><INSTANCE CLASSNAME="CIM_NumericSensor">
<PROPERTY NAME="ElementName"><VALUE>PS1 Voltage 1</VALUE></PROPERTY>
<PROPERTY NAME="CurrentReading"><VALUE>224.0</VALUE></PROPERTY>
<PROPERTY NAME="BaseUnits"><VALUE>5</VALUE></PROPERTY>
<PROPERTY NAME="UnitModifier"><VALUE>0</VALUE></PROPERTY>
<PROPERTY NAME="PrimaryStatus"><VALUE>5</VALUE></PROPERTY>
</INSTANCE></VALUE.NAMEDINSTANCE>
</SIMPLEREQ></MESSAGE></CIM>`

	inner := makeZipArchive(t, map[string][]byte{
		"tsr/metadata.json": []byte(`{
"Make":"Dell Inc.",
"Model":"PowerEdge R750",
"ServiceTag":"G37Q064",
"FirmwareVersion":"7.00.30.00",
"CollectionDateTime":"2024-11-19 14:39:01.000-0800"
}`),
		"tsr/hardware/sysinfo/inventory/sysinfo_DCIM_View.xml": []byte(viewXML),
		"tsr/hardware/sysinfo/inventory/sysinfo_DCIM_SoftwareIdentity.xml": []byte(swXML),
		"tsr/hardware/sysinfo/inventory/sysinfo_CIM_Sensor.xml": []byte(cimSensorXML),
		"tsr/hardware/sysinfo/lcfiles/curr_lclog.xml": []byte(eventsXML),
	})

	p := &Parser{}
	result, err := p.Parse([]parser.ExtractedFile{
		{Path: "signature", Content: []byte("ok")},
		{Path: "TSR20241119143901_G37Q064.pl.zip", Content: inner},
	})
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	if got := result.Hardware.BoardInfo.Manufacturer; got != "Dell Inc." {
		t.Fatalf("unexpected board manufacturer: %q", got)
	}
	if got := result.Hardware.BoardInfo.ProductName; got != "PowerEdge R750" {
		t.Fatalf("unexpected board product: %q", got)
|
||||
}
|
||||
if got := result.Hardware.BoardInfo.SerialNumber; got != "G37Q064" {
|
||||
t.Fatalf("unexpected service tag: %q", got)
|
||||
}
|
||||
|
||||
if len(result.Hardware.CPUs) != 1 {
|
||||
t.Fatalf("expected 1 cpu, got %d", len(result.Hardware.CPUs))
|
||||
}
|
||||
if got := result.Hardware.CPUs[0].Model; got != "Intel(R) Xeon(R) Gold 6330" {
|
||||
t.Fatalf("unexpected cpu model: %q", got)
|
||||
}
|
||||
|
||||
if len(result.Hardware.NetworkAdapters) != 1 {
|
||||
t.Fatalf("expected 1 network adapter, got %d", len(result.Hardware.NetworkAdapters))
|
||||
}
|
||||
adapter := result.Hardware.NetworkAdapters[0]
|
||||
if adapter.Vendor != "Broadcom" {
|
||||
t.Fatalf("unexpected nic vendor: %q", adapter.Vendor)
|
||||
}
|
||||
if adapter.Firmware != "22.80.17" {
|
||||
t.Fatalf("unexpected nic firmware: %q", adapter.Firmware)
|
||||
}
|
||||
if adapter.SerialNumber != "NICSERIAL1" {
|
||||
t.Fatalf("unexpected nic serial: %q", adapter.SerialNumber)
|
||||
}
|
||||
|
||||
if len(result.Hardware.PowerSupply) != 1 {
|
||||
t.Fatalf("expected 1 psu, got %d", len(result.Hardware.PowerSupply))
|
||||
}
|
||||
psu := result.Hardware.PowerSupply[0]
|
||||
if psu.Model != "D1400E-S0" {
|
||||
t.Fatalf("unexpected psu model: %q", psu.Model)
|
||||
}
|
||||
if psu.Firmware != "00.1A" {
|
||||
t.Fatalf("unexpected psu firmware: %q", psu.Firmware)
|
||||
}
|
||||
|
||||
if len(result.Hardware.Firmware) == 0 {
|
||||
t.Fatalf("expected firmware entries")
|
||||
}
|
||||
if len(result.Hardware.GPUs) != 1 {
|
||||
t.Fatalf("expected 1 gpu, got %d", len(result.Hardware.GPUs))
|
||||
}
|
||||
if got := result.Hardware.GPUs[0].Model; got != "NVIDIA H100 PCIe" {
|
||||
t.Fatalf("unexpected gpu model: %q", got)
|
||||
}
|
||||
if got := result.Hardware.GPUs[0].SerialNumber; got != "1793924039808" {
|
||||
t.Fatalf("unexpected gpu serial: %q", got)
|
||||
}
|
||||
if got := result.Hardware.GPUs[0].Temperature; got != 29 {
|
||||
t.Fatalf("unexpected gpu temperature: %d", got)
|
||||
}
|
||||
if len(result.Sensors) == 0 {
|
||||
t.Fatalf("expected sensors from CIM_Sensor")
|
||||
}
|
||||
if len(result.Events) != 1 {
|
||||
t.Fatalf("expected one lifecycle event, got %d", len(result.Events))
|
||||
}
|
||||
if got := string(result.Events[0].Severity); got != "warning" {
|
||||
t.Fatalf("unexpected event severity: %q", got)
|
||||
}
|
||||
}
|
||||
|
||||
func makeZipArchive(t *testing.T, files map[string][]byte) []byte {
|
||||
t.Helper()
|
||||
var buf bytes.Buffer
|
||||
zw := zip.NewWriter(&buf)
|
||||
for name, content := range files {
|
||||
w, err := zw.Create(name)
|
||||
if err != nil {
|
||||
t.Fatalf("create zip entry %s: %v", name, err)
|
||||
}
|
||||
if _, err := w.Write(content); err != nil {
|
||||
t.Fatalf("write zip entry %s: %v", name, err)
|
||||
}
|
||||
}
|
||||
if err := zw.Close(); err != nil {
|
||||
t.Fatalf("close zip: %v", err)
|
||||
}
|
||||
return buf.Bytes()
|
||||
}
|
||||
2	internal/parser/vendors/generic/parser.go	vendored

@@ -10,7 +10,7 @@ import (
 )
 
 // parserVersion - version of this parser module
-const parserVersion = "1.0.0"
+const parserVersion = "1.1"
 
 func init() {
 	parser.Register(&Parser{})
3516	internal/parser/vendors/h3c/parser.go	vendored	Normal file

File diff suppressed because it is too large

962	internal/parser/vendors/h3c/parser_test.go	vendored	Normal file

@@ -0,0 +1,962 @@
package h3c

import (
	"strings"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectH3C_GenerationRouting(t *testing.T) {
	g5 := &G5Parser{}
	g6 := &G6Parser{}

	g5Files := []parser.ExtractedFile{
		{Path: "bmc/pack.info", Content: []byte("STARTTIME:0")},
		{Path: "static/FRUInfo.ini", Content: []byte("[Baseboard]\nBoard Manufacturer=H3C\n")},
		{Path: "static/hardware_info.ini", Content: []byte("[Processors: Processor 1]\nModel: Intel Xeon\n")},
		{Path: "static/hardware.info", Content: []byte("[Disk_0_Front_NA]\nSerialNumber=DISK-0\n")},
		{Path: "static/firmware_version.ini", Content: []byte("[System board]\nBIOS Version: 5.59\n")},
		{Path: "user/test1.csv", Content: []byte("Record Time Stamp,DescInfo\n2025-01-01 00:00:00,foo\n")},
	}
	if gotG5, gotG6 := g5.Detect(g5Files), g6.Detect(g5Files); gotG5 <= gotG6 {
		t.Fatalf("expected G5 confidence > G6 for G5 sample, got g5=%d g6=%d", gotG5, gotG6)
	}

	g6Files := []parser.ExtractedFile{
		{Path: "bmc/pack.info", Content: []byte("STARTTIME:0")},
		{Path: "static/FRUInfo.ini", Content: []byte("[Baseboard]\nBoard Manufacturer=H3C\n")},
		{Path: "static/board_info.ini", Content: []byte("[System board]\nBoardMfr=H3C\n")},
		{Path: "static/firmware_version.json", Content: []byte(`{"BIOS":{"Firmware Name":"BIOS","Firmware Version":"6.10"}}`)},
		{Path: "static/CPUDetailInfo.xml", Content: []byte("<Root><CPU1><Model>X</Model></CPU1></Root>")},
		{Path: "static/MemoryDetailInfo.xml", Content: []byte("<Root><DIMM1><Name>A0</Name></DIMM1></Root>")},
		{Path: "user/Sel.json", Content: []byte(`{"Id":1}`)},
	}
	if gotG5, gotG6 := g5.Detect(g6Files), g6.Detect(g6Files); gotG6 <= gotG5 {
		t.Fatalf("expected G6 confidence > G5 for G6 sample, got g5=%d g6=%d", gotG5, gotG6)
	}
}

func TestParseH3CG6_RaidAndNVMeEnrichment(t *testing.T) {
	p := &G6Parser{}
	files := []parser.ExtractedFile{
		{
			Path: "static/storage_disk.ini",
			Content: []byte(`[Disk_000]
DiskSlotDesc=Front0
Present=YES
SerialNumber=SER-0
`),
		},
		{
			Path: "static/raid.json",
			Content: []byte(`{
"RaidConfig": {
"CtrlInfo": [
{
"CtrlSlot": 1,
"CtrlName": "RAID-LSI-9560",
"LDInfo": [
{
"LDID": "0",
"LDName": "VD0",
"RAIDLevel": "1",
"CapacityBytes": 1000000000,
"Status": "Optimal"
}
]
}
]
}
}`),
		},
		{
			Path: "static/Storage_RAID-LSI-9560-LP-8i-4GB[1].txt",
			Content: []byte(`Controller Information
------------------------------------------------------------------------
AssetTag : RAID-LSI-9560

Logical Device Information
------------------------------------------------------------------------
LDID : 0
Name : VD0
RAID Level : 1
CapacityBytes : 1000000000
Status : Optimal

Physical Device Information
------------------------------------------------------------------------
ConnectionID : 0
Position : Front0
StatusIndicator : OK
Protocol : SATA
MediaType : SSD
Manufacturer : Samsung
Model : PM893
Revision : GDC1
SerialNumber : SER-0
CapacityBytes : 480000000000

ConnectionID : 1
Position : Front1
StatusIndicator : OK
Protocol : SATA
MediaType : SSD
Manufacturer : Samsung
Model : PM893
Revision : GDC1
SerialNumber : SER-1
CapacityBytes : 480000000000
`),
		},
		{
			Path: "static/NVMe_info.txt",
			Content: []byte(`[NVMe_0]
Present=YES
DiskSlotDesc=Front2
Model=INTEL SSDPE2KX010T8
SerialNumber=NVME-1
Firmware=V100
CapacityBytes=1000204886016
Interface=NVMe
Status=OK
`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	if len(result.Hardware.Volumes) != 1 {
		t.Fatalf("expected 1 volume, got %d", len(result.Hardware.Volumes))
	}
	vol := result.Hardware.Volumes[0]
	if vol.RAIDLevel != "RAID1" {
		t.Fatalf("expected RAID1 level, got %q", vol.RAIDLevel)
	}
	if vol.SizeGB != 1 {
		t.Fatalf("expected 1GB logical volume, got %d", vol.SizeGB)
	}

	if len(result.Hardware.Storage) != 3 {
		t.Fatalf("expected 3 unique storage devices, got %d", len(result.Hardware.Storage))
	}

	var front0 *models.Storage
	var nvme *models.Storage
	for i := range result.Hardware.Storage {
		s := &result.Hardware.Storage[i]
		if strings.EqualFold(s.SerialNumber, "SER-0") {
			front0 = s
		}
		if strings.EqualFold(s.SerialNumber, "NVME-1") {
			nvme = s
		}
	}
	if front0 == nil {
		t.Fatalf("expected merged Front0 disk by serial SER-0")
	}
	if front0.Model != "PM893" {
		t.Fatalf("expected Front0 model PM893, got %q", front0.Model)
	}
	if front0.SizeGB != 480 {
		t.Fatalf("expected Front0 size 480GB, got %d", front0.SizeGB)
	}
	if nvme == nil {
		t.Fatalf("expected NVMe disk by serial NVME-1")
	}
	if nvme.Type != "nvme" {
		t.Fatalf("expected nvme type, got %q", nvme.Type)
	}
}

func TestParseH3CG6(t *testing.T) {
	p := &G6Parser{}

	files := []parser.ExtractedFile{
		{
			Path: "static/FRUInfo.ini",
			Content: []byte(`[Baseboard]
Board Manufacturer=H3C
Board Product Name=RS36M2C6SB
Product Product Name=H3C UniServer R4700 G6
Product Serial Number=210235A4FYH257000010
Product Part Number=0235A4FY
`),
		},
		{
			Path: "static/firmware_version.json",
			Content: []byte(`{
"BMCP": {"Firmware Name":"HDM","Firmware Version":"1.83","Location":"bmc card","Part Model":"-"},
"BIOS": {"Firmware Name":"BIOS","Firmware Version":"6.10.53","Location":"system board","Part Model":"-"}
}`),
		},
		{
			Path: "static/CPUDetailInfo.xml",
			Content: []byte(`<Root>
<CPU1>
<Status>Presence</Status>
<Model>INTEL(R) XEON(R) GOLD 6542Y</Model>
<ProcessorSpeed>0xb54</ProcessorSpeed>
<ProcessorMaxSpeed>0x1004</ProcessorMaxSpeed>
<TotalCores>0x18</TotalCores>
<TotalThreads>0x30</TotalThreads>
<SerialNumber>68-5C-81-C1-0E-A3-4E-40</SerialNumber>
<PPIN>68-5C-81-C1-0E-A3-4E-40</PPIN>
</CPU1>
</Root>`),
		},
		{
			Path: "static/MemoryDetailInfo.xml",
			Content: []byte(`<Root>
<DIMM1>
<Status>Presence</Status>
<Name>CPU1_CH1_D0 (A0)</Name>
<PartNumber>M321R8GA0PB0-CWMXJ</PartNumber>
<DIMMTech>RDIMM</DIMMTech>
<SerialNumber>80CE032519135C82ED</SerialNumber>
<DIMMRanks>0x2</DIMMRanks>
<DIMMSize>0x10000</DIMMSize>
<CurFreq>0x1130</CurFreq>
<MaxFreq>0x15e0</MaxFreq>
<DIMMSilk>A0</DIMMSilk>
</DIMM1>
</Root>`),
		},
		{
			Path: "static/storage_disk.ini",
			Content: []byte(`[Disk_000]
SerialNumber=S6KLNN0Y516813
DiskSlotDesc=Front0
Present=YES
`),
		},
		{
			Path: "static/net_cfg.ini",
			Content: []byte(`[Network Configuration]
eth0 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F6
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0.2 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F6
inet6 addr: fe80::32c6:d7ff:fe94:54f6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1496 Metric:1

eth1 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F5
inet addr:10.201.129.0 Bcast:10.201.143.255 Mask:255.255.240.0
inet6 addr: fe80::32c6:d7ff:fe94:54f5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
`),
		},
		{
			Path: "static/psu_cfg.ini",
			Content: []byte(`[Psu0]
SN=210231AGUNH257001569
Max_Power(W)=1600
Manufacturer=Great Wall
Power Status=Input Normal, Output Normal
Present_Status=Present
Power_ID=1
Model=GW-CRPS1600D2
Version=03.02.00

[Psu1]
Manufacturer=Great Wall
Power_ID=2
Version=03.02.00
Power Status=Input Normal, Output Normal
SN=210231AGUNH257001570
Model=GW-CRPS1600D2
Present_Status=Present
Max_Power(W)=1600
`),
		},
		{
			Path: "static/hardware_info.ini",
			Content: []byte(`[Ethernet adapters: Port 1]
Device Type : NIC
Network Port : Port 1
Location : PCIE-[1]
MAC Address : E4:3D:1A:6F:B0:30
Speed : 8.0GT/s
Product Name : NIC-BCM957414-F-B-25Gb-2P
[Ethernet adapters: Port 2]
Device Type : NIC
Network Port : Port 2
Location : PCIE-[1]
MAC Address : E4:3D:1A:6F:B0:31
Speed : 8.0GT/s
Product Name : NIC-BCM957414-F-B-25Gb-2P

[PCIe Card: PCIe 1]
Location : 1
Product Name : NIC-BCM957414-F-B-25Gb-2P
Status : Normal
Vendor ID : 0x14E4
Device ID : 0x16D7
Serial Number : NICSN-G6-001
Part Number : NICPN-G6-001
Firmware Version : 22.35.1010
`),
		},
		{
			Path: "static/sensor_info.ini",
			Content: []byte(`Sensor Name | Reading | Unit | Status| Crit low
Inlet_Temp | 20.000 | degrees C | ok | na
CPU1_Status | 0x0 | discrete | 0x8080| na
`),
		},
		{
			Path: "user/Sel.json",
			Content: []byte(`
{
"Created": "2025-07-14 03:34:18 UTC+08:00",
"Severity": "Info",
"EntryCode": "Asserted",
"EntryType": "Event",
"Id": 1,
"Level": "Info",
"Message": "Processor Presence detected",
"SensorName": "CPU1_Status",
"SensorType": "Processor"
},
{
"Created": "2025-07-14 20:56:45 UTC+08:00",
"Severity": "Critical",
"EntryCode": "Asserted",
"EntryType": "Event",
"Id": 2,
"Level": "Critical",
"Message": "Power Supply AC lost",
"SensorName": "PSU1_Status",
"SensorType": "Power Supply"
}
`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}

	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}
	if result.Hardware.BoardInfo.Manufacturer != "H3C" {
		t.Fatalf("unexpected board manufacturer: %q", result.Hardware.BoardInfo.Manufacturer)
	}
	if result.Hardware.BoardInfo.ProductName != "H3C UniServer R4700 G6" {
		t.Fatalf("unexpected board product: %q", result.Hardware.BoardInfo.ProductName)
	}
	if result.Hardware.BoardInfo.SerialNumber != "210235A4FYH257000010" {
		t.Fatalf("unexpected board serial: %q", result.Hardware.BoardInfo.SerialNumber)
	}

	if len(result.Hardware.Firmware) < 2 {
		t.Fatalf("expected firmware entries, got %d", len(result.Hardware.Firmware))
	}
	if len(result.Hardware.CPUs) != 1 {
		t.Fatalf("expected 1 cpu, got %d", len(result.Hardware.CPUs))
	}
	if result.Hardware.CPUs[0].Cores != 24 {
		t.Fatalf("expected 24 cores, got %d", result.Hardware.CPUs[0].Cores)
	}

	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected 1 dimm, got %d", len(result.Hardware.Memory))
	}
	if result.Hardware.Memory[0].SizeMB != 65536 {
		t.Fatalf("expected 65536MB, got %d", result.Hardware.Memory[0].SizeMB)
	}

	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected 1 disk, got %d", len(result.Hardware.Storage))
	}
	if result.Hardware.Storage[0].SerialNumber != "S6KLNN0Y516813" {
		t.Fatalf("unexpected disk serial: %q", result.Hardware.Storage[0].SerialNumber)
	}
	if len(result.Hardware.PowerSupply) != 2 {
		t.Fatalf("expected 2 PSUs from psu_cfg.ini, got %d", len(result.Hardware.PowerSupply))
	}
	if result.Hardware.PowerSupply[0].WattageW == 0 {
		t.Fatalf("expected PSU wattage parsed, got 0")
	}

	if len(result.Hardware.NetworkAdapters) != 1 {
		t.Fatalf("expected 1 host network adapter from hardware_info.ini, got %d", len(result.Hardware.NetworkAdapters))
	}
	macs := make(map[string]struct{})
	var hostNIC models.NetworkAdapter
	var hostNICFound bool
	for _, nic := range result.Hardware.NetworkAdapters {
		if len(nic.MACAddresses) == 0 {
			t.Fatalf("expected MAC on network adapter %+v", nic)
		}
		for _, mac := range nic.MACAddresses {
			macs[strings.ToLower(mac)] = struct{}{}
		}
		if strings.EqualFold(nic.Slot, "PCIe 1") && strings.Contains(strings.ToLower(nic.Model), "bcm957414") {
			hostNIC = nic
			hostNICFound = true
		}
	}
	if !hostNICFound {
		t.Fatalf("expected host NIC from hardware_info.ini, got %+v", result.Hardware.NetworkAdapters)
	}
	if _, ok := macs["e4:3d:1a:6f:b0:30"]; !ok {
		t.Fatalf("expected host NIC MAC e4:3d:1a:6f:b0:30 in adapters, got %+v", result.Hardware.NetworkAdapters)
	}
	if _, ok := macs["e4:3d:1a:6f:b0:31"]; !ok {
		t.Fatalf("expected host NIC MAC e4:3d:1a:6f:b0:31 in adapters, got %+v", result.Hardware.NetworkAdapters)
	}
	if !strings.Contains(strings.ToLower(hostNIC.Vendor), "broadcom") {
		t.Fatalf("expected host NIC vendor enrichment from Vendor ID, got %q", hostNIC.Vendor)
	}
	if hostNIC.SerialNumber != "NICSN-G6-001" {
		t.Fatalf("expected host NIC serial from PCIe card section, got %q", hostNIC.SerialNumber)
	}
	if hostNIC.PartNumber != "NICPN-G6-001" {
		t.Fatalf("expected host NIC part number from PCIe card section, got %q", hostNIC.PartNumber)
	}
	if hostNIC.Firmware != "22.35.1010" {
		t.Fatalf("expected host NIC firmware from PCIe card section, got %q", hostNIC.Firmware)
	}

	if len(result.Sensors) != 2 {
		t.Fatalf("expected 2 sensors, got %d", len(result.Sensors))
	}
	if result.Sensors[0].Name != "Inlet_Temp" {
		t.Fatalf("unexpected first sensor: %q", result.Sensors[0].Name)
	}

	if len(result.Events) != 2 {
		t.Fatalf("expected 2 events, got %d", len(result.Events))
	}
	if result.Events[0].Timestamp.Year() != 2025 || result.Events[0].Timestamp.Month() != 7 {
		t.Fatalf("expected SEL timestamp from payload, got %s", result.Events[0].Timestamp)
	}
	if result.Events[1].Severity != models.SeverityCritical {
		t.Fatalf("expected critical severity for AC lost event, got %q", result.Events[1].Severity)
	}
}

func TestParseH3CG5_PCIeArgumentsEnrichesNonNVMeStorage(t *testing.T) {
	p := &G5Parser{}
	files := []parser.ExtractedFile{
		{
			Path: "static/storage_disk.ini",
			Content: []byte(`[Disk_000]
DiskSlotDesc=Front slot 3
Present=YES
SerialNumber=SAT-03
`),
		},
		{
			Path: "static/NVMe_info.txt",
			Content: []byte(`[NVMe_0]
Present=YES
DiskSlotDesc=Front slot 108
SerialNumber=NVME-108
`),
		},
		{
			Path: "static/PCIe_arguments_table.xml",
			Content: []byte(`<root>
<PCIE100>
<base_args>
<type>SSD</type>
<name>SSD-SATA-960G</name>
</base_args>
<type_get_args>
<bios_args>
<vendor_id>0x144D</vendor_id>
</bios_args>
</type_get_args>
</PCIE100>
<PCIE200>
<base_args>
<type>SSD</type>
<name>SSD-3.84T-NVMe-SFF</name>
</base_args>
<type_get_args>
<bios_args>
<vendor_id>0x144D</vendor_id>
</bios_args>
</type_get_args>
</PCIE200>
</root>`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	if len(result.Hardware.Storage) != 2 {
		t.Fatalf("expected 2 storage devices, got %d", len(result.Hardware.Storage))
	}

	var sata *models.Storage
	var nvme *models.Storage
	for i := range result.Hardware.Storage {
		s := &result.Hardware.Storage[i]
		switch s.SerialNumber {
		case "SAT-03":
			sata = s
		case "NVME-108":
			nvme = s
		}
	}

	if sata == nil {
		t.Fatalf("expected SATA storage SAT-03")
	}
	if sata.Model != "SSD-SATA-960G" {
		t.Fatalf("expected SATA model enrichment from PCIe table, got %q", sata.Model)
	}
	if !strings.Contains(strings.ToLower(sata.Manufacturer), "samsung") {
		t.Fatalf("expected SATA vendor enrichment to Samsung, got %q", sata.Manufacturer)
	}

	if nvme == nil {
		t.Fatalf("expected NVMe storage NVME-108")
	}
	if nvme.Model != "SSD-3.84T-NVMe-SFF" {
		t.Fatalf("expected NVMe model enrichment from PCIe table, got %q", nvme.Model)
	}
	if !strings.Contains(strings.ToLower(nvme.Manufacturer), "samsung") {
		t.Fatalf("expected NVMe vendor enrichment to Samsung, got %q", nvme.Manufacturer)
	}
}

func TestParseH3CG5_VariantLayout(t *testing.T) {
|
||||
p := &G5Parser{}
|
||||
|
||||
files := []parser.ExtractedFile{
|
||||
{
|
||||
Path: "static/FRUInfo.ini",
|
||||
Content: []byte(`[Baseboard]
|
||||
Board Manufacturer=H3C
|
||||
Product Product Name=H3C UniServer R4900 G5
|
||||
Product Serial Number=02A6AX5231C003VM
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/firmware_version.ini",
|
||||
Content: []byte(`[System board]
|
||||
BIOS Version : 5.59 V100R001B05D078
|
||||
ME Version : 4.4.4.202
|
||||
HDM Version : 3.34.01 HDM V100R001B05D078SP01
|
||||
CPLD Version : V00C
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/board_cfg.ini",
|
||||
Content: []byte(`[Board Type]
|
||||
Board Type : R4900 G5
|
||||
|
||||
[Board Version]
|
||||
Board Version : VER.D
|
||||
|
||||
[Customer ID]
|
||||
CustomerID : 255
|
||||
|
||||
[OEM ID]
|
||||
OEM Flag : 1
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/hardware_info.ini",
|
||||
Content: []byte(`[Processors: Processor 1]
|
||||
Model : Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
|
||||
Status : Normal
|
||||
Frequency : 2800 MHz
|
||||
Cores : 24
|
||||
Threads : 48
|
||||
L1 Cache : 1920 KB
|
||||
L2 Cache : 30720 KB
|
||||
L3 Cache : 36864 KB
|
||||
CPU PPIN : 49-A9-50-C0-15-9F-2D-DC
|
||||
|
||||
[Processors: Processor 2]
|
||||
Model : Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
|
||||
Status : Normal
|
||||
Frequency : 2800 MHz
|
||||
Cores : 24
|
||||
Threads : 48
|
||||
CPU PPIN : 49-AC-3D-BF-85-7F-17-58
|
||||
|
||||
[Memory Details: Dimm Index 0]
|
||||
Location : Processor 1
|
||||
Channel : 1
|
||||
Socket ID : A0
|
||||
Status : Normal
|
||||
Size : 65536 MB
|
||||
Maximum Frequency : 3200 MHz
|
||||
Type : DDR4
|
||||
Ranks : 2R DIMM
|
||||
Technology : RDIMM
|
||||
Part Number : M393A8G40AB2-CWE
|
||||
Manufacture : Samsung
|
||||
Serial Number : S02K0D0243351D7079
|
||||
|
||||
[Memory Details: Dimm Index 16]
|
||||
Location : Processor 2
|
||||
Channel : 1
|
||||
Socket ID : A0
|
||||
Status : Normal
|
||||
Size : 65536 MB
|
||||
Maximum Frequency : 3200 MHz
|
||||
Type : DDR4
|
||||
Ranks : 2R DIMM
|
||||
Technology : RDIMM
|
||||
Part Number : M393A8G40AB2-CWE
|
||||
Manufacture : Samsung
|
||||
Serial Number : S02K0D0243351D73F0
|
||||
|
||||
[Ethernet adapters: Port 1]
|
||||
Device Type : NIC
|
||||
Network Port : Port 1
|
||||
Location : PCIE-[1]
|
||||
MAC Address : E4:3D:1A:6F:B0:30
|
||||
Speed : 8.0GT/s
|
||||
Product Name : NIC-BCM957414-F-B-25Gb-2P
|
||||
[Ethernet adapters: Port 2]
|
||||
Device Type : NIC
|
||||
Network Port : Port 2
|
||||
Location : PCIE-[1]
|
||||
MAC Address : E4:3D:1A:6F:B0:31
|
||||
Speed : 8.0GT/s
|
||||
Product Name : NIC-BCM957414-F-B-25Gb-2P
|
||||
|
||||
[Ethernet adapters: Port 1]
|
||||
Device Type : NIC
|
||||
Network Port : Port 1
|
||||
Location : PCIE-[4]
|
||||
MAC Address : E8:EB:D3:4F:2E:90
|
||||
Speed : 8.0GT/s
|
||||
Product Name : NIC-MCX512A-ACAT-2*25Gb-F
|
||||
[Ethernet adapters: Port 2]
|
||||
Device Type : NIC
|
||||
Network Port : Port 2
|
||||
Location : PCIE-[4]
|
||||
MAC Address : E8:EB:D3:4F:2E:91
|
||||
Speed : 8.0GT/s
|
||||
Product Name : NIC-MCX512A-ACAT-2*25Gb-F
|
||||
|
||||
[PCIe Card: PCIe 1]
|
||||
Location : 1
|
||||
Product Name : NIC-BCM957414-F-B-25Gb-2P
|
||||
Status : Normal
|
||||
Vendor ID : 0x14E4
|
||||
Device ID : 0x16D7
|
||||
Serial Number : NICSN-G5-001
|
||||
Part Number : NICPN-G5-001
|
||||
Firmware Version : 21.80.1
|
||||
|
||||
[PCIe Card: PCIe 4]
|
||||
Location : 4
|
||||
Product Name : NIC-MCX512A-ACAT-2*25Gb-F
|
||||
Status : Normal
|
||||
Vendor ID : 0x15B3
|
||||
Device ID : 0x1017
|
||||
Serial Number : NICSN-G5-004
|
||||
Part Number : NICPN-G5-004
|
||||
Firmware Version : 28.33.15
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/hardware.info",
|
||||
Content: []byte(`[Disk_0_Front_NA]
|
||||
Present=YES
|
||||
SlotNum=0
|
||||
FrontOrRear=Front
|
||||
SerialNumber=22443C4EE184
|
||||
|
||||
[Nvme_Front slot 21]
|
||||
Present=YES
|
||||
NvmePhySlot=Front slot 21
|
||||
SlotNum=121
|
||||
SerialNumber=NVME-21
|
||||
|
||||
[Nvme_255_121]
|
||||
Present=YES
|
||||
SlotNum=121
|
||||
SerialNumber=NVME-21
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/raid.json",
|
||||
Content: []byte(`{
|
||||
"RAIDCONFIG": {
|
||||
"Ctrl info": [
|
||||
{
|
||||
"CtrlDevice Slot": 3,
|
||||
"CtrlDevice Name": "AVAGO MegaRAID SAS 9460-8i",
|
||||
"LDInfo": [
|
||||
{
|
||||
"LD ID": 0,
|
||||
"LD_name": "SystemRAID",
|
||||
"RAID_level(RAID 0,RAID 1,RAID 5,RAID 6,RAID 00,RAID 10,RAID 50,RAID 60)": "RAID1",
|
||||
"Logical_capicity(per 512byte)": 936640512
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"CtrlDevice Slot": 6,
|
||||
"CtrlDevice Name": "MegaRAID 9560-16i 8GB",
|
||||
"LDInfo": [
|
||||
{
|
||||
"LD ID": 0,
|
||||
"LD_name": "DataRAID",
|
||||
"RAID_level(RAID 0,RAID 1,RAID 5,RAID 6,RAID 00,RAID 10,RAID 50,RAID 60)": "RAID50",
|
||||
"Logical_capicity(per 512byte)": 90004783104
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}`),
|
||||
},
|
||||
{
|
||||
Path: "static/Raid_BP_Conf_Info.ini",
|
||||
Content: []byte(`[BP Information]
|
||||
Description | BP TYPE | I2cPort | BpConnectorNum | FrontOrRear | Node Num | DiskSlotRange |
|
||||
8SFF SAS/SATA | BP_G5_8SFF | AUX_1 | ~ | ~ | ~ | ~ |
|
||||
8SFF SAS/SATA | BP_G5_8SFF | AUX_2 | ~ | ~ | ~ | ~ |
|
||||
8SFF SAS/SATA | BP_G5_8SFF | AUX_3 | ~ | ~ | ~ | ~ |
|
||||
|
||||
[RAID Information]
|
||||
PCIE SLOT | RAID SAS_NUM |
|
||||
3 | 2 |
|
||||
6 | 4 |
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/PCIe_arguments_table.xml",
|
||||
Content: []byte(`<root>
|
||||
<PCIE100>
|
||||
<base_args>
|
||||
<type>SSD</type>
|
||||
<name>SSD-1.92T/3.84T-NVMe-EV-SFF-sa</name>
|
||||
</base_args>
|
||||
<type_get_args>
|
||||
<bios_args>
|
||||
<vendor_id>0x144D</vendor_id>
|
||||
</bios_args>
|
||||
</type_get_args>
|
||||
</PCIE100>
|
||||
</root>`),
|
||||
},
|
||||
{
|
||||
Path: "static/psu_cfg.ini",
|
||||
Content: []byte(`[Active / Standby configuration]
|
||||
Power ID : 1
|
||||
Present Status : Present
|
||||
Cold Status : Active Power
|
||||
Model : DPS-1300AB-6 R
|
||||
SN : 210231ACT9H232000080
|
||||
Max Power(W) : 1300
|
||||
|
||||
Power ID : 2
|
||||
Present Status : Present
|
||||
Cold Status : Active Power
|
||||
Model : DPS-1300AB-6 R
|
||||
SN : 210231ACT9H232000079
|
||||
Max Power(W) : 1300
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/net_cfg.ini",
|
||||
Content: []byte(`[Network Configuration]
|
||||
eth0 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F6
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
|
||||
eth0.2 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F6
|
||||
inet6 addr: fe80::32c6:d7ff:fe94:54f6/64 Scope:Link
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1496 Metric:1
|
||||
|
||||
eth1 Link encap:Ethernet HWaddr 30:C6:D7:94:54:F5
|
||||
inet addr:10.201.129.0 Bcast:10.201.143.255 Mask:255.255.240.0
|
||||
inet6 addr: fe80::32c6:d7ff:fe94:54f5/64 Scope:Link
|
||||
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
|
||||
|
||||
lo Link encap:Local Loopback
|
||||
inet addr:127.0.0.1 Mask:255.0.0.0
|
||||
UP LOOPBACK RUNNING MTU:65536 Metric:1
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "static/smartdata/Front0/first_date_analysis.txt",
|
||||
Content: []byte(`The Current System Time Is 2023_09_22_14_19_39
|
||||
Model Info: ATA Micron_5300_MTFD
|
||||
Serial Number: 22443C4EE184
|
||||
`),
|
||||
},
|
||||
{
|
||||
Path: "user/test1.csv",
|
||||
Content: []byte(`Record Time Stamp,Severity Level,Severity Level ID,SensorTypeStr,SensorName,Event Dir,Event Occurred Time,DescInfo,Explanation,Suggestion
|
||||
2025-04-01 08:50:13,Minor,0x1,NA,NA,NA,2025-04-01 08:50:13,"SSH login failed from IP: 10.200.10.121 user: admin"," "," "
|
||||
Pre-Init,Info,0x0,Management Subsystem Health,Health,Assertion event,Pre-Init,"Management controller off-line"," "," "
|
||||
2025-04-01 08:51:10,Major,0x2,Power Supply,PSU1_Status,Assertion event,2025-04-01 08:51:10,"Power Supply AC lost"," "," "
|
||||
`),
|
||||
},
|
||||
}
|
||||
|
||||
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	if len(result.Hardware.CPUs) != 2 {
		t.Fatalf("expected 2 CPUs from hardware_info.ini, got %d", len(result.Hardware.CPUs))
	}
	if result.Hardware.CPUs[0].FrequencyMHz != 2800 {
		t.Fatalf("expected CPU frequency 2800MHz, got %d", result.Hardware.CPUs[0].FrequencyMHz)
	}

	if len(result.Hardware.Memory) != 2 {
		t.Fatalf("expected 2 DIMMs from hardware_info.ini, got %d", len(result.Hardware.Memory))
	}
	if result.Hardware.Memory[0].SizeMB != 65536 {
		t.Fatalf("expected DIMM size 65536MB, got %d", result.Hardware.Memory[0].SizeMB)
	}

	if len(result.Hardware.Firmware) < 4 {
		t.Fatalf("expected firmware entries from firmware_version.ini, got %d", len(result.Hardware.Firmware))
	}
	if result.Hardware.BoardInfo.Version == "" {
		t.Fatalf("expected board version from board_cfg.ini")
	}
	if !strings.Contains(result.Hardware.BoardInfo.Description, "CustomerID: 255") {
		t.Fatalf("expected board description enrichment from board_cfg.ini, got %q", result.Hardware.BoardInfo.Description)
	}

	if len(result.Hardware.Storage) != 2 {
		t.Fatalf("expected 2 unique storage devices from hardware.info, got %d", len(result.Hardware.Storage))
	}
	var nvmeFound bool
	var diskModelEnriched bool
	for _, s := range result.Hardware.Storage {
		if s.SerialNumber == "NVME-21" {
			nvmeFound = true
			if s.Type != "nvme" {
				t.Fatalf("expected NVME-21 type nvme, got %q", s.Type)
			}
			if !strings.Contains(strings.ToLower(s.Manufacturer), "samsung") {
				t.Fatalf("expected NVME vendor enrichment to Samsung, got %q", s.Manufacturer)
			}
			if s.Model != "SSD-1.92T/3.84T-NVMe-EV-SFF-sa" {
				t.Fatalf("expected NVME model enrichment from PCIe table, got %q", s.Model)
			}
		}
		if s.SerialNumber == "22443C4EE184" && strings.Contains(s.Model, "Micron") {
			diskModelEnriched = true
		}
	}
	if !nvmeFound {
		t.Fatalf("expected deduped NVME storage by serial NVME-21")
	}
	if !diskModelEnriched {
		t.Fatalf("expected disk model enrichment from smartdata by serial")
	}

	if len(result.Hardware.PowerSupply) != 2 {
		t.Fatalf("expected 2 PSUs from psu_cfg.ini, got %d", len(result.Hardware.PowerSupply))
	}
	if result.Hardware.PowerSupply[0].WattageW == 0 {
		t.Fatalf("expected PSU wattage parsed, got 0")
	}
	if len(result.Hardware.NetworkAdapters) != 2 {
		t.Fatalf("expected 2 host network adapters from hardware_info.ini, got %d", len(result.Hardware.NetworkAdapters))
	}
	if len(result.Hardware.NetworkCards) != 2 {
		t.Fatalf("expected 2 network cards synthesized from adapters, got %d", len(result.Hardware.NetworkCards))
	}
	var g5NIC models.NetworkAdapter
	var g5NICFound bool
	for _, nic := range result.Hardware.NetworkAdapters {
		if strings.EqualFold(nic.Slot, "PCIe 1") && strings.Contains(strings.ToLower(nic.Model), "bcm957414") {
			g5NIC = nic
			g5NICFound = true
			break
		}
	}
	if !g5NICFound {
		t.Fatalf("expected host NIC PCIe 1 from hardware_info.ini, got %+v", result.Hardware.NetworkAdapters)
	}
	if !strings.Contains(strings.ToLower(g5NIC.Vendor), "broadcom") {
		t.Fatalf("expected G5 NIC vendor from Vendor ID, got %q", g5NIC.Vendor)
	}
	if g5NIC.SerialNumber != "NICSN-G5-001" {
		t.Fatalf("expected G5 NIC serial from PCIe card section, got %q", g5NIC.SerialNumber)
	}
	if g5NIC.PartNumber != "NICPN-G5-001" {
		t.Fatalf("expected G5 NIC part number from PCIe card section, got %q", g5NIC.PartNumber)
	}
	if g5NIC.Firmware != "21.80.1" {
		t.Fatalf("expected G5 NIC firmware from PCIe card section, got %q", g5NIC.Firmware)
	}

	if len(result.Hardware.Devices) != 5 {
		t.Fatalf("expected 5 topology devices from Raid_BP_Conf_Info.ini (3 BP + 2 RAID), got %d", len(result.Hardware.Devices))
	}
	var bpFound bool
	var raidFound bool
	for _, d := range result.Hardware.Devices {
		if strings.Contains(d.ID, "h3c-bp-") && strings.Contains(d.Model, "BP_G5_8SFF") {
			bpFound = true
		}
		desc, _ := d.Details["description"].(string)
		if strings.Contains(d.ID, "h3c-raid-slot-3") && strings.Contains(desc, "SAS ports: 2") {
			raidFound = true
		}
	}
	if !bpFound || !raidFound {
		t.Fatalf("expected parsed backplane and RAID topology devices, got %+v", result.Hardware.Devices)
	}

	if len(result.Hardware.Volumes) != 2 {
		t.Fatalf("expected 2 RAID volumes (same LD ID on different controllers), got %d", len(result.Hardware.Volumes))
	}
	var raid1Found bool
	var raid50Found bool
	for _, v := range result.Hardware.Volumes {
		if strings.Contains(v.Controller, "slot 3") {
			raid1Found = v.RAIDLevel == "RAID1" && v.CapacityBytes > 0
		}
		if strings.Contains(v.Controller, "slot 6") {
			raid50Found = v.RAIDLevel == "RAID50" && v.CapacityBytes > 0
		}
	}
	if !raid1Found || !raid50Found {
		t.Fatalf("expected RAID1 and RAID50 volumes with parsed capacities, got %+v", result.Hardware.Volumes)
	}

	if len(result.Events) != 2 {
		t.Fatalf("expected 2 CSV events (Pre-Init skipped), got %d", len(result.Events))
	}
	if result.Events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected Minor CSV severity mapped to warning, got %q", result.Events[0].Severity)
	}
	if result.Events[1].Severity != models.SeverityCritical {
		t.Fatalf("expected Major CSV severity mapped to critical, got %q", result.Events[1].Severity)
	}
}

59 internal/parser/vendors/inspur/asset.go vendored
@@ -58,6 +58,7 @@ type AssetJSON struct {
	} `json:"MemInfo"`

	HddInfo []struct {
		PresentBitmap []int  `json:"PresentBitmap"`
		SerialNumber  string `json:"SerialNumber"`
		Manufacturer  string `json:"Manufacturer"`
		ModelName     string `json:"ModelName"`
@@ -161,8 +162,19 @@ func ParseAssetJSON(content []byte) (*models.HardwareConfig, error) {
	}

	// Parse storage info
	seenHDDFW := make(map[string]bool)
	for _, hdd := range asset.HddInfo {
		slot := normalizeAssetHDDSlot(hdd.LocationString, hdd.Location, hdd.DiskInterfaceType)
		modelName := strings.TrimSpace(hdd.ModelName)
		serial := normalizeRedisValue(hdd.SerialNumber)
		present := bitmapHasAnyValue(hdd.PresentBitmap)
		if !present && (slot != "" || modelName != "" || serial != "" || hdd.Capacity > 0) {
			present = true
		}

		if !present && slot == "" && modelName == "" && serial == "" && hdd.Capacity == 0 {
			continue
		}

		storageType := "HDD"
		if hdd.DiskInterfaceType == 5 {
			storageType = "NVMe"
@@ -171,35 +183,21 @@ func ParseAssetJSON(content []byte) (*models.HardwareConfig, error) {
		}

		// Resolve manufacturer: try vendor ID first, then model name extraction
		modelName := strings.TrimSpace(hdd.ModelName)
		manufacturer := resolveManufacturer(hdd.Manufacturer, modelName)

		config.Storage = append(config.Storage, models.Storage{
			Slot:         hdd.LocationString,
			Slot:         slot,
			Type:         storageType,
			Model:        modelName,
			SizeGB:       hdd.Capacity,
			SerialNumber: hdd.SerialNumber,
			SerialNumber: serial,
			Manufacturer: manufacturer,
			Firmware:     hdd.FirmwareVersion,
			Interface:    diskInterfaceToString(hdd.DiskInterfaceType),
			Present:      present,
		})

		// Add HDD firmware to firmware list (deduplicated by model+version)
		if hdd.FirmwareVersion != "" {
			fwKey := modelName + ":" + hdd.FirmwareVersion
			if !seenHDDFW[fwKey] {
				slot := hdd.LocationString
				if slot == "" {
					slot = fmt.Sprintf("%s %dGB", storageType, hdd.Capacity)
				}
				config.Firmware = append(config.Firmware, models.FirmwareInfo{
					DeviceName: fmt.Sprintf("%s (%s)", modelName, slot),
					Version:    hdd.FirmwareVersion,
				})
				seenHDDFW[fwKey] = true
			}
		}
		// Disk firmware is already stored in Storage.Firmware — do not duplicate in Hardware.Firmware.
	}

	// Parse PCIe info
@@ -323,6 +321,29 @@ func diskInterfaceToString(ifType int) string {
	}
}

func normalizeAssetHDDSlot(locationString string, location int, diskInterfaceType int) string {
	slot := strings.TrimSpace(locationString)
	if slot != "" {
		return slot
	}
	if location < 0 {
		return ""
	}
	if diskInterfaceType == 5 {
		return fmt.Sprintf("OB%02d", location+1)
	}
	return fmt.Sprintf("%d", location)
}

func bitmapHasAnyValue(values []int) bool {
	for _, v := range values {
		if v != 0 {
			return true
		}
	}
	return false
}

func pcieLinkSpeedToString(speed int) string {
	switch speed {
	case 1:

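Reviewer note: the two new helpers added at the bottom of asset.go are pure functions and easy to sanity-check outside the package. A standalone sketch (re-implemented here for illustration; the real code lives in the inspur package):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeAssetHDDSlot mirrors the diff: prefer the explicit location
// string; otherwise synthesize a slot label from the numeric location
// (NVMe drives, interface type 5, get an "OBnn" label).
func normalizeAssetHDDSlot(locationString string, location int, diskInterfaceType int) string {
	if slot := strings.TrimSpace(locationString); slot != "" {
		return slot
	}
	if location < 0 {
		return ""
	}
	if diskInterfaceType == 5 {
		return fmt.Sprintf("OB%02d", location+1)
	}
	return fmt.Sprintf("%d", location)
}

// bitmapHasAnyValue reports whether any element of a presence bitmap is set.
func bitmapHasAnyValue(values []int) bool {
	for _, v := range values {
		if v != 0 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(normalizeAssetHDDSlot("  OB03 ", -1, 0)) // explicit string wins: OB03
	fmt.Println(normalizeAssetHDDSlot("", 0, 5))         // NVMe fallback: OB01
	fmt.Println(normalizeAssetHDDSlot("", 4, 1))         // plain numeric slot: 4
	fmt.Println(bitmapHasAnyValue([]int{0, 0, 1}))       // true
}
```

Together these drive the new `present` heuristic: a drive counts as present if any bitmap bit is set, or if it carries identifying data (slot, model, serial, capacity).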
352 internal/parser/vendors/inspur/component.go vendored
@@ -46,10 +46,21 @@ func ParseComponentLogEvents(content []byte) []models.Event {
	// Parse RESTful Memory info for Warning/Error status
	memEvents := parseMemoryEvents(text)
	events = append(events, memEvents...)
	events = append(events, parseFanEvents(text)...)

	return events
}

// ParseComponentLogSensors extracts sensor readings from component.log JSON sections.
func ParseComponentLogSensors(content []byte) []models.SensorReading {
	text := string(content)
	var out []models.SensorReading
	out = append(out, parseFanSensors(text)...)
	out = append(out, parseDiskBackplaneSensors(text)...)
	out = append(out, parsePSUSummarySensors(text)...)
	return out
}

// MemoryRESTInfo represents the RESTful Memory info structure
type MemoryRESTInfo struct {
	MemModules []struct {
@@ -210,20 +221,49 @@ func parseHDDInfo(text string, hw *models.HardwareConfig) {
	})
	for _, hdd := range hddInfo {
		if hdd.Present == 1 {
			hddMap[hdd.LocationString] = struct {
			slot := strings.TrimSpace(hdd.LocationString)
			if slot == "" {
				slot = fmt.Sprintf("HDD%d", hdd.ID)
			}
			hddMap[slot] = struct {
				SN       string
				Model    string
				Firmware string
				Mfr      string
			}{
				SN:       strings.TrimSpace(hdd.SN),
				SN:       normalizeRedisValue(hdd.SN),
				Model:    strings.TrimSpace(hdd.Model),
				Firmware: strings.TrimSpace(hdd.Firmware),
				Firmware: normalizeRedisValue(hdd.Firmware),
				Mfr:      strings.TrimSpace(hdd.Manufacture),
			}
		}
	}

	// Merge into existing inventory first (asset/other sections).
	for i := range hw.Storage {
		slot := strings.TrimSpace(hw.Storage[i].Slot)
		if slot == "" {
			continue
		}
		detail, ok := hddMap[slot]
		if !ok {
			continue
		}
		if normalizeRedisValue(hw.Storage[i].SerialNumber) == "" {
			hw.Storage[i].SerialNumber = detail.SN
		}
		if hw.Storage[i].Model == "" {
			hw.Storage[i].Model = detail.Model
		}
		if normalizeRedisValue(hw.Storage[i].Firmware) == "" {
			hw.Storage[i].Firmware = detail.Firmware
		}
		if hw.Storage[i].Manufacturer == "" {
			hw.Storage[i].Manufacturer = detail.Mfr
		}
		hw.Storage[i].Present = true
	}

	// If storage is empty, populate from HDD info
	if len(hw.Storage) == 0 {
		for _, hdd := range hddInfo {
@@ -240,21 +280,42 @@ func parseHDDInfo(text string, hw *models.HardwareConfig) {
			if hdd.CapableSpeed == 12 {
				iface = "SAS"
			}
			slot := strings.TrimSpace(hdd.LocationString)
			if slot == "" {
				slot = fmt.Sprintf("HDD%d", hdd.ID)
			}

			hw.Storage = append(hw.Storage, models.Storage{
				Slot:         hdd.LocationString,
				Slot:         slot,
				Type:         storType,
				Model:        model,
				SizeGB:       hdd.Capacity,
				SerialNumber: strings.TrimSpace(hdd.SN),
				SerialNumber: normalizeRedisValue(hdd.SN),
				Manufacturer: extractStorageManufacturer(model),
				Firmware:     strings.TrimSpace(hdd.Firmware),
				Firmware:     normalizeRedisValue(hdd.Firmware),
				Interface:    iface,
				Present:      true,
			})
		}
	}
}

// FanRESTInfo represents the RESTful fan info structure.
type FanRESTInfo struct {
	Fans []struct {
		ID           int    `json:"id"`
		FanName      string `json:"fan_name"`
		Present      string `json:"present"`
		Status       string `json:"status"`
		StatusStr    string `json:"status_str"`
		SpeedRPM     int    `json:"speed_rpm"`
		SpeedPercent int    `json:"speed_percent"`
		MaxSpeedRPM  int    `json:"max_speed_rpm"`
		FanModel     string `json:"fan_model"`
	} `json:"fans"`
	FansPower int `json:"fans_power"`
}

// NetworkAdapterRESTInfo represents the RESTful Network Adapter info structure
type NetworkAdapterRESTInfo struct {
	SysAdapters []struct {
@@ -335,6 +396,213 @@ func parseNetworkAdapterInfo(text string, hw *models.HardwareConfig) {
	}
}

func parseFanSensors(text string) []models.SensorReading {
	re := regexp.MustCompile(`RESTful fan info:\s*(\{[\s\S]*?\})\s*RESTful diskbackplane`)
	match := re.FindStringSubmatch(text)
	if match == nil {
		return nil
	}

	jsonStr := strings.ReplaceAll(match[1], "\n", "")
	var fanInfo FanRESTInfo
	if err := json.Unmarshal([]byte(jsonStr), &fanInfo); err != nil {
		return nil
	}

	out := make([]models.SensorReading, 0, len(fanInfo.Fans)+1)
	for _, fan := range fanInfo.Fans {
		name := strings.TrimSpace(fan.FanName)
		if name == "" {
			name = fmt.Sprintf("FAN%d", fan.ID)
		}
		status := normalizeComponentStatus(fan.StatusStr, fan.Status, fan.Present)
		raw := fmt.Sprintf("rpm=%d pct=%d model=%s max_rpm=%d", fan.SpeedRPM, fan.SpeedPercent, fan.FanModel, fan.MaxSpeedRPM)
		out = append(out, models.SensorReading{
			Name:     name,
			Type:     "fan_speed",
			Value:    float64(fan.SpeedRPM),
			Unit:     "RPM",
			RawValue: raw,
			Status:   status,
		})
	}

	if fanInfo.FansPower > 0 {
		out = append(out, models.SensorReading{
			Name:     "Fans_Power",
			Type:     "power",
			Value:    float64(fanInfo.FansPower),
			Unit:     "W",
			RawValue: fmt.Sprintf("%d", fanInfo.FansPower),
			Status:   "OK",
		})
	}

	return out
}

func parseFanEvents(text string) []models.Event {
	re := regexp.MustCompile(`RESTful fan info:\s*(\{[\s\S]*?\})\s*RESTful diskbackplane`)
	match := re.FindStringSubmatch(text)
	if match == nil {
		return nil
	}

	jsonStr := strings.ReplaceAll(match[1], "\n", "")
	var fanInfo FanRESTInfo
	if err := json.Unmarshal([]byte(jsonStr), &fanInfo); err != nil {
		return nil
	}

	var events []models.Event
	for _, fan := range fanInfo.Fans {
		status := normalizeComponentStatus(fan.StatusStr, fan.Status, fan.Present)
		if isHealthyComponentStatus(status) {
			continue
		}

		name := strings.TrimSpace(fan.FanName)
		if name == "" {
			name = fmt.Sprintf("FAN%d", fan.ID)
		}

		severity := models.SeverityWarning
		lowStatus := strings.ToLower(status)
		if strings.Contains(lowStatus, "critical") || strings.Contains(lowStatus, "fail") || strings.Contains(lowStatus, "error") {
			severity = models.SeverityCritical
		}

		events = append(events, models.Event{
			ID:          fmt.Sprintf("fan_%d_status", fan.ID),
			Timestamp:   time.Now(),
			Source:      "Fan",
			SensorType:  "fan",
			SensorName:  name,
			EventType:   "Fan Status",
			Severity:    severity,
			Description: fmt.Sprintf("%s reports %s", name, status),
			RawData:     fmt.Sprintf("rpm=%d pct=%d model=%s", fan.SpeedRPM, fan.SpeedPercent, fan.FanModel),
		})
	}

	return events
}

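Both fan helpers above share one pattern: carve a JSON object out of the log text with a bounded non-greedy regex, strip embedded newlines, and unmarshal. A self-contained sketch of that extraction (the marker strings follow the diff; the struct is trimmed to a few fields and the payload below is invented):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
	"strings"
)

type fanInfo struct {
	Fans []struct {
		ID       int    `json:"id"`
		FanName  string `json:"fan_name"`
		SpeedRPM int    `json:"speed_rpm"`
	} `json:"fans"`
}

// extractFanInfo pulls the JSON object between the two section markers.
// The non-greedy match backtracks until the closing brace is followed by
// the next section header, so nested braces inside the object are fine.
func extractFanInfo(text string) (*fanInfo, error) {
	re := regexp.MustCompile(`RESTful fan info:\s*(\{[\s\S]*?\})\s*RESTful diskbackplane`)
	m := re.FindStringSubmatch(text)
	if m == nil {
		return nil, fmt.Errorf("fan section not found")
	}
	var fi fanInfo
	if err := json.Unmarshal([]byte(strings.ReplaceAll(m[1], "\n", "")), &fi); err != nil {
		return nil, err
	}
	return &fi, nil
}

func main() {
	text := `RESTful fan info:
{ "fans": [ { "id": 1, "fan_name": "FAN0_F_Speed", "speed_rpm": 9200 } ] }
RESTful diskbackplane info: []`
	fi, err := extractFanInfo(text)
	if err != nil {
		panic(err)
	}
	fmt.Println(fi.Fans[0].FanName, fi.Fans[0].SpeedRPM) // FAN0_F_Speed 9200
}
```

One caveat worth noting in review: this approach depends on the section order in component.log (fan info always followed by the diskbackplane section), which the test fixtures also assume.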
func parseDiskBackplaneSensors(text string) []models.SensorReading {
	re := regexp.MustCompile(`RESTful diskbackplane info:\s*(\[[\s\S]*?\])\s*BMC`)
	match := re.FindStringSubmatch(text)
	if match == nil {
		return nil
	}

	jsonStr := strings.ReplaceAll(match[1], "\n", "")
	var backplaneInfo DiskBackplaneRESTInfo
	if err := json.Unmarshal([]byte(jsonStr), &backplaneInfo); err != nil {
		return nil
	}

	out := make([]models.SensorReading, 0, len(backplaneInfo))
	for _, bp := range backplaneInfo {
		if bp.Present != 1 {
			continue
		}
		name := fmt.Sprintf("Backplane%d_Temp", bp.BackplaneIndex)
		status := "OK"
		if bp.Temperature <= 0 {
			status = "unknown"
		}
		raw := fmt.Sprintf("front=%d ports=%d drives=%d cpld=%s", bp.Front, bp.PortCount, bp.DriverCount, bp.CPLDVersion)
		out = append(out, models.SensorReading{
			Name:     name,
			Type:     "temperature",
			Value:    float64(bp.Temperature),
			Unit:     "C",
			RawValue: raw,
			Status:   status,
		})
	}
	return out
}

func parsePSUSummarySensors(text string) []models.SensorReading {
	re := regexp.MustCompile(`RESTful PSU info:\s*(\{[\s\S]*?\})\s*RESTful Network`)
	match := re.FindStringSubmatch(text)
	if match == nil {
		return nil
	}

	jsonStr := strings.ReplaceAll(match[1], "\n", "")
	var psuInfo PSURESTInfo
	if err := json.Unmarshal([]byte(jsonStr), &psuInfo); err != nil {
		return nil
	}

	out := make([]models.SensorReading, 0, len(psuInfo.PowerSupplies)*3+1)
	if psuInfo.PresentPowerReading > 0 {
		out = append(out, models.SensorReading{
			Name:     "PSU_Present_Power_Reading",
			Type:     "power",
			Value:    float64(psuInfo.PresentPowerReading),
			Unit:     "W",
			RawValue: fmt.Sprintf("%d", psuInfo.PresentPowerReading),
			Status:   "OK",
		})
	}

	for _, psu := range psuInfo.PowerSupplies {
		if psu.Present != 1 {
			continue
		}
		status := normalizeComponentStatus(psu.Status)
		out = append(out, models.SensorReading{
			Name:     fmt.Sprintf("PSU%d_InputPower", psu.ID),
			Type:     "power",
			Value:    float64(psu.PSInPower),
			Unit:     "W",
			RawValue: fmt.Sprintf("%d", psu.PSInPower),
			Status:   status,
		})
		out = append(out, models.SensorReading{
			Name:     fmt.Sprintf("PSU%d_OutputPower", psu.ID),
			Type:     "power",
			Value:    float64(psu.PSOutPower),
			Unit:     "W",
			RawValue: fmt.Sprintf("%d", psu.PSOutPower),
			Status:   status,
		})
		out = append(out, models.SensorReading{
			Name:     fmt.Sprintf("PSU%d_Temp", psu.ID),
			Type:     "temperature",
			Value:    float64(psu.PSUMaxTemp),
			Unit:     "C",
			RawValue: fmt.Sprintf("%d", psu.PSUMaxTemp),
			Status:   status,
		})
	}

	return out
}

func normalizeComponentStatus(values ...string) string {
	for _, v := range values {
		s := strings.TrimSpace(v)
		if s == "" {
			continue
		}
		return s
	}
	return "unknown"
}

func isHealthyComponentStatus(status string) bool {
	switch strings.ToLower(strings.TrimSpace(status)) {
	case "", "ok", "normal", "present", "enabled":
		return true
	default:
		return false
	}
}

var rawDeviceIDLikeRegex = regexp.MustCompile(`(?i)^(?:0x)?[0-9a-f]{3,4}$`)

func looksLikeRawDeviceID(v string) bool {
@@ -474,28 +742,88 @@ func parseDiskBackplaneInfo(text string, hw *models.HardwareConfig) {
		return
	}

	// Create storage entries based on backplane info
	presentByBackplane := make(map[int]int)
	totalPresent := 0
	for _, bp := range backplaneInfo {
		if bp.Present != 1 {
			continue
		}
		if bp.DriverCount <= 0 {
			continue
		}
		limit := bp.DriverCount
		if bp.PortCount > 0 && limit > bp.PortCount {
			limit = bp.PortCount
		}
		presentByBackplane[bp.BackplaneIndex] = limit
		totalPresent += limit
	}

	if totalPresent == 0 {
		return
	}

	existingPresent := countPresentStorage(hw.Storage)
	remaining := totalPresent - existingPresent
	if remaining <= 0 {
		return
	}

	for _, bp := range backplaneInfo {
		if bp.Present != 1 || remaining <= 0 {
			continue
		}
		driveCount := presentByBackplane[bp.BackplaneIndex]
		if driveCount <= 0 {
			continue
		}

		location := "Rear"
		if bp.Front == 1 {
			location = "Front"
		}

		// Create entries for each port (disk slot)
		for i := 0; i < bp.PortCount; i++ {
			isPresent := i < bp.DriverCount
		for i := 0; i < driveCount && remaining > 0; i++ {
			slot := fmt.Sprintf("BP%d:%d", bp.BackplaneIndex, i)
			if hasStorageSlot(hw.Storage, slot) {
				continue
			}

			hw.Storage = append(hw.Storage, models.Storage{
				Slot:        fmt.Sprintf("%d", i),
				Present:     isPresent,
				Slot:        slot,
				Present:     true,
				Location:    location,
				BackplaneID: bp.BackplaneIndex,
				Type:        "HDD",
			})
			remaining--
		}
	}
}

func countPresentStorage(storage []models.Storage) int {
	count := 0
	for _, dev := range storage {
		if dev.Present {
			count++
			continue
		}
		if strings.TrimSpace(dev.Slot) != "" && (normalizeRedisValue(dev.Model) != "" || normalizeRedisValue(dev.SerialNumber) != "" || dev.SizeGB > 0) {
			count++
		}
	}
	return count
}

func hasStorageSlot(storage []models.Storage, slot string) bool {
	slot = strings.ToLower(strings.TrimSpace(slot))
	if slot == "" {
		return false
	}
	for _, dev := range storage {
		if strings.ToLower(strings.TrimSpace(dev.Slot)) == slot {
			return true
		}
	}
	return false
}

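countPresentStorage and hasStorageSlot guard the synthetic backplane slots against double counting. A simplified standalone sketch of the same idea (storage is a stand-in for models.Storage, and the normalizeRedisValue check is reduced to a plain non-empty serial):

```go
package main

import (
	"fmt"
	"strings"
)

// storage is a stripped-down stand-in for models.Storage.
type storage struct {
	Slot    string
	Present bool
	Serial  string
}

// hasSlot reports whether a slot label is already taken (case-insensitive,
// whitespace-insensitive); an empty label never matches.
func hasSlot(items []storage, slot string) bool {
	slot = strings.ToLower(strings.TrimSpace(slot))
	if slot == "" {
		return false
	}
	for _, it := range items {
		if strings.ToLower(strings.TrimSpace(it.Slot)) == slot {
			return true
		}
	}
	return false
}

// countPresent counts devices marked present, plus devices that carry
// enough identifying data (a slot and a serial) to be treated as present.
func countPresent(items []storage) int {
	n := 0
	for _, it := range items {
		if it.Present || (strings.TrimSpace(it.Slot) != "" && it.Serial != "") {
			n++
		}
	}
	return n
}

func main() {
	items := []storage{
		{Slot: "BP0:0", Present: true},
		{Slot: "OB01", Serial: "SER123"},
	}
	fmt.Println(hasSlot(items, " bp0:0 ")) // true
	fmt.Println(countPresent(items))       // 2
}
```

The diff uses these so that backplane-derived placeholder slots are only appended up to `totalPresent - existingPresent`, never duplicating drives already known from the asset JSON.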
114 internal/parser/vendors/inspur/component_test.go vendored
@@ -50,3 +50,117 @@ RESTful fan`
		t.Fatalf("expected NIC vendor resolved from pci.ids")
	}
}

func TestParseComponentLogSensors_ExtractsFanBackplaneAndPSUSummary(t *testing.T) {
	text := `RESTful PSU info:
{
"power_supplies": [
{ "id": 0, "present": 1, "status": "OK", "ps_in_power": 123, "ps_out_power": 110, "psu_max_temperature": 41 }
],
"present_power_reading": 999
}
RESTful Network Adapter info:
{ "sys_adapters": [] }
RESTful fan info:
{
"fans": [
{ "id": 1, "fan_name": "FAN0_F_Speed", "present": "OK", "status": "OK", "status_str": "OK", "speed_rpm": 9200, "speed_percent": 35, "max_speed_rpm": 20000, "fan_model": "6056" }
],
"fans_power": 33
}
RESTful diskbackplane info:
[
{ "port_count": 8, "driver_count": 4, "front": 1, "backplane_index": 0, "present": 1, "cpld_version": "3.1", "temperature": 18 }
]
BMC`

	sensors := ParseComponentLogSensors([]byte(text))
	if len(sensors) == 0 {
		t.Fatalf("expected sensors from component.log, got none")
	}

	has := func(name string) bool {
		for _, s := range sensors {
			if s.Name == name {
				return true
			}
		}
		return false
	}

	if !has("FAN0_F_Speed") {
		t.Fatalf("expected FAN0_F_Speed sensor in parsed output")
	}
	if !has("Backplane0_Temp") {
		t.Fatalf("expected Backplane0_Temp sensor in parsed output")
	}
	if !has("PSU_Present_Power_Reading") {
		t.Fatalf("expected PSU_Present_Power_Reading sensor in parsed output")
	}
}

func TestParseComponentLogEvents_FanCriticalStatus(t *testing.T) {
	text := `RESTful fan info:
{
"fans": [
{ "id": 7, "fan_name": "FAN3_R_Speed", "present": "OK", "status": "Critical", "status_str": "Critical", "speed_rpm": 0, "speed_percent": 0, "max_speed_rpm": 20000, "fan_model": "6056" }
],
"fans_power": 0
}
RESTful diskbackplane info:
[]
BMC`

	events := ParseComponentLogEvents([]byte(text))
	if len(events) != 1 {
		t.Fatalf("expected 1 fan event, got %d", len(events))
	}
	if events[0].EventType != "Fan Status" {
		t.Fatalf("expected Fan Status event type, got %q", events[0].EventType)
	}
	if events[0].Severity != models.SeverityCritical {
		t.Fatalf("expected critical severity, got %q", events[0].Severity)
	}
}

func TestParseHDDInfo_MergesIntoExistingStorage(t *testing.T) {
	text := `RESTful HDD info:
[
{
"id": 1,
"present": 1,
"enable": 1,
"SN": "SER123",
"model": "Sample SSD",
"capacity": 1024,
"manufacture": "ACME",
"firmware": "1.0.0",
"locationstring": "OB01",
"capablespeed": 6
}
]
RESTful PSU`

	hw := &models.HardwareConfig{
		Storage: []models.Storage{
			{
				Slot: "OB01",
				Type: "SSD",
			},
		},
	}

	parseHDDInfo(text, hw)
	if len(hw.Storage) != 1 {
		t.Fatalf("expected 1 storage item, got %d", len(hw.Storage))
	}
	if hw.Storage[0].SerialNumber != "SER123" {
		t.Fatalf("expected serial from HDD section, got %q", hw.Storage[0].SerialNumber)
	}
	if hw.Storage[0].Model != "Sample SSD" {
		t.Fatalf("expected model from HDD section, got %q", hw.Storage[0].Model)
	}
	if hw.Storage[0].Firmware != "1.0.0" {
		t.Fatalf("expected firmware from HDD section, got %q", hw.Storage[0].Firmware)
	}
}

8 internal/parser/vendors/inspur/gpu_status.go vendored
@@ -70,11 +70,13 @@ func applyGPUStatusFromEvents(hw *models.HardwareConfig, events []models.Event)
				ChangedAt: e.Timestamp,
				Details:   strings.TrimSpace(e.Description),
			})
			gpu.StatusChangedAt = e.Timestamp
			ts := e.Timestamp
			gpu.StatusChangedAt = &ts
			currentStatus[idx] = newStatus
		}

		gpu.StatusCheckedAt = e.Timestamp
		ts := e.Timestamp
		gpu.StatusCheckedAt = &ts
	}
}

@@ -85,7 +87,7 @@ func applyGPUStatusFromEvents(hw *models.HardwareConfig, events []models.Event)
		} else {
			gpu.ErrorDescription = ""
		}
		if gpu.StatusCheckedAt.IsZero() && strings.TrimSpace(gpu.Status) == "" {
		if gpu.StatusCheckedAt == nil && strings.TrimSpace(gpu.Status) == "" {
			gpu.Status = "OK"
		}
	}

85 internal/parser/vendors/inspur/parser.go vendored
@@ -8,6 +8,7 @@ package inspur
import (
	"fmt"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
@@ -15,7 +16,7 @@ import (

// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
const parserVersion = "1.2.1"
const parserVersion = "1.5"

func init() {
	parser.Register(&Parser{})
@@ -86,6 +87,8 @@ func containsInspurMarkers(content []byte) bool {

// Parse parses Inspur/Kaytus archive
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	selLocation := inferInspurArchiveLocation(files)

	result := &models.AnalysisResult{
		Events: make([]models.Event, 0),
		FRU:    make([]models.FRUInfo, 0),
@@ -123,6 +126,11 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
		// Extract events from component.log (memory errors, etc.)
		componentEvents := ParseComponentLogEvents(f.Content)
		result.Events = append(result.Events, componentEvents...)

		// Extract additional telemetry sensors from component.log sections
		// (fan RPM, backplane temperature, PSU summary power, etc.).
		componentSensors := ParseComponentLogSensors(f.Content)
		result.Sensors = mergeSensorReadings(result.Sensors, componentSensors)
	}

	// Enrich runtime component data from Redis snapshot (serials, FW, telemetry),
@@ -140,7 +148,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er

	// Parse SEL list (selelist.csv)
	if f := parser.FindFileByName(files, "selelist.csv"); f != nil {
		selEvents := ParseSELList(f.Content)
		selEvents := ParseSELListWithLocation(f.Content, selLocation)
		result.Events = append(result.Events, selEvents...)
	}

@@ -173,11 +181,49 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
	// Mark problematic GPUs from IDL errors like "BIOS miss F_GPU6".
	if result.Hardware != nil {
		applyGPUStatusFromEvents(result.Hardware, result.Events)
		enrichStorageFromSerialFallbackFiles(files, result.Hardware)
	}

	return result, nil
}

func inferInspurArchiveLocation(files []parser.ExtractedFile) *time.Location {
	fallback := parser.DefaultArchiveLocation()
	f := parser.FindFileByName(files, "timezone.conf")
	if f == nil {
		return fallback
	}
	locName := parseTimezoneConfigLocation(f.Content)
	if strings.TrimSpace(locName) == "" {
		return fallback
	}
	loc, err := time.LoadLocation(locName)
	if err != nil {
		return fallback
	}
	return loc
}

func parseTimezoneConfigLocation(content []byte) string {
	lines := strings.Split(string(content), "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "[") || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		key := strings.ToLower(strings.TrimSpace(parts[0]))
		val := strings.TrimSpace(parts[1])
		if key == "timezone" && val != "" {
			return val
		}
	}
	return ""
}

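parseTimezoneConfigLocation above is a tolerant key=value scan over timezone.conf: blank lines, [sections], and #/; comments are skipped, and the first non-empty `timezone` value wins. The same logic can be exercised standalone (the sample config content below is invented):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseTimezoneConfigLocation scans simple key=value lines, ignoring
// blank lines, [section] headers, and comment lines, and returns the
// value of the (case-insensitive) "timezone" key, or "".
func parseTimezoneConfigLocation(content []byte) string {
	for _, line := range strings.Split(string(content), "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "[") || strings.HasPrefix(line, "#") || strings.HasPrefix(line, ";") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		if strings.ToLower(strings.TrimSpace(parts[0])) == "timezone" {
			if v := strings.TrimSpace(parts[1]); v != "" {
				return v
			}
		}
	}
	return ""
}

func main() {
	conf := []byte("[time]\n# BMC timezone\ntimezone = Asia/Shanghai\n")
	name := parseTimezoneConfigLocation(conf)
	fmt.Println(name) // Asia/Shanghai

	// The caller falls back to a default location when the zone is
	// missing or cannot be loaded, mirroring inferInspurArchiveLocation.
	if _, err := time.LoadLocation(name); err != nil {
		fmt.Println("falling back to default location")
	}
}
```

Note that time.LoadLocation needs the IANA tz database available at runtime, which is why the caller treats a load failure as non-fatal and returns the fallback.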
func (p *Parser) parseDeviceFruSDR(content []byte, result *models.AnalysisResult) {
	lines := string(content)

@@ -262,3 +308,38 @@ func extractSlotNumberFromGPU(slot string) int {
	}
	return 0
}

func mergeSensorReadings(base, extra []models.SensorReading) []models.SensorReading {
	if len(extra) == 0 {
		return base
	}

	out := append([]models.SensorReading{}, base...)
	seen := make(map[string]struct{}, len(out))
	for _, s := range out {
		if key := sensorMergeKey(s); key != "" {
			seen[key] = struct{}{}
		}
	}

	for _, s := range extra {
		key := sensorMergeKey(s)
		if key != "" {
			if _, ok := seen[key]; ok {
				continue
			}
			seen[key] = struct{}{}
		}
		out = append(out, s)
	}

	return out
}

func sensorMergeKey(s models.SensorReading) string {
	name := strings.ToLower(strings.TrimSpace(s.Name))
	if name == "" {
		return ""
	}
	return name
}

17 internal/parser/vendors/inspur/sel.go vendored

@@ -6,12 +6,19 @@ import (
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

// ParseSELList parses selelist.csv file with SEL events
// Format: ID, Date (MM/DD/YYYY), Time (HH:MM:SS), Sensor, Event, Status
// Example: 1,04/18/2025,09:31:18,Event Logging Disabled SEL_Status,Log area reset/cleared,Asserted
func ParseSELList(content []byte) []models.Event {
	return ParseSELListWithLocation(content, parser.DefaultArchiveLocation())
}

// ParseSELListWithLocation parses selelist.csv using provided source timezone
// for timestamps that don't contain an explicit offset.
func ParseSELListWithLocation(content []byte, location *time.Location) []models.Event {
	var events []models.Event

	text := string(content)
@@ -48,7 +55,7 @@ func ParseSELList(content []byte) []models.Event {
	status := strings.TrimSpace(records[5])

	// Parse timestamp: MM/DD/YYYY HH:MM:SS
-	timestamp := parseSELTimestamp(dateStr, timeStr)
+	timestamp := parseSELTimestamp(dateStr, timeStr, location)

	// Extract sensor type and name
	sensorType, sensorName := parseSensorInfo(sensorStr)
@@ -76,12 +83,16 @@ func ParseSELList(content []byte) []models.Event {
}

// parseSELTimestamp parses MM/DD/YYYY and HH:MM:SS into time.Time
-func parseSELTimestamp(dateStr, timeStr string) time.Time {
+func parseSELTimestamp(dateStr, timeStr string, location *time.Location) time.Time {
	// Combine date and time: MM/DD/YYYY HH:MM:SS
	timestampStr := dateStr + " " + timeStr

	if location == nil {
		location = parser.DefaultArchiveLocation()
	}

	// Try parsing with MM/DD/YYYY format
-	t, err := time.Parse("01/02/2006 15:04:05", timestampStr)
+	t, err := time.ParseInLocation("01/02/2006 15:04:05", timestampStr, location)
	if err != nil {
		// Fallback to current time
		return time.Now()
33 internal/parser/vendors/inspur/sel_test.go vendored Normal file

@@ -0,0 +1,33 @@
package inspur

import (
	"testing"
	"time"
)

func TestParseSELListWithLocation_UsesProvidedTimezone(t *testing.T) {
	content := []byte("sel elist:\n1,02/28/2026,04:18:18,Sensor X,Event,Asserted\n")
	shanghai, err := time.LoadLocation("Asia/Shanghai")
	if err != nil {
		t.Fatalf("load location: %v", err)
	}

	events := ParseSELListWithLocation(content, shanghai)
	if len(events) != 1 {
		t.Fatalf("expected 1 event, got %d", len(events))
	}

	// 04:18:18 +08:00 == 20:18:18Z (previous day)
	want := time.Date(2026, 2, 27, 20, 18, 18, 0, time.UTC)
	if !events[0].Timestamp.UTC().Equal(want) {
		t.Fatalf("unexpected timestamp: got %s want %s", events[0].Timestamp.UTC(), want)
	}
}

func TestParseTimezoneConfigLocation(t *testing.T) {
	content := []byte("[TimeZoneConfig]\ntimezone=Asia/Shanghai\n")
	got := parseTimezoneConfigLocation(content)
	if got != "Asia/Shanghai" {
		t.Fatalf("unexpected timezone: %q", got)
	}
}
148 internal/parser/vendors/inspur/storage_serial_fallback.go vendored Normal file

@@ -0,0 +1,148 @@
package inspur

import (
	"regexp"
	"sort"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

var bpHDDSerialTokenRegex = regexp.MustCompile(`[A-Za-z0-9]{8,32}`)

func enrichStorageFromSerialFallbackFiles(files []parser.ExtractedFile, hw *models.HardwareConfig) {
	if hw == nil {
		return
	}
	f := parser.FindFileByName(files, "BpHDDSerialNumber.info")
	if f == nil {
		return
	}
	serials := extractBPHDDSerials(f.Content)
	if len(serials) == 0 {
		return
	}
	applyStorageSerialFallback(hw, serials)
}

func extractBPHDDSerials(content []byte) []string {
	if len(content) == 0 {
		return nil
	}
	matches := bpHDDSerialTokenRegex.FindAllString(string(content), -1)
	if len(matches) == 0 {
		return nil
	}

	out := make([]string, 0, len(matches))
	seen := make(map[string]struct{}, len(matches))
	for _, m := range matches {
		v := normalizeRedisValue(m)
		if !looksLikeStorageSerial(v) {
			continue
		}
		key := strings.ToLower(v)
		if _, ok := seen[key]; ok {
			continue
		}
		seen[key] = struct{}{}
		out = append(out, v)
	}
	return out
}

func looksLikeStorageSerial(v string) bool {
	if len(v) < 8 {
		return false
	}
	hasLetter := false
	hasDigit := false
	for _, r := range v {
		switch {
		case r >= 'A' && r <= 'Z':
			hasLetter = true
		case r >= 'a' && r <= 'z':
			hasLetter = true
		case r >= '0' && r <= '9':
			hasDigit = true
		default:
			return false
		}
	}
	return hasLetter && hasDigit
}

func applyStorageSerialFallback(hw *models.HardwareConfig, serials []string) {
	if hw == nil || len(hw.Storage) == 0 || len(serials) == 0 {
		return
	}

	existing := make(map[string]struct{}, len(hw.Storage))
	for _, dev := range hw.Storage {
		if sn := normalizeRedisValue(dev.SerialNumber); sn != "" {
			existing[strings.ToLower(sn)] = struct{}{}
		}
	}

	filtered := make([]string, 0, len(serials))
	for _, sn := range serials {
		key := strings.ToLower(sn)
		if _, ok := existing[key]; ok {
			continue
		}
		filtered = append(filtered, sn)
	}
	if len(filtered) == 0 {
		return
	}

	type target struct {
		index int
		rank  int
		slot  string
	}
	targets := make([]target, 0, len(hw.Storage))
	for i := range hw.Storage {
		dev := hw.Storage[i]
		if normalizeRedisValue(dev.SerialNumber) != "" {
			continue
		}
		if !dev.Present && strings.TrimSpace(dev.Slot) == "" {
			continue
		}
		rank := 0
		if !dev.Present {
			rank += 10
		}
		if strings.EqualFold(strings.TrimSpace(dev.Type), "NVMe") {
			rank += 5
		}
		if strings.TrimSpace(dev.Slot) == "" {
			rank += 4
		}
		targets = append(targets, target{
			index: i,
			rank:  rank,
			slot:  strings.ToLower(strings.TrimSpace(dev.Slot)),
		})
	}
	if len(targets) == 0 {
		return
	}

	sort.Slice(targets, func(i, j int) bool {
		if targets[i].rank != targets[j].rank {
			return targets[i].rank < targets[j].rank
		}
		return targets[i].slot < targets[j].slot
	})

	for i := 0; i < len(targets) && i < len(filtered); i++ {
		dev := &hw.Storage[targets[i].index]
		dev.SerialNumber = filtered[i]
		if !dev.Present {
			dev.Present = true
		}
	}
}
106 internal/parser/vendors/inspur/storage_serial_fallback_test.go vendored Normal file

@@ -0,0 +1,106 @@
package inspur

import (
	"strings"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestParseAssetJSON_HddSlotFallbackAndPresence(t *testing.T) {
	content := []byte(`{
  "HddInfo": [
    {
      "PresentBitmap": [1],
      "SerialNumber": "",
      "Manufacturer": "",
      "ModelName": "",
      "FirmwareVersion": "",
      "Capacity": 0,
      "Location": 2,
      "DiskInterfaceType": 5,
      "MediaType": 1,
      "LocationString": ""
    }
  ]
}`)

	hw, err := ParseAssetJSON(content)
	if err != nil {
		t.Fatalf("ParseAssetJSON failed: %v", err)
	}
	if len(hw.Storage) != 1 {
		t.Fatalf("expected 1 storage entry, got %d", len(hw.Storage))
	}
	if hw.Storage[0].Slot != "OB03" {
		t.Fatalf("expected OB03 slot fallback, got %q", hw.Storage[0].Slot)
	}
	if !hw.Storage[0].Present {
		t.Fatalf("expected fallback storage entry marked present")
	}
	if hw.Storage[0].Type != "NVMe" {
		t.Fatalf("expected NVMe type, got %q", hw.Storage[0].Type)
	}
}

func TestParseDiskBackplaneInfo_PopulatesOnlyMissingPresentDrives(t *testing.T) {
	text := `RESTful diskbackplane info:
[
  { "port_count": 8, "driver_count": 4, "front": 1, "backplane_index": 0, "present": 1, "cpld_version": "3.1", "temperature": 18 },
  { "port_count": 8, "driver_count": 3, "front": 1, "backplane_index": 1, "present": 1, "cpld_version": "3.1", "temperature": 17 }
]
BMC`

	hw := &models.HardwareConfig{
		Storage: []models.Storage{
			{Slot: "OB01", Type: "NVMe", Present: true},
			{Slot: "OB02", Type: "NVMe", Present: true},
			{Slot: "OB03", Type: "NVMe", Present: true},
			{Slot: "OB04", Type: "NVMe", Present: true},
		},
	}

	parseDiskBackplaneInfo(text, hw)

	if len(hw.Storage) != 7 {
		t.Fatalf("expected total storage count 7 after backplane merge, got %d", len(hw.Storage))
	}
	bpCount := 0
	for _, dev := range hw.Storage {
		if strings.HasPrefix(dev.Slot, "BP0:") || strings.HasPrefix(dev.Slot, "BP1:") {
			bpCount++
		}
	}
	if bpCount != 3 {
		t.Fatalf("expected 3 synthetic backplane rows, got %d", bpCount)
	}
}

func TestEnrichStorageFromSerialFallbackFiles_AssignsSerials(t *testing.T) {
	files := []parser.ExtractedFile{
		{
			Path: "onekeylog/configuration/conf/BpHDDSerialNumber.info",
			Content: []byte{
				0xA0, 0xA1, 0xA2, 0xA3,
				'S', '6', 'K', 'N', 'N', 'G', '0', 'W', '4', '2', '8', '5', '5', '2',
				0x00,
				'P', 'H', 'Y', 'I', '5', '2', '7', '1', '0', '0', '4', 'B', '1', 'P', '9', 'D', 'G', 'N',
			},
		},
	}

	hw := &models.HardwareConfig{
		Storage: []models.Storage{
			{Slot: "BP0:0", Type: "HDD", Present: true},
			{Slot: "BP0:1", Type: "HDD", Present: true},
			{Slot: "OB01", Type: "NVMe", Present: true},
		},
	}

	enrichStorageFromSerialFallbackFiles(files, hw)

	if hw.Storage[0].SerialNumber == "" || hw.Storage[1].SerialNumber == "" {
		t.Fatalf("expected serials assigned to present storage entries, got %#v", hw.Storage)
	}
}
@@ -112,7 +112,7 @@ func parseVerboseRunTestStartTimes(content []byte) map[string]time.Time {
		continue
	}

-	ts, err := time.ParseInLocation("2006-01-02 15:04:05", strings.TrimSpace(matches[1]), time.UTC)
+	ts, err := parser.ParseInDefaultArchiveLocation("2006-01-02 15:04:05", strings.TrimSpace(matches[1]))
	if err != nil {
		continue
	}
@@ -135,7 +135,7 @@ func parseRunLogTestStartTimes(content []byte) map[string]time.Time {
	if len(matches) != 2 {
		continue
	}
-	parsed, err := time.ParseInLocation("Mon, 02 Jan 2006 15:04:05", strings.TrimSpace(matches[1]), time.UTC)
+	parsed, err := parser.ParseInDefaultArchiveLocation("Mon, 02 Jan 2006 15:04:05", strings.TrimSpace(matches[1]))
	if err != nil {
		continue
	}
@@ -178,7 +178,7 @@ func parseModsStartTime(content []byte) time.Time {
	if tsRaw == "" {
		return time.Time{}
	}
-	ts, err := time.ParseInLocation("Mon Jan 2 15:04:05 2006", tsRaw, time.UTC)
+	ts, err := parser.ParseInDefaultArchiveLocation("Mon Jan 2 15:04:05 2006", tsRaw)
	if err != nil {
		return time.Time{}
	}
@@ -230,7 +230,7 @@ func ApplyGPUAndNVSwitchCheckTimes(result *models.AnalysisResult, times componen
	if ts.IsZero() {
		continue
	}
-	gpu.StatusCheckedAt = ts
+	gpu.StatusCheckedAt = &ts
	status := strings.TrimSpace(gpu.Status)
	if status == "" {
		status = "Unknown"
@@ -261,7 +261,7 @@ func ApplyGPUAndNVSwitchCheckTimes(result *models.AnalysisResult, times componen
		continue
	}

-	dev.StatusCheckedAt = ts
+	dev.StatusCheckedAt = &ts
	status := strings.TrimSpace(dev.Status)
	if status == "" {
		status = "Unknown"

@@ -23,10 +23,10 @@ func TestParseVerboseRunTestStartTimes(t *testing.T) {
	if gpu.IsZero() {
		t.Fatalf("expected gpu_fieldiag timestamp")
	}
-	if nvs.Format(time.RFC3339) != "2026-01-22T09:11:32Z" {
+	if nvs.UTC().Format(time.RFC3339) != "2026-01-22T06:11:32Z" {
		t.Fatalf("unexpected nvswitch timestamp: %s", nvs.Format(time.RFC3339))
	}
-	if gpu.Format(time.RFC3339) != "2026-01-22T09:45:36Z" {
+	if gpu.UTC().Format(time.RFC3339) != "2026-01-22T06:45:36Z" {
		t.Fatalf("unexpected gpu_fieldiag timestamp: %s", gpu.Format(time.RFC3339))
	}
}
@@ -40,13 +40,13 @@ Testing nvswitch OK [ 9:25s ]
`)

	got := parseRunLogTestStartTimes(content)
-	if got["gpumem"].Format(time.RFC3339) != "2026-01-22T07:42:26Z" {
+	if got["gpumem"].UTC().Format(time.RFC3339) != "2026-01-22T04:42:26Z" {
		t.Fatalf("unexpected gpumem start: %s", got["gpumem"].Format(time.RFC3339))
	}
-	if got["gpustress"].Format(time.RFC3339) != "2026-01-22T08:08:38Z" {
+	if got["gpustress"].UTC().Format(time.RFC3339) != "2026-01-22T05:08:38Z" {
		t.Fatalf("unexpected gpustress start: %s", got["gpustress"].Format(time.RFC3339))
	}
-	if got["nvswitch"].Format(time.RFC3339) != "2026-01-22T08:15:48Z" {
+	if got["nvswitch"].UTC().Format(time.RFC3339) != "2026-01-22T05:15:48Z" {
		t.Fatalf("unexpected nvswitch start: %s", got["nvswitch"].Format(time.RFC3339))
	}
}
@@ -72,20 +72,20 @@ func TestApplyGPUAndNVSwitchCheckTimes(t *testing.T) {
	NVSwitchBySlot: map[string]time.Time{"NVSWITCH0": nvsTs},
})

-	if got := result.Hardware.GPUs[0].StatusCheckedAt; !got.Equal(gpuTs) {
-		t.Fatalf("expected gpu status_checked_at %s, got %s", gpuTs.Format(time.RFC3339), got.Format(time.RFC3339))
+	if got := result.Hardware.GPUs[0].StatusCheckedAt; got == nil || !got.Equal(gpuTs) {
+		t.Fatalf("expected gpu status_checked_at %s, got %v", gpuTs.Format(time.RFC3339), got)
	}
	if result.Hardware.GPUs[0].StatusAtCollect == nil || !result.Hardware.GPUs[0].StatusAtCollect.At.Equal(gpuTs) {
		t.Fatalf("expected gpu status_at_collection.at %s", gpuTs.Format(time.RFC3339))
	}
-	if got := result.Hardware.PCIeDevices[0].StatusCheckedAt; !got.Equal(nvsTs) {
-		t.Fatalf("expected nvswitch status_checked_at %s, got %s", nvsTs.Format(time.RFC3339), got.Format(time.RFC3339))
+	if got := result.Hardware.PCIeDevices[0].StatusCheckedAt; got == nil || !got.Equal(nvsTs) {
+		t.Fatalf("expected nvswitch status_checked_at %s, got %v", nvsTs.Format(time.RFC3339), got)
	}
	if result.Hardware.PCIeDevices[0].StatusAtCollect == nil || !result.Hardware.PCIeDevices[0].StatusAtCollect.At.Equal(nvsTs) {
		t.Fatalf("expected nvswitch status_at_collection.at %s", nvsTs.Format(time.RFC3339))
	}
-	if !result.Hardware.PCIeDevices[1].StatusCheckedAt.IsZero() {
-		t.Fatalf("expected non-nvswitch device status_checked_at to stay zero")
+	if result.Hardware.PCIeDevices[1].StatusCheckedAt != nil {
+		t.Fatalf("expected non-nvswitch device status_checked_at to stay nil")
	}
}

@@ -101,10 +101,10 @@ func TestCollectGPUAndNVSwitchCheckTimes_FromVerboseRun(t *testing.T) {
	}

	got := CollectGPUAndNVSwitchCheckTimes(files)
-	if got.GPUDefault.Format(time.RFC3339) != "2026-01-22T09:45:36Z" {
+	if got.GPUDefault.UTC().Format(time.RFC3339) != "2026-01-22T06:45:36Z" {
		t.Fatalf("unexpected GPU check time: %s", got.GPUDefault.Format(time.RFC3339))
	}
-	if got.NVSwitchDefault.Format(time.RFC3339) != "2026-01-22T09:11:32Z" {
+	if got.NVSwitchDefault.UTC().Format(time.RFC3339) != "2026-01-22T06:11:32Z" {
		t.Fatalf("unexpected NVSwitch check time: %s", got.NVSwitchDefault.Format(time.RFC3339))
	}
}
@@ -128,16 +128,16 @@ MODS start: Thu Jan 22 09:11:32 2026
	}

	got := CollectGPUAndNVSwitchCheckTimes(files)
-	if got.GPUBySerial["1653925025497"].Format(time.RFC3339) != "2026-01-22T09:45:36Z" {
+	if got.GPUBySerial["1653925025497"].UTC().Format(time.RFC3339) != "2026-01-22T06:45:36Z" {
		t.Fatalf("unexpected GPU serial check time: %s", got.GPUBySerial["1653925025497"].Format(time.RFC3339))
	}
-	if got.GPUBySlot["GPUSXM5"].Format(time.RFC3339) != "2026-01-22T09:45:36Z" {
+	if got.GPUBySlot["GPUSXM5"].UTC().Format(time.RFC3339) != "2026-01-22T06:45:36Z" {
		t.Fatalf("unexpected GPU slot check time: %s", got.GPUBySlot["GPUSXM5"].Format(time.RFC3339))
	}
-	if got.NVSwitchBySlot["NVSWITCH0"].Format(time.RFC3339) != "2026-01-22T09:11:32Z" {
+	if got.NVSwitchBySlot["NVSWITCH0"].UTC().Format(time.RFC3339) != "2026-01-22T06:11:32Z" {
		t.Fatalf("unexpected NVSwitch0 check time: %s", got.NVSwitchBySlot["NVSWITCH0"].Format(time.RFC3339))
	}
-	if got.NVSwitchBySlot["NVSWITCH3"].Format(time.RFC3339) != "2026-01-22T09:11:32Z" {
+	if got.NVSwitchBySlot["NVSWITCH3"].UTC().Format(time.RFC3339) != "2026-01-22T06:11:32Z" {
		t.Fatalf("unexpected NVSwitch3 check time: %s", got.NVSwitchBySlot["NVSWITCH3"].Format(time.RFC3339))
	}
}
2 internal/parser/vendors/nvidia/parser.go vendored

@@ -14,7 +14,7 @@ import (

// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
-const parserVersion = "1.3.0"
+const parserVersion = "1.4"

func init() {
	parser.Register(&Parser{})

@@ -235,8 +235,8 @@ func TestNVIDIAParser_ComponentCheckTimes_RealArchive07900(t *testing.T) {
		t.Fatalf("expected hardware in parsed result")
	}

-	expectedGPU := time.Date(2026, 1, 22, 9, 45, 36, 0, time.UTC)
-	expectedNVSwitch := time.Date(2026, 1, 22, 9, 11, 32, 0, time.UTC)
+	expectedGPU := time.Date(2026, 1, 22, 6, 45, 36, 0, time.UTC)
+	expectedNVSwitch := time.Date(2026, 1, 22, 6, 11, 32, 0, time.UTC)

	if len(result.Hardware.GPUs) == 0 {
		t.Fatalf("expected GPUs in parsed result")
@@ -3,14 +3,33 @@
package nvidia_bug_report

import (
	"fmt"
	"regexp"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

// parserVersion - version of this parser module
-const parserVersion = "1.0.0"
+const parserVersion = "1.2"

var bugReportDateLineRegex = regexp.MustCompile(`(?m)^Date:\s+(.+?)\s*$`)
var dateWithTZAbbrevRegex = regexp.MustCompile(`^([A-Za-z]{3}\s+[A-Za-z]{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2})\s+([A-Za-z]{2,5})\s+(\d{4})$`)

var timezoneAbbrevToOffset = map[string]string{
	"UTC": "+00:00",
	"GMT": "+00:00",
	"EST": "-05:00",
	"EDT": "-04:00",
	"CST": "-06:00",
	"CDT": "-05:00",
	"MST": "-07:00",
	"MDT": "-06:00",
	"PST": "-08:00",
	"PDT": "-07:00",
}

func init() {
	parser.Register(&Parser{})
@@ -81,6 +100,10 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
	}

	content := string(files[0].Content)
	if collectedAt, tzOffset, ok := parseBugReportCollectedAt(content); ok {
		result.CollectedAt = collectedAt.UTC()
		result.SourceTimezone = tzOffset
	}

	// Parse system information
	parseSystemInfo(content, result)
@@ -105,3 +128,49 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er

	return result, nil
}

func parseBugReportCollectedAt(content string) (time.Time, string, bool) {
	matches := bugReportDateLineRegex.FindStringSubmatch(content)
	if len(matches) != 2 {
		return time.Time{}, "", false
	}
	raw := strings.TrimSpace(matches[1])
	if raw == "" {
		return time.Time{}, "", false
	}

	if m := dateWithTZAbbrevRegex.FindStringSubmatch(raw); len(m) == 4 {
		if offset, ok := timezoneAbbrevToOffset[strings.ToUpper(strings.TrimSpace(m[2]))]; ok {
			layout := "Mon Jan 2 15:04:05 -07:00 2006"
			normalized := strings.TrimSpace(m[1]) + " " + offset + " " + strings.TrimSpace(m[3])
			if ts, err := time.Parse(layout, normalized); err == nil {
				return ts, offset, true
			}
		}
	}

	layouts := []string{
		"Mon Jan 2 15:04:05 MST 2006",
		"Mon Jan 2 15:04:05 2006",
	}
	for _, layout := range layouts {
		ts, err := time.Parse(layout, raw)
		if err != nil {
			continue
		}
		return ts, formatOffset(ts), true
	}
	return time.Time{}, "", false
}

func formatOffset(t time.Time) string {
	_, sec := t.Zone()
	sign := '+'
	if sec < 0 {
		sign = '-'
		sec = -sec
	}
	h := sec / 3600
	m := (sec % 3600) / 60
	return fmt.Sprintf("%c%02d:%02d", sign, h, m)
}
54 internal/parser/vendors/nvidia_bug_report/parser_test.go vendored Normal file

@@ -0,0 +1,54 @@
package nvidia_bug_report

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestParseBugReportCollectedAt(t *testing.T) {
	content := `
Start of NVIDIA bug report log file
Date: Fri Dec 12 10:14:49 EST 2025
`

	got, tz, ok := parseBugReportCollectedAt(content)
	if !ok {
		t.Fatalf("expected collected_at to be parsed")
	}
	if tz != "-05:00" {
		t.Fatalf("expected tz offset -05:00, got %q", tz)
	}
	wantUTC := time.Date(2025, 12, 12, 15, 14, 49, 0, time.UTC)
	if !got.UTC().Equal(wantUTC) {
		t.Fatalf("expected %s, got %s", wantUTC, got.UTC())
	}
}

func TestNvidiaBugReportParser_SetsCollectedAtAndTimezone(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path: "nvidia-bug-report-1653925023938.log",
			Content: []byte(`
Start of NVIDIA bug report log file
nvidia-bug-report.sh Version: 34275561
Date: Fri Dec 12 10:14:49 EST 2025
`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}

	if result.SourceTimezone != "-05:00" {
		t.Fatalf("expected source timezone -05:00, got %q", result.SourceTimezone)
	}
	wantUTC := time.Date(2025, 12, 12, 15, 14, 49, 0, time.UTC)
	if !result.CollectedAt.Equal(wantUTC) {
		t.Fatalf("expected collected_at %s, got %s", wantUTC, result.CollectedAt)
	}
}
261
internal/parser/vendors/supermicro/crashdump.go
vendored
261
internal/parser/vendors/supermicro/crashdump.go
vendored
@@ -1,261 +0,0 @@
|
||||
package supermicro
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"git.mchus.pro/mchus/logpile/internal/models"
|
||||
)
|
||||
|
||||
// CrashDumpData represents the structure of CDump.txt
|
||||
type CrashDumpData struct {
|
||||
CrashData struct {
|
||||
METADATA Metadata `json:"METADATA"`
|
||||
PROCESSORS ProcessorsData `json:"PROCESSORS"`
|
||||
} `json:"crash_data"`
|
||||
}
|
||||
|
||||
// ProcessorsData contains processor crash data
|
||||
type ProcessorsData struct {
|
||||
Version string `json:"_version"`
|
||||
CPU0 Processors `json:"cpu0"`
|
||||
CPU1 Processors `json:"cpu1"`
|
||||
}
|
||||
|
||||
// Metadata contains crashdump metadata
|
||||
type Metadata struct {
|
||||
CPU0 CPUMetadata `json:"cpu0"`
|
||||
CPU1 CPUMetadata `json:"cpu1"`
|
||||
BMCFWVer string `json:"bmc_fw_ver"`
|
||||
BIOSId string `json:"bios_id"`
|
||||
MEFWVer string `json:"me_fw_ver"`
|
||||
Timestamp string `json:"timestamp"`
|
||||
TriggerType string `json:"trigger_type"`
|
||||
PlatformName string `json:"platform_name"`
|
||||
CrashdumpVer string `json:"crashdump_ver"`
|
||||
ResetDetected string `json:"_reset_detected"`
|
||||
}
|
||||
|
||||
// CPUMetadata contains CPU metadata
|
||||
type CPUMetadata struct {
|
||||
CPUID string `json:"cpuid"`
|
||||
CoreMask string `json:"core_mask"`
|
||||
CHACount string `json:"cha_count"`
|
||||
CoreCount string `json:"core_count"`
|
||||
PPIN string `json:"ppin"`
|
||||
UcodePatchVer string `json:"ucode_patch_ver"`
|
||||
}
|
||||
|
||||
// Processors contains processor crash data
|
||||
type Processors struct {
|
||||
MCA MCAData `json:"MCA"`
|
||||
}
|
||||
|
||||
// MCAData contains Machine Check Architecture data
|
||||
type MCAData struct {
|
||||
Uncore map[string]interface{} `json:"uncore"`
|
||||
}
|
||||
|
||||
// ParseCrashDump parses CDump.txt file
|
||||
func ParseCrashDump(content []byte, result *models.AnalysisResult) error {
|
||||
var data CrashDumpData
|
||||
if err := json.Unmarshal(content, &data); err != nil {
|
||||
return fmt.Errorf("failed to parse CDump.txt: %w", err)
|
||||
}
|
||||
|
||||
// Initialize Hardware.Firmware slice if nil
|
||||
if result.Hardware.Firmware == nil {
|
||||
result.Hardware.Firmware = make([]models.FirmwareInfo, 0)
|
||||
}
|
||||
|
||||
// Parse metadata
|
||||
parseMetadata(&data.CrashData.METADATA, result)
|
||||
|
||||
// Parse CPU information
|
||||
parseCPUInfo(&data.CrashData.METADATA, result)
|
||||
|
||||
// Parse MCA errors
|
||||
parseMCAErrors(&data.CrashData, result)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// parseMetadata extracts metadata information
|
||||
func parseMetadata(metadata *Metadata, result *models.AnalysisResult) {
|
||||
// Store firmware versions in HardwareConfig.Firmware
|
||||
if metadata.BMCFWVer != "" {
|
||||
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
|
||||
DeviceName: "BMC",
|
||||
Version: metadata.BMCFWVer,
|
||||
})
|
||||
}
|
||||
|
||||
if metadata.BIOSId != "" {
|
||||
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
|
||||
DeviceName: "BIOS",
|
||||
Version: metadata.BIOSId,
|
||||
})
|
||||
}
|
||||
|
||||
if metadata.MEFWVer != "" {
|
||||
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
|
||||
DeviceName: "ME",
|
||||
Version: metadata.MEFWVer,
|
||||
})
|
||||
}
|
||||
|
||||
// Create event for crashdump trigger
|
||||
timestamp := time.Now()
|
||||
if metadata.Timestamp != "" {
|
||||
if t, err := time.Parse(time.RFC3339, metadata.Timestamp); err == nil {
|
||||
timestamp = t
|
||||
}
|
||||
}
|
||||
|
||||
triggerType := metadata.TriggerType
|
||||
if triggerType == "" {
|
||||
triggerType = "Unknown"
|
||||
}
|
||||
|
||||
severity := models.SeverityInfo
|
||||
if metadata.ResetDetected != "" && metadata.ResetDetected != "NONE" {
|
||||
severity = models.SeverityWarning
|
||||
}
|
||||
|
||||
result.Events = append(result.Events, models.Event{
|
||||
Timestamp: timestamp,
|
||||
Source: "Crashdump",
|
||||
EventType: "System Crashdump",
|
||||
Description: fmt.Sprintf("Crashdump collected (%s)", triggerType),
|
||||
Severity: severity,
|
||||
RawData: fmt.Sprintf("Version: %s, Reset: %s", metadata.CrashdumpVer, metadata.ResetDetected),
|
||||
})
|
||||
}
|
||||
|
||||
// parseCPUInfo extracts CPU information
|
||||
func parseCPUInfo(metadata *Metadata, result *models.AnalysisResult) {
|
||||
cpus := []struct {
|
||||
socket int
|
||||
data CPUMetadata
|
||||
}{
|
||||
{0, metadata.CPU0},
|
||||
{1, metadata.CPU1},
|
||||
}
|
||||
|
||||
for _, cpu := range cpus {
|
||||
if cpu.data.CPUID == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Parse core count
|
||||
coreCount := 0
|
||||
if cpu.data.CoreCount != "" {
|
||||
if count, err := strconv.ParseInt(strings.TrimPrefix(cpu.data.CoreCount, "0x"), 16, 64); err == nil {
|
||||
coreCount = int(count)
|
||||
}
|
||||
}
|
||||
|
||||
cpuModel := models.CPU{
|
||||
Socket: cpu.socket,
|
||||
			Model: fmt.Sprintf("Intel CPU (CPUID: %s)", cpu.data.CPUID),
			Cores: coreCount,
		}

		// Add PPIN
		if cpu.data.PPIN != "" && cpu.data.PPIN != "0x0" {
			cpuModel.PPIN = cpu.data.PPIN
		}

		result.Hardware.CPUs = append(result.Hardware.CPUs, cpuModel)

		// Add microcode version to firmware list
		if cpu.data.UcodePatchVer != "" {
			result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
				DeviceName: fmt.Sprintf("CPU%d Microcode", cpu.socket),
				Version:    cpu.data.UcodePatchVer,
			})
		}
	}
}

// parseMCAErrors extracts Machine Check Architecture errors
func parseMCAErrors(crashData *struct {
	METADATA   Metadata       `json:"METADATA"`
	PROCESSORS ProcessorsData `json:"PROCESSORS"`
}, result *models.AnalysisResult) {
	timestamp := time.Now()
	if crashData.METADATA.Timestamp != "" {
		if t, err := time.Parse(time.RFC3339, crashData.METADATA.Timestamp); err == nil {
			timestamp = t
		}
	}

	// Parse each CPU's MCA data
	cpuProcs := []struct {
		name string
		data Processors
	}{
		{"cpu0", crashData.PROCESSORS.CPU0},
		{"cpu1", crashData.PROCESSORS.CPU1},
	}

	for _, cpu := range cpuProcs {
		if cpu.data.MCA.Uncore == nil {
			continue
		}

		// Check each MCA bank for errors
		for bankName, bankDataRaw := range cpu.data.MCA.Uncore {
			bankData, ok := bankDataRaw.(map[string]interface{})
			if !ok {
				continue
			}

			// Look for status register
			statusKey := strings.ToLower(bankName) + "_status"
			statusRaw, ok := bankData[statusKey]
			if !ok {
				continue
			}

			statusStr, ok := statusRaw.(string)
			if !ok {
				continue
			}

			// Parse status value
			status, err := strconv.ParseUint(strings.TrimPrefix(statusStr, "0x"), 16, 64)
			if err != nil {
				continue
			}

			// Check if MCA error is valid (bit 63 = Valid)
			if status&(1<<63) != 0 {
				// MCA error detected
				severity := models.SeverityWarning
				if status&(1<<61) != 0 { // UC bit = uncorrected error
					severity = models.SeverityCritical
				}

				description := fmt.Sprintf("MCA Error in %s bank %s", cpu.name, bankName)
				if status&(1<<61) != 0 {
					description += " (Uncorrected)"
				} else {
					description += " (Corrected)"
				}

				result.Events = append(result.Events, models.Event{
					Timestamp:   timestamp,
					Source:      "MCA",
					EventType:   "Machine Check",
					Description: description,
					Severity:    severity,
					RawData:     fmt.Sprintf("Status: %s, CPU: %s, Bank: %s", statusStr, cpu.name, bankName),
				})
			}
		}
	}
}
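The bank-status checks above follow the IA32_MCi_STATUS register layout: bit 63 (VAL) means the bank holds a logged error, bit 61 (UC) means it was uncorrected. A standalone sketch of that decoding — the hex status values in `main` are made-up examples, not real dump data:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeMCAStatus mirrors the parser's bit tests: bit 63 (VAL) marks a
// logged error, bit 61 (UC) marks it as uncorrected. Input uses the
// "0x..." string form found in the crashdump JSON.
func decodeMCAStatus(statusStr string) (valid, uncorrected bool, err error) {
	status, err := strconv.ParseUint(strings.TrimPrefix(statusStr, "0x"), 16, 64)
	if err != nil {
		return false, false, err
	}
	return status&(1<<63) != 0, status&(1<<61) != 0, nil
}

func main() {
	// Example values: VAL+UC set, VAL only, and an empty bank.
	for _, s := range []string{"0xBE00000000800400", "0x9000000000000000", "0x0"} {
		v, uc, err := decodeMCAStatus(s)
		fmt.Println(s, v, uc, err == nil)
	}
}
```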
internal/parser/vendors/supermicro/parser.go (vendored, 98 lines)
@@ -1,98 +0,0 @@
// Package supermicro provides parser for Supermicro BMC crashdump archives
// Tested with: Supermicro SYS-821GE-TNHR (Crashdump format)
//
// IMPORTANT: Increment parserVersion when modifying parser logic!
// This helps track which version was used to parse specific logs.
package supermicro

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

// parserVersion - version of this parser module
// IMPORTANT: Increment this version when making changes to parser logic!
const parserVersion = "1.0.0"

func init() {
	parser.Register(&Parser{})
}

// Parser implements VendorParser for Supermicro servers
type Parser struct{}

// Name returns human-readable parser name
func (p *Parser) Name() string {
	return "SMC Crash Dump Parser"
}

// Vendor returns vendor identifier
func (p *Parser) Vendor() string {
	return "supermicro"
}

// Version returns parser version
// IMPORTANT: Update parserVersion constant when modifying parser logic!
func (p *Parser) Version() string {
	return parserVersion
}

// Detect checks if archive matches Supermicro crashdump format
// Returns confidence 0-100
func (p *Parser) Detect(files []parser.ExtractedFile) int {
	confidence := 0

	for _, f := range files {
		path := strings.ToLower(f.Path)

		// Strong indicator for Supermicro Crashdump format
		if strings.HasSuffix(path, "cdump.txt") {
			// Check if it's really Supermicro crashdump format
			if containsCrashdumpMarkers(f.Content) {
				confidence += 80
			}
		}

		// Cap at 100
		if confidence >= 100 {
			return 100
		}
	}

	return confidence
}

// containsCrashdumpMarkers checks if content has Supermicro crashdump markers
func containsCrashdumpMarkers(content []byte) bool {
	s := string(content)
	// Check for typical Supermicro Crashdump structure
	return strings.Contains(s, "crash_data") &&
		strings.Contains(s, "METADATA") &&
		(strings.Contains(s, "bmc_fw_ver") || strings.Contains(s, "crashdump_ver"))
}

// Parse parses Supermicro crashdump archive
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	result := &models.AnalysisResult{
		Events:  make([]models.Event, 0),
		FRU:     make([]models.FRUInfo, 0),
		Sensors: make([]models.SensorReading, 0),
	}

	// Initialize hardware config
	result.Hardware = &models.HardwareConfig{
		CPUs: make([]models.CPU, 0),
	}

	// Parse CDump.txt (JSON crashdump)
	if f := parser.FindFileByName(files, "CDump.txt"); f != nil {
		if err := ParseCrashDump(f.Content, result); err != nil {
			// Log error but continue parsing other files
			_ = err // Ignore error for now
		}
	}

	return result, nil
}
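Each vendor parser registers itself and reports a 0-100 `Detect` confidence; the registry then presumably picks the highest-scoring parser for an archive. A minimal single-file sketch of that selection, using stand-in types (the real `parser.Register`, `VendorParser`, and `ExtractedFile` live in `internal/parser` and are not shown in this diff):

```go
package main

import "fmt"

// Stand-ins for the real internal/parser types.
type ExtractedFile struct {
	Path    string
	Content []byte
}

type VendorParser interface {
	Name() string
	Detect(files []ExtractedFile) int // confidence 0-100
}

var registry []VendorParser

func Register(p VendorParser) { registry = append(registry, p) }

// Best returns the registered parser with the highest Detect confidence.
func Best(files []ExtractedFile) (VendorParser, int) {
	var best VendorParser
	bestScore := -1
	for _, p := range registry {
		if s := p.Detect(files); s > bestScore {
			best, bestScore = p, s
		}
	}
	return best, bestScore
}

// fixed is a toy parser that always reports the same confidence.
type fixed struct {
	name  string
	score int
}

func (f fixed) Name() string               { return f.name }
func (f fixed) Detect([]ExtractedFile) int { return f.score }

func main() {
	Register(fixed{"generic", 10})
	Register(fixed{"supermicro", 80})
	p, score := Best(nil)
	fmt.Println(p.Name(), score)
}
```

This is why the generic fallback only needs to report a low confidence rather than occupy a special slot: any vendor match with a higher score wins.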
internal/parser/vendors/unraid/parser.go (vendored, 450 lines)
@@ -10,10 +10,11 @@ import (

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
	"git.mchus.pro/mchus/logpile/internal/parser/vendors/pciids"
)

// parserVersion - increment when parsing logic changes.
-const parserVersion = "1.0.0"
+const parserVersion = "1.2"

func init() {
	parser.Register(&Parser{})

@@ -97,6 +98,10 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er

	// Track storage by slot to avoid duplicates
	storageBySlot := make(map[string]*models.Storage)
	hasDetailedMemory := false
	ethtoolByIface := make(map[string]ethtoolInfo)
	ethtoolByBDF := make(map[string]ethtoolInfo)
	ifconfigByIface := make(map[string]ifconfigInfo)

	// Parse different file types
	for _, f := range files {

@@ -116,8 +121,23 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er

		case strings.HasSuffix(path, "/system/memory.txt") || strings.HasSuffix(path, "\\system\\memory.txt"):
			parseMemory(content, result)

		case strings.HasSuffix(path, "/system/meminfo.txt") || strings.HasSuffix(path, "\\system\\meminfo.txt"):
			if parseMemoryDIMMs(content, result) > 0 {
				hasDetailedMemory = true
			}

		case strings.HasSuffix(path, "/system/ifconfig.txt") || strings.HasSuffix(path, "\\system\\ifconfig.txt"):
			parseIfconfig(content, ifconfigByIface)

		case strings.HasSuffix(path, "/system/ethtool.txt") || strings.HasSuffix(path, "\\system\\ethtool.txt"):
			parseEthtool(content, ethtoolByIface, ethtoolByBDF)

		case strings.HasSuffix(path, "/system/lspci.txt") || strings.HasSuffix(path, "\\system\\lspci.txt"):
			parseLSPCI(content, ifconfigByIface, ethtoolByIface, ethtoolByBDF, result)

		case strings.HasSuffix(path, "/system/vars.txt") || strings.HasSuffix(path, "\\system\\vars.txt"):
			parseVarsToMap(content, storageBySlot, result)
			parseHostIdentityFromVars(content, result)

		case strings.Contains(path, "/smart/") && strings.HasSuffix(path, ".txt"):
			parseSMARTFileToMap(content, f.Path, storageBySlot, result)

@@ -127,6 +147,17 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er

		}
	}

	if hasDetailedMemory {
		filtered := make([]models.MemoryDIMM, 0, len(result.Hardware.Memory))
		for _, dimm := range result.Hardware.Memory {
			if strings.EqualFold(strings.TrimSpace(dimm.Slot), "system") {
				continue
			}
			filtered = append(filtered, dimm)
		}
		result.Hardware.Memory = filtered
	}

	// Convert storage map to slice
	for _, disk := range storageBySlot {
		result.Hardware.Storage = append(result.Hardware.Storage, *disk)

@@ -238,19 +269,19 @@ func parseMotherboard(content string, result *models.AnalysisResult) {

func parseMemory(content string, result *models.AnalysisResult) {
	// Parse memory from free output
	// Example: Mem: 50Gi 11Gi 1.4Gi 565Mi 39Gi 39Gi
-	if m := regexp.MustCompile(`(?m)^Mem:\s+(\d+(?:\.\d+)?)(Ki|Mi|Gi|Ti)`).FindStringSubmatch(content); len(m) >= 3 {
+	if m := regexp.MustCompile(`(?m)^Mem:\s+(\d+(?:\.\d+)?)(Ki|Mi|Gi|Ti|KB|MB|GB|TB)`).FindStringSubmatch(content); len(m) >= 3 {
		size := parseFloat(m[1])
-		unit := m[2]
+		unit := strings.ToUpper(m[2])

		var sizeMB int
		switch unit {
-		case "Ki":
+		case "KI", "KB":
			sizeMB = int(size / 1024)
-		case "Mi":
+		case "MI", "MB":
			sizeMB = int(size)
-		case "Gi":
+		case "GI", "GB":
			sizeMB = int(size * 1024)
-		case "Ti":
+		case "TI", "TB":
			sizeMB = int(size * 1024 * 1024)
		}

@@ -266,6 +297,358 @@ func parseMemory(content string, result *models.AnalysisResult) {

	}
}

func parseMemoryDIMMs(content string, result *models.AnalysisResult) int {
	blocks := strings.Split(content, "Handle ")
	added := 0
	for _, block := range blocks {
		b := strings.TrimSpace(block)
		if b == "" || !strings.Contains(b, "DMI type 17") || !strings.Contains(b, "Memory Device") {
			continue
		}

		sizeRaw := extractFieldValue(b, "Size:")
		if sizeRaw == "" || strings.Contains(strings.ToLower(sizeRaw), "no module installed") {
			continue
		}

		sizeMB := parseDIMMSizeMB(sizeRaw)
		if sizeMB <= 0 {
			continue
		}

		slot := extractFieldValue(b, "Locator:")
		if slot == "" {
			slot = extractFieldValue(b, "Bank Locator:")
		}
		if slot == "" {
			slot = "dimm"
		}

		dimm := models.MemoryDIMM{
			Slot:            slot,
			Location:        extractFieldValue(b, "Bank Locator:"),
			Present:         true,
			SizeMB:          sizeMB,
			Type:            extractFieldValue(b, "Type:"),
			MaxSpeedMHz:     parseSpeedMTs(extractFieldValue(b, "Speed:")),
			CurrentSpeedMHz: parseSpeedMTs(extractFieldValue(b, "Configured Memory Speed:")),
			Manufacturer:    strings.TrimSpace(extractFieldValue(b, "Manufacturer:")),
			SerialNumber:    strings.TrimSpace(extractFieldValue(b, "Serial Number:")),
			PartNumber:      strings.TrimSpace(extractFieldValue(b, "Part Number:")),
			Ranks:           parseInt(extractFieldValue(b, "Rank:")),
			Status:          "ok",
		}
		if dimm.Type == "" || strings.EqualFold(dimm.Type, "Unknown") {
			dimm.Type = "DRAM"
		}
		if dimm.CurrentSpeedMHz == 0 {
			dimm.CurrentSpeedMHz = dimm.MaxSpeedMHz
		}

		result.Hardware.Memory = append(result.Hardware.Memory, dimm)
		added++
	}
	return added
}

func extractFieldValue(block, key string) string {
	for _, line := range strings.Split(block, "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, key) {
			return strings.TrimSpace(strings.TrimPrefix(line, key))
		}
	}
	return ""
}

func parseDIMMSizeMB(s string) int {
	s = strings.TrimSpace(strings.ToUpper(s))
	if s == "" {
		return 0
	}
	parts := strings.Fields(s)
	if len(parts) < 2 {
		return 0
	}
	v := parseFloat(parts[0])
	switch parts[1] {
	case "KB", "KIB":
		return int(v / 1024)
	case "MB", "MIB":
		return int(v)
	case "GB", "GIB":
		return int(v * 1024)
	case "TB", "TIB":
		return int(v * 1024 * 1024)
	default:
		return 0
	}
}

func parseSpeedMTs(s string) int {
	s = strings.TrimSpace(strings.ToUpper(s))
	if s == "" {
		return 0
	}
	re := regexp.MustCompile(`(\d+)\s*MT/S`)
	if m := re.FindStringSubmatch(s); len(m) == 2 {
		return parseInt(m[1])
	}
	return 0
}

type ethtoolInfo struct {
	Interface string
	BusInfo   string
	Driver    string
	Firmware  string
	SpeedMbps int
	LinkUp    bool
}

type ifconfigInfo struct {
	Interface string
	State     string
	Addresses []string
}

func parseIfconfig(content string, out map[string]ifconfigInfo) {
	lines := strings.Split(strings.ReplaceAll(content, "\r\n", "\n"), "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		iface := strings.Split(fields[0], "@")[0]
		if iface == "" || strings.HasPrefix(iface, "lo") || strings.HasPrefix(iface, "docker") || strings.HasPrefix(iface, "veth") {
			continue
		}
		state := fields[1]
		addrs := make([]string, 0, 2)
		for _, f := range fields[2:] {
			if strings.Contains(f, ".") || strings.Contains(f, ":") {
				addrs = append(addrs, f)
			}
		}
		out[iface] = ifconfigInfo{
			Interface: iface,
			State:     state,
			Addresses: addrs,
		}
	}
}

func parseEthtool(content string, byIface, byBDF map[string]ethtoolInfo) {
	sections := strings.Split(content, "--------------------------------")
	for _, sec := range sections {
		s := strings.TrimSpace(sec)
		if s == "" {
			continue
		}
		var info ethtoolInfo
		for _, line := range strings.Split(s, "\n") {
			t := strings.TrimSpace(line)
			switch {
			case strings.HasPrefix(t, "Settings for "):
				info.Interface = strings.TrimSuffix(strings.TrimPrefix(t, "Settings for "), ":")
			case strings.HasPrefix(t, "driver:"):
				info.Driver = strings.TrimSpace(strings.TrimPrefix(t, "driver:"))
			case strings.HasPrefix(t, "firmware-version:"):
				info.Firmware = strings.TrimSpace(strings.TrimPrefix(t, "firmware-version:"))
			case strings.HasPrefix(t, "bus-info:"):
				info.BusInfo = normalizeBDF(strings.TrimSpace(strings.TrimPrefix(t, "bus-info:")))
			case strings.HasPrefix(t, "Speed:"):
				info.SpeedMbps = parseSpeedMbps(strings.TrimSpace(strings.TrimPrefix(t, "Speed:")))
			case strings.HasPrefix(t, "Link detected:"):
				info.LinkUp = strings.EqualFold(strings.TrimSpace(strings.TrimPrefix(t, "Link detected:")), "yes")
			}
		}
		if info.Interface != "" {
			byIface[info.Interface] = info
		}
		if info.BusInfo != "" {
			byBDF[info.BusInfo] = info
		}
	}
}

func parseLSPCI(
	content string,
	iface map[string]ifconfigInfo,
	ethtoolByIface map[string]ethtoolInfo,
	ethtoolByBDF map[string]ethtoolInfo,
	result *models.AnalysisResult,
) {
	lines := strings.Split(strings.ReplaceAll(content, "\r\n", "\n"), "\n")
	lspciLineRe := regexp.MustCompile(`^([0-9a-fA-F:.]+)\s+(.+?)\s+\[[0-9a-fA-F]{4}\]:\s+(.+?)\s+\[([0-9a-fA-F]{4}):([0-9a-fA-F]{4})\]`)

	hasPCIe := make(map[string]bool)
	hasAdapter := make(map[string]bool)

	for _, line := range lines {
		m := lspciLineRe.FindStringSubmatch(strings.TrimSpace(line))
		if len(m) != 6 {
			continue
		}

		bdf := normalizeBDF(m[1])
		class := strings.TrimSpace(m[2])
		desc := strings.TrimSpace(m[3])
		vendorID := parseHexID(m[4])
		deviceID := parseHexID(m[5])

		if bdf == "" {
			continue
		}

		if isInterestingPCIClass(class) && !hasPCIe[bdf] {
			vendor := pciids.VendorName(vendorID)
			result.Hardware.PCIeDevices = append(result.Hardware.PCIeDevices, models.PCIeDevice{
				Slot:         bdf,
				BDF:          bdf,
				DeviceClass:  class,
				Description:  desc,
				VendorID:     vendorID,
				DeviceID:     deviceID,
				Manufacturer: vendor,
				Status:       "ok",
			})
			hasPCIe[bdf] = true
		}

		if !isNICClass(class) || hasAdapter[bdf] {
			continue
		}

		etInfo := ethtoolByBDF[bdf]
		if etInfo.Interface == "" {
			for _, it := range ethtoolByIface {
				if normalizeBDF(it.BusInfo) == bdf {
					etInfo = it
					break
				}
			}
		}
		if etInfo.Driver == "bonding" {
			continue
		}

		model := desc
		if devName := pciids.DeviceName(vendorID, deviceID); devName != "" {
			model = devName
		}
		vendor := pciids.VendorName(vendorID)
		if vendor == "" {
			vendor = firstWords(desc, 2)
		}

		slot := etInfo.Interface
		if slot == "" {
			slot = bdf
		}
		status := "ok"
		if etInfo.Interface != "" && !etInfo.LinkUp {
			status = "warning"
		} else if etInfo.Interface != "" {
			if ifInfo, ok := iface[etInfo.Interface]; ok && !strings.EqualFold(ifInfo.State, "UP") {
				status = "warning"
			}
		}

		adapter := models.NetworkAdapter{
			Slot:      slot,
			Location:  bdf,
			Present:   true,
			Model:     model,
			Vendor:    vendor,
			VendorID:  vendorID,
			DeviceID:  deviceID,
			Firmware:  strings.TrimSpace(etInfo.Firmware),
			PortCount: 1,
			Status:    status,
		}
		result.Hardware.NetworkAdapters = append(result.Hardware.NetworkAdapters, adapter)

		result.Hardware.NetworkCards = append(result.Hardware.NetworkCards, models.NIC{
			Name:      slot,
			Model:     model,
			SpeedMbps: etInfo.SpeedMbps,
		})

		hasAdapter[bdf] = true
	}
}

func isNICClass(class string) bool {
	c := strings.ToLower(strings.TrimSpace(class))
	return strings.Contains(c, "ethernet controller") || strings.Contains(c, "network controller")
}

func isInterestingPCIClass(class string) bool {
	c := strings.ToLower(strings.TrimSpace(class))
	if isNICClass(c) {
		return true
	}
	switch {
	case strings.Contains(c, "scsi storage controller"),
		strings.Contains(c, "sata controller"),
		strings.Contains(c, "raid bus controller"),
		strings.Contains(c, "vga compatible controller"),
		strings.Contains(c, "3d controller"),
		strings.Contains(c, "display controller"),
		strings.Contains(c, "non-volatile memory controller"),
		strings.Contains(c, "processing accelerators"):
		return true
	default:
		return false
	}
}

func parseHexID(s string) int {
	v, err := strconv.ParseInt(strings.TrimSpace(s), 16, 32)
	if err != nil {
		return 0
	}
	return int(v)
}

func parseSpeedMbps(s string) int {
	s = strings.TrimSpace(strings.ToUpper(s))
	if s == "" || s == "UNKNOWN!" {
		return 0
	}
	if m := regexp.MustCompile(`(\d+)MB/S`).FindStringSubmatch(s); len(m) == 2 {
		return parseInt(m[1])
	}
	return 0
}

func normalizeBDF(bdf string) string {
	bdf = strings.TrimSpace(strings.ToLower(bdf))
	if bdf == "" {
		return ""
	}
	if strings.Count(bdf, ":") == 1 {
		return "0000:" + bdf
	}
	return bdf
}

func firstWords(s string, n int) string {
	parts := strings.Fields(strings.TrimSpace(s))
	if len(parts) == 0 {
		return ""
	}
	if len(parts) < n {
		n = len(parts)
	}
	return strings.Join(parts[:n], " ")
}

func parseVarsToMap(content string, storageBySlot map[string]*models.Storage, result *models.AnalysisResult) {
	// Normalize line endings
	content = strings.ReplaceAll(content, "\r\n", "\n")

@@ -385,6 +768,57 @@ func parseVarsToMap(content string, storageBySlot map[string]*models.Storage, re

	}
}

func parseHostIdentityFromVars(content string, result *models.AnalysisResult) {
	if result == nil || result.Hardware == nil {
		return
	}
	serial := strings.TrimSpace(result.Hardware.BoardInfo.SerialNumber)
	if isUsableHostIdentifier(serial) {
		return
	}

	flashGUID := findVarValue(content, "flashGUID")
	regGUID := findVarValue(content, "regGUID")
	rawUUID := findVarValue(content, "uuid")

	candidates := []string{flashGUID, regGUID, rawUUID}
	for _, candidate := range candidates {
		candidate = strings.TrimSpace(candidate)
		if !isUsableHostIdentifier(candidate) {
			continue
		}
		result.Hardware.BoardInfo.SerialNumber = candidate
		if result.Hardware.BoardInfo.UUID == "" && candidate == rawUUID {
			result.Hardware.BoardInfo.UUID = candidate
		}
		return
	}
}

func findVarValue(content, key string) string {
	re := regexp.MustCompile(`(?m)^\s*\[` + regexp.QuoteMeta(key) + `\]\s*=>\s*(.+?)\s*$`)
	if m := re.FindStringSubmatch(content); len(m) == 2 {
		return strings.TrimSpace(m[1])
	}
	return ""
}

func isUsableHostIdentifier(v string) bool {
	v = strings.TrimSpace(v)
	if v == "" {
		return false
	}
	l := strings.ToLower(v)
	if l == "n/a" || l == "unknown" || l == "none" || l == "not available" {
		return false
	}
	// Unraid may redact GUID values as 1... or 1..7 in diagnostics.
	if strings.Contains(v, "...") || strings.Contains(v, "..") {
		return false
	}
	return true
}

func extractDiskSection(content, diskName string) string {
	// Find the start of this disk's array section
	startPattern := regexp.MustCompile(`(?m)^\s+\[` + regexp.QuoteMeta(diskName) + `\]\s+=>\s+Array\s*\n\s+\(`)

@@ -559,7 +993,7 @@ func parseSyslogLine(line string) (time.Time, string, models.Severity) {

	// Parse timestamp (add current year)
	year := time.Now().Year()
-	if ts, err := time.Parse("Jan 2 15:04:05 2006", timeStr+" "+strconv.Itoa(year)); err == nil {
+	if ts, err := parser.ParseInDefaultArchiveLocation("Jan 2 15:04:05 2006", timeStr+" "+strconv.Itoa(year)); err == nil {
		timestamp = ts
	}
}
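The dmidecode-style size normalization in `parseDIMMSizeMB` can be sketched on its own — this standalone version mirrors the diff's logic (everything is converted to MB, and unparsable or "No Module Installed" values fall through to 0):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// dimmSizeMB mirrors parseDIMMSizeMB above: dmidecode reports DIMM sizes
// as "<number> <unit>" (e.g. "16 GB"), normalized here to MB.
func dimmSizeMB(s string) int {
	parts := strings.Fields(strings.ToUpper(strings.TrimSpace(s)))
	if len(parts) < 2 {
		return 0
	}
	v, err := strconv.ParseFloat(parts[0], 64)
	if err != nil {
		return 0 // e.g. "No Module Installed"
	}
	switch parts[1] {
	case "KB", "KIB":
		return int(v / 1024)
	case "MB", "MIB":
		return int(v)
	case "GB", "GIB":
		return int(v * 1024)
	case "TB", "TIB":
		return int(v * 1024 * 1024)
	}
	return 0
}

func main() {
	fmt.Println(dimmSizeMB("16 GB"), dimmSizeMB("512 MB"), dimmSizeMB("No Module Installed"))
}
```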
internal/parser/vendors/unraid/parser_test.go (vendored, 116 lines)
@@ -275,3 +275,119 @@ func TestParser_Metadata(t *testing.T) {

		t.Error("Version() should not be empty")
	}
}

func TestParse_MemoryDIMMsFromMeminfo(t *testing.T) {
	memInfo := `MemTotal: 53393436 kB

Handle 0x002D, DMI type 17, 34 bytes
Memory Device
	Size: 16 GB
	Locator: Node0_Dimm1
	Bank Locator: Node0_Bank0
	Type: DDR3
	Speed: 1333 MT/s
	Manufacturer: Samsung
	Serial Number: 238F7649
	Part Number: M393B2G70BH0-
	Rank: 4
	Configured Memory Speed: 1333 MT/s

Handle 0x002F, DMI type 17, 34 bytes
Memory Device
	Size: No Module Installed
	Locator: Node0_Dimm2
`

	files := []parser.ExtractedFile{
		{Path: "diagnostics/system/memory.txt", Content: []byte("Mem: 50Gi")},
		{Path: "diagnostics/system/meminfo.txt", Content: []byte(memInfo)},
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if got := len(result.Hardware.Memory); got != 1 {
		t.Fatalf("expected only installed DIMM entries, got %d entries", got)
	}
	dimm := result.Hardware.Memory[0]
	if dimm.Slot != "Node0_Dimm1" {
		t.Errorf("Slot = %q, want Node0_Dimm1", dimm.Slot)
	}
	if dimm.SizeMB != 16*1024 {
		t.Errorf("SizeMB = %d, want %d", dimm.SizeMB, 16*1024)
	}
	if dimm.Type != "DDR3" {
		t.Errorf("Type = %q, want DDR3", dimm.Type)
	}
	if dimm.SerialNumber != "238F7649" {
		t.Errorf("SerialNumber = %q, want 238F7649", dimm.SerialNumber)
	}
}

func TestParse_NetworkAndPCIeFromLSPCIAndEthtool(t *testing.T) {
	lspci := `03:00.0 SCSI storage controller [0100]: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
07:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8211/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 06)
`
	ethtool := `Settings for eth0:
	Speed: 1000Mb/s
	Link detected: yes
driver: r8168
firmware-version:
bus-info: 0000:07:00.0
--------------------------------
`
	files := []parser.ExtractedFile{
		{Path: "diagnostics/system/lspci.txt", Content: []byte(lspci)},
		{Path: "diagnostics/system/ethtool.txt", Content: []byte(ethtool)},
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if len(result.Hardware.NetworkAdapters) != 1 {
		t.Fatalf("expected 1 network adapter, got %d", len(result.Hardware.NetworkAdapters))
	}
	nic := result.Hardware.NetworkAdapters[0]
	if nic.Location != "0000:07:00.0" {
		t.Errorf("Location = %q, want 0000:07:00.0", nic.Location)
	}
	if nic.Model == "" {
		t.Error("Model should not be empty")
	}
	if nic.Vendor == "" {
		t.Error("Vendor should not be empty")
	}

	if len(result.Hardware.PCIeDevices) < 2 {
		t.Fatalf("expected at least 2 PCIe devices, got %d", len(result.Hardware.PCIeDevices))
	}
}

func TestParse_HostSerialFallbackFromVarsUUID(t *testing.T) {
	vars := ` [flashGUID] => 1...
 [regGUID] => 1...7
 [uuid] => 2713440667722491190
`
	files := []parser.ExtractedFile{
		{Path: "diagnostics/system/vars.txt", Content: []byte(vars)},
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse() error = %v", err)
	}

	if result.Hardware.BoardInfo.SerialNumber != "2713440667722491190" {
		t.Fatalf("BoardInfo.SerialNumber = %q, want vars uuid", result.Hardware.BoardInfo.SerialNumber)
	}
	if result.Hardware.BoardInfo.UUID != "2713440667722491190" {
		t.Fatalf("BoardInfo.UUID = %q, want vars uuid", result.Hardware.BoardInfo.UUID)
	}
}
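The `Location = "0000:07:00.0"` assertion above depends on BDF normalization: lspci prints short-form addresses ("07:00.0") while ethtool's bus-info uses the full domain form ("0000:07:00.0"), and the parser joins the two sources by that key. A standalone sketch mirroring the `normalizeBDF` helper from the diff:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeBDF mirrors the helper in the unraid parser: prepend the
// default PCI domain ("0000:") to short-form bus addresses so lspci
// lines and ethtool bus-info values compare equal.
func normalizeBDF(bdf string) string {
	bdf = strings.TrimSpace(strings.ToLower(bdf))
	if bdf == "" {
		return ""
	}
	if strings.Count(bdf, ":") == 1 {
		return "0000:" + bdf
	}
	return bdf
}

func main() {
	fmt.Println(normalizeBDF("07:00.0") == normalizeBDF("0000:07:00.0"))
}
```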
internal/parser/vendors/vendors.go (vendored, 5 lines)
@@ -4,18 +4,17 @@ package vendors

import (
	// Import vendor modules to trigger their init() registration
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/h3c"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/supermicro"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"

	// Generic fallback parser (must be last for lowest priority)
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"

	// Future vendors:
	// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
	// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe"
	// _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo"
)
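The blank imports above work purely by side effect: importing each vendor package runs its `init()`, which calls `parser.Register`, so the vendors package never references the parsers directly. A single-file sketch of the same pattern (using a toy `register` helper in place of `parser.Register`; in the real code the `init` functions live in separate packages and run in import order):

```go
package main

import "fmt"

// registered collects parser names, standing in for the real registry.
var registered []string

func register(name string) { registered = append(registered, name) }

// Multiple init functions in one file run in declaration order,
// just as each blank-imported vendor package's init() runs on import.
func init() { register("unraid") }

// The generic fallback registers last, so it has the lowest priority.
func init() { register("generic") }

func main() {
	fmt.Println(registered)
}
```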
internal/parser/vendors/xigmanas/parser.go (vendored, 4 lines)
@@ -12,7 +12,7 @@ import (

)

// parserVersion - increment when parsing logic changes.
-const parserVersion = "2.1.0"
+const parserVersion = "2.2"

func init() {
	parser.Register(&Parser{})

@@ -431,7 +431,7 @@ func parseEventTimestamp(line string) time.Time {

	prefixRe := regexp.MustCompile(`^[A-Z][a-z]{2}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}`)
	if prefix := prefixRe.FindString(line); prefix != "" {
		year := time.Now().Year()
-		if ts, err := time.Parse("Jan 2 15:04:05 2006", prefix+" "+strconv.Itoa(year)); err == nil {
+		if ts, err := parser.ParseInDefaultArchiveLocation("Jan 2 15:04:05 2006", prefix+" "+strconv.Itoa(year)); err == nil {
			return ts
		}
	}
@@ -300,7 +300,7 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
|
||||
})
|
||||
}
|
||||
|
||||
return dedupeDevices(all)
|
||||
return annotateDuplicateSerials(dedupeDevices(all))
|
||||
}
|
||||
|
||||
func isEmptyPCIeDevice(p models.PCIeDevice) bool {
|
||||
@@ -443,9 +443,22 @@ func shouldMergeDevices(a, b models.HardwareDevice) bool {
|
||||
bSN := strings.ToLower(normalizedSerial(b.SerialNumber))
|
||||
aBDF := strings.ToLower(strings.TrimSpace(a.BDF))
|
||||
bBDF := strings.ToLower(strings.TrimSpace(b.BDF))
|
||||
aSlot := normalizeSlot(a.Slot)
|
||||
bSlot := normalizeSlot(b.Slot)
|
||||
|
||||
// Memory DIMMs can legitimately share serial number in some dumps.
|
||||
// Never merge DIMMs with different slots.
|
||||
if a.Kind == models.DeviceKindMemory && b.Kind == models.DeviceKindMemory {
|
||||
if aSlot != "" && bSlot != "" && aSlot != bSlot {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// Hard conflicts.
|
||||
if aSN != "" && bSN != "" && aSN == bSN {
|
||||
if a.Kind == models.DeviceKindMemory && b.Kind == models.DeviceKindMemory {
|
||||
return aSlot != "" && bSlot != "" && aSlot == bSlot
|
||||
}
|
||||
return true
|
||||
}
|
||||
if aSN != "" && bSN != "" && aSN != bSN {
|
||||
@@ -465,7 +478,7 @@ func shouldMergeDevices(a, b models.HardwareDevice) bool {
|
||||
if hasMACOverlap(a.MACAddresses, b.MACAddresses) {
|
||||
return true
|
||||
}
|
||||
if normalizeSlot(a.Slot) != "" && normalizeSlot(a.Slot) == normalizeSlot(b.Slot) {
|
||||
if aSlot != "" && aSlot == bSlot {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
@@ -481,7 +494,7 @@ func shouldMergeDevices(a, b models.HardwareDevice) bool {
|
||||
if sameManufacturer(a, b) {
|
||||
score += 2
|
||||
}
|
||||
if normalizeSlot(a.Slot) != "" && normalizeSlot(a.Slot) == normalizeSlot(b.Slot) {
|
||||
if aSlot != "" && aSlot == bSlot {
|
||||
score += 2
|
||||
}
|
||||
if hasMACOverlap(a.MACAddresses, b.MACAddresses) {
|
||||
@@ -555,10 +568,10 @@ func mergeDevices(primary, secondary models.HardwareDevice) models.HardwareDevic
|
||||
fillFloat(&primary.InputVoltage, secondary.InputVoltage)
|
||||
fillInt(&primary.TemperatureC, secondary.TemperatureC)
|
||||
fillString(&primary.Status, secondary.Status)
|
||||
if primary.StatusCheckedAt.IsZero() && !secondary.StatusCheckedAt.IsZero() {
|
||||
if primary.StatusCheckedAt == nil && secondary.StatusCheckedAt != nil {
|
||||
primary.StatusCheckedAt = secondary.StatusCheckedAt
|
||||
}
|
||||
if primary.StatusChangedAt.IsZero() && !secondary.StatusChangedAt.IsZero() {
|
||||
if primary.StatusChangedAt == nil && secondary.StatusChangedAt != nil {
|
||||
primary.StatusChangedAt = secondary.StatusChangedAt
|
||||
}
|
||||
if primary.StatusAtCollect == nil && secondary.StatusAtCollect != nil {
|
||||
@@ -732,3 +745,35 @@ func buildFirmwareBySlot(firmware []models.FirmwareInfo) map[string]slotFirmware
 func normalizeSlotKey(slot string) string {
 	return strings.ToLower(strings.TrimSpace(slot))
 }
+
+func annotateDuplicateSerials(items []models.HardwareDevice) []models.HardwareDevice {
+	if len(items) < 2 {
+		return items
+	}
+
+	countByKindSerial := make(map[string]int)
+	for _, d := range items {
+		serial := normalizedSerial(d.SerialNumber)
+		if serial == "" {
+			continue
+		}
+		key := d.Kind + "|" + strings.ToLower(serial)
+		countByKindSerial[key]++
+	}
+
+	seenByKindSerial := make(map[string]int)
+	for i := range items {
+		serial := normalizedSerial(items[i].SerialNumber)
+		if serial == "" {
+			continue
+		}
+		key := items[i].Kind + "|" + strings.ToLower(serial)
+		if countByKindSerial[key] < 2 {
+			continue
+		}
+		seenByKindSerial[key]++
+		items[i].SerialNumber = serial + " (DUP#" + strconv.Itoa(seenByKindSerial[key]) + ")"
+	}
+
+	return items
+}
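The new `annotateDuplicateSerials` uses a two-pass pattern: the first pass counts devices per `(kind, serial)` key, the second annotates only keys seen more than once, so unique serials stay untouched. A minimal standalone sketch of that pattern (the `device` struct and plain `TrimSpace` normalization here are simplifications; the real code works on `models.HardwareDevice` and `normalizedSerial`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// device is a simplified stand-in for models.HardwareDevice.
type device struct {
	Kind   string
	Serial string
}

// annotateDuplicates mirrors the patch's two-pass approach:
// pass 1 counts serials per (kind, serial) key, pass 2 annotates
// only the keys that occur more than once.
func annotateDuplicates(items []device) []device {
	counts := make(map[string]int)
	for _, d := range items {
		s := strings.TrimSpace(d.Serial)
		if s == "" {
			continue
		}
		counts[d.Kind+"|"+strings.ToLower(s)]++
	}
	seen := make(map[string]int)
	for i := range items {
		s := strings.TrimSpace(items[i].Serial)
		if s == "" {
			continue
		}
		key := items[i].Kind + "|" + strings.ToLower(s)
		if counts[key] < 2 {
			continue
		}
		seen[key]++
		items[i].Serial = s + " (DUP#" + strconv.Itoa(seen[key]) + ")"
	}
	return items
}

func main() {
	out := annotateDuplicates([]device{
		{Kind: "memory", Serial: "SN-1"},
		{Kind: "memory", Serial: "SN-1"},
		{Kind: "memory", Serial: "SN-2"},
	})
	for _, d := range out {
		fmt.Println(d.Serial) // SN-1 (DUP#1), SN-1 (DUP#2), SN-2
	}
}
```

Counting first is what makes the annotation stable: a single occurrence of a serial is never rewritten, which the new tests below rely on.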
@@ -65,6 +65,54 @@ func TestBuildHardwareDevices_SkipsEmptyMemorySlots(t *testing.T) {
 	}
 }
+
+func TestBuildHardwareDevices_MemorySameSerialDifferentSlots_NotDeduped(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Memory: []models.MemoryDIMM{
+			{Slot: "Node0_Dimm1", Location: "Node0_Bank0", Present: true, SizeMB: 16384, SerialNumber: "238F7649", PartNumber: "M393B2G70BH0-"},
+			{Slot: "Node0_Dimm3", Location: "Node0_Bank0", Present: true, SizeMB: 16384, SerialNumber: "238F7649", PartNumber: "M393B2G70BH0-"},
+		},
+	}
+
+	devices := BuildHardwareDevices(hw)
+	memorySlots := make(map[string]bool)
+	for _, d := range devices {
+		if d.Kind != models.DeviceKindMemory {
+			continue
+		}
+		memorySlots[d.Slot] = true
+	}
+
+	if len(memorySlots) != 2 {
+		t.Fatalf("expected 2 memory devices, got %d", len(memorySlots))
+	}
+	if !memorySlots["Node0_Dimm1"] || !memorySlots["Node0_Dimm3"] {
+		t.Fatalf("expected both Node0_Dimm1 and Node0_Dimm3 to remain")
+	}
+}
+
+func TestBuildHardwareDevices_DuplicateSerials_AreAnnotated(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Memory: []models.MemoryDIMM{
+			{Slot: "A1", Location: "BANK0", Present: true, SizeMB: 16384, SerialNumber: "SN-1"},
+			{Slot: "A2", Location: "BANK1", Present: true, SizeMB: 16384, SerialNumber: "SN-1"},
+		},
+	}
+
+	devices := BuildHardwareDevices(hw)
+	var serials []string
+	for _, d := range devices {
+		if d.Kind == models.DeviceKindMemory {
+			serials = append(serials, d.SerialNumber)
+		}
+	}
+	if len(serials) != 2 {
+		t.Fatalf("expected 2 memory devices, got %d", len(serials))
+	}
+	if serials[0] != "SN-1 (DUP#1)" || serials[1] != "SN-1 (DUP#2)" {
+		t.Fatalf("unexpected annotated serials: %+v", serials)
+	}
+}
+
+func TestBuildHardwareDevices_DedupCrossKindByBDF(t *testing.T) {
+	hw := &models.HardwareConfig{
+		PCIeDevices: []models.PCIeDevice{
internal/server/file_support_test.go (new file, 17 lines)
@@ -0,0 +1,17 @@
+package server
+
+import "testing"
+
+func TestIsSupportedConvertFileName_AcceptsNvidiaBugReportGzip(t *testing.T) {
+	if !isSupportedConvertFileName("nvidia-bug-report-1651124000923.log.gz") {
+		t.Fatalf("expected .log.gz bug-report to be supported")
+	}
+}
+
+func TestAnalyzeUploadedFile_RejectsUnsupportedExtension(t *testing.T) {
+	s := &Server{}
+	_, _, _, err := s.analyzeUploadedFile("unsupported.bin", "application/octet-stream", []byte("abc"))
+	if err == nil {
+		t.Fatalf("expected unsupported archive error")
+	}
+}
internal/server/file_types_test.go (new file, 46 lines)
@@ -0,0 +1,46 @@
+package server
+
+import (
+	"encoding/json"
+	"net/http"
+	"net/http/httptest"
+	"testing"
+)
+
+func TestHandleGetFileTypes(t *testing.T) {
+	s := &Server{}
+	req := httptest.NewRequest(http.MethodGet, "/api/file-types", nil)
+	rec := httptest.NewRecorder()
+
+	s.handleGetFileTypes(rec, req)
+	if rec.Code != http.StatusOK {
+		t.Fatalf("expected 200, got %d", rec.Code)
+	}
+
+	var payload struct {
+		ArchiveExtensions []string `json:"archive_extensions"`
+		UploadExtensions  []string `json:"upload_extensions"`
+		ConvertExtensions []string `json:"convert_extensions"`
+	}
+	if err := json.NewDecoder(rec.Body).Decode(&payload); err != nil {
+		t.Fatalf("decode payload: %v", err)
+	}
+	if len(payload.ArchiveExtensions) == 0 || len(payload.UploadExtensions) == 0 || len(payload.ConvertExtensions) == 0 {
+		t.Fatalf("expected non-empty extensions in payload")
+	}
+	if !containsString(payload.ArchiveExtensions, ".gz") {
+		t.Fatalf("expected .gz in archive extensions")
+	}
+	if !containsString(payload.UploadExtensions, ".json") || !containsString(payload.ConvertExtensions, ".json") {
+		t.Fatalf("expected .json in upload/convert extensions")
+	}
+}
+
+func containsString(items []string, target string) bool {
+	for _, item := range items {
+		if item == target {
+			return true
+		}
+	}
+	return false
+}
@@ -1,6 +1,7 @@
 package server

 import (
+	"archive/zip"
 	"bytes"
 	"context"
 	"crypto/rand"
@@ -47,7 +48,8 @@ func (s *Server) handleIndex(w http.ResponseWriter, r *http.Request) {
 }

 func (s *Server) handleUpload(w http.ResponseWriter, r *http.Request) {
-	if err := r.ParseMultipartForm(uploadMultipartMaxBytes()); err != nil {
+	r.Body = http.MaxBytesReader(w, r.Body, uploadMultipartMaxBytes())
+	if err := r.ParseMultipartForm(uploadMultipartFormMemoryBytes()); err != nil {
 		jsonError(w, "File too large", http.StatusBadRequest)
 		return
 	}
@@ -70,61 +72,16 @@ func (s *Server) handleUpload(w http.ResponseWriter, r *http.Request) {
 		vendor string
 	)

-	if rawPkg, ok, err := parseRawExportBundle(payload); err != nil {
-		jsonError(w, "Failed to parse raw export bundle: "+err.Error(), http.StatusBadRequest)
-		return
-	} else if ok {
-		replayed, replayVendor, replayErr := s.reanalyzeRawExportPackage(rawPkg)
-		if replayErr != nil {
-			jsonError(w, "Failed to reanalyze raw export package: "+replayErr.Error(), http.StatusBadRequest)
-			return
-		}
-		result = replayed
-		vendor = replayVendor
-		if strings.TrimSpace(vendor) == "" {
-			vendor = "snapshot"
-		}
-		s.SetRawExport(rawPkg)
-	} else if looksLikeJSONSnapshot(header.Filename, payload) {
-		if rawPkg, ok, err := parseRawExportPackage(payload); err != nil {
-			jsonError(w, "Failed to parse raw export package: "+err.Error(), http.StatusBadRequest)
-			return
-		} else if ok {
-			replayed, replayVendor, replayErr := s.reanalyzeRawExportPackage(rawPkg)
-			if replayErr != nil {
-				jsonError(w, "Failed to reanalyze raw export package: "+replayErr.Error(), http.StatusBadRequest)
-				return
-			}
-			result = replayed
-			vendor = replayVendor
-			if strings.TrimSpace(vendor) == "" {
-				vendor = "snapshot"
-			}
-			s.SetRawExport(rawPkg)
-		} else {
-			snapshotResult, snapshotErr := parseUploadedSnapshot(payload)
-			if snapshotErr != nil {
-				jsonError(w, "Failed to parse snapshot: "+snapshotErr.Error(), http.StatusBadRequest)
-				return
-			}
-			result = snapshotResult
-			vendor = strings.TrimSpace(snapshotResult.Protocol)
-			if vendor == "" {
-				vendor = "snapshot"
-			}
-			s.SetRawExport(newRawExportFromUploadedFile(header.Filename, header.Header.Get("Content-Type"), payload, result))
-		}
-	} else {
-		// Parse archive
-		p := parser.NewBMCParser()
-		if err := p.ParseFromReader(bytes.NewReader(payload), header.Filename); err != nil {
-			jsonError(w, "Failed to parse archive: "+err.Error(), http.StatusBadRequest)
-			return
-		}
-		result = p.Result()
-		applyArchiveSourceMetadata(result)
-		vendor = p.DetectedVendor()
-		s.SetRawExport(newRawExportFromUploadedFile(header.Filename, header.Header.Get("Content-Type"), payload, result))
-	}
+	result, vendor, rawPkg, err := s.analyzeUploadedFile(header.Filename, header.Header.Get("Content-Type"), payload)
+	if err != nil {
+		jsonError(w, "Failed to parse uploaded file: "+err.Error(), http.StatusBadRequest)
+		return
+	}
+	if strings.TrimSpace(vendor) == "" {
+		vendor = "snapshot"
+	}
+	if rawPkg != nil {
+		s.SetRawExport(rawPkg)
+	}

 	s.SetResult(result)
@@ -143,13 +100,63 @@ func (s *Server) handleUpload(w http.ResponseWriter, r *http.Request) {
 	})
 }

+func (s *Server) analyzeUploadedFile(filename, mimeType string, payload []byte) (*models.AnalysisResult, string, *RawExportPackage, error) {
+	if rawPkg, ok, err := parseRawExportBundle(payload); err != nil {
+		return nil, "", nil, err
+	} else if ok {
+		result, vendor, err := s.reanalyzeRawExportPackage(rawPkg)
+		if err != nil {
+			return nil, "", nil, err
+		}
+		if strings.TrimSpace(vendor) == "" {
+			vendor = "snapshot"
+		}
+		return result, vendor, rawPkg, nil
+	}
+
+	if looksLikeJSONSnapshot(filename, payload) {
+		if rawPkg, ok, err := parseRawExportPackage(payload); err != nil {
+			return nil, "", nil, err
+		} else if ok {
+			result, vendor, err := s.reanalyzeRawExportPackage(rawPkg)
+			if err != nil {
+				return nil, "", nil, err
+			}
+			if strings.TrimSpace(vendor) == "" {
+				vendor = "snapshot"
+			}
+			return result, vendor, rawPkg, nil
+		}
+
+		snapshotResult, err := parseUploadedSnapshot(payload)
+		if err != nil {
+			return nil, "", nil, err
+		}
+		vendor := strings.TrimSpace(snapshotResult.Protocol)
+		if vendor == "" {
+			vendor = "snapshot"
+		}
+		return snapshotResult, vendor, newRawExportFromUploadedFile(filename, mimeType, payload, snapshotResult), nil
+	}
+
+	if !parser.IsSupportedArchiveFilename(filename) {
+		return nil, "", nil, fmt.Errorf("unsupported archive format: %s", strings.ToLower(filepath.Ext(filename)))
+	}
+
+	p := parser.NewBMCParser()
+	if err := p.ParseFromReader(bytes.NewReader(payload), filename); err != nil {
+		return nil, "", nil, err
+	}
+	result := p.Result()
+	applyArchiveSourceMetadata(result)
+	return result, p.DetectedVendor(), newRawExportFromUploadedFile(filename, mimeType, payload, result), nil
+}
+
 func uploadMultipartMaxBytes() int64 {
-	// Large Redfish raw bundles can easily exceed 100 MiB once raw trees and logs
-	// are embedded. Keep the default high but bounded for a normal workstation.
+	// Limit for incoming multipart request body.
 	const (
-		defMB = 512
+		defMB = 2048
 		minMB = 100
-		maxMB = 2048
+		maxMB = 8192
 	)
 	mb := defMB
 	if v := strings.TrimSpace(os.Getenv("LOGPILE_UPLOAD_MAX_MB")); v != "" {
@@ -166,6 +173,35 @@ func uploadMultipartMaxBytes() int64 {
 	return int64(mb) << 20
 }
+func convertMultipartMaxBytes() int64 {
+	// Convert mode typically uploads a folder with many files,
+	// so it has a larger independent limit.
+	const (
+		defMB = 16384
+		minMB = 512
+		maxMB = 65536
+	)
+	mb := defMB
+	if v := strings.TrimSpace(os.Getenv("LOGPILE_CONVERT_MAX_MB")); v != "" {
+		if n, err := strconv.Atoi(v); err == nil {
+			if n < minMB {
+				n = minMB
+			}
+			if n > maxMB {
+				n = maxMB
+			}
+			mb = n
+		}
+	}
+	return int64(mb) << 20
+}
+
+func uploadMultipartFormMemoryBytes() int64 {
+	// Keep a small in-memory threshold; file parts spill to temp files.
+	const formMemoryMB = 32
+	return int64(formMemoryMB) << 20
+}
+
 func (s *Server) reanalyzeRawExportPackage(pkg *RawExportPackage) (*models.AnalysisResult, string, error) {
 	if pkg == nil {
 		return nil, "", fmt.Errorf("empty package")
@@ -198,9 +234,10 @@ func (s *Server) reanalyzeRawExportPackage(pkg *RawExportPackage) (*models.Analy
 	if strings.TrimSpace(result.TargetHost) == "" {
 		result.TargetHost = strings.TrimSpace(pkg.Source.TargetHost)
 	}
-	if result.CollectedAt.IsZero() {
-		result.CollectedAt = time.Now().UTC()
+	if strings.TrimSpace(result.SourceTimezone) == "" {
+		result.SourceTimezone = strings.TrimSpace(pkg.Source.SourceTimezone)
 	}
+	result.CollectedAt = inferRawExportCollectedAt(result, pkg)
 	if strings.TrimSpace(result.Filename) == "" {
 		target := result.TargetHost
 		if target == "" {
@@ -243,6 +280,39 @@ func (s *Server) handleGetParsers(w http.ResponseWriter, r *http.Request) {
 	})
 }

+func (s *Server) handleGetFileTypes(w http.ResponseWriter, r *http.Request) {
+	archiveExt := parser.SupportedArchiveExtensions()
+	uploadExt := append([]string{}, archiveExt...)
+	uploadExt = append(uploadExt, ".json")
+
+	jsonResponse(w, map[string]any{
+		"archive_extensions": archiveExt,
+		"upload_extensions":  uniqueSortedExtensions(uploadExt),
+		"convert_extensions": uniqueSortedExtensions(uploadExt),
+	})
+}
+
+func uniqueSortedExtensions(exts []string) []string {
+	if len(exts) == 0 {
+		return nil
+	}
+	seen := make(map[string]struct{}, len(exts))
+	out := make([]string, 0, len(exts))
+	for _, e := range exts {
+		e = strings.ToLower(strings.TrimSpace(e))
+		if e == "" {
+			continue
+		}
+		if _, ok := seen[e]; ok {
+			continue
+		}
+		seen[e] = struct{}{}
+		out = append(out, e)
+	}
+	sort.Strings(out)
+	return out
+}
+
 func (s *Server) handleGetEvents(w http.ResponseWriter, r *http.Request) {
 	result := s.GetResult()
 	if result == nil {
@@ -314,10 +384,11 @@ func (s *Server) handleGetConfig(w http.ResponseWriter, r *http.Request) {
 	}

 	response := map[string]interface{}{
-		"source_type":  result.SourceType,
-		"protocol":     result.Protocol,
-		"target_host":  result.TargetHost,
-		"collected_at": result.CollectedAt,
+		"source_type":     result.SourceType,
+		"protocol":        result.Protocol,
+		"target_host":     result.TargetHost,
+		"source_timezone": result.SourceTimezone,
+		"collected_at":    result.CollectedAt,
 	}
 	if result.RawPayloads != nil {
 		if fetchErrors, ok := result.RawPayloads["redfish_fetch_errors"]; ok {
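`uniqueSortedExtensions` is the normalization step behind the `/api/file-types` payload: lower-case, trim, drop empties and duplicates, then sort, so clients get a stable, deduplicated extension list. The helper is small enough to reproduce verbatim as a runnable sketch:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// uniqueSortedExtensions normalizes file extensions to lower case,
// drops empties and duplicates, and returns them sorted — the same
// helper the /api/file-types handler uses to build its payload.
func uniqueSortedExtensions(exts []string) []string {
	if len(exts) == 0 {
		return nil
	}
	seen := make(map[string]struct{}, len(exts))
	out := make([]string, 0, len(exts))
	for _, e := range exts {
		e = strings.ToLower(strings.TrimSpace(e))
		if e == "" {
			continue
		}
		if _, ok := seen[e]; ok {
			continue
		}
		seen[e] = struct{}{}
		out = append(out, e)
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(uniqueSortedExtensions([]string{".GZ", ".json", " .gz ", "", ".zip"}))
	// → [.gz .json .zip]
}
```

Because the handler passes the same `uploadExt` slice for both `upload_extensions` and `convert_extensions`, the two lists are currently identical; the duplication leaves room for them to diverge later without an API change.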
@@ -1002,13 +1073,14 @@ func (s *Server) handleGetStatus(w http.ResponseWriter, r *http.Request) {
 	}

 	jsonResponse(w, map[string]interface{}{
-		"loaded":       true,
-		"filename":     result.Filename,
-		"vendor":       s.GetDetectedVendor(),
-		"source_type":  result.SourceType,
-		"protocol":     result.Protocol,
-		"target_host":  result.TargetHost,
-		"collected_at": result.CollectedAt,
+		"loaded":          true,
+		"filename":        result.Filename,
+		"vendor":          s.GetDetectedVendor(),
+		"source_type":     result.SourceType,
+		"protocol":        result.Protocol,
+		"target_host":     result.TargetHost,
+		"source_timezone": result.SourceTimezone,
+		"collected_at":    result.CollectedAt,
 		"stats": map[string]int{
 			"events":  len(result.Events),
 			"sensors": len(result.Sensors),
@@ -1076,10 +1148,325 @@ func (s *Server) handleExportReanimator(w http.ResponseWriter, r *http.Request) {
 	}
 }
+func (s *Server) handleConvertReanimatorBatch(w http.ResponseWriter, r *http.Request) {
+	r.Body = http.MaxBytesReader(w, r.Body, convertMultipartMaxBytes())
+	if err := r.ParseMultipartForm(uploadMultipartFormMemoryBytes()); err != nil {
+		if strings.Contains(strings.ToLower(err.Error()), "too large") {
+			msg := fmt.Sprintf(
+				"File too large. Increase LOGPILE_CONVERT_MAX_MB (current limit: %d MB)",
+				convertMultipartMaxBytes()>>20,
+			)
+			jsonError(w, msg, http.StatusBadRequest)
+			return
+		}
+		jsonError(w, "Failed to parse multipart form", http.StatusBadRequest)
+		return
+	}
+
+	form := r.MultipartForm
+	if form == nil {
+		jsonError(w, "No files provided", http.StatusBadRequest)
+		return
+	}
+
+	files := form.File["files[]"]
+	if len(files) == 0 {
+		files = form.File["files"]
+	}
+	if len(files) == 0 {
+		jsonError(w, "No files provided", http.StatusBadRequest)
+		return
+	}
+
+	tempDir, err := os.MkdirTemp("", "logpile-convert-input-*")
+	if err != nil {
+		jsonError(w, "Не удалось создать временную директорию", http.StatusInternalServerError)
+		return
+	}
+
+	inputFiles := make([]convertInputFile, 0, len(files))
+	var skipped int
+	for _, fh := range files {
+		if fh == nil {
+			continue
+		}
+		if !isSupportedConvertFileName(fh.Filename) {
+			skipped++
+			continue
+		}
+
+		tmpFile, err := os.CreateTemp(tempDir, "input-*")
+		if err != nil {
+			continue
+		}
+		src, err := fh.Open()
+		if err != nil {
+			_ = tmpFile.Close()
+			_ = os.Remove(tmpFile.Name())
+			continue
+		}
+		_, err = io.Copy(tmpFile, src)
+		_ = src.Close()
+		_ = tmpFile.Close()
+		if err != nil {
+			_ = os.Remove(tmpFile.Name())
+			continue
+		}
+
+		mimeType := ""
+		if fh.Header != nil {
+			mimeType = fh.Header.Get("Content-Type")
+		}
+
+		inputFiles = append(inputFiles, convertInputFile{
+			Name:     fh.Filename,
+			Path:     tmpFile.Name(),
+			MIMEType: mimeType,
+		})
+	}
+
+	if len(inputFiles) == 0 {
+		_ = os.RemoveAll(tempDir)
+		jsonError(w, "Нет файлов поддерживаемого типа для конвертации", http.StatusBadRequest)
+		return
+	}
+
+	job := s.jobManager.CreateJob(CollectRequest{
+		Host:     "convert.local",
+		Protocol: "convert",
+		Port:     0,
+		Username: "convert",
+		AuthType: "password",
+		TLSMode:  "insecure",
+	})
+	s.markConvertJob(job.ID)
+	s.jobManager.AppendJobLog(job.ID, fmt.Sprintf("Запущена пакетная конвертация: %d файлов", len(inputFiles)))
+	if skipped > 0 {
+		s.jobManager.AppendJobLog(job.ID, fmt.Sprintf("Пропущено неподдерживаемых файлов: %d", skipped))
+	}
+	s.jobManager.UpdateJobStatus(job.ID, CollectStatusRunning, 0, "")
+
+	go s.runConvertJob(job.ID, tempDir, inputFiles, skipped, len(files))
+
+	w.Header().Set("Content-Type", "application/json")
+	w.WriteHeader(http.StatusAccepted)
+	_ = json.NewEncoder(w).Encode(map[string]any{
+		"job_id":      job.ID,
+		"status":      CollectStatusRunning,
+		"accepted":    len(inputFiles),
+		"skipped":     skipped,
+		"total_files": len(files),
+	})
+}
+
+type convertInputFile struct {
+	Name     string
+	Path     string
+	MIMEType string
+}
+
+func (s *Server) runConvertJob(jobID, tempDir string, inputFiles []convertInputFile, skipped, total int) {
+	defer os.RemoveAll(tempDir)
+
+	resultFile, err := os.CreateTemp("", "logpile-convert-result-*.zip")
+	if err != nil {
+		s.jobManager.UpdateJobStatus(jobID, CollectStatusFailed, 100, "не удалось создать zip")
+		return
+	}
+	resultPath := resultFile.Name()
+	defer resultFile.Close()
+
+	zw := zip.NewWriter(resultFile)
+	failures := make([]string, 0)
+	success := 0
+	totalProcess := len(inputFiles)
+
+	for i, in := range inputFiles {
+		s.jobManager.AppendJobLog(jobID, fmt.Sprintf("Обработка %s", in.Name))
+		payload, err := os.ReadFile(in.Path)
+		if err != nil {
+			failures = append(failures, fmt.Sprintf("%s: %v", in.Name, err))
+			continue
+		}
+
+		result, _, _, err := s.analyzeUploadedFile(in.Name, in.MIMEType, payload)
+		if err != nil {
+			failures = append(failures, fmt.Sprintf("%s: %v", in.Name, err))
+			progress := ((i + 1) * 100) / totalProcess
+			s.jobManager.UpdateJobStatus(jobID, CollectStatusRunning, progress, "")
+			continue
+		}
+		if result == nil || result.Hardware == nil {
+			failures = append(failures, fmt.Sprintf("%s: no hardware data", in.Name))
+			progress := ((i + 1) * 100) / totalProcess
+			s.jobManager.UpdateJobStatus(jobID, CollectStatusRunning, progress, "")
+			continue
+		}
+
+		reanimatorData, err := exporter.ConvertToReanimator(result)
+		if err != nil {
+			failures = append(failures, fmt.Sprintf("%s: %v", in.Name, err))
+			progress := ((i + 1) * 100) / totalProcess
+			s.jobManager.UpdateJobStatus(jobID, CollectStatusRunning, progress, "")
+			continue
+		}
+
+		entryPath := sanitizeZipPath(in.Name)
+		entry, err := zw.Create(entryPath + ".reanimator.json")
+		if err != nil {
+			failures = append(failures, fmt.Sprintf("%s: %v", in.Name, err))
+			progress := ((i + 1) * 100) / totalProcess
+			s.jobManager.UpdateJobStatus(jobID, CollectStatusRunning, progress, "")
+			continue
+		}
+
+		encoder := json.NewEncoder(entry)
+		encoder.SetIndent("", " ")
+		if err := encoder.Encode(reanimatorData); err != nil {
+			failures = append(failures, fmt.Sprintf("%s: %v", in.Name, err))
+		} else {
+			success++
+		}
+
+		progress := ((i + 1) * 100) / totalProcess
+		s.jobManager.UpdateJobStatus(jobID, CollectStatusRunning, progress, "")
+	}
+
+	if success == 0 {
+		_ = zw.Close()
+		_ = os.Remove(resultPath)
+		s.jobManager.UpdateJobStatus(jobID, CollectStatusFailed, 100, "Не удалось конвертировать ни один файл")
+		return
+	}
+
+	summaryLines := []string{fmt.Sprintf("Конвертировано %d из %d файлов", success, total)}
+	if skipped > 0 {
+		summaryLines = append(summaryLines, fmt.Sprintf("Пропущено неподдерживаемых: %d", skipped))
+	}
+	summaryLines = append(summaryLines, failures...)
+	if entry, err := zw.Create("convert-summary.txt"); err == nil {
+		_, _ = io.WriteString(entry, strings.Join(summaryLines, "\n"))
+	}
+	if err := zw.Close(); err != nil {
+		_ = os.Remove(resultPath)
+		s.jobManager.UpdateJobStatus(jobID, CollectStatusFailed, 100, "Не удалось упаковать результаты")
+		return
+	}
+
+	s.setConvertArtifact(jobID, ConvertArtifact{
+		Path:    resultPath,
+		Summary: summaryLines[0],
+	})
+	s.jobManager.UpdateJobStatus(jobID, CollectStatusSuccess, 100, "")
+}
+
+func (s *Server) handleConvertStatus(w http.ResponseWriter, r *http.Request) {
+	jobID := strings.TrimSpace(r.PathValue("id"))
+	if !isValidCollectJobID(jobID) {
+		jsonError(w, "Invalid convert job id", http.StatusBadRequest)
+		return
+	}
+	if !s.isConvertJob(jobID) {
+		jsonError(w, "Convert job not found", http.StatusNotFound)
+		return
+	}
+
+	job, ok := s.jobManager.GetJob(jobID)
+	if !ok {
+		jsonError(w, "Convert job not found", http.StatusNotFound)
+		return
+	}
+	jsonResponse(w, job.toStatusResponse())
+}
+
+func (s *Server) handleConvertDownload(w http.ResponseWriter, r *http.Request) {
+	jobID := strings.TrimSpace(r.PathValue("id"))
+	if !isValidCollectJobID(jobID) {
+		jsonError(w, "Invalid convert job id", http.StatusBadRequest)
+		return
+	}
+	if !s.isConvertJob(jobID) {
+		jsonError(w, "Convert job not found", http.StatusNotFound)
+		return
+	}
+
+	job, ok := s.jobManager.GetJob(jobID)
+	if !ok {
+		jsonError(w, "Convert job not found", http.StatusNotFound)
+		return
+	}
+	if job.Status != CollectStatusSuccess {
+		jsonError(w, "Convert job is not finished yet", http.StatusConflict)
+		return
+	}
+
+	artifact, ok := s.getConvertArtifact(jobID)
+	if !ok || strings.TrimSpace(artifact.Path) == "" {
+		jsonError(w, "Convert result not found", http.StatusNotFound)
+		return
+	}
+
+	file, err := os.Open(artifact.Path)
+	if err != nil {
+		jsonError(w, "Convert result not found", http.StatusNotFound)
+		return
+	}
+	defer file.Close()
+	defer func() {
+		_ = os.Remove(artifact.Path)
+		s.clearConvertArtifact(jobID)
+	}()
+
+	stat, err := file.Stat()
+	if err != nil {
+		jsonError(w, "Convert result not found", http.StatusNotFound)
+		return
+	}
+
+	w.Header().Set("Content-Type", "application/zip")
+	w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=%q", "logpile-convert.zip"))
+	if artifact.Summary != "" {
+		w.Header().Set("X-Convert-Summary", artifact.Summary)
+	}
+	http.ServeContent(w, r, "logpile-convert.zip", stat.ModTime(), file)
+}
+
+func isSupportedConvertFileName(filename string) bool {
+	name := strings.TrimSpace(filename)
+	if name == "" {
+		return false
+	}
+	if strings.HasSuffix(strings.ToLower(name), ".json") {
+		return true
+	}
+	return parser.IsSupportedArchiveFilename(name)
+}
+
+func sanitizeZipPath(filename string) string {
+	path := filepath.Clean(filename)
+	if path == "." || path == "/" {
+		path = filepath.Base(filename)
+	}
+	path = strings.TrimPrefix(path, string(filepath.Separator))
+	if strings.HasPrefix(path, "..") {
+		path = filepath.Base(path)
+	}
+	path = filepath.ToSlash(path)
+	if path == "" {
+		path = filepath.Base(filename)
+	}
+	return path
+}
+
 func (s *Server) handleClear(w http.ResponseWriter, r *http.Request) {
 	s.SetResult(nil)
 	s.SetDetectedVendor("")
 	s.SetRawExport(nil)
+	for _, artifact := range s.clearAllConvertArtifacts() {
+		if strings.TrimSpace(artifact.Path) != "" {
+			_ = os.Remove(artifact.Path)
+		}
+	}
 	jsonResponse(w, map[string]string{
 		"status":  "ok",
 		"message": "Data cleared",
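`sanitizeZipPath` keeps attacker-controlled upload names from producing zip entries outside the archive root: clean the path, strip a leading separator, and collapse anything still escaping upward (`..`) to its base name. A runnable copy of the helper (behavior shown assumes a Unix `filepath.Separator`; on Windows the separator handling differs):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizeZipPath reproduces the patch's zip-entry sanitizer: it cleans
// the path, strips a leading separator, and collapses anything that
// still escapes upward ("..") down to its base name.
func sanitizeZipPath(filename string) string {
	path := filepath.Clean(filename)
	if path == "." || path == "/" {
		path = filepath.Base(filename)
	}
	path = strings.TrimPrefix(path, string(filepath.Separator))
	if strings.HasPrefix(path, "..") {
		path = filepath.Base(path)
	}
	path = filepath.ToSlash(path)
	if path == "" {
		path = filepath.Base(filename)
	}
	return path
}

func main() {
	fmt.Println(sanitizeZipPath("../../etc/passwd"))  // traversal collapses to "passwd"
	fmt.Println(sanitizeZipPath("/logs/report.json")) // leading separator stripped
}
```

The download handler pairs this with one-shot delivery: `handleConvertDownload` removes the artifact and clears its registration in a deferred func, so each convert result can be fetched exactly once.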
@@ -1269,7 +1656,113 @@ func applyArchiveSourceMetadata(result *models.AnalysisResult) {
 	result.SourceType = models.SourceTypeArchive
 	result.Protocol = ""
 	result.TargetHost = ""
-	result.CollectedAt = time.Now().UTC()
+	if result.CollectedAt.IsZero() {
+		result.CollectedAt = inferArchiveCollectedAt(result)
+	}
 }

+func inferArchiveCollectedAt(result *models.AnalysisResult) time.Time {
+	if result == nil {
+		return time.Now().UTC()
+	}
+
+	var latestReliable time.Time
+	var latestAny time.Time
+	for _, event := range result.Events {
+		if event.Timestamp.IsZero() {
+			continue
+		}
+		// Drop obviously bad epochs from broken RTC logs.
+		if event.Timestamp.Year() < 2000 {
+			continue
+		}
+		if latestAny.IsZero() || event.Timestamp.After(latestAny) {
+			latestAny = event.Timestamp
+		}
+		if !isReliableCollectedAtEvent(event) {
+			continue
+		}
+		if latestReliable.IsZero() || event.Timestamp.After(latestReliable) {
+			latestReliable = event.Timestamp
+		}
+	}
+	if !latestReliable.IsZero() {
+		return latestReliable.UTC()
+	}
+	if !latestAny.IsZero() {
+		return latestAny.UTC()
+	}
+	if fromFilename, ok := inferCollectedAtFromFilename(result.Filename); ok {
+		return fromFilename.UTC()
+	}
+	return time.Now().UTC()
+}
+
+func isReliableCollectedAtEvent(event models.Event) bool {
+	// component.log-derived synthetic states are created "at parse time"
+	// and must not override real log timestamps.
+	src := strings.ToLower(strings.TrimSpace(event.Source))
+	etype := strings.ToLower(strings.TrimSpace(event.EventType))
+	stype := strings.ToLower(strings.TrimSpace(event.SensorType))
+	if etype == "fan status" && (src == "fan" || stype == "fan") {
+		return false
+	}
+	if etype == "memory status" && (src == "memory" || stype == "memory") {
+		return false
+	}
+	return true
+}
+
+var (
+	filenameDateTimeRegex = regexp.MustCompile(`(?i)(\d{8})[-_](\d{4})(\d{2})?`)
+	filenameDateRegex     = regexp.MustCompile(`(?i)(\d{4})-(\d{2})-(\d{2})`)
+)
+
+func inferCollectedAtFromFilename(name string) (time.Time, bool) {
+	base := strings.TrimSpace(filepath.Base(name))
+	if base == "" {
+		return time.Time{}, false
+	}
+
+	if m := filenameDateTimeRegex.FindStringSubmatch(base); len(m) == 4 {
+		datePart := m[1]
+		timePart := m[2]
+		if strings.TrimSpace(m[3]) != "" {
+			timePart += m[3]
+		} else {
+			timePart += "00"
+		}
+		if ts, err := parser.ParseInDefaultArchiveLocation("20060102 150405", datePart+" "+timePart); err == nil {
+			return ts, true
+		}
+	}
+
+	if m := filenameDateRegex.FindStringSubmatch(base); len(m) == 4 {
+		datePart := m[1] + "-" + m[2] + "-" + m[3]
+		if ts, err := parser.ParseInDefaultArchiveLocation("2006-01-02 15:04:05", datePart+" 00:00:00"); err == nil {
+			return ts, true
+		}
+	}
+
+	return time.Time{}, false
+}
+
+func inferRawExportCollectedAt(result *models.AnalysisResult, pkg *RawExportPackage) time.Time {
+	if result != nil && !result.CollectedAt.IsZero() {
+		return result.CollectedAt.UTC()
+	}
+	if pkg != nil {
+		if !pkg.CollectedAtHint.IsZero() {
+			return pkg.CollectedAtHint.UTC()
+		}
+		if _, finishedAt, ok := collectLogTimeBounds(pkg.Source.CollectLogs); ok {
+			return finishedAt.UTC()
+		}
+		if !pkg.ExportedAt.IsZero() {
+			return pkg.ExportedAt.UTC()
+		}
+	}
+	return time.Now().UTC()
+}
+
 func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectRequest) {
@@ -1279,7 +1772,14 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectReques
 	result.SourceType = models.SourceTypeAPI
 	result.Protocol = req.Protocol
 	result.TargetHost = req.Host
-	result.CollectedAt = time.Now().UTC()
+	if strings.TrimSpace(result.SourceTimezone) == "" && result.RawPayloads != nil {
+		if tz, ok := result.RawPayloads["source_timezone"].(string); ok {
+			result.SourceTimezone = strings.TrimSpace(tz)
+		}
+	}
+	if result.CollectedAt.IsZero() {
+		result.CollectedAt = time.Now().UTC()
+	}
 	if strings.TrimSpace(result.Filename) == "" {
 		result.Filename = fmt.Sprintf("%s://%s", req.Protocol, req.Host)
 	}
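The filename fallback in `inferCollectedAtFromFilename` recognizes two embedded patterns: `YYYYMMDD[-_]HHMM[SS]` and `YYYY-MM-DD`. A simplified sketch of that two-regex fallback, parsing in UTC instead of the project's `parser.ParseInDefaultArchiveLocation` (the UTC choice is the only deviation from the patch):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
	"time"
)

// Regexes mirroring the patch: "YYYYMMDD-HHMM[SS]" (underscore also
// accepted as the separator) and a plain "YYYY-MM-DD" date.
var (
	dateTimeRe = regexp.MustCompile(`(?i)(\d{8})[-_](\d{4})(\d{2})?`)
	dateRe     = regexp.MustCompile(`(?i)(\d{4})-(\d{2})-(\d{2})`)
)

// inferFromFilename tries the precise date-time pattern first, then
// falls back to a bare date at midnight, returning ok=false when
// neither pattern matches.
func inferFromFilename(name string) (time.Time, bool) {
	base := strings.TrimSpace(name)
	if m := dateTimeRe.FindStringSubmatch(base); len(m) == 4 {
		timePart := m[2]
		if m[3] != "" {
			timePart += m[3] // seconds present in the filename
		} else {
			timePart += "00" // default the seconds
		}
		if ts, err := time.Parse("20060102 150405", m[1]+" "+timePart); err == nil {
			return ts, true
		}
	}
	if m := dateRe.FindStringSubmatch(base); len(m) == 4 {
		if ts, err := time.Parse("2006-01-02", m[1]+"-"+m[2]+"-"+m[3]); err == nil {
			return ts, true
		}
	}
	return time.Time{}, false
}

func main() {
	ts, ok := inferFromFilename("bmc-dump-20240131-2359.tar.gz")
	fmt.Println(ok, ts.Format(time.RFC3339)) // true 2024-01-31T23:59:00Z
}
```

This sits at the bottom of a deliberate priority chain: reliable event timestamps first, any plausible event timestamp second, filename third, and only then `time.Now()`, so a re-imported archive keeps its original collection time instead of being stamped with the import time.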
internal/server/multipart_limits_test.go (new file, 29 lines)
@@ -0,0 +1,29 @@
+package server
+
+import "testing"
+
+func TestConvertMultipartMaxBytes_Default(t *testing.T) {
+	t.Setenv("LOGPILE_CONVERT_MAX_MB", "")
+	got := convertMultipartMaxBytes()
+	want := int64(16384) << 20
+	if got != want {
+		t.Fatalf("convertMultipartMaxBytes()=%d, want %d", got, want)
+	}
+}
+
+func TestConvertMultipartMaxBytes_EnvClamp(t *testing.T) {
+	t.Setenv("LOGPILE_CONVERT_MAX_MB", "42")
+	if got := convertMultipartMaxBytes(); got != (int64(512) << 20) {
+		t.Fatalf("expected min clamp 512MB, got %d", got)
+	}
+
+	t.Setenv("LOGPILE_CONVERT_MAX_MB", "999999")
+	if got := convertMultipartMaxBytes(); got != (int64(65536) << 20) {
+		t.Fatalf("expected max clamp 65536MB, got %d", got)
+	}
+
+	t.Setenv("LOGPILE_CONVERT_MAX_MB", "12288")
+	if got := convertMultipartMaxBytes(); got != (int64(12288) << 20) {
+		t.Fatalf("expected exact env value 12288MB, got %d", got)
+	}
+}
@@ -26,6 +26,9 @@ type RawExportPackage struct {
	ExportedAt time.Time              `json:"exported_at"`
	Source     RawExportSource        `json:"source"`
	Analysis   *models.AnalysisResult `json:"analysis_result,omitempty"`
	// CollectedAtHint is extracted from parser_fields.json when importing
	// a raw-export bundle and represents original collection time.
	CollectedAtHint time.Time `json:"-"`
}

type RawExportSource struct {
@@ -36,6 +39,7 @@ type RawExportSource struct {
	Data           string              `json:"data,omitempty"`
	Protocol       string              `json:"protocol,omitempty"`
	TargetHost     string              `json:"target_host,omitempty"`
	SourceTimezone string              `json:"source_timezone,omitempty"`
	RawPayloads    map[string]any      `json:"raw_payloads,omitempty"`
	CollectLogs    []string            `json:"collect_logs,omitempty"`
	CollectMeta    *CollectRequestMeta `json:"collect_meta,omitempty"`
@@ -53,6 +57,7 @@ func newRawExportFromUploadedFile(filename, mimeType string, payload []byte, res
			Data:           base64.StdEncoding.EncodeToString(payload),
			Protocol:       resultProtocol(result),
			TargetHost:     resultTargetHost(result),
			SourceTimezone: resultSourceTimezone(result),
		},
	}
}
@@ -79,6 +84,7 @@ func newRawExportFromLiveCollect(result *models.AnalysisResult, req CollectReque
			Kind:           "live_redfish",
			Protocol:       req.Protocol,
			TargetHost:     req.Host,
			SourceTimezone: resultSourceTimezone(result),
			RawPayloads:    rawPayloads,
			CollectLogs:    append([]string(nil), logs...),
			CollectMeta:    &meta,
@@ -158,23 +164,62 @@ func parseRawExportBundle(payload []byte) (*RawExportPackage, bool, error) {
	if err != nil {
		return nil, false, nil
	}
	var pkgBody []byte
	var parserFieldsBody []byte

	for _, f := range zr.File {
		if f.Name != rawExportBundlePackageFile {
		if f.Name != rawExportBundlePackageFile && f.Name != rawExportBundleFieldsFile {
			continue
		}
		rc, err := f.Open()
		if err != nil {
			return nil, true, err
		}
		defer rc.Close()
		body, err := io.ReadAll(rc)
		rc.Close()
		if err != nil {
			return nil, true, err
		}
		pkg, ok, err := parseRawExportPackage(body)
		switch f.Name {
		case rawExportBundlePackageFile:
			pkgBody = body
		case rawExportBundleFieldsFile:
			parserFieldsBody = body
		}
	}

	if len(pkgBody) == 0 {
		return nil, false, nil
	}
	pkg, ok, err := parseRawExportPackage(pkgBody)
	if err != nil || !ok {
		return pkg, ok, err
	}
	return nil, false, nil
	if ts, ok := parseCollectedAtHint(parserFieldsBody); ok {
		pkg.CollectedAtHint = ts.UTC()
	}
	return pkg, true, nil
}

func parseCollectedAtHint(parserFieldsBody []byte) (time.Time, bool) {
	if len(parserFieldsBody) == 0 {
		return time.Time{}, false
	}
	var payload struct {
		CollectedAt string `json:"collected_at"`
	}
	if err := json.Unmarshal(parserFieldsBody, &payload); err != nil {
		return time.Time{}, false
	}
	collectedAt := strings.TrimSpace(payload.CollectedAt)
	if collectedAt == "" {
		return time.Time{}, false
	}
	ts, err := time.Parse(time.RFC3339Nano, collectedAt)
	if err != nil {
		return time.Time{}, false
	}
	return ts, true
}

func buildHumanReadableCollectionLog(pkg *RawExportPackage, result *models.AnalysisResult, clientVersion string) string {
@@ -197,6 +242,11 @@ func buildHumanReadableCollectionLog(pkg *RawExportPackage, result *models.Analy
		if pkg.Source.Filename != "" {
			fmt.Fprintf(&b, "Source Filename: %s\n", pkg.Source.Filename)
		}
		if startedAt, finishedAt, ok := collectLogTimeBounds(pkg.Source.CollectLogs); ok {
			fmt.Fprintf(&b, "Collection Started: %s\n", startedAt.Format(time.RFC3339Nano))
			fmt.Fprintf(&b, "Collection Finished: %s\n", finishedAt.Format(time.RFC3339Nano))
			fmt.Fprintf(&b, "Collection Duration: %s\n", formatRawExportDuration(finishedAt.Sub(startedAt)))
		}
	}

	if pkg != nil && len(pkg.Source.CollectLogs) > 0 {
@@ -279,6 +329,42 @@ func buildHumanReadableCollectionLog(pkg *RawExportPackage, result *models.Analy
	return b.String()
}

func collectLogTimeBounds(lines []string) (time.Time, time.Time, bool) {
	var first time.Time
	var last time.Time
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		tsToken := line
		if idx := strings.IndexByte(line, ' '); idx > 0 {
			tsToken = line[:idx]
		}
		ts, err := time.Parse(time.RFC3339Nano, tsToken)
		if err != nil {
			continue
		}
		if first.IsZero() || ts.Before(first) {
			first = ts
		}
		if last.IsZero() || ts.After(last) {
			last = ts
		}
	}
	if first.IsZero() || last.IsZero() || last.Before(first) {
		return time.Time{}, time.Time{}, false
	}
	return first, last, true
}

func formatRawExportDuration(d time.Duration) string {
	if d < 0 {
		d = 0
	}
	return d.Round(time.Second).String()
}

func buildParserFieldSummary(result *models.AnalysisResult) map[string]any {
	out := map[string]any{
		"generated_at": time.Now().UTC(),
@@ -292,6 +378,7 @@ func buildParserFieldSummary(result *models.AnalysisResult) map[string]any {
	out["source_type"] = result.SourceType
	out["protocol"] = result.Protocol
	out["target_host"] = result.TargetHost
	out["source_timezone"] = result.SourceTimezone
	out["collected_at"] = result.CollectedAt

	if result.Hardware == nil {
@@ -341,3 +428,10 @@ func resultTargetHost(result *models.AnalysisResult) string {
	}
	return result.TargetHost
}

func resultSourceTimezone(result *models.AnalysisResult) string {
	if result == nil {
		return ""
	}
	return strings.TrimSpace(result.SourceTimezone)
}

101 internal/server/raw_export_test.go Normal file
@@ -0,0 +1,101 @@
package server

import (
	"archive/zip"
	"bytes"
	"encoding/json"
	"strings"
	"testing"
	"time"
)

func TestCollectLogTimeBounds(t *testing.T) {
	lines := []string{
		"2026-02-28T13:10:13.7442032Z Задача поставлена в очередь",
		"not-a-timestamp line",
		"2026-02-28T13:31:00.5077486Z Сбор завершен",
	}

	startedAt, finishedAt, ok := collectLogTimeBounds(lines)
	if !ok {
		t.Fatalf("expected bounds to be parsed")
	}
	if got := formatRawExportDuration(finishedAt.Sub(startedAt)); got != "20m47s" {
		t.Fatalf("unexpected duration: %s", got)
	}
}

func TestBuildHumanReadableCollectionLog_IncludesDurationHeader(t *testing.T) {
	pkg := &RawExportPackage{
		Format: rawExportFormatV1,
		Source: RawExportSource{
			Kind: "live_redfish",
			CollectLogs: []string{
				"2026-02-28T13:10:13.7442032Z Redfish: подключение к BMC...",
				"2026-02-28T13:31:00.5077486Z Сбор завершен",
			},
		},
	}

	logText := buildHumanReadableCollectionLog(pkg, nil, "LOGPile test")
	for _, token := range []string{
		"Collection Started:",
		"Collection Finished:",
		"Collection Duration:",
	} {
		if !strings.Contains(logText, token) {
			t.Fatalf("expected %q in log header", token)
		}
	}
}

func TestParseRawExportBundle_ExtractsCollectedAtHintFromParserFields(t *testing.T) {
	pkg := &RawExportPackage{
		Format:     rawExportFormatV1,
		ExportedAt: time.Date(2026, 2, 25, 9, 59, 41, 479023400, time.UTC),
		Source: RawExportSource{
			Kind: "live_redfish",
		},
	}
	pkgJSON, err := json.Marshal(pkg)
	if err != nil {
		t.Fatalf("marshal pkg: %v", err)
	}

	parserFields := []byte(`{"collected_at":"2026-02-25T09:58:05.9129753Z"}`)

	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)

	jf, err := zw.Create(rawExportBundlePackageFile)
	if err != nil {
		t.Fatalf("create package file: %v", err)
	}
	if _, err := jf.Write(pkgJSON); err != nil {
		t.Fatalf("write package file: %v", err)
	}

	ff, err := zw.Create(rawExportBundleFieldsFile)
	if err != nil {
		t.Fatalf("create parser fields file: %v", err)
	}
	if _, err := ff.Write(parserFields); err != nil {
		t.Fatalf("write parser fields file: %v", err)
	}

	if err := zw.Close(); err != nil {
		t.Fatalf("close zip writer: %v", err)
	}

	gotPkg, ok, err := parseRawExportBundle(buf.Bytes())
	if err != nil {
		t.Fatalf("parse bundle: %v", err)
	}
	if !ok || gotPkg == nil {
		t.Fatalf("expected valid raw export bundle")
	}
	want := time.Date(2026, 2, 25, 9, 58, 5, 912975300, time.UTC)
	if !gotPkg.CollectedAtHint.Equal(want) {
		t.Fatalf("expected collected_at hint %s, got %s", want, gotPkg.CollectedAtHint)
	}
}
@@ -32,17 +32,26 @@ type Server struct {
	result         *models.AnalysisResult
	detectedVendor string
	rawExport      *RawExportPackage
	convertJobs    map[string]struct{}
	convertOutput  map[string]ConvertArtifact

	jobManager *JobManager
	collectors *collector.Registry
}

type ConvertArtifact struct {
	Path    string
	Summary string
}

func New(cfg Config) *Server {
	s := &Server{
		config:     cfg,
		mux:        http.NewServeMux(),
		jobManager: NewJobManager(),
		collectors: collector.NewDefaultRegistry(),
		config:        cfg,
		mux:           http.NewServeMux(),
		jobManager:    NewJobManager(),
		collectors:    collector.NewDefaultRegistry(),
		convertJobs:   make(map[string]struct{}),
		convertOutput: make(map[string]ConvertArtifact),
	}
	s.setupRoutes()
	return s
@@ -63,6 +72,7 @@ func (s *Server) setupRoutes() {
	s.mux.HandleFunc("POST /api/upload", s.handleUpload)
	s.mux.HandleFunc("GET /api/status", s.handleGetStatus)
	s.mux.HandleFunc("GET /api/parsers", s.handleGetParsers)
	s.mux.HandleFunc("GET /api/file-types", s.handleGetFileTypes)
	s.mux.HandleFunc("GET /api/events", s.handleGetEvents)
	s.mux.HandleFunc("GET /api/sensors", s.handleGetSensors)
	s.mux.HandleFunc("GET /api/config", s.handleGetConfig)
@@ -72,6 +82,9 @@ func (s *Server) setupRoutes() {
	s.mux.HandleFunc("GET /api/export/csv", s.handleExportCSV)
	s.mux.HandleFunc("GET /api/export/json", s.handleExportJSON)
	s.mux.HandleFunc("GET /api/export/reanimator", s.handleExportReanimator)
	s.mux.HandleFunc("POST /api/convert", s.handleConvertReanimatorBatch)
	s.mux.HandleFunc("GET /api/convert/{id}", s.handleConvertStatus)
	s.mux.HandleFunc("GET /api/convert/{id}/download", s.handleConvertDownload)
	s.mux.HandleFunc("DELETE /api/clear", s.handleClear)
	s.mux.HandleFunc("POST /api/shutdown", s.handleShutdown)
	s.mux.HandleFunc("POST /api/collect", s.handleCollectStart)
@@ -154,3 +167,47 @@ func (s *Server) GetDetectedVendor() string {
	defer s.mu.RUnlock()
	return s.detectedVendor
}

func (s *Server) markConvertJob(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.convertJobs[id] = struct{}{}
}

func (s *Server) isConvertJob(id string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	_, ok := s.convertJobs[id]
	return ok
}

func (s *Server) setConvertArtifact(id string, artifact ConvertArtifact) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.convertOutput[id] = artifact
}

func (s *Server) getConvertArtifact(id string) (ConvertArtifact, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	artifact, ok := s.convertOutput[id]
	return artifact, ok
}

func (s *Server) clearConvertArtifact(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.convertOutput, id)
}

func (s *Server) clearAllConvertArtifacts() []ConvertArtifact {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make([]ConvertArtifact, 0, len(s.convertOutput))
	for _, artifact := range s.convertOutput {
		out = append(out, artifact)
	}
	s.convertOutput = make(map[string]ConvertArtifact)
	s.convertJobs = make(map[string]struct{})
	return out
}
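The convert-job helpers added above all follow one pattern: every map access goes through `s.mu`, with `RLock` for reads and `Lock` for writes, and a bulk clear drains and resets both maps under a single lock. A condensed, self-contained sketch of that registry pattern (the type and field names here are simplified for illustration, not the actual `Server` struct):

```go
package main

import (
	"fmt"
	"sync"
)

// convertRegistry condenses the convertJobs/convertOutput bookkeeping
// from the diff: a set of known job IDs plus a map of produced
// artifacts, both guarded by one RWMutex.
type convertRegistry struct {
	mu     sync.RWMutex
	jobs   map[string]struct{}
	output map[string]string // job ID -> artifact path (simplified)
}

func newConvertRegistry() *convertRegistry {
	return &convertRegistry{
		jobs:   make(map[string]struct{}),
		output: make(map[string]string),
	}
}

func (r *convertRegistry) mark(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.jobs[id] = struct{}{}
}

func (r *convertRegistry) isConvertJob(id string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	_, ok := r.jobs[id]
	return ok
}

func (r *convertRegistry) setArtifact(id, path string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.output[id] = path
}

// clearAll drains every artifact and resets both maps in one critical
// section, mirroring clearAllConvertArtifacts above.
func (r *convertRegistry) clearAll() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	out := make([]string, 0, len(r.output))
	for _, p := range r.output {
		out = append(out, p)
	}
	r.output = make(map[string]string)
	r.jobs = make(map[string]struct{})
	return out
}

func main() {
	reg := newConvertRegistry()
	reg.mark("job-1")
	reg.setArtifact("job-1", "/tmp/out.zip")
	fmt.Println(reg.isConvertJob("job-1"), len(reg.clearAll()), reg.isConvertJob("job-1"))
}
```

Returning the drained artifacts from `clearAll` lets the caller delete the files on disk after the lock is released, which keeps filesystem I/O out of the critical section.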
@@ -30,6 +30,138 @@ func TestApplyArchiveSourceMetadata(t *testing.T) {
	}
}

func TestApplyArchiveSourceMetadata_PreservesExistingCollectedAt(t *testing.T) {
	expected := time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC)
	result := &models.AnalysisResult{
		CollectedAt: expected,
	}

	applyArchiveSourceMetadata(result)

	if !result.CollectedAt.Equal(expected) {
		t.Fatalf("expected collected_at to be preserved: got %s want %s", result.CollectedAt, expected)
	}
}

func TestApplyArchiveSourceMetadata_InferCollectedAtFromEvents(t *testing.T) {
	oldTs := time.Date(2026, 2, 10, 13, 0, 0, 0, time.UTC)
	newTs := time.Date(2026, 2, 10, 15, 30, 0, 0, time.UTC)
	result := &models.AnalysisResult{
		Events: []models.Event{
			{Timestamp: oldTs},
			{Timestamp: newTs},
		},
	}

	applyArchiveSourceMetadata(result)

	if !result.CollectedAt.Equal(newTs) {
		t.Fatalf("expected collected_at from latest event: got %s want %s", result.CollectedAt, newTs)
	}
}

func TestApplyArchiveSourceMetadata_InferCollectedAtFromFilename(t *testing.T) {
	result := &models.AnalysisResult{
		Filename: "dump_23E100203_20260228-0428.tar.gz",
	}

	applyArchiveSourceMetadata(result)

	// 2026-02-28 04:28 in Europe/Moscow => 2026-02-28 01:28 UTC
	want := time.Date(2026, 2, 28, 1, 28, 0, 0, time.UTC)
	if !result.CollectedAt.Equal(want) {
		t.Fatalf("expected collected_at from filename: got %s want %s", result.CollectedAt, want)
	}
}

func TestApplyArchiveSourceMetadata_IgnoresSyntheticComponentNowEvents(t *testing.T) {
	realTs := time.Date(2026, 2, 28, 4, 18, 18, 217225000, time.FixedZone("UTC+8", 8*3600))
	syntheticNow := time.Date(2026, 3, 5, 10, 0, 0, 0, time.UTC)
	result := &models.AnalysisResult{
		Events: []models.Event{
			{
				Timestamp:  realTs,
				Source:     "spx_restservice_ext",
				SensorType: "syslog",
				EventType:  "System Log",
			},
			{
				Timestamp:  syntheticNow,
				Source:     "Fan",
				SensorType: "fan",
				EventType:  "Fan Status",
			},
		},
	}

	applyArchiveSourceMetadata(result)

	if !result.CollectedAt.Equal(realTs.UTC()) {
		t.Fatalf("expected collected_at from real log timestamp: got %s want %s", result.CollectedAt, realTs.UTC())
	}
}

func TestInferRawExportCollectedAt_PrefersResultCollectedAt(t *testing.T) {
	expected := time.Date(2026, 2, 25, 8, 0, 0, 0, time.UTC)
	result := &models.AnalysisResult{CollectedAt: expected}
	pkg := &RawExportPackage{
		ExportedAt: time.Date(2026, 2, 25, 9, 59, 41, 0, time.UTC),
		Source: RawExportSource{
			CollectLogs: []string{
				"2026-02-25T09:00:00Z step1",
				"2026-02-25T09:10:00Z step2",
			},
		},
	}

	got := inferRawExportCollectedAt(result, pkg)
	if !got.Equal(expected) {
		t.Fatalf("expected collected_at from result: got %s want %s", got, expected)
	}
}

func TestInferRawExportCollectedAt_UsesCollectLogsThenExportedAt(t *testing.T) {
	hintTs := time.Date(2026, 2, 25, 9, 58, 5, 912975300, time.UTC)
	pkgWithLogs := &RawExportPackage{
		ExportedAt:      time.Date(2026, 2, 25, 9, 59, 41, 0, time.UTC),
		CollectedAtHint: hintTs,
		Source: RawExportSource{
			CollectLogs: []string{
				"2026-02-25T09:10:13.7442032Z started",
				"2026-02-25T09:31:00.5077486Z finished",
			},
		},
	}
	got := inferRawExportCollectedAt(&models.AnalysisResult{}, pkgWithLogs)
	if !got.Equal(hintTs) {
		t.Fatalf("expected collected_at from parser_fields hint: got %s want %s", got, hintTs)
	}

	pkgFromLogs := &RawExportPackage{
		ExportedAt: time.Date(2026, 2, 25, 9, 59, 41, 0, time.UTC),
		Source: RawExportSource{
			CollectLogs: []string{
				"2026-02-25T09:10:13.7442032Z started",
				"2026-02-25T09:31:00.5077486Z finished",
			},
		},
	}
	got = inferRawExportCollectedAt(&models.AnalysisResult{}, pkgFromLogs)
	wantFromLogs := time.Date(2026, 2, 25, 9, 31, 0, 507748600, time.UTC)
	if !got.Equal(wantFromLogs) {
		t.Fatalf("expected collected_at from collect logs: got %s want %s", got, wantFromLogs)
	}

	pkgWithoutLogs := &RawExportPackage{
		ExportedAt: time.Date(2026, 2, 25, 9, 59, 41, 479023400, time.UTC),
	}
	got = inferRawExportCollectedAt(&models.AnalysisResult{}, pkgWithoutLogs)
	wantFromExportedAt := time.Date(2026, 2, 25, 9, 59, 41, 479023400, time.UTC)
	if !got.Equal(wantFromExportedAt) {
		t.Fatalf("expected collected_at from exported_at: got %s want %s", got, wantFromExportedAt)
	}
}

func TestApplyCollectSourceMetadata(t *testing.T) {
	req := CollectRequest{
		Host: "bmc-api.local",
@@ -76,6 +208,32 @@ func TestApplyCollectSourceMetadata(t *testing.T) {
	}
}

func TestApplyCollectSourceMetadata_PreservesCollectedAtAndTimezone(t *testing.T) {
	req := CollectRequest{
		Host:     "bmc-api.local",
		Protocol: "redfish",
		Port:     443,
		Username: "admin",
		AuthType: "password",
		Password: "super-secret",
		TLSMode:  "strict",
	}
	collectedAt := time.Date(2026, 2, 28, 4, 18, 18, 0, time.FixedZone("UTC+8", 8*3600))
	result := &models.AnalysisResult{
		CollectedAt:    collectedAt,
		SourceTimezone: "+08:00",
	}

	applyCollectSourceMetadata(result, req)

	if !result.CollectedAt.Equal(collectedAt) {
		t.Fatalf("expected collected_at to be preserved: got %s want %s", result.CollectedAt, collectedAt)
	}
	if result.SourceTimezone != "+08:00" {
		t.Fatalf("expected source_timezone to be preserved, got %q", result.SourceTimezone)
	}
}

func TestStatusAndConfigExposeSourceMetadata(t *testing.T) {
	s := &Server{}
	s.SetDetectedVendor("nvidia")
@@ -84,13 +84,6 @@ main {
}

.upload-area button {
	background: #3498db;
	color: white;
	border: none;
	padding: 0.75rem 1.5rem;
	border-radius: 4px;
	cursor: pointer;
	font-size: 1rem;
	margin: 1rem 0;
}

@@ -166,6 +159,9 @@ main {

.api-form-actions {
	margin-top: 0.9rem;
	display: flex;
	flex-wrap: wrap;
	gap: 0.6rem;
}

#api-connect-form.is-disabled {
@@ -173,19 +169,39 @@ main {
	pointer-events: none;
}

#api-connect-btn {
#api-connect-btn,
#convert-folder-btn,
#convert-run-btn,
#cancel-job-btn,
.upload-area button {
	background: #3498db;
	color: white;
	color: #fff;
	border: none;
	padding: 0.6rem 1.2rem;
	border-radius: 4px;
	border-radius: 6px;
	padding: 0.6rem 1rem;
	font-size: 0.95rem;
	font-weight: 600;
	cursor: pointer;
	transition: background-color 0.2s ease, opacity 0.2s ease;
}

#api-connect-btn:hover {
#api-connect-btn:hover,
#convert-folder-btn:hover,
#convert-run-btn:hover,
#cancel-job-btn:hover,
.upload-area button:hover {
	background: #2980b9;
}

#convert-run-btn:disabled,
#convert-folder-btn:disabled,
#api-connect-btn:disabled,
#cancel-job-btn:disabled,
.upload-area button:disabled {
	opacity: 0.6;
	cursor: not-allowed;
}

.api-connect-status {
	margin-top: 0.75rem;
	font-size: 0.85rem;
@@ -199,6 +215,38 @@ main {
	color: #dc3545;
}

.api-connect-status.info {
	color: #0f4dba;
}

.convert-progress {
	margin-top: 0.9rem;
}

.convert-progress-meta {
	display: flex;
	justify-content: space-between;
	align-items: center;
	margin-bottom: 0.35rem;
	font-size: 0.82rem;
	color: #475569;
}

.convert-progress-track {
	height: 12px;
	border-radius: 999px;
	border: 1px solid #cbd5e1;
	background: #e2e8f0;
	overflow: hidden;
}

.convert-progress-bar {
	height: 100%;
	width: 0%;
	background: linear-gradient(90deg, #2563eb, #0ea5e9);
	transition: width 0.2s ease;
}

.job-status {
	margin-top: 1rem;
	border: 1px solid #d0d7de;
@@ -220,15 +268,6 @@ main {
	color: #2c3e50;
}

#cancel-job-btn {
	background: #dc3545;
	color: #fff;
	border: none;
	border-radius: 4px;
	padding: 0.45rem 0.75rem;
	cursor: pointer;
}

#cancel-job-btn:disabled {
	background: #9ca3af;
	cursor: default;
@@ -242,6 +281,28 @@ main {
	font-size: 0.9rem;
}

.job-progress {
	height: 22px;
	border-radius: 999px;
	border: 1px solid #cbd5e1;
	background: #e2e8f0;
	overflow: hidden;
	margin-bottom: 0.8rem;
}

.job-progress-bar {
	height: 100%;
	min-width: 2.5rem;
	background: linear-gradient(90deg, #2563eb, #0ea5e9);
	color: #fff;
	font-size: 0.78rem;
	font-weight: 700;
	display: flex;
	align-items: center;
	justify-content: center;
	transition: width 0.25s ease;
}

.meta-label {
	color: #64748b;
	font-weight: 600;
@@ -323,38 +384,43 @@ main {

.parsers-title {
	font-size: 0.85rem;
	color: #666;
	margin-bottom: 0.5rem;
	color: #4b5563;
	margin-bottom: 0.6rem;
	font-weight: 600;
}

.parsers-list {
	display: flex;
	flex-wrap: wrap;
	gap: 0.5rem;
	gap: 0.6rem;
	justify-content: center;
}

.parser-item {
.parser-chip {
	display: inline-flex;
	align-items: center;
	gap: 0.5rem;
	background: #f8f9fa;
	padding: 0.4rem 0.8rem;
	border-radius: 4px;
	border: 1px solid #e0e0e0;
	gap: 0.45rem;
	background: #eef6ff;
	padding: 0.38rem 0.72rem;
	border-radius: 999px;
	border: 1px solid #bfdcff;
	line-height: 1;
}

.parser-name {
.parser-chip-name {
	font-size: 0.85rem;
	color: #2c3e50;
	color: #1f2937;
	font-weight: 500;
}

.parser-version {
	font-size: 0.75rem;
	color: #888;
	background: #e8e8e8;
	padding: 0.1rem 0.4rem;
	border-radius: 3px;
.parser-chip-version {
	font-size: 0.72rem;
	color: #1d4ed8;
	background: #dbeafe;
	padding: 0.12rem 0.42rem;
	border-radius: 999px;
	border: 1px solid #bfdbfe;
	font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace;
}

/* File Info */
@@ -4,12 +4,19 @@ document.addEventListener('DOMContentLoaded', () => {
|
||||
initSourceType();
|
||||
initApiSource();
|
||||
initUpload();
|
||||
initConvertMode();
|
||||
initTabs();
|
||||
initFilters();
|
||||
loadParsersInfo();
|
||||
loadSupportedFileTypes();
|
||||
});
|
||||
|
||||
let sourceType = 'archive';
|
||||
let convertFiles = [];
|
||||
let isConvertRunning = false;
|
||||
const CONVERT_MAX_FILES_PER_BATCH = 1000;
|
||||
let supportedUploadExtensions = null;
|
||||
let supportedConvertExtensions = null;
|
||||
let apiConnectPayload = null;
|
||||
let collectionJob = null;
|
||||
let collectionJobPollTimer = null;
|
||||
@@ -29,7 +36,13 @@ function initSourceType() {
|
||||
}
|
||||
|
||||
function setSourceType(nextType) {
|
||||
sourceType = nextType === 'api' ? 'api' : 'archive';
|
||||
if (nextType === 'api') {
|
||||
sourceType = 'api';
|
||||
} else if (nextType === 'convert') {
|
||||
sourceType = 'convert';
|
||||
} else {
|
||||
sourceType = 'archive';
|
||||
}
|
||||
|
||||
document.querySelectorAll('.source-switch-btn').forEach(button => {
|
||||
button.classList.toggle('active', button.dataset.sourceType === sourceType);
|
||||
@@ -37,8 +50,12 @@ function setSourceType(nextType) {
|
||||
|
||||
const archiveContent = document.getElementById('archive-source-content');
|
||||
const apiSourceContent = document.getElementById('api-source-content');
|
||||
const convertSourceContent = document.getElementById('convert-source-content');
|
||||
archiveContent.classList.toggle('hidden', sourceType !== 'archive');
|
||||
apiSourceContent.classList.toggle('hidden', sourceType !== 'api');
|
||||
if (convertSourceContent) {
|
||||
convertSourceContent.classList.toggle('hidden', sourceType !== 'convert');
|
||||
}
|
||||
}
|
||||
|
||||
function initApiSource() {
|
||||
@@ -281,7 +298,6 @@ function pollCollectionJobStatus() {
|
||||
renderCollectionJob();
|
||||
}
|
||||
} else if (prevStatus !== collectionJob.status && collectionJob.status === 'running') {
|
||||
appendJobLog('Сбор выполняется...');
|
||||
renderCollectionJob();
|
||||
}
|
||||
})
|
||||
@@ -335,9 +351,11 @@ function renderCollectionJob() {
|
||||
const jobIdValue = document.getElementById('job-id-value');
|
||||
const statusValue = document.getElementById('job-status-value');
|
||||
const progressValue = document.getElementById('job-progress-value');
|
||||
const etaValue = document.getElementById('job-eta-value');
|
||||
const progressBar = document.getElementById('job-progress-bar');
|
||||
const logsList = document.getElementById('job-logs-list');
|
||||
const cancelButton = document.getElementById('cancel-job-btn');
|
||||
if (!jobStatusBlock || !jobIdValue || !statusValue || !progressValue || !logsList || !cancelButton) {
|
||||
if (!jobStatusBlock || !jobIdValue || !statusValue || !progressValue || !etaValue || !progressBar || !logsList || !cancelButton) {
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -357,12 +375,16 @@ function renderCollectionJob() {
|
||||
failed: 'Сбор завершился ошибкой',
|
||||
canceled: 'Сбор отменен'
|
||||
}[collectionJob.status];
|
||||
const progressLabel = isTerminal
|
||||
? terminalMessage
|
||||
: 'Сбор данных...';
|
||||
progressValue.textContent = `${collectionJob.progress}% · ${progressLabel}`;
|
||||
const activity = isTerminal ? terminalMessage : latestCollectionActivityMessage();
|
||||
const eta = isTerminal ? '-' : latestCollectionETA();
|
||||
const progressPercent = Math.max(0, Math.min(100, Number(collectionJob.progress) || 0));
|
||||
|
||||
logsList.innerHTML = collectionJob.logs.map((log) => (
|
||||
progressValue.textContent = activity;
|
||||
etaValue.textContent = eta;
|
||||
progressBar.style.width = `${progressPercent}%`;
|
||||
progressBar.textContent = `${progressPercent}%`;
|
||||
|
||||
logsList.innerHTML = [...collectionJob.logs].reverse().map((log) => (
|
||||
`<li><span class="log-time">${escapeHtml(log.time)}</span><span class="log-message">${escapeHtml(log.message)}</span></li>`
|
||||
)).join('');
|
||||
|
||||
@@ -370,6 +392,39 @@ function renderCollectionJob() {
|
||||
setApiFormBlocked(!isTerminal);
|
||||
}
|
||||
|
||||
function latestCollectionActivityMessage() {
|
||||
if (!collectionJob || !Array.isArray(collectionJob.logs) || collectionJob.logs.length === 0) {
|
||||
return 'Сбор данных...';
|
||||
}
|
||||
const last = String(collectionJob.logs[collectionJob.logs.length - 1].message || '').trim();
|
||||
if (!last) {
|
||||
return 'Сбор данных...';
|
||||
}
|
||||
// Job logs already contain server timestamp prefix. Show concise step text in progress label.
|
||||
const cleaned = last.replace(/^\d{4}-\d{2}-\d{2}T[^\s]+\s+/, '').trim();
|
||||
if (!cleaned) {
|
||||
return 'Сбор данных...';
|
||||
}
|
||||
return cleaned.replace(/\s*[,(]?\s*ETA[^,;)]*/i, '').trim() || 'Сбор данных...';
|
||||
}
|
||||
|
||||
function latestCollectionETA() {
|
||||
if (!collectionJob || !Array.isArray(collectionJob.logs) || collectionJob.logs.length === 0) {
|
||||
return '-';
|
||||
}
|
||||
const last = String(collectionJob.logs[collectionJob.logs.length - 1].message || '').trim();
|
||||
const cleaned = last.replace(/^\d{4}-\d{2}-\d{2}T[^\s]+\s+/, '').trim();
|
||||
if (!cleaned) {
|
||||
return '-';
|
||||
}
|
||||
const match = cleaned.match(/ETA[^,;)]*/i);
|
||||
if (!match) {
|
||||
return '-';
|
||||
}
|
||||
const eta = match[0].replace(/^ETA\s*[:=~≈-]?\s*/i, '').trim();
|
||||
return eta || '-';
|
||||
}
|
||||
|
||||
function isCollectionJobTerminal(status) {
|
||||
return ['success', 'failed', 'canceled'].includes(normalizeJobStatus(status));
|
||||
}
|
||||
@@ -485,12 +540,12 @@ async function loadParsersInfo() {
|
||||
const container = document.getElementById('parsers-info');
|
||||
|
||||
if (data.parsers && data.parsers.length > 0) {
|
||||
let html = '<p class="parsers-title">Поддерживаемые платформы:</p><div class="parsers-list">';
|
||||
let html = '<p class="parsers-title">Подключенные парсеры:</p><div class="parsers-list">';
|
||||
data.parsers.forEach(p => {
|
||||
html += `<div class="parser-item">
|
||||
<span class="parser-name">${escapeHtml(p.name)}</span>
|
||||
<span class="parser-version">v${escapeHtml(p.version)}</span>
|
||||
</div>`;
|
||||
html += `<span class="parser-chip">
|
||||
<span class="parser-chip-name">${escapeHtml(p.name)}</span>
|
||||
<span class="parser-chip-version">v${escapeHtml(p.version)}</span>
|
||||
</span>`;
|
||||
});
|
||||
html += '</div>';
|
||||
container.innerHTML = html;
|
||||
@@ -561,6 +616,348 @@ async function uploadFile(file) {
|
||||
}
|
||||
}
|
||||
|
||||
function initConvertMode() {
|
||||
const folderInput = document.getElementById('convert-folder-input');
|
||||
const runButton = document.getElementById('convert-run-btn');
|
||||
if (!folderInput || !runButton) {
|
||||
return;
|
||||
}
|
||||
|
||||
folderInput.addEventListener('change', () => {
|
||||
convertFiles = Array.from(folderInput.files || []).filter(file => file && file.name);
|
||||
renderConvertSummary();
|
||||
});
|
||||
|
||||
runButton.addEventListener('click', async () => {
|
||||
await runConvertBatch();
|
||||
});
|
||||
renderConvertSummary();
|
||||
}
|
||||
|
||||
+function renderConvertSummary() {
+    const summary = document.getElementById('convert-folder-summary');
+    if (!summary) {
+        return;
+    }
+
+    if (convertFiles.length === 0) {
+        summary.textContent = 'Выберите папку с файлами, включая вложенные каталоги.';
+        summary.className = 'api-connect-status';
+        return;
+    }
+
+    const selectedFiles = convertFiles.filter(file => file && file.name);
+    const supportedFiles = selectedFiles.filter(file => isSupportedConvertFileName(file.webkitRelativePath || file.name));
+    const skippedCount = selectedFiles.length - supportedFiles.length;
+    const previewCount = 5;
+    const previewFiles = supportedFiles.slice(0, previewCount).map(file => escapeHtml(file.webkitRelativePath || file.name));
+    const remaining = supportedFiles.length - previewFiles.length;
+    const previewText = previewFiles.length > 0 ? `Примеры: ${previewFiles.join(', ')}` : '';
+    const skippedText = skippedCount > 0 ? ` Пропущено неподдерживаемых: ${skippedCount}.` : '';
+    const batchCount = Math.ceil(supportedFiles.length / CONVERT_MAX_FILES_PER_BATCH);
+    const batchesText = batchCount > 1 ? ` Будет ${batchCount} прохода(ов) по ${CONVERT_MAX_FILES_PER_BATCH} файлов.` : '';
+
+    summary.innerHTML = `<strong>${supportedFiles.length}</strong> файлов готовы к конвертации.${previewText ? ` ${previewText}` : ''}${remaining > 0 ? ` и ещё ${remaining}` : ''}.${skippedText}${batchesText}`;
+    summary.className = 'api-connect-status';
+}
+
+async function runConvertBatch() {
+    const runButton = document.getElementById('convert-run-btn');
+    if (!runButton || isConvertRunning) {
+        return;
+    }
+    if (convertFiles.length === 0) {
+        renderConvertStatus('Нет файлов для конвертации', 'error');
+        return;
+    }
+
+    const selectedFiles = convertFiles.filter(file => file && file.name);
+    const supportedFiles = selectedFiles.filter(file => isSupportedConvertFileName(file.webkitRelativePath || file.name));
+    if (supportedFiles.length === 0) {
+        renderConvertStatus('В выбранной папке нет файлов поддерживаемого типа', 'error');
+        return;
+    }
+    const batches = chunkFiles(supportedFiles, CONVERT_MAX_FILES_PER_BATCH);
+
+    isConvertRunning = true;
+    runButton.disabled = true;
+    renderConvertProgress(0, 'Подготовка загрузки...');
+    renderConvertStatus(`Выполняю пакетную конвертацию (${batches.length} проходов)...`, 'info');
+
+    try {
+        const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
+        const passSummaries = [];
+
+        for (let batchIdx = 0; batchIdx < batches.length; batchIdx++) {
+            const batchFiles = batches[batchIdx];
+            const pass = batchIdx + 1;
+            const passLabel = `Проход ${pass}/${batches.length}`;
+            const passStart = Math.round((batchIdx / batches.length) * 100);
+            const passEnd = Math.round(((batchIdx + 1) / batches.length) * 100);
+
+            const formData = new FormData();
+            batchFiles.forEach(file => {
+                const relativePath = file.webkitRelativePath || file.name || 'file';
+                formData.append('files[]', file, relativePath);
+            });
+
+            const startResponse = await uploadConvertBatch(formData, (percent) => {
+                const clamped = Math.max(0, Math.min(100, Number(percent) || 0));
+                const uploadPhase = passStart + Math.round((passEnd - passStart) * 0.3 * (clamped / 100));
+                renderConvertProgress(uploadPhase, `${passLabel}: загрузка ${clamped}%`);
+            });
+
+            if (!startResponse.ok) {
+                const errorPayload = parseConvertErrorPayload(startResponse.bodyText);
+                hideConvertProgress();
+                renderConvertStatus(`${passLabel}: ${errorPayload.error || 'пакетная конвертация завершилась с ошибкой'}`, 'error');
+                return;
+            }
+
+            if (!startResponse.jobId) {
+                hideConvertProgress();
+                renderConvertStatus(`${passLabel}: сервер не вернул идентификатор задачи`, 'error');
+                return;
+            }
+
+            await waitForConvertJob(startResponse.jobId, (statusPayload) => {
+                const serverProgress = Math.max(0, Math.min(100, Number(statusPayload.progress || 0)));
+                const phase = 0.3 + 0.7 * (serverProgress / 100);
+                const combined = passStart + Math.round((passEnd - passStart) * phase);
+                renderConvertProgress(combined, `${passLabel}: конвертация ${serverProgress}%`);
+            });
+
+            const downloadResponse = await downloadConvertArchive(startResponse.jobId);
+            if (!downloadResponse.ok) {
+                const errorPayload = parseConvertErrorPayload(downloadResponse.bodyText);
+                hideConvertProgress();
+                renderConvertStatus(`${passLabel}: ${errorPayload.error || 'не удалось скачать результат'}`, 'error');
+                return;
+            }
+
+            const suffix = batches.length > 1 ? `-part${pass}` : '';
+            downloadBlob(downloadResponse.blob, `logpile-convert-${timestamp}${suffix}.zip`);
+            passSummaries.push(downloadResponse.summaryHeader || `${passLabel}: завершено`);
+        }
+
+        hideConvertProgress();
+        renderConvertStatus(passSummaries.join(' | '), 'success');
+    } catch (err) {
+        hideConvertProgress();
+        renderConvertStatus('Ошибка соединения при конвертации', 'error');
+    } finally {
+        isConvertRunning = false;
+        runButton.disabled = false;
+    }
+}
+
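As an aside on the progress math in `runConvertBatch`: each pass owns an equal slice of the progress bar, the first 30% of the slice tracks the upload and the remaining 70% tracks server-side conversion. A minimal standalone sketch of that arithmetic (formulas copied from the handlers above; the wrapper function names are invented for illustration):

```javascript
// Slice of the 0..100 progress bar owned by one pass, as computed above.
function passBounds(batchIdx, batchCount) {
    const passStart = Math.round((batchIdx / batchCount) * 100);
    const passEnd = Math.round(((batchIdx + 1) / batchCount) * 100);
    return { passStart, passEnd };
}

// Upload phase: first 30% of the pass slice, scaled by upload percent.
function uploadProgress(batchIdx, batchCount, uploadPercent) {
    const { passStart, passEnd } = passBounds(batchIdx, batchCount);
    const clamped = Math.max(0, Math.min(100, Number(uploadPercent) || 0));
    return passStart + Math.round((passEnd - passStart) * 0.3 * (clamped / 100));
}

// Conversion phase: remaining 70% of the slice, driven by server progress.
function convertProgress(batchIdx, batchCount, serverProgress) {
    const { passStart, passEnd } = passBounds(batchIdx, batchCount);
    const phase = 0.3 + 0.7 * (serverProgress / 100);
    return passStart + Math.round((passEnd - passStart) * phase);
}

// Second of two passes: its slice is 50..100.
console.log(uploadProgress(1, 2, 100));  // 65  (50 + 30% of the 50-wide slice)
console.log(convertProgress(1, 2, 100)); // 100
```

So the bar never jumps backwards between the upload and conversion phases of a pass, and finishing pass N lands exactly on the start of pass N+1's slice.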
+function chunkFiles(files, chunkSize) {
+    const safeChunkSize = Math.max(1, Number(chunkSize) || 1);
+    const chunks = [];
+    for (let i = 0; i < files.length; i += safeChunkSize) {
+        chunks.push(files.slice(i, i + safeChunkSize));
+    }
+    return chunks;
+}
+
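`chunkFiles` above is a plain array-batching helper with no DOM dependencies, so it can be exercised directly; a minimal sketch (helper copied verbatim, sample inputs assumed):

```javascript
// Standalone copy of the chunkFiles helper above.
function chunkFiles(files, chunkSize) {
    const safeChunkSize = Math.max(1, Number(chunkSize) || 1);
    const chunks = [];
    for (let i = 0; i < files.length; i += safeChunkSize) {
        chunks.push(files.slice(i, i + safeChunkSize));
    }
    return chunks;
}

// Five "files" with a batch size of 2 split into three passes: 2 + 2 + 1.
console.log(chunkFiles(['a', 'b', 'c', 'd', 'e'], 2));
// A missing or non-numeric chunk size falls back to batches of 1.
console.log(chunkFiles(['a', 'b'], undefined));
```

The `Math.max(1, …)` guard is what keeps a bad `CONVERT_MAX_FILES_PER_BATCH` value from producing an infinite loop or an empty batch list.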
+function uploadConvertBatch(formData, onUploadPercent) {
+    return new Promise((resolve, reject) => {
+        const xhr = new XMLHttpRequest();
+        xhr.open('POST', '/api/convert');
+        xhr.responseType = 'text';
+
+        xhr.upload.addEventListener('progress', (event) => {
+            if (!event.lengthComputable) {
+                return;
+            }
+            const percent = Math.max(0, Math.min(100, Math.round((event.loaded / event.total) * 100)));
+            onUploadPercent(percent);
+        });
+
+        xhr.addEventListener('load', () => {
+            if (xhr.status >= 200 && xhr.status < 300) {
+                let body = {};
+                try {
+                    body = JSON.parse(xhr.responseText || '{}');
+                } catch (err) {
+                    body = {};
+                }
+                resolve({
+                    ok: true,
+                    status: xhr.status,
+                    jobId: body.job_id || ''
+                });
+                return;
+            }
+            resolve({
+                ok: false,
+                status: xhr.status,
+                bodyText: xhr.responseText || ''
+            });
+        });
+
+        xhr.addEventListener('error', () => {
+            reject(new Error('network'));
+        });
+
+        xhr.send(formData);
+    });
+}
+
+async function waitForConvertJob(jobId, onProgress) {
+    while (true) {
+        const response = await fetch(`/api/convert/${encodeURIComponent(jobId)}`);
+        const payload = await response.json().catch(() => ({}));
+        if (!response.ok) {
+            throw new Error(payload.error || 'Не удалось получить статус конвертации');
+        }
+
+        if (onProgress) {
+            onProgress(payload);
+        }
+
+        const status = String(payload.status || '').toLowerCase();
+        if (status === 'success') {
+            return payload;
+        }
+        if (status === 'failed' || status === 'canceled') {
+            throw new Error(payload.error || 'Конвертация завершилась ошибкой');
+        }
+
+        await delay(900);
+    }
+}
+
+async function downloadConvertArchive(jobId) {
+    const response = await fetch(`/api/convert/${encodeURIComponent(jobId)}/download`);
+    if (!response.ok) {
+        return {
+            ok: false,
+            bodyText: await response.text().catch(() => '')
+        };
+    }
+    return {
+        ok: true,
+        blob: await response.blob(),
+        summaryHeader: response.headers.get('X-Convert-Summary') || ''
+    };
+}
+
+function delay(ms) {
+    return new Promise((resolve) => {
+        window.setTimeout(resolve, ms);
+    });
+}
+
+function parseConvertErrorPayload(bodyText) {
+    if (!bodyText) {
+        return {};
+    }
+    try {
+        return JSON.parse(bodyText);
+    } catch (err) {
+        return {};
+    }
+}
+
+function isSupportedConvertFileName(filename) {
+    const name = String(filename || '').trim().toLowerCase();
+    if (!name) {
+        return false;
+    }
+    if (Array.isArray(supportedConvertExtensions) && supportedConvertExtensions.length > 0) {
+        return supportedConvertExtensions.some(ext => name.endsWith(ext));
+    }
+    return true;
+}
+
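The filter above does a simple case-insensitive suffix match against the `supportedConvertExtensions` global. A standalone sketch (function copied verbatim; the extension list here is a hypothetical stand-in for the value normally loaded from `/api/file-types`):

```javascript
// Hypothetical stand-in for the supportedConvertExtensions global.
const supportedConvertExtensions = ['.tar.gz', '.tgz', '.sds', '.zip', '.log'];

// Standalone copy of the extension filter above.
function isSupportedConvertFileName(filename) {
    const name = String(filename || '').trim().toLowerCase();
    if (!name) {
        return false;
    }
    if (Array.isArray(supportedConvertExtensions) && supportedConvertExtensions.length > 0) {
        return supportedConvertExtensions.some(ext => name.endsWith(ext));
    }
    // With no list loaded yet, stay permissive and accept everything.
    return true;
}

// Matching is case-insensitive and works on relative paths from webkitdirectory.
console.log(isSupportedConvertFileName('logs/HOST-01/dump.TAR.GZ')); // true
console.log(isSupportedConvertFileName('notes/readme.md'));          // false
```

Because the match is `endsWith` on the whole lowercased path, multi-part extensions like `.tar.gz` work without special-casing.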
+async function loadSupportedFileTypes() {
+    try {
+        const response = await fetch('/api/file-types');
+        const payload = await response.json();
+        if (!response.ok) {
+            return;
+        }
+        if (Array.isArray(payload.upload_extensions)) {
+            supportedUploadExtensions = payload.upload_extensions
+                .map(ext => String(ext || '').trim().toLowerCase())
+                .filter(Boolean);
+        }
+        if (Array.isArray(payload.convert_extensions)) {
+            supportedConvertExtensions = payload.convert_extensions
+                .map(ext => String(ext || '').trim().toLowerCase())
+                .filter(Boolean);
+        }
+        applyUploadAcceptExtensions();
+        renderConvertSummary();
+    } catch (err) {
+        // Keep permissive fallback if endpoint is temporarily unavailable.
+    }
+}
+
+function applyUploadAcceptExtensions() {
+    const fileInput = document.getElementById('file-input');
+    if (!fileInput || !Array.isArray(supportedUploadExtensions) || supportedUploadExtensions.length === 0) {
+        return;
+    }
+    fileInput.setAttribute('accept', supportedUploadExtensions.join(','));
+}
+
+function renderConvertStatus(message, status) {
+    const statusNode = document.getElementById('convert-status');
+    if (!statusNode) {
+        return;
+    }
+
+    statusNode.textContent = message || '';
+    statusNode.className = 'api-connect-status';
+    if (status === 'success') {
+        statusNode.classList.add('success');
+    } else if (status === 'error') {
+        statusNode.classList.add('error');
+    } else if (status === 'info') {
+        statusNode.classList.add('info');
+    }
+}
+
+function renderConvertProgress(percent, label) {
+    const wrap = document.getElementById('convert-progress');
+    const bar = document.getElementById('convert-progress-bar');
+    const value = document.getElementById('convert-progress-value');
+    const text = document.getElementById('convert-progress-label');
+    if (!wrap || !bar || !value || !text) {
+        return;
+    }
+
+    const safePercent = Math.max(0, Math.min(100, Number(percent) || 0));
+    wrap.classList.remove('hidden');
+    bar.style.width = `${safePercent}%`;
+    value.textContent = `${safePercent}%`;
+    text.textContent = label || 'Выполняется...';
+}
+
+function hideConvertProgress() {
+    const wrap = document.getElementById('convert-progress');
+    if (!wrap) {
+        return;
+    }
+    wrap.classList.add('hidden');
+}
+
+function downloadBlob(blob, filename) {
+    const url = URL.createObjectURL(blob);
+    const link = document.createElement('a');
+    link.style.display = 'none';
+    link.href = url;
+    link.download = filename;
+    document.body.appendChild(link);
+    link.click();
+    document.body.removeChild(link);
+    window.setTimeout(() => {
+        URL.revokeObjectURL(url);
+    }, 3000);
+}
+
 // Tab navigation
 function initTabs() {
     const tabs = document.querySelectorAll('.tab');
@@ -956,9 +1353,16 @@ function renderConfig(data) {
     // Network tab
     html += '<div class="config-tab-content" id="config-network">';
     const networkRows = networkAdapters;
+    const normalizeNetworkPortCount = (value) => {
+        const num = Number(value);
+        if (!Number.isFinite(num) || num <= 0 || num > 256) {
+            return null;
+        }
+        return Math.trunc(num);
+    };
     if (networkRows.length > 0) {
         const nicCount = networkRows.length;
-        const totalPorts = networkRows.reduce((sum, n) => sum + (n.port_count || 0), 0);
+        const totalPorts = networkRows.reduce((sum, n) => sum + (normalizeNetworkPortCount(n.port_count) || 0), 0);
         const nicTypes = [...new Set(networkRows.map(n => n.port_type).filter(t => t))];
         const nicModels = [...new Set(networkRows.map(n => n.model).filter(m => m))];
         html += `<h3>Сетевые адаптеры</h3>
@@ -972,11 +1376,12 @@ function renderConfig(data) {
     networkRows.forEach(nic => {
         const macs = nic.mac_addresses ? nic.mac_addresses.join(', ') : '-';
         const statusClass = nic.status === 'OK' ? '' : 'status-warning';
+        const displayPortCount = normalizeNetworkPortCount(nic.port_count);
         html += `<tr>
             <td>${escapeHtml(nic.location || nic.slot || '-')}</td>
             <td>${escapeHtml(nic.model || '-')}</td>
             <td>${escapeHtml(nic.manufacturer || nic.vendor || '-')}</td>
-            <td>${nic.port_count || '-'}</td>
+            <td>${displayPortCount ?? '-'}</td>
             <td>${escapeHtml(nic.port_type || '-')}</td>
             <td><code>${escapeHtml(macs)}</code></td>
             <td class="${statusClass}">${escapeHtml(nic.status || '-')}</td>
@@ -17,14 +17,15 @@
     <div class="source-switch" role="tablist" aria-label="Источник данных">
         <button type="button" class="source-switch-btn active" data-source-type="archive">Архив</button>
         <button type="button" class="source-switch-btn" data-source-type="api">API</button>
+        <button type="button" class="source-switch-btn" data-source-type="convert">Convert</button>
     </div>

     <div id="archive-source-content">
         <div class="upload-area" id="drop-zone">
             <p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
-            <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.zip,.txt,.log" hidden>
+            <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
             <button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
-            <p class="hint">Поддерживаемые форматы: tar.gz, zip, json, txt, log</p>
+            <p class="hint">Поддерживаемые форматы: tar.gz, tar, tgz, sds, zip, json, txt, log</p>
         </div>
         <div id="upload-status"></div>
         <div id="parsers-info" class="parsers-info"></div>
@@ -78,7 +79,11 @@
             <span class="meta-label">Статус:</span>
             <span id="job-status-value" class="job-status-badge">Queued</span>
         </div>
-        <div><span class="meta-label">Прогресс:</span> <span id="job-progress-value">0% · Шаг 0 из 4</span></div>
+        <div><span class="meta-label">Этап:</span> <span id="job-progress-value">Сбор данных...</span></div>
+        <div><span class="meta-label">ETA:</span> <span id="job-eta-value">-</span></div>
     </div>
+    <div class="job-progress" aria-label="Прогресс задачи">
+        <div id="job-progress-bar" class="job-progress-bar" style="width: 0%">0%</div>
+    </div>
     <div class="job-status-logs">
         <p class="meta-label">Журнал шагов:</p>
@@ -86,6 +91,27 @@
         </div>
     </section>
 </div>

+<div id="convert-source-content" class="api-placeholder hidden">
+    <h3>Пакетная выгрузка Reanimator</h3>
+    <p>Выберите папку с файлами поддерживаемого типа. Для каждого файла будет создан отдельный экспорт Reanimator.</p>
+    <div class="api-form-actions">
+        <input type="file" id="convert-folder-input" webkitdirectory directory multiple hidden>
+        <button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Выбрать папку</button>
+        <button id="convert-run-btn" type="button">Конвертировать в Reanimator</button>
+    </div>
+    <div id="convert-progress" class="convert-progress hidden" aria-live="polite">
+        <div class="convert-progress-meta">
+            <span id="convert-progress-label">Подготовка...</span>
+            <span id="convert-progress-value">0%</span>
+        </div>
+        <div class="convert-progress-track">
+            <div id="convert-progress-bar" class="convert-progress-bar" style="width: 0%"></div>
+        </div>
+    </div>
+    <div id="convert-folder-summary" class="api-connect-status"></div>
+    <div id="convert-status" class="api-connect-status"></div>
+</div>
 </section>

 <section id="data-section" class="hidden">