15 Commits

Author SHA1 Message Date
5c2a21aff1 chore: update bible and chart submodules
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-11 12:17:40 +03:00
Mikhail Chusavitin
9df13327aa feat(collect): remove power-on/off, add skip-hung for Redfish collection
Remove power-on and power-off functionality from the Redfish collector;
keep host power-state detection and show a warning in the UI when the
host is powered off before collection starts.

Add a "Пропустить зависшие" (skip hung) button that lets the user abort
stuck Redfish collection phases without losing already-collected data.
Introduce a two-level context model in Collect(): the outer job context
covers the full lifecycle including replay; an inner collectCtx covers
snapshot, prefetch, and plan-B phases only. Closing skipCh cancels
collectCtx immediately, aborting all in-flight HTTP requests and exiting
plan-B loops; replay then runs on whatever rawTree was collected.

Signal path: UI → POST /api/collect/{id}/skip → JobManager.SkipJob()
→ close(skipCh) → goroutine in Collect() → cancelCollect().

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 13:12:38 +03:00
Mikhail Chusavitin
7e9af89c46 Add xFusion file-export parser support 2026-04-04 15:07:10 +03:00
Mikhail Chusavitin
db74df9994 fix(redfish): trim MSI replay noise and unify NIC classes 2026-04-01 17:49:00 +03:00
Mikhail Chusavitin
bb82387d48 fix(redfish): narrow MSI PCIeFunctions crawl 2026-04-01 16:50:51 +03:00
Mikhail Chusavitin
475f6ac472 fix(export): keep storage inventory without serials 2026-04-01 16:50:19 +03:00
Mikhail Chusavitin
93ce676f04 fix(redfish): recover MSI NIC serials from PCIe functions 2026-04-01 15:48:47 +03:00
Mikhail Chusavitin
c47c34fd11 feat(hpe): improve inventory extraction and export fidelity 2026-03-30 15:04:17 +03:00
Mikhail Chusavitin
d8c3256e41 chore(hpe_ilo_ahs): normalize module version format — v1.0 2026-03-30 13:43:10 +03:00
Mikhail Chusavitin
1b2d978d29 test: add real fixture coverage for HPE AHS parser 2026-03-30 13:41:02 +03:00
Mikhail Chusavitin
0f310d57c4 feat: HPE iLO support — profile, post-probe hang fix, replay parser fixes, AHS parser
- Add HPE iLO Redfish profile (priority 20): matches on manufacturer/OEM/iLO signals,
  adds SmartStorage/SmartStorageConfig to critical paths, sets realistic ETA baseline
  and rate policy for iLO's known slowness
- Fix post-probe hang on HPE iLO: skip numeric probing of collections where
  Members@odata.count == len(Members); add 4s postProbeClient timeout as safety net
- Exclude /WorkloadPerformanceAdvisor from crawl paths
- Fix replay parser: skip absent CPU sockets, absent DIMM slots, absent drive bays
- Filter N/A version entries from firmware inventory
- Remove drive firmware from general firmware list (already in Storage[].Firmware)
- Add HPE AHS (.ahs) archive parser with hybrid SMBIOS/Redfish extraction
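
The `Members@odata.count` completeness check can be sketched as follows (a minimal illustration with a hypothetical helper name, not the repository's code; JSON numbers decode to `float64` in Go):

```go
package main

import "fmt"

// collectionLooksComplete reports whether a Redfish collection document
// already enumerates all of its members, so probing numeric member IDs
// (Members/1, Members/2, ...) can be skipped. Hypothetical helper name.
func collectionLooksComplete(doc map[string]any) bool {
	members, _ := doc["Members"].([]any)
	count, ok := doc["Members@odata.count"].(float64)
	return ok && int(count) == len(members)
}

func main() {
	complete := map[string]any{
		"Members@odata.count": float64(2),
		"Members":             []any{map[string]any{}, map[string]any{}},
	}
	truncated := map[string]any{
		"Members@odata.count": float64(4),
		"Members":             []any{map[string]any{}},
	}
	fmt.Println(collectionLooksComplete(complete), collectionLooksComplete(truncated))
}
```

When the count matches the listed members, numeric probing adds only latency on slow iLO BMCs, which is why the fix skips it.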

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 13:39:14 +03:00
Mikhail Chusavitin
3547ef9083 Skip placeholder firmware versions in API output 2026-03-26 18:42:54 +03:00
Mikhail Chusavitin
99f0d6217c Improve Multillect Redfish replay and power detection 2026-03-26 18:41:02 +03:00
Mikhail Chusavitin
8acbba3cc9 feat: add reanimator easy bee parser 2026-03-25 19:36:13 +03:00
Mikhail Chusavitin
8942991f0c Add Inspur Group OEM Redfish profile 2026-03-25 15:08:40 +03:00
44 changed files with 7081 additions and 833 deletions

bible

Submodule bible updated: 52444350c1...456c1f022c

View File

@@ -27,11 +27,14 @@ All modes converge on the same normalized hardware model and exporter pipeline.
## Current vendor coverage
- Dell TSR
- Reanimator Easy Bee support bundles
- H3C SDS G5/G6
- Inspur / Kaytus
- HPE iLO AHS
- NVIDIA HGX Field Diagnostics
- NVIDIA Bug Report
- Unraid
- xFusion iBMC dump / file export
- XigmaNAS
- Generic fallback parser

View File

@@ -35,18 +35,27 @@ If the collector adds a fallback, probe, or normalization rule, replay must mirr
### Preflight and host power
- `Probe()` may be used before collection to verify API connectivity and current host `PowerState`
- if the host is off and the user chose power-on, the collector may issue `ComputerSystem.Reset`
with `ResetType=On`
- power-on attempts are bounded and logged
- after a successful power-on, the collector waits an extra stabilization window, then checks
`PowerState` again and only starts collection if the host is still on
- if the collector powered on the host itself for collection, it must attempt to power it back off
after collection completes
- if the host was already on before collection, the collector must not power it off afterward
- if power-on fails, collection still continues against the powered-off host
- all power-control decisions and attempts must be visible in the collection log so they are
preserved in raw-export bundles
- `Probe()` is used before collection to verify API connectivity and report current host `PowerState`
- if the host is off, the collector logs a warning and proceeds with collection; inventory data may
be incomplete when the host is powered off
- power-on and power-off are not performed by the collector
### Skip hung requests
Redfish collection uses a two-level context model:
- `ctx` — job lifetime context, cancelled only on explicit job cancel
- `collectCtx` — collection phase context, derived from `ctx`; covers snapshot, prefetch, and plan-B
`collectCtx` is cancelled when the user presses "Пропустить зависшие" (skip hung).
On skip, all in-flight HTTP requests in the current phase are aborted immediately via context
cancellation, the crawler and plan-B loops exit, and execution proceeds to the replay phase using
whatever was collected in `rawTree`. The result is partial but valid.
The skip signal travels: UI button → `POST /api/collect/{id}/skip` → `JobManager.SkipJob()`
closes `skipCh` → goroutine in `Collect()` → `cancelCollect()`.
The skip button is visible during `running` state and hidden once the job reaches a terminal state.
### Discovery model
@@ -100,6 +109,13 @@ Live Redfish collection must expose profile-match diagnostics:
- the collect page should render active modules as chips from structured status data, not by
parsing log lines
Profile matching may use stable platform grammar signals in addition to vendor strings:
- discovered member/resource naming from lightweight discovery collections
- firmware inventory member IDs
- OEM action names and linked target paths embedded in discovery documents
- replay-only snapshot hints such as OEM assembly/type markers when they are present in
`raw_payloads.redfish_tree`
On replay, profile-derived analysis directives may enable vendor-specific inventory linking
helpers such as processor-GPU fallback, chassis-ID alias resolution, and bounded storage recovery.
Replay should now resolve a structured analysis plan inside `redfishprofile/`, analogous to the

View File

@@ -50,12 +50,15 @@ When `vendor_id` and `device_id` are known but the model name is missing or gene
| Vendor ID | Input family | Notes |
|-----------|--------------|-------|
| `dell` | TSR ZIP archives | Broad hardware, firmware, sensors, lifecycle events |
| `easy_bee` | `bee-support-*.tar.gz` | Imports embedded `export/bee-audit.json` snapshot from reanimator-easy-bee bundles |
| `h3c_g5` | H3C SDS G5 bundles | INI/XML/CSV-driven hardware and event parsing |
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |
| `xfusion` | xFusion iBMC `tar.gz` dump / file export | AppDump + RTOSDump + LogDump merge for hardware and firmware |
| `xigmanas` | XigmaNAS plain logs | FreeBSD/NAS-oriented inventory |
| `generic` | fallback | Low-confidence text fallback when nothing else matches |
@@ -120,6 +123,55 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).
---
### HPE iLO AHS (`hpe_ilo_ahs`)
**Status:** Ready (v1.0.0). Tested on HPE ProLiant Gen11 `.ahs` export from iLO 6.
**Archive format:** `.ahs` single-file Active Health System export.
**Detection:** Single-file input with `ABJR` container header and HPE AHS member names
such as `CUST_INFO.DAT`, `*.zbb`, `ilo_boot_support.zbb`.
**Extracted data (current):**
- System board identity (manufacturer, model, serial, part number)
- iLO / System ROM / SPS top-level firmware
- CPU inventory (model-level)
- Memory DIMM inventory for populated slots
- PSU inventory
- PCIe / OCP NIC inventory from SMBIOS-style slot records
- Storage controller and physical drives from embedded Redfish JSON inside `zbb` members
- Basic iLO event log entries with timestamps when present
**Implementation note:** The format is proprietary. Parser support is intentionally hybrid:
container parsing (`ABJR` + gzip) plus structured extraction from embedded Redfish objects and
printable SMBIOS/FRU payloads. This is sufficient for inventory-grade parsing without decoding the
entire internal `zbb` schema.
---
### xFusion iBMC Dump / File Export (`xfusion`)
**Status:** Ready (v1.1.0). Tested on xFusion G5500 V7 `tar.gz` exports.
**Archive format:** `tar.gz` dump exported from the iBMC UI, including `AppDump/`, `RTOSDump/`,
and `LogDump/` trees.
**Detection:** `AppDump/FruData/fruinfo.txt`, `AppDump/card_manage/card_info`,
`RTOSDump/versioninfo/app_revision.txt`, and `LogDump/netcard/netcard_info.txt`.
**Extracted data (current):**
- Board / FRU inventory from `fruinfo.txt`
- CPU inventory from `CpuMem/cpu_info`
- Memory DIMM inventory from `CpuMem/mem_info`
- GPU inventory from `card_info`
- OCP NIC inventory by merging `card_info` with `LogDump/netcard/netcard_info.txt`
- PSU inventory from `BMC/psu_info.txt`
- Physical storage from `StorageMgnt/PhysicalDrivesInfo/*/disk_info`
- System firmware entries from `RTOSDump/versioninfo/app_revision.txt`
- Maintenance events from `LogDump/maintenance_log`
---
### Generic text fallback (`generic`)
**Status:** Ready (v1.0.0).
@@ -139,10 +191,13 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).
| Vendor | ID | Status | Tested on |
|--------|----|--------|-----------|
| Dell TSR | `dell` | Ready | TSR nested zip archives |
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |
| xFusion iBMC dump | `xfusion` | Ready | G5500 V7 file-export `tar.gz` bundles |
| XigmaNAS | `xigmanas` | Ready | FreeBSD NAS logs |
| H3C SDS G5 | `h3c_g5` | Ready | H3C UniServer R4900 G5 SDS archives |
| H3C SDS G6 | `h3c_g6` | Ready | H3C UniServer R4700 G6 SDS archives |

View File

@@ -258,6 +258,9 @@ at parse time before storing in any model struct. Use the regex
**Date:** 2026-03-12
**Context:** `shouldAdaptiveNVMeProbe` was introduced in `2fa4a12` to recover NVMe drives on
Supermicro BMCs that expose empty `Drives` collections but serve disks at direct `Disk.Bay.N`
paths. The function returns `true` for any chassis with an empty `Members` array. On
Supermicro HGX systems (SYS-A21GE-NBRT and similar) ~35 sub-chassis (GPU, NVSwitch,
PCIeRetimer, ERoT, IRoT, BMC, FPGA) all carry `ChassisType=Module/Component/Zone` and
@@ -918,3 +921,202 @@ hardware change.
- Hardware event history (last 7 days) visible in Reanimator `EventLogs` section.
- No impact on existing inventory pipeline or offline archive replay (archives without `redfish_log_entries` key silently skip parsing).
- Adds extra HTTP requests during live collection (sequential, after tree-walk completes).
---
## ADL-036 — Redfish profile matching may use platform grammar hints beyond vendor strings
**Date:** 2026-03-25
**Context:**
Some BMCs expose unusable `Manufacturer` / `Model` values (`NULL`, placeholders, or generic SoC
names) while still exposing a stable platform-specific Redfish grammar: repeated member names,
firmware inventory IDs, OEM action names, and target-path quirks. Matching only on vendor
strings forced such systems into fallback mode even when the platform shape was consistent.
**Decision:**
- Extend `redfishprofile.MatchSignals` with doc-derived hint tokens collected from discovery docs
and replay snapshots.
- Allow profile matchers to score on stable platform grammar such as:
- collection member naming (`outboardPCIeCard*`, drive slot grammars)
- firmware inventory member IDs
- OEM action/type markers and linked target paths
- During live collection, gather only lightweight extra hint collections needed for matching
(`NetworkInterfaces`, `NetworkAdapters`, `Drives`, `UpdateService/FirmwareInventory`), not slow
deep inventory branches.
- Keep such profiles out of fallback aggregation unless they are proven safe as broad additive
hints.
**Consequences:**
- Platform-family profiles can activate even when vendor strings are absent or set to `NULL`.
- Matching logic becomes more robust for OEM BMC implementations that differ mainly by Redfish
grammar rather than by explicit vendor strings.
- Live collection gains a small amount of extra discovery I/O to harvest stable member IDs, but
avoids slow deep probes such as `Assembly` just for profile selection.
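The hint-token scoring above can be sketched as follows (an illustrative shape only; the tokens, weights, and helper name are hypothetical, not the real `redfishprofile` matcher):

```go
package main

import (
	"fmt"
	"strings"
)

// scoreGrammarHints scores one profile against doc-derived hint tokens
// (collection member names, firmware inventory IDs, OEM action names).
// Hypothetical helper; token lists and weights are illustrative.
func scoreGrammarHints(hints []string, signature map[string]int) int {
	score := 0
	for _, hint := range hints {
		h := strings.ToLower(hint)
		for token, weight := range signature {
			if strings.Contains(h, token) {
				score += weight
			}
		}
	}
	return score
}

func main() {
	// Hints harvested from lightweight discovery collections and snapshots.
	hints := []string{
		"outboardPCIeCard1", "outboardPCIeCard2",
		"FirmwareInventory/BMC_Active",
	}
	signature := map[string]int{
		"outboardpciecard": 10, // stable member-naming grammar
		"bmc_active":       5,  // firmware inventory member ID
	}
	fmt.Println(scoreGrammarHints(hints, signature))
}
```

A matcher like this can fire even when `Manufacturer` is `NULL`, because the platform grammar itself carries the signal.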
---
## ADL-037 — easy-bee archives are parsed from the embedded bee-audit snapshot
**Date:** 2026-03-25
**Context:**
`reanimator-easy-bee` support bundles already contain a normalized hardware snapshot in
`export/bee-audit.json` plus supporting logs and techdump files. Rebuilding the same inventory
from raw `techdump/` files inside LOGPile would duplicate parser logic and create drift between
the producer utility and archive importer.
**Decision:**
- Add a dedicated `easy_bee` vendor parser for `bee-support-*.tar.gz` bundles.
- Detect the bundle by `manifest.txt` (`bee_version=...`) plus `export/bee-audit.json`.
- Parse the archive from the embedded snapshot first; treat `techdump/` and runtime files as
secondary context only.
- Normalize snapshot-only fields needed by LOGPile, notably:
- flatten `hardware.sensors` groups into `[]SensorReading`
- turn runtime issues/status into `[]Event`
- synthesize a board FRU entry when the snapshot does not include FRU data
**Consequences:**
- LOGPile stays aligned with the schema emitted by `reanimator-easy-bee`.
- Adding support required only a thin archive adapter instead of a full hardware parser.
- If the upstream utility changes the embedded snapshot schema, the `easy_bee` adapter is the
only place that must be updated.
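The sensor-group flattening step can be sketched as follows (a simplified stand-in; the real field names belong to the reanimator-easy-bee snapshot schema and the LOGPile model types):

```go
package main

import "fmt"

// SensorReading is a simplified stand-in for the LOGPile model type.
type SensorReading struct {
	Group, Name, Unit string
	Value             float64
}

// flattenSensorGroups turns a bee-audit style "hardware.sensors" group map
// into a flat reading list. Field names here are illustrative.
func flattenSensorGroups(groups map[string][]map[string]any) []SensorReading {
	out := []SensorReading{}
	for group, sensors := range groups {
		for _, s := range sensors {
			r := SensorReading{Group: group}
			r.Name, _ = s["name"].(string)
			r.Unit, _ = s["unit"].(string)
			r.Value, _ = s["value"].(float64)
			out = append(out, r)
		}
	}
	return out
}

func main() {
	groups := map[string][]map[string]any{
		"temperature": {
			{"name": "CPU1 Temp", "unit": "C", "value": 41.0},
		},
	}
	readings := flattenSensorGroups(groups)
	fmt.Println(len(readings), readings[0].Group, readings[0].Name)
}
```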
---
## ADL-038 — HPE AHS parser uses hybrid extraction instead of full `zbb` schema decoding
**Date:** 2026-03-30
**Context:** HPE iLO Active Health System exports (`.ahs`) are proprietary `ABJR` containers
with gzip-compressed `zbb` payloads. The sample inventory data contains two practical signal
families: printable SMBIOS/FRU-style strings and embedded Redfish JSON subtrees, especially for
storage controllers and drives. Full `zbb` binary schema decoding is not documented and would add
significant complexity before proving user value.
**Decision:** Support HPE AHS with a hybrid parser:
- decode the outer `ABJR` container
- gunzip embedded members when applicable
- extract inventory from printable SMBIOS/FRU payloads
- extract storage/controller/backplane details from embedded Redfish JSON objects
- enrich firmware and PSU inventory from auxiliary package payloads such as `bcert.pkg`
- do not attempt complete semantic decoding of the internal `zbb` record format
**Consequences:**
- Parser reaches inventory-grade usefulness quickly for HPE `.ahs` uploads.
- Storage inventory is stronger than text-only parsing because it reuses structured Redfish data when present.
- Auxiliary package payloads can supply missing firmware/PSU fields even when the main SMBIOS-like blob is incomplete.
- Future deeper `zbb` decoding can be added incrementally without replacing the current parser contract.
---
## ADL-039 — Canonical inventory keeps DIMMs with unknown capacity when identity is known
**Date:** 2026-03-30
**Context:** Some sources, notably HPE iLO AHS SMBIOS-like blobs, expose installed DIMM identity
(slot, serial, part number, manufacturer) but do not include capacity. The parser already extracts
those modules into `Hardware.Memory`, but canonical device building and export previously dropped
them because `size_mb == 0`.
**Decision:** Treat a DIMM as installed inventory when `present=true` and it has identifying
memory fields such as serial number or part number, even if `size_mb` is unknown.
**Consequences:**
- HPE AHS uploads now show real installed memory modules instead of hiding them.
- Empty slots still stay filtered because they lack inventory identity or are marked absent.
- Specification/export can include "size unknown" memory entries without inventing capacity data.
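The decision reduces to a small predicate, sketched here with a simplified stand-in type (field and part-number values are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// MemoryModule is a simplified stand-in for the parsed DIMM record.
type MemoryModule struct {
	Present      bool
	SizeMB       int
	SerialNumber string
	PartNumber   string
}

// isInstalledDIMM mirrors the decision above: a module counts as installed
// inventory when present=true and it carries identity (serial or part
// number), even if size_mb is unknown.
func isInstalledDIMM(m MemoryModule) bool {
	if !m.Present {
		return false
	}
	if m.SizeMB > 0 {
		return true
	}
	return strings.TrimSpace(m.SerialNumber) != "" || strings.TrimSpace(m.PartNumber) != ""
}

func main() {
	known := MemoryModule{Present: true, SizeMB: 0, PartNumber: "P00924-B21"} // size unknown, identity known
	empty := MemoryModule{Present: false}                                    // empty slot
	fmt.Println(isInstalledDIMM(known), isInstalledDIMM(empty))
}
```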
---
## ADL-040 — HPE Redfish normalization prefers chassis `Devices/*` over generic PCIe topology labels
**Date:** 2026-03-30
**Context:** HPE ProLiant Gen11 Redfish snapshots expose parallel inventory trees. `Chassis/*/PCIeDevices/*`
is good for topology presence, but often reports only generic `DeviceType` values such as
`SingleFunction`. `Chassis/*/Devices/*` carries the concrete slot label, richer device type, and
product-vs-spare part identifiers for the same physical NIC/controller. Replay fallback over empty
storage volume collections can also discover `Volumes/Capabilities` children, which are not real
logical volumes.
**Decision:**
- Treat Redfish `SKU` as a valid fallback for `hardware.board.part_number` when `PartNumber` is empty.
- Ignore `Volumes/Capabilities` documents during logical-volume parsing.
- Enrich `Chassis/*/PCIeDevices/*` entries with matching `Chassis/*/Devices/*` documents by
serial/name/part identity.
- Keep `pcie.device_class` semantic; do not replace it with model or part-number strings when
Redfish exposes only generic topology labels.
**Consequences:**
- HPE Redfish imports now keep the server SKU in `hardware.board.part_number`.
- Empty volume collections no longer produce fake `Capabilities` volume records.
- HPE PCIe inventory gets better slot labels like `OCP 3.0 Slot 15` plus concrete classes such as
`LOM/NIC` or `SAS/SATA Storage Controller`.
- `part_number` remains available separately for model identity, without polluting the class field.
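The `Devices/*` enrichment step can be sketched as serial-identity matching (helper name and types are hypothetical; name and part-number matching would follow the same shape):

```go
package main

import "fmt"

// ChassisDevice is a simplified stand-in for a Chassis/*/Devices/* document.
type ChassisDevice struct {
	SerialNumber string
	Location     string // concrete slot label, e.g. "OCP 3.0 Slot 15"
	DeviceType   string // richer class, e.g. "LOM/NIC"
}

// enrichFromChassisDevices matches a generic PCIeDevices entry to the richer
// Devices/* record by serial identity. Hypothetical helper name.
func enrichFromChassisDevices(serial string, devices []ChassisDevice) (slot, class string, ok bool) {
	for _, d := range devices {
		if d.SerialNumber != "" && d.SerialNumber == serial {
			return d.Location, d.DeviceType, true
		}
	}
	return "", "", false
}

func main() {
	devices := []ChassisDevice{
		{SerialNumber: "ABC123", Location: "OCP 3.0 Slot 15", DeviceType: "LOM/NIC"},
	}
	slot, class, ok := enrichFromChassisDevices("ABC123", devices)
	fmt.Println(ok, slot, class)
}
```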
---
## ADL-041 — Redfish replay drops topology-only PCIe noise classes from canonical inventory
**Date:** 2026-04-01
**Context:** Some Redfish BMCs, especially MSI/AMI GPU systems, expose a very wide PCIe topology
tree under `Chassis/*/PCIeDevices/*`. Besides real endpoint devices, the replay sees bridge stages,
CPU-side helper functions, IMC/mesh signal-processing nodes, USB/SPI side controllers, and GPU
display-function duplicates reported as generic `Display Device`. Keeping all of them in
`hardware.pcie_devices` pollutes downstream exports such as Reanimator and hides the actual
endpoint inventory signal.
**Decision:**
- Filter topology-only PCIe records during Redfish replay, not in the UI layer.
- Drop PCIe entries with replay-resolved classes:
- `Bridge`
- `Processor`
- `SignalProcessingController`
- `SerialBusController`
- Drop `DisplayController` entries when the source Redfish PCIe document is the generic MSI-style
`Description: "Display Device"` duplicate.
- Drop PCIe network endpoints when their PCIe functions already link to `NetworkDeviceFunctions`,
because those devices are represented canonically in `hardware.network_adapters`.
- When `Systems/*/NetworkInterfaces/*` links back to a chassis `NetworkAdapter`, match against the
fully enriched chassis NIC identity to avoid creating a second ghost NIC row with the raw
`NetworkAdapter_*` slot/name.
- Treat generic Redfish object names such as `NetworkAdapter_*` and `PCIeDevice_*` as placeholder
models and replace them from PCI IDs when a concrete vendor/device match exists.
- Drop MSI-style storage service PCIe endpoints whose resolved device names are only
`Volume Management Device NVMe RAID Controller` or `PCIe Switch management endpoint`; storage
inventory already comes from the Redfish storage tree.
- Normalize Ethernet-class NICs into the single exported class `NetworkController`; do not split
`EthernetController` into a separate top-level inventory section.
- Keep endpoint classes such as `NetworkController`, `MassStorageController`, and dedicated GPU
inventory coming from `hardware.gpus`.
**Consequences:**
- `hardware.pcie_devices` becomes closer to real endpoint inventory instead of raw PCIe topology.
- Reanimator exports stop showing MSI bridge/processor/display duplicate noise.
- Reanimator exports no longer duplicate the same MSI NIC as both `PCIeDevice_*` and
`NetworkAdapter_*`.
- Replay no longer creates extra NIC rows from `Systems/NetworkInterfaces` when the same adapter
was already normalized from `Chassis/NetworkAdapters`.
- MSI VMD / PCIe switch storage service endpoints no longer pollute PCIe inventory.
- UI/Reanimator group all Ethernet NICs under the same `NETWORKCONTROLLER` section.
- Canonical NIC inventory prefers resolved PCI product names over generic Redfish placeholder names.
- The raw Redfish snapshot still remains available in `raw_payloads.redfish_tree` for low-level
troubleshooting if topology details are ever needed.
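The first two filter rules above reduce to a simple predicate, sketched here with stand-in types (the NetworkDeviceFunctions, ghost-NIC, and storage-service rules would hang off the same predicate in the real replay code):

```go
package main

import "fmt"

// topologyNoiseClasses lists replay-resolved PCIe classes that describe bus
// topology rather than endpoint inventory (per the decision above).
var topologyNoiseClasses = map[string]bool{
	"Bridge":                     true,
	"Processor":                  true,
	"SignalProcessingController": true,
	"SerialBusController":        true,
}

// PCIeDevice is a simplified stand-in for the replay-side record.
type PCIeDevice struct {
	Class       string
	Description string
}

// keepPCIeDevice drops topology noise and the generic MSI-style
// "Display Device" duplicates, keeping real endpoints.
func keepPCIeDevice(d PCIeDevice) bool {
	if topologyNoiseClasses[d.Class] {
		return false
	}
	if d.Class == "DisplayController" && d.Description == "Display Device" {
		return false
	}
	return true
}

func main() {
	fmt.Println(
		keepPCIeDevice(PCIeDevice{Class: "Bridge"}),
		keepPCIeDevice(PCIeDevice{Class: "DisplayController", Description: "Display Device"}),
		keepPCIeDevice(PCIeDevice{Class: "NetworkController"}),
	)
}
```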
---
## ADL-042 — xFusion file-export archives merge AppDump inventory with RTOS/Log snapshots
**Date:** 2026-04-04
**Context:** xFusion iBMC `tar.gz` exports expose the base inventory in `AppDump/`, but the most
useful NIC and firmware details live elsewhere: NIC firmware/MAC snapshots in
`LogDump/netcard/netcard_info.txt` and system firmware versions in
`RTOSDump/versioninfo/app_revision.txt`. Parsing only `AppDump/` left xFusion uploads detectable but
incomplete for UI and Reanimator consumers.
**Decision:**
- Treat xFusion file-export `tar.gz` bundles as a first-class archive parser input.
- Merge OCP NIC identity from `AppDump/card_manage/card_info` with the latest per-slot snapshot
from `LogDump/netcard/netcard_info.txt` to produce `hardware.network_adapters`.
- Import system-level firmware from `RTOSDump/versioninfo/app_revision.txt` into
`hardware.firmware`.
- Allow FRU fallback from `RTOSDump/versioninfo/fruinfo.txt` when `AppDump/FruData/fruinfo.txt`
is absent.
**Consequences:**
- xFusion uploads now preserve NIC BDF, MAC, firmware, and serial identity in normalized output.
- System firmware such as BIOS and iBMC versions survives xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI
cases.
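
The card_info/netcard_info merge can be sketched as a per-slot join (field names and the sample model string are illustrative, not the parser's actual schema):

```go
package main

import "fmt"

// NIC is a simplified stand-in for the normalized network adapter record.
type NIC struct {
	Slot, Model, MAC, Firmware string
}

// mergeNICs sketches the xFusion merge: identity (slot, model) comes from
// AppDump card_info, and the latest per-slot netcard_info snapshot fills in
// MAC and firmware when card_info lacks them.
func mergeNICs(cards []NIC, netBySlot map[string]NIC) []NIC {
	out := make([]NIC, 0, len(cards))
	for _, c := range cards {
		if n, ok := netBySlot[c.Slot]; ok {
			if c.MAC == "" {
				c.MAC = n.MAC
			}
			if c.Firmware == "" {
				c.Firmware = n.Firmware
			}
		}
		out = append(out, c)
	}
	return out
}

func main() {
	cards := []NIC{{Slot: "OCP1", Model: "SP333"}} // identity from card_info
	netBySlot := map[string]NIC{                   // latest snapshot from netcard_info.txt
		"OCP1": {Slot: "OCP1", MAC: "aa:bb:cc:dd:ee:ff", Firmware: "4.20"},
	}
	merged := mergeNICs(cards, netBySlot)
	fmt.Println(merged[0].Model, merged[0].MAC, merged[0].Firmware)
}
```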

File diff suppressed because it is too large

View File

@@ -31,8 +31,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
if emit != nil {
emit(Progress{Status: "running", Progress: 10, Message: "Redfish snapshot: replay service root..."})
}
serviceRootDoc, err := r.getJSON("/redfish/v1")
if err != nil {
if _, err := r.getJSON("/redfish/v1"); err != nil {
log.Printf("redfish replay: service root /redfish/v1 missing from snapshot, continuing with defaults: %v", err)
}
@@ -61,8 +60,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
fruDoc = chassisFRUDoc
}
boardFallbackDocs := r.collectBoardFallbackDocs(systemPaths, chassisPaths)
resourceHints := append(append([]string{}, systemPaths...), append(chassisPaths, managerPaths...)...)
profileSignals := redfishprofile.CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc, resourceHints)
profileSignals := redfishprofile.CollectSignalsFromTree(tree)
profileMatch := redfishprofile.MatchProfiles(profileSignals)
analysisPlan := redfishprofile.ResolveAnalysisPlan(profileMatch, tree, redfishprofile.DiscoveredResources{
SystemPaths: systemPaths,
@@ -98,20 +96,27 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
networkProtocolDoc, _ := r.getJSON(joinPath(primaryManager, "/NetworkProtocol"))
firmware := parseFirmware(systemDoc, biosDoc, managerDoc, networkProtocolDoc)
firmware = dedupeFirmwareInfo(append(firmware, r.collectFirmwareInventory()...))
boardInfo.BMCMACAddress = r.collectBMCMAC(managerPaths)
firmware = filterStorageDriveFirmware(firmware, storageDevices)
bmcManagementSummary := r.collectBMCManagementSummary(managerPaths)
boardInfo.BMCMACAddress = strings.TrimSpace(firstNonEmpty(
asString(bmcManagementSummary["mac_address"]),
r.collectBMCMAC(managerPaths),
))
assemblyFRU := r.collectAssemblyFRU(chassisPaths)
collectedAt, sourceTimezone := inferRedfishCollectionTime(managerDoc, rawPayloads)
inventoryLastModifiedAt := inferInventoryLastModifiedTime(r.tree)
logEntryEvents := parseRedfishLogEntries(rawPayloads, collectedAt)
sensorHintSummary, sensorHintEvents := r.collectSensorsListHints(chassisPaths, collectedAt)
bmcManagementEvent := buildBMCManagementSummaryEvent(bmcManagementSummary, collectedAt)
result := &models.AnalysisResult{
CollectedAt: collectedAt,
InventoryLastModifiedAt: inventoryLastModifiedAt,
SourceTimezone: sourceTimezone,
Events: append(append(append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+len(driveFetchWarningEvents)+len(logEntryEvents)+1), healthEvents...), discreteEvents...), driveFetchWarningEvents...), logEntryEvents...),
FRU: assemblyFRU,
Sensors: dedupeSensorReadings(append(append(thresholdSensors, thermalSensors...), powerSensors...)),
RawPayloads: cloneRawPayloads(rawPayloads),
SourceTimezone: sourceTimezone,
Events: append(append(append(append(append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+len(driveFetchWarningEvents)+len(logEntryEvents)+len(sensorHintEvents)+2), healthEvents...), discreteEvents...), driveFetchWarningEvents...), logEntryEvents...), sensorHintEvents...), bmcManagementEvent...),
FRU: assemblyFRU,
Sensors: dedupeSensorReadings(append(append(thresholdSensors, thermalSensors...), powerSensors...)),
RawPayloads: cloneRawPayloads(rawPayloads),
Hardware: &models.HardwareConfig{
BoardInfo: boardInfo,
CPUs: processors,
@@ -123,7 +128,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
PowerSupply: psus,
NetworkAdapters: nics,
Firmware: firmware,
},
},
}
match := profileMatch
for _, profile := range match.Profiles {
@@ -157,6 +162,12 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
if strings.TrimSpace(sourceTimezone) != "" {
result.RawPayloads["source_timezone"] = sourceTimezone
}
if len(sensorHintSummary) > 0 {
result.RawPayloads["redfish_sensor_hints"] = sensorHintSummary
}
if len(bmcManagementSummary) > 0 {
result.RawPayloads["redfish_bmc_network_summary"] = bmcManagementSummary
}
appendMissingServerModelWarning(result, systemDoc, joinPath(primarySystem, "/Oem/Public/FRU"), joinPath(primaryChassis, "/Oem/Public/FRU"))
return result, nil
}
@@ -277,7 +288,6 @@ func redfishFetchErrorsFromRawPayloads(rawPayloads map[string]any) map[string]st
}
}
func buildDriveFetchWarningEvents(rawPayloads map[string]any) []models.Event {
errs := redfishFetchErrorsFromRawPayloads(rawPayloads)
if len(errs) == 0 {
@@ -327,6 +337,153 @@ func buildDriveFetchWarningEvents(rawPayloads map[string]any) []models.Event {
}
}
func (r redfishSnapshotReader) collectSensorsListHints(chassisPaths []string, collectedAt time.Time) (map[string]any, []models.Event) {
summary := make(map[string]any)
var events []models.Event
var presentDIMMs []string
dimmTotal := 0
dimmPresent := 0
physicalDriveSlots := 0
activePhysicalDriveSlots := 0
logicalDriveStatus := ""
for _, chassisPath := range chassisPaths {
doc, err := r.getJSON(joinPath(chassisPath, "/SensorsList"))
if err != nil || len(doc) == 0 {
continue
}
sensors, ok := doc["SensorsList"].([]interface{})
if !ok {
continue
}
for _, item := range sensors {
sensor, ok := item.(map[string]interface{})
if !ok {
continue
}
name := strings.TrimSpace(asString(sensor["SensorName"]))
sensorType := strings.TrimSpace(asString(sensor["SensorType"]))
status := strings.TrimSpace(asString(sensor["Status"]))
switch {
case strings.HasPrefix(name, "DIMM") && strings.HasSuffix(name, "_Status") && strings.EqualFold(sensorType, "Memory"):
dimmTotal++
if redfishSlotStatusLooksPresent(status) {
dimmPresent++
presentDIMMs = append(presentDIMMs, strings.TrimSuffix(name, "_Status"))
}
case strings.EqualFold(sensorType, "Drive Slot"):
if strings.EqualFold(name, "Logical_Drive") {
logicalDriveStatus = firstNonEmpty(logicalDriveStatus, status)
continue
}
physicalDriveSlots++
if redfishSlotStatusLooksPresent(status) {
activePhysicalDriveSlots++
}
}
}
}
if dimmTotal > 0 {
sort.Strings(presentDIMMs)
summary["memory_slots"] = map[string]any{
"total": dimmTotal,
"present_count": dimmPresent,
"present_slots": presentDIMMs,
"source": "SensorsList",
}
events = append(events, models.Event{
Timestamp: replayEventTimestamp(collectedAt),
Source: "Redfish",
EventType: "Collection Info",
Severity: models.SeverityInfo,
Description: fmt.Sprintf("Memory slot sensors report %d populated positions out of %d", dimmPresent, dimmTotal),
RawData: firstNonEmpty(strings.Join(presentDIMMs, ", "), "no populated DIMM slots reported"),
})
}
if physicalDriveSlots > 0 || logicalDriveStatus != "" {
summary["drive_slots"] = map[string]any{
"physical_total": physicalDriveSlots,
"physical_active_count": activePhysicalDriveSlots,
"logical_drive_status": logicalDriveStatus,
"source": "SensorsList",
}
rawParts := []string{
fmt.Sprintf("physical_active=%d/%d", activePhysicalDriveSlots, physicalDriveSlots),
}
if logicalDriveStatus != "" {
rawParts = append(rawParts, "logical_drive="+logicalDriveStatus)
}
events = append(events, models.Event{
Timestamp: replayEventTimestamp(collectedAt),
Source: "Redfish",
EventType: "Collection Info",
Severity: models.SeverityInfo,
Description: fmt.Sprintf("Drive slot sensors report %d active physical slots out of %d", activePhysicalDriveSlots, physicalDriveSlots),
RawData: strings.Join(rawParts, "; "),
})
}
return summary, events
}
func buildBMCManagementSummaryEvent(summary map[string]any, collectedAt time.Time) []models.Event {
if len(summary) == 0 {
return nil
}
desc := fmt.Sprintf(
"BMC management interface %s link=%s ip=%s",
firstNonEmpty(asString(summary["interface_id"]), "unknown"),
firstNonEmpty(asString(summary["link_status"]), "unknown"),
firstNonEmpty(asString(summary["ipv4_address"]), "n/a"),
)
rawParts := make([]string, 0, 8)
for _, part := range []string{
"mac_address=" + strings.TrimSpace(asString(summary["mac_address"])),
"speed_mbps=" + strings.TrimSpace(asString(summary["speed_mbps"])),
"lldp_chassis_name=" + strings.TrimSpace(asString(summary["lldp_chassis_name"])),
"lldp_port_desc=" + strings.TrimSpace(asString(summary["lldp_port_desc"])),
"lldp_port_id=" + strings.TrimSpace(asString(summary["lldp_port_id"])),
"ipv4_gateway=" + strings.TrimSpace(asString(summary["ipv4_gateway"])),
} {
if !strings.HasSuffix(part, "=") {
rawParts = append(rawParts, part)
}
}
if vlan := asInt(summary["lldp_vlan_id"]); vlan > 0 {
rawParts = append(rawParts, fmt.Sprintf("lldp_vlan_id=%d", vlan))
}
if asBool(summary["ncsi_enabled"]) {
rawParts = append(rawParts, "ncsi_enabled=true")
}
return []models.Event{
{
Timestamp: replayEventTimestamp(collectedAt),
Source: "Redfish",
EventType: "Collection Info",
Severity: models.SeverityInfo,
Description: desc,
RawData: strings.Join(rawParts, "; "),
},
}
}
func redfishSlotStatusLooksPresent(status string) bool {
switch strings.ToLower(strings.TrimSpace(status)) {
case "ok", "enabled", "present", "warning", "critical":
return true
default:
return false
}
}
func replayEventTimestamp(collectedAt time.Time) time.Time {
if !collectedAt.IsZero() {
return collectedAt
}
return time.Now()
}
func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo {
docs, err := r.getCollectionMembers("/redfish/v1/UpdateService/FirmwareInventory")
if err != nil || len(docs) == 0 {
@@ -342,6 +499,10 @@ func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo
if strings.TrimSpace(version) == "" {
continue
}
// Skip placeholder version strings that carry no useful information.
if strings.EqualFold(strings.TrimSpace(version), "N/A") {
continue
}
name := firmwareInventoryDeviceName(doc)
name = strings.TrimSpace(name)
if name == "" {
@@ -394,6 +555,32 @@ func dedupeFirmwareInfo(items []models.FirmwareInfo) []models.FirmwareInfo {
return out
}
// filterStorageDriveFirmware removes from fw any entries whose DeviceName+Version
// already appear as a storage drive's Model+Firmware. Drive firmware is already
// represented in the Storage section and should not be duplicated in the general
// firmware list.
func filterStorageDriveFirmware(fw []models.FirmwareInfo, storage []models.Storage) []models.FirmwareInfo {
if len(storage) == 0 {
return fw
}
driveFW := make(map[string]struct{}, len(storage))
for _, d := range storage {
model := strings.ToLower(strings.TrimSpace(d.Model))
rev := strings.ToLower(strings.TrimSpace(d.Firmware))
if model != "" && rev != "" {
driveFW[model+"|"+rev] = struct{}{}
}
}
out := fw[:0:0]
for _, f := range fw {
key := strings.ToLower(strings.TrimSpace(f.DeviceName)) + "|" + strings.ToLower(strings.TrimSpace(f.Version))
if _, skip := driveFW[key]; !skip {
out = append(out, f)
}
}
return out
}
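The filter above keys on a lower-cased `model|rev` pair for drives and a `devicename|version` pair for firmware entries; an entry is dropped only when both halves match. A standalone sketch with simplified stand-in types (the `fw` and `drive` structs here are hypothetical, not the repository's `models` package):

```go
package main

import (
	"fmt"
	"strings"
)

// fw and drive are simplified stand-ins for models.FirmwareInfo / models.Storage.
type fw struct{ DeviceName, Version string }
type drive struct{ Model, Firmware string }

// filterDriveFW drops firmware entries already represented as a drive's
// Model+Firmware pair, matching case-insensitively on a "model|rev" key.
func filterDriveFW(items []fw, drives []drive) []fw {
	seen := make(map[string]struct{}, len(drives))
	for _, d := range drives {
		m := strings.ToLower(strings.TrimSpace(d.Model))
		r := strings.ToLower(strings.TrimSpace(d.Firmware))
		if m != "" && r != "" {
			seen[m+"|"+r] = struct{}{}
		}
	}
	out := items[:0:0]
	for _, f := range items {
		key := strings.ToLower(strings.TrimSpace(f.DeviceName)) + "|" +
			strings.ToLower(strings.TrimSpace(f.Version))
		if _, skip := seen[key]; !skip {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	items := []fw{{"BIOS", "1.2"}, {"MZ7KH480", "HXT7404Q"}}
	drives := []drive{{"mz7kh480", "hxt7404q"}}
	// The drive firmware entry is dropped; the BIOS entry survives.
	fmt.Println(len(filterDriveFW(items, drives))) // 1
}
```

Note that a firmware entry with the same model but a different revision is kept, which is the desired behavior: only exact duplicates of the Storage section are suppressed.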
func (r redfishSnapshotReader) collectThresholdSensors(chassisPaths []string) []models.SensorReading {
out := make([]models.SensorReading, 0)
seen := make(map[string]struct{})
@@ -859,6 +1046,9 @@ func (r redfishSnapshotReader) fallbackCollectionMembers(collectionPath string,
if err != nil {
continue
}
if redfishFallbackMemberLooksLikePlaceholder(collectionPath, doc) {
continue
}
if strings.TrimSpace(asString(doc["@odata.id"])) == "" {
doc["@odata.id"] = normalizeRedfishPath(p)
}
@@ -867,6 +1057,135 @@ func (r redfishSnapshotReader) fallbackCollectionMembers(collectionPath string,
return out, nil
}
func redfishFallbackMemberLooksLikePlaceholder(collectionPath string, doc map[string]interface{}) bool {
if len(doc) == 0 {
return true
}
path := normalizeRedfishPath(collectionPath)
switch {
case strings.HasSuffix(path, "/NetworkAdapters"):
return redfishNetworkAdapterPlaceholderDoc(doc)
case strings.HasSuffix(path, "/PCIeDevices"):
return redfishPCIePlaceholderDoc(doc)
case strings.Contains(path, "/Storage"):
return redfishStoragePlaceholderDoc(doc)
default:
return false
}
}
func redfishNetworkAdapterPlaceholderDoc(doc map[string]interface{}) bool {
if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
asHexOrInt(doc["VendorId"]) != 0 ||
asHexOrInt(doc["DeviceId"]) != 0 {
return false
}
return redfishDocHasOnlyAllowedKeys(doc,
"@odata.context",
"@odata.id",
"@odata.type",
"Id",
"Name",
)
}
func redfishPCIePlaceholderDoc(doc map[string]interface{}) bool {
if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
asHexOrInt(doc["VendorId"]) != 0 ||
asHexOrInt(doc["DeviceId"]) != 0 {
return false
}
return redfishDocHasOnlyAllowedKeys(doc,
"@odata.context",
"@odata.id",
"@odata.type",
"Id",
"Name",
)
}
func redfishStoragePlaceholderDoc(doc map[string]interface{}) bool {
if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
asHexOrInt(doc["VendorId"]) != 0 ||
asHexOrInt(doc["DeviceId"]) != 0 {
return false
}
if !redfishDocHasOnlyAllowedKeys(doc,
"@odata.id",
"@odata.type",
"Drives",
"Drives@odata.count",
"LogicalDisk",
"PhysicalDisk",
"Name",
) {
return false
}
return redfishFieldIsEmptyCollection(doc["Drives"]) &&
redfishFieldIsZeroLike(doc["Drives@odata.count"]) &&
redfishFieldIsEmptyCollection(doc["LogicalDisk"]) &&
redfishFieldIsEmptyCollection(doc["PhysicalDisk"])
}
func redfishDocHasOnlyAllowedKeys(doc map[string]interface{}, allowed ...string) bool {
if len(doc) == 0 {
return false
}
allowedSet := make(map[string]struct{}, len(allowed))
for _, key := range allowed {
allowedSet[key] = struct{}{}
}
for key := range doc {
if _, ok := allowedSet[key]; !ok {
return false
}
}
return true
}
func redfishFieldIsEmptyCollection(v any) bool {
switch x := v.(type) {
case nil:
return true
case []interface{}:
return len(x) == 0
default:
return false
}
}
func redfishFieldIsZeroLike(v any) bool {
switch x := v.(type) {
case nil:
return true
case int:
return x == 0
case int32:
return x == 0
case int64:
return x == 0
case float64:
return x == 0
case string:
x = strings.TrimSpace(x)
return x == "" || x == "0"
default:
return false
}
}
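Taken together, a member document counts as a placeholder only when every identity field is empty and no keys beyond the schema boilerplate are present. A condensed sketch of the allowed-keys check with hypothetical document values:

```go
package main

import "fmt"

// hasOnlyAllowedKeys reports whether doc contains no keys outside the allowed
// set, mirroring the redfishDocHasOnlyAllowedKeys helper above.
func hasOnlyAllowedKeys(doc map[string]interface{}, allowed ...string) bool {
	if len(doc) == 0 {
		return false
	}
	set := make(map[string]struct{}, len(allowed))
	for _, k := range allowed {
		set[k] = struct{}{}
	}
	for k := range doc {
		if _, ok := set[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	boilerplate := []string{"@odata.context", "@odata.id", "@odata.type", "Id", "Name"}
	// Hypothetical empty slot: only schema boilerplate, no identity fields.
	placeholder := map[string]interface{}{
		"@odata.id": "/redfish/v1/Chassis/1/PCIeDevices/0",
		"Id":        "0",
		"Name":      "PCIe Device",
	}
	// Hypothetical populated slot: a Model field breaks the placeholder test.
	populated := map[string]interface{}{
		"@odata.id": "/redfish/v1/Chassis/1/PCIeDevices/1",
		"Model":     "ConnectX-6",
	}
	fmt.Println(hasOnlyAllowedKeys(placeholder, boilerplate...)) // true
	fmt.Println(hasOnlyAllowedKeys(populated, boilerplate...))   // false
}
```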
func cloneRawPayloads(src map[string]any) map[string]any {
if len(src) == 0 {
return nil
@@ -925,6 +1244,15 @@ func (r redfishSnapshotReader) getLinkedPCIeFunctions(doc map[string]interface{}
}
return out
}
if ref, ok := links["PCIeFunction"].(map[string]interface{}); ok {
memberPath := asString(ref["@odata.id"])
if memberPath != "" {
memberDoc, err := r.getJSON(memberPath)
if err == nil {
return []map[string]interface{}{memberDoc}
}
}
}
}
if pcieFunctions, ok := doc["PCIeFunctions"].(map[string]interface{}); ok {
if collectionPath := asString(pcieFunctions["@odata.id"]); collectionPath != "" {
@@ -937,6 +1265,33 @@ func (r redfishSnapshotReader) getLinkedPCIeFunctions(doc map[string]interface{}
return nil
}
func dedupeJSONDocsByPath(docs []map[string]interface{}) []map[string]interface{} {
if len(docs) == 0 {
return nil
}
seen := make(map[string]struct{}, len(docs))
out := make([]map[string]interface{}, 0, len(docs))
for _, doc := range docs {
if len(doc) == 0 {
continue
}
key := normalizeRedfishPath(asString(doc["@odata.id"]))
if key == "" {
payload, err := json.Marshal(doc)
if err != nil {
continue
}
key = string(payload)
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out = append(out, doc)
}
return out
}
func (r redfishSnapshotReader) getLinkedSupplementalDocs(doc map[string]interface{}, keys ...string) []map[string]interface{} {
if len(doc) == 0 || len(keys) == 0 {
return nil
@@ -973,6 +1328,12 @@ func (r redfishSnapshotReader) collectProcessors(systemPath string) []models.CPU
!strings.EqualFold(pt, "CPU") && !strings.EqualFold(pt, "General") {
continue
}
// Skip absent processor sockets — empty slots with no CPU installed.
if status, ok := doc["Status"].(map[string]interface{}); ok {
if strings.EqualFold(asString(status["State"]), "Absent") {
continue
}
}
cpu := parseCPUs([]map[string]interface{}{doc})[0]
if cpu.Socket == 0 && socketIdx > 0 && strings.TrimSpace(asString(doc["Socket"])) == "" {
cpu.Socket = socketIdx
@@ -999,6 +1360,10 @@ func (r redfishSnapshotReader) collectMemory(systemPath string) []models.MemoryD
out := make([]models.MemoryDIMM, 0, len(memberDocs))
for _, doc := range memberDocs {
dimm := parseMemory([]map[string]interface{}{doc})[0]
// Skip empty DIMM slots — no installed memory.
if !dimm.Present {
continue
}
supplementalDocs := r.getLinkedSupplementalDocs(doc, "MemoryMetrics", "EnvironmentMetrics", "Metrics")
if len(supplementalDocs) > 0 {
dimm.Details = mergeGenericDetails(dimm.Details, redfishMemoryDetailsAcrossDocs(doc, supplementalDocs...))



@@ -31,7 +31,7 @@ func (r redfishSnapshotReader) enrichNICsFromNetworkInterfaces(nics *[]models.Ne
// the real NIC that came from Chassis/NetworkAdapters (e.g. "RISER 5
// slot 1 (7)"). Try to find the real NIC via the Links.NetworkAdapter
// cross-reference before creating a ghost entry.
if linkedIdx := r.findNICIndexByLinkedNetworkAdapter(iface, *nics, bySlot); linkedIdx >= 0 {
idx = linkedIdx
ok = true
}
@@ -75,28 +75,53 @@ func (r redfishSnapshotReader) collectNICs(chassisPaths []string) []models.Netwo
continue
}
for _, doc := range adapterDocs {
nics = append(nics, r.buildNICFromAdapterDoc(doc))
}
}
return dedupeNetworkAdapters(nics)
}
func (r redfishSnapshotReader) buildNICFromAdapterDoc(adapterDoc map[string]interface{}) models.NetworkAdapter {
nic := parseNIC(adapterDoc)
adapterFunctionDocs := r.getNetworkAdapterFunctionDocs(adapterDoc)
for _, pciePath := range networkAdapterPCIeDevicePaths(adapterDoc) {
pcieDoc, err := r.getJSON(pciePath)
if err != nil {
continue
}
functionDocs := r.getLinkedPCIeFunctions(pcieDoc)
for _, adapterFnDoc := range adapterFunctionDocs {
functionDocs = append(functionDocs, r.getLinkedPCIeFunctions(adapterFnDoc)...)
}
functionDocs = dedupeJSONDocsByPath(functionDocs)
supplementalDocs := r.getLinkedSupplementalDocs(pcieDoc, "EnvironmentMetrics", "Metrics")
for _, fn := range functionDocs {
supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
}
enrichNICFromPCIe(&nic, pcieDoc, functionDocs, supplementalDocs)
}
if len(nic.MACAddresses) == 0 {
r.enrichNICMACsFromNetworkDeviceFunctions(&nic, adapterDoc)
}
return nic
}
func (r redfishSnapshotReader) getNetworkAdapterFunctionDocs(adapterDoc map[string]interface{}) []map[string]interface{} {
ndfCol, ok := adapterDoc["NetworkDeviceFunctions"].(map[string]interface{})
if !ok {
return nil
}
colPath := asString(ndfCol["@odata.id"])
if colPath == "" {
return nil
}
funcDocs, err := r.getCollectionMembers(colPath)
if err != nil {
return nil
}
return funcDocs
}
func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []string) []models.PCIeDevice {
collections := make([]string, 0, len(systemPaths)+len(chassisPaths))
for _, systemPath := range systemPaths {
@@ -116,13 +141,16 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
if looksLikeGPU(doc, functionDocs) {
continue
}
if replayPCIeDeviceBackedByCanonicalNIC(doc, functionDocs) {
continue
}
supplementalDocs := r.getLinkedSupplementalDocs(doc, "EnvironmentMetrics", "Metrics")
supplementalDocs = append(supplementalDocs, r.getChassisScopedPCIeSupplementalDocs(doc)...)
for _, fn := range functionDocs {
supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
}
dev := parsePCIeDeviceWithSupplementalDocs(doc, functionDocs, supplementalDocs)
if shouldSkipReplayPCIeDevice(doc, dev) {
continue
}
out = append(out, dev)
@@ -136,41 +164,185 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
for idx, fn := range functionDocs {
supplementalDocs := r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")
dev := parsePCIeFunctionWithSupplementalDocs(fn, supplementalDocs, idx+1)
if shouldSkipReplayPCIeDevice(fn, dev) {
continue
}
out = append(out, dev)
}
}
return dedupePCIeDevices(out)
}
func shouldSkipReplayPCIeDevice(doc map[string]interface{}, dev models.PCIeDevice) bool {
if isUnidentifiablePCIeDevice(dev) {
return true
}
if replayNetworkFunctionBackedByCanonicalNIC(doc, dev) {
return true
}
if isReplayStorageServiceEndpoint(doc, dev) {
return true
}
if isReplayNoisePCIeClass(dev.DeviceClass) {
return true
}
if isReplayDisplayDeviceDuplicate(doc, dev) {
return true
}
return false
}
func replayPCIeDeviceBackedByCanonicalNIC(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
if !looksLikeReplayNetworkPCIeDevice(doc, functionDocs) {
return false
}
for _, fn := range functionDocs {
if hasRedfishLinkedMember(fn, "NetworkDeviceFunctions") {
return true
}
}
return false
}
func replayNetworkFunctionBackedByCanonicalNIC(doc map[string]interface{}, dev models.PCIeDevice) bool {
if !looksLikeReplayNetworkClass(dev.DeviceClass) {
return false
}
return hasRedfishLinkedMember(doc, "NetworkDeviceFunctions")
}
func looksLikeReplayNetworkPCIeDevice(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
for _, fn := range functionDocs {
if looksLikeReplayNetworkClass(asString(fn["DeviceClass"])) {
return true
}
}
joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
asString(doc["DeviceType"]),
asString(doc["Description"]),
asString(doc["Name"]),
asString(doc["Model"]),
}, " ")))
return strings.Contains(joined, "network")
}
func looksLikeReplayNetworkClass(class string) bool {
class = strings.ToLower(strings.TrimSpace(class))
return strings.Contains(class, "network") || strings.Contains(class, "ethernet")
}
func isReplayStorageServiceEndpoint(doc map[string]interface{}, dev models.PCIeDevice) bool {
class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
if class != "massstoragecontroller" && class != "mass storage controller" {
return false
}
name := strings.ToLower(strings.TrimSpace(firstNonEmpty(
dev.PartNumber,
asString(doc["PartNumber"]),
asString(doc["Description"]),
)))
if strings.Contains(name, "pcie switch management endpoint") {
return true
}
if strings.Contains(name, "volume management device nvme raid controller") {
return true
}
return false
}
func hasRedfishLinkedMember(doc map[string]interface{}, key string) bool {
links, ok := doc["Links"].(map[string]interface{})
if !ok {
return false
}
if asInt(links[key+"@odata.count"]) > 0 {
return true
}
linked, ok := links[key]
if !ok {
return false
}
switch v := linked.(type) {
case []interface{}:
return len(v) > 0
case map[string]interface{}:
if asString(v["@odata.id"]) != "" {
return true
}
return len(v) > 0
default:
return false
}
}
func isReplayNoisePCIeClass(class string) bool {
switch strings.ToLower(strings.TrimSpace(class)) {
case "bridge", "processor", "signalprocessingcontroller", "signal processing controller", "serialbuscontroller", "serial bus controller":
return true
default:
return false
}
}
func isReplayDisplayDeviceDuplicate(doc map[string]interface{}, dev models.PCIeDevice) bool {
class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
if class != "displaycontroller" && class != "display controller" {
return false
}
return strings.EqualFold(strings.TrimSpace(asString(doc["Description"])), "Display Device")
}
func (r redfishSnapshotReader) getChassisScopedPCIeSupplementalDocs(doc map[string]interface{}) []map[string]interface{} {
docPath := normalizeRedfishPath(asString(doc["@odata.id"]))
chassisPath := chassisPathForPCIeDoc(docPath)
if chassisPath == "" {
return nil
}
out := make([]map[string]interface{}, 0, 6)
if looksLikeNVSwitchPCIeDoc(doc) {
for _, path := range []string{
joinPath(chassisPath, "/EnvironmentMetrics"),
joinPath(chassisPath, "/ThermalSubsystem/ThermalMetrics"),
} {
supplementalDoc, err := r.getJSON(path)
if err != nil || len(supplementalDoc) == 0 {
continue
}
out = append(out, supplementalDoc)
}
}
deviceDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/Devices"))
if err == nil {
for _, deviceDoc := range deviceDocs {
if !redfishPCIeMatchesChassisDeviceDoc(doc, deviceDoc) {
continue
}
out = append(out, deviceDoc)
}
}
return out
}
// collectBMCMAC returns the MAC address of the best BMC management interface
// found in Managers/*/EthernetInterfaces. Prefer an active link with an IP
// address over a passive sideband interface.
func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
summary := r.collectBMCManagementSummary(managerPaths)
if len(summary) == 0 {
return ""
}
return strings.ToUpper(strings.TrimSpace(asString(summary["mac_address"])))
}
func (r redfishSnapshotReader) collectBMCManagementSummary(managerPaths []string) map[string]any {
bestScore := -1
var best map[string]any
for _, managerPath := range managerPaths {
collectionPath := joinPath(managerPath, "/EthernetInterfaces")
collectionDoc, _ := r.getJSON(collectionPath)
ncsiEnabled, lldpMode, lldpByEth := redfishManagerEthernetCollectionHints(collectionDoc)
members, err := r.getCollectionMembers(collectionPath)
if err != nil || len(members) == 0 {
continue
}
@@ -182,16 +354,146 @@ func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
if mac == "" || strings.EqualFold(mac, "00:00:00:00:00:00") {
continue
}
ifaceID := strings.TrimSpace(firstNonEmpty(asString(doc["Id"]), asString(doc["Name"])))
summary := map[string]any{
"manager_path": managerPath,
"interface_id": ifaceID,
"hostname": strings.TrimSpace(asString(doc["HostName"])),
"fqdn": strings.TrimSpace(asString(doc["FQDN"])),
"mac_address": strings.ToUpper(mac),
"link_status": strings.TrimSpace(asString(doc["LinkStatus"])),
"speed_mbps": asInt(doc["SpeedMbps"]),
"interface_name": strings.TrimSpace(asString(doc["Name"])),
"interface_desc": strings.TrimSpace(asString(doc["Description"])),
"ncsi_enabled": ncsiEnabled,
"lldp_mode": lldpMode,
"ipv4_address": redfishManagerIPv4Field(doc, "Address"),
"ipv4_gateway": redfishManagerIPv4Field(doc, "Gateway"),
"ipv4_subnet": redfishManagerIPv4Field(doc, "SubnetMask"),
"ipv6_address": redfishManagerIPv6Field(doc, "Address"),
"link_is_active": strings.EqualFold(strings.TrimSpace(asString(doc["LinkStatus"])), "LinkActive"),
"interface_score": 0,
}
if lldp, ok := lldpByEth[strings.ToLower(ifaceID)]; ok {
summary["lldp_chassis_name"] = lldp["ChassisName"]
summary["lldp_port_desc"] = lldp["PortDesc"]
summary["lldp_port_id"] = lldp["PortId"]
if vlan := asInt(lldp["VlanId"]); vlan > 0 {
summary["lldp_vlan_id"] = vlan
}
}
score := redfishManagerInterfaceScore(summary)
summary["interface_score"] = score
if score > bestScore {
bestScore = score
best = summary
}
}
}
return best
}
func redfishManagerEthernetCollectionHints(collectionDoc map[string]interface{}) (bool, string, map[string]map[string]interface{}) {
lldpByEth := make(map[string]map[string]interface{})
if len(collectionDoc) == 0 {
return false, "", lldpByEth
}
oem, _ := collectionDoc["Oem"].(map[string]interface{})
public, _ := oem["Public"].(map[string]interface{})
ncsiEnabled := asBool(public["NcsiEnabled"])
lldp, _ := public["LLDP"].(map[string]interface{})
lldpMode := strings.TrimSpace(asString(lldp["LLDPMode"]))
if members, ok := lldp["Members"].([]interface{}); ok {
for _, item := range members {
member, ok := item.(map[string]interface{})
if !ok {
continue
}
ethIndex := strings.ToLower(strings.TrimSpace(asString(member["EthIndex"])))
if ethIndex == "" {
continue
}
lldpByEth[ethIndex] = member
}
}
return ncsiEnabled, lldpMode, lldpByEth
}
func redfishManagerIPv4Field(doc map[string]interface{}, key string) string {
if len(doc) == 0 {
return ""
}
for _, field := range []string{"IPv4Addresses", "IPv4StaticAddresses"} {
list, ok := doc[field].([]interface{})
if !ok {
continue
}
for _, item := range list {
entry, ok := item.(map[string]interface{})
if !ok {
continue
}
value := strings.TrimSpace(asString(entry[key]))
if value != "" {
return value
}
}
}
return ""
}
func redfishManagerIPv6Field(doc map[string]interface{}, key string) string {
if len(doc) == 0 {
return ""
}
list, ok := doc["IPv6Addresses"].([]interface{})
if !ok {
return ""
}
for _, item := range list {
entry, ok := item.(map[string]interface{})
if !ok {
continue
}
value := strings.TrimSpace(asString(entry[key]))
if value != "" {
return value
}
}
return ""
}
func redfishManagerInterfaceScore(summary map[string]any) int {
score := 0
if strings.EqualFold(strings.TrimSpace(asString(summary["link_status"])), "LinkActive") {
score += 100
}
if strings.TrimSpace(asString(summary["ipv4_address"])) != "" {
score += 40
}
if strings.TrimSpace(asString(summary["ipv6_address"])) != "" {
score += 10
}
if strings.TrimSpace(asString(summary["mac_address"])) != "" {
score += 10
}
if asInt(summary["speed_mbps"]) > 0 {
score += 5
}
if ifaceID := strings.ToLower(strings.TrimSpace(asString(summary["interface_id"]))); ifaceID != "" && !strings.HasPrefix(ifaceID, "usb") {
score += 3
}
if asBool(summary["ncsi_enabled"]) {
score += 1
}
return score
}
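The scoring above is a weighted heuristic: an active link dominates (100), an IPv4 address is the strongest tiebreaker (40), and the remaining signals only separate otherwise-equal interfaces. A minimal sketch with plain fields in place of the summary map (simplified, hypothetical interface values):

```go
package main

import "fmt"

// score mirrors the weighting of redfishManagerInterfaceScore with plain
// arguments instead of the summary map used by the collector.
func score(linkActive bool, ipv4, mac string, speedMbps int, usbSideband bool) int {
	s := 0
	if linkActive {
		s += 100 // an active link dominates every other signal
	}
	if ipv4 != "" {
		s += 40
	}
	if mac != "" {
		s += 10
	}
	if speedMbps > 0 {
		s += 5
	}
	if !usbSideband {
		s += 3 // non-USB interface IDs get a small bonus
	}
	return s
}

func main() {
	// Hypothetical dedicated management port vs. a passive USB sideband NIC.
	dedicated := score(true, "10.0.0.5", "AA:BB:CC:00:11:22", 1000, false)
	sideband := score(false, "", "AA:BB:CC:00:11:23", 0, true)
	fmt.Println(dedicated > sideband) // true
}
```

Because the link bonus outweighs the sum of all lesser signals, a link-down interface can never beat a link-up one, which is the intended preference order.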
// findNICIndexByLinkedNetworkAdapter resolves a NetworkInterface document to an
// existing NIC by following Links.NetworkAdapter → the Chassis
// NetworkAdapter doc and reconstructing the canonical NIC identity. Returns -1
// if no match is found.
func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[string]interface{}, existing []models.NetworkAdapter, bySlot map[string]int) int {
links, ok := iface["Links"].(map[string]interface{})
if !ok {
return -1
@@ -208,15 +510,58 @@ func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[stri
if err != nil || len(adapterDoc) == 0 {
return -1
}
adapterNIC := r.buildNICFromAdapterDoc(adapterDoc)
if serial := normalizeRedfishIdentityField(adapterNIC.SerialNumber); serial != "" {
for idx, nic := range existing {
if strings.EqualFold(normalizeRedfishIdentityField(nic.SerialNumber), serial) {
return idx
}
}
}
if bdf := strings.TrimSpace(adapterNIC.BDF); bdf != "" {
for idx, nic := range existing {
if strings.EqualFold(strings.TrimSpace(nic.BDF), bdf) {
return idx
}
}
}
if slot := strings.ToLower(strings.TrimSpace(adapterNIC.Slot)); slot != "" {
if idx, ok := bySlot[slot]; ok {
return idx
}
}
for idx, nic := range existing {
if networkAdaptersShareMACs(nic, adapterNIC) {
return idx
}
}
return -1
}
func networkAdaptersShareMACs(a, b models.NetworkAdapter) bool {
if len(a.MACAddresses) == 0 || len(b.MACAddresses) == 0 {
return false
}
seen := make(map[string]struct{}, len(a.MACAddresses))
for _, mac := range a.MACAddresses {
normalized := strings.ToUpper(strings.TrimSpace(mac))
if normalized == "" {
continue
}
seen[normalized] = struct{}{}
}
for _, mac := range b.MACAddresses {
normalized := strings.ToUpper(strings.TrimSpace(mac))
if normalized == "" {
continue
}
if _, ok := seen[normalized]; ok {
return true
}
}
return false
}
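The MAC comparison is a plain set intersection after normalization, so casing and stray whitespace from different vendors' payloads do not defeat the match. A self-contained sketch of the same intersection (hypothetical MAC values):

```go
package main

import (
	"fmt"
	"strings"
)

// shareMACs reports whether two MAC lists intersect after trimming and
// upper-casing, the same normalization used by networkAdaptersShareMACs.
func shareMACs(a, b []string) bool {
	seen := make(map[string]struct{}, len(a))
	for _, m := range a {
		if n := strings.ToUpper(strings.TrimSpace(m)); n != "" {
			seen[n] = struct{}{}
		}
	}
	for _, m := range b {
		n := strings.ToUpper(strings.TrimSpace(m))
		if n == "" {
			continue
		}
		if _, ok := seen[n]; ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shareMACs([]string{"aa:bb:cc:00:11:22"}, []string{" AA:BB:CC:00:11:22 "})) // true
	fmt.Println(shareMACs([]string{"aa:bb:cc:00:11:22"}, nil))                             // false
}
```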
// enrichNICMACsFromNetworkDeviceFunctions reads the NetworkDeviceFunctions
// collection linked from a NetworkAdapter document and populates the NIC's
// MACAddresses from each function's Ethernet.PermanentMACAddress / MACAddress.


@@ -14,13 +14,16 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
driveDocs, err := r.getCollectionMembers(driveCollectionPath)
if err == nil {
for _, driveDoc := range driveDocs {
if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
}
}
if len(driveDocs) == 0 {
for _, driveDoc := range r.probeDirectDiskBayChildren(driveCollectionPath) {
if isAbsentDriveDoc(driveDoc) {
continue
}
supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
}
@@ -43,7 +46,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
if err != nil {
continue
}
if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
}
@@ -51,7 +54,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
continue
}
if looksLikeDrive(member) {
if isAbsentDriveDoc(member) || isVirtualStorageDrive(member) {
continue
}
supplementalDocs := r.getLinkedSupplementalDocs(member, "DriveMetrics", "EnvironmentMetrics", "Metrics")
@@ -63,14 +66,14 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
driveDocs, err := r.getCollectionMembers(joinPath(enclosurePath, "/Drives"))
if err == nil {
for _, driveDoc := range driveDocs {
if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
}
}
if len(driveDocs) == 0 {
for _, driveDoc := range r.probeDirectDiskBayChildren(joinPath(enclosurePath, "/Drives")) {
if isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
continue
}
out = append(out, parseDrive(driveDoc))
@@ -83,7 +86,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
if len(plan.KnownStorageDriveCollections) > 0 {
for _, driveDoc := range r.collectKnownStorageMembers(systemPath, plan.KnownStorageDriveCollections) {
if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
}
@@ -98,7 +101,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
}
for _, devAny := range devices {
devDoc, ok := devAny.(map[string]interface{})
if !ok || !looksLikeDrive(devDoc) || isAbsentDriveDoc(devDoc) || isVirtualStorageDrive(devDoc) {
continue
}
out = append(out, parseDrive(devDoc))
@@ -112,7 +115,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
continue
}
for _, driveDoc := range driveDocs {
if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
continue
}
out = append(out, parseDrive(driveDoc))
@@ -124,7 +127,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
continue
}
for _, driveDoc := range r.probeSupermicroNVMeDiskBays(chassisPath) {
if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
continue
}
out = append(out, parseDrive(driveDoc))

File diff suppressed because it is too large


@@ -0,0 +1,67 @@
package redfishprofile
func hpeProfile() Profile {
return staticProfile{
name: "hpe",
priority: 20,
safeForFallback: true,
matchFn: func(s MatchSignals) int {
score := 0
if containsFold(s.SystemManufacturer, "hpe") ||
containsFold(s.SystemManufacturer, "hewlett packard") ||
containsFold(s.ChassisManufacturer, "hpe") ||
containsFold(s.ChassisManufacturer, "hewlett packard") {
score += 80
}
for _, ns := range s.OEMNamespaces {
if containsFold(ns, "hpe") {
score += 30
break
}
}
if containsFold(s.ServiceRootProduct, "ilo") {
score += 30
}
if containsFold(s.ManagerManufacturer, "hpe") || containsFold(s.ManagerManufacturer, "ilo") {
score += 20
}
return min(score, 100)
},
extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
// HPE ProLiant SmartStorage RAID controller inventory is not reachable
// via standard Redfish Storage paths — it requires the HPE OEM SmartStorage tree.
ensureScopedPathPolicy(plan, AcquisitionScopedPathPolicy{
SystemCriticalSuffixes: []string{
"/SmartStorage",
"/SmartStorageConfig",
},
ManagerCriticalSuffixes: []string{
"/LicenseService",
},
})
// HPE iLO responds more slowly than average BMCs under load; give the
// ETA estimator a realistic baseline so progress reports are accurate.
ensureETABaseline(plan, AcquisitionETABaseline{
DiscoverySeconds: 12,
SnapshotSeconds: 180,
PrefetchSeconds: 30,
CriticalPlanBSeconds: 40,
ProfilePlanBSeconds: 25,
})
ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
EnableProfilePlanB: true,
})
// HPE iLO starts throttling under high request rates. Setting a higher
// latency tolerance prevents the adaptive throttler from treating normal
// iLO slowness as a reason to stall the collection.
ensureRatePolicy(plan, AcquisitionRatePolicy{
TargetP95LatencyMS: 1200,
ThrottleP95LatencyMS: 2500,
MinSnapshotWorkers: 2,
MinPrefetchWorkers: 1,
DisablePrefetchOnErrors: true,
})
addPlanNote(plan, "hpe ilo acquisition extensions enabled")
},
}
}


@@ -0,0 +1,149 @@
package redfishprofile
import (
"regexp"
"strings"
)
var (
outboardCardHintRe = regexp.MustCompile(`/outboardPCIeCard\d+(?:/|$)`)
obDriveHintRe = regexp.MustCompile(`/Drives/OB\d+$`)
fpDriveHintRe = regexp.MustCompile(`/Drives/FP00HDD\d+$`)
vrFirmwareHintRe = regexp.MustCompile(`^CPU\d+_PVCC.*_VR$`)
)
var inspurGroupOEMFirmwareHints = map[string]struct{}{
"Front_HDD_CPLD0": {},
"MainBoard0CPLD": {},
"MainBoardCPLD": {},
"PDBBoardCPLD": {},
"SCMCPLD": {},
"SWBoardCPLD": {},
}
func inspurGroupOEMPlatformsProfile() Profile {
return staticProfile{
name: "inspur-group-oem-platforms",
priority: 25,
safeForFallback: false,
matchFn: func(s MatchSignals) int {
topologyScore := 0
boardScore := 0
chassisOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Chassis/", outboardCardHintRe)
systemOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Systems/", outboardCardHintRe)
obDrives := matchedPathTokens(s.ResourceHints, "", obDriveHintRe)
fpDrives := matchedPathTokens(s.ResourceHints, "", fpDriveHintRe)
firmwareNames, vrFirmwareNames := inspurGroupOEMFirmwareMatches(s.ResourceHints)
if len(chassisOutboard) > 0 {
topologyScore += 20
}
if len(systemOutboard) > 0 {
topologyScore += 10
}
if len(obDrives) > 0 && len(fpDrives) > 0 {
topologyScore += 15
}
if len(firmwareNames) >= 2 {
boardScore += 15
}
if len(vrFirmwareNames) >= 2 {
boardScore += 10
}
if anySignalContains(s, "COMMONbAssembly") {
boardScore += 12
}
if anySignalContains(s, "EnvironmentMetrcs") {
boardScore += 8
}
if anySignalContains(s, "GetServerAllUSBStatus") {
boardScore += 8
}
if topologyScore == 0 || boardScore == 0 {
return 0
}
return min(topologyScore+boardScore, 100)
},
extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
addPlanNote(plan, "Inspur Group OEM platform fingerprint matched")
},
applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
d.EnableGenericGraphicsControllerDedup = true
},
}
}
func matchedPathTokens(paths []string, requiredPrefix string, re *regexp.Regexp) []string {
seen := make(map[string]struct{})
for _, rawPath := range paths {
path := normalizePath(rawPath)
if path == "" || (requiredPrefix != "" && !strings.HasPrefix(path, requiredPrefix)) {
continue
}
token := re.FindString(path)
if token == "" {
continue
}
token = strings.Trim(token, "/")
if token == "" {
continue
}
seen[token] = struct{}{}
}
out := make([]string, 0, len(seen))
for token := range seen {
out = append(out, token)
}
return dedupeSorted(out)
}
func inspurGroupOEMFirmwareMatches(paths []string) ([]string, []string) {
firmwareNames := make(map[string]struct{})
vrNames := make(map[string]struct{})
for _, rawPath := range paths {
path := normalizePath(rawPath)
if !strings.HasPrefix(path, "/redfish/v1/UpdateService/FirmwareInventory/") {
continue
}
name := strings.TrimSpace(path[strings.LastIndex(path, "/")+1:])
if name == "" {
continue
}
if _, ok := inspurGroupOEMFirmwareHints[name]; ok {
firmwareNames[name] = struct{}{}
}
if vrFirmwareHintRe.MatchString(name) {
vrNames[name] = struct{}{}
}
}
return mapKeysSorted(firmwareNames), mapKeysSorted(vrNames)
}
func anySignalContains(signals MatchSignals, needle string) bool {
needle = strings.TrimSpace(needle)
if needle == "" {
return false
}
for _, signal := range signals.ResourceHints {
if strings.Contains(signal, needle) {
return true
}
}
for _, signal := range signals.DocHints {
if strings.Contains(signal, needle) {
return true
}
}
return false
}
func mapKeysSorted(items map[string]struct{}) []string {
out := make([]string, 0, len(items))
for item := range items {
out = append(out, item)
}
return dedupeSorted(out)
}


@@ -0,0 +1,182 @@
package redfishprofile
import (
"archive/zip"
"encoding/json"
"os"
"path/filepath"
"testing"
)
func TestCollectSignalsFromTree_InspurGroupOEMPlatformsSelectsMatchedMode(t *testing.T) {
tree := map[string]interface{}{
"/redfish/v1": map[string]interface{}{
"@odata.id": "/redfish/v1",
},
"/redfish/v1/Systems": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1"},
},
},
"/redfish/v1/Systems/1": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1",
"Oem": map[string]interface{}{
"Public": map[string]interface{}{
"USB": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1/Oem/Public/GetServerAllUSBStatus",
},
},
},
"NetworkInterfaces": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces",
},
},
"/redfish/v1/Systems/1/NetworkInterfaces": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard0"},
map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard1"},
},
},
"/redfish/v1/Chassis": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1"},
},
},
"/redfish/v1/Chassis/1": map[string]interface{}{
"@odata.id": "/redfish/v1/Chassis/1",
"Actions": map[string]interface{}{
"Oem": map[string]interface{}{
"Public": map[string]interface{}{
"NvGpuPowerLimitWatts": map[string]interface{}{
"target": "/redfish/v1/Chassis/1/GPU/EnvironmentMetrcs",
},
},
},
},
"Drives": map[string]interface{}{
"@odata.id": "/redfish/v1/Chassis/1/Drives",
},
"NetworkAdapters": map[string]interface{}{
"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters",
},
},
"/redfish/v1/Chassis/1/Drives": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/OB01"},
map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/FP00HDD00"},
},
},
"/redfish/v1/Chassis/1/NetworkAdapters": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard0"},
map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard1"},
},
},
"/redfish/v1/Chassis/1/Assembly": map[string]interface{}{
"Assemblies": []interface{}{
map[string]interface{}{
"Oem": map[string]interface{}{
"COMMONb": map[string]interface{}{
"COMMONbAssembly": map[string]interface{}{
"@odata.type": "#COMMONbAssembly.v1_0_0.COMMONbAssembly",
},
},
},
},
},
},
"/redfish/v1/Managers": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/Managers/1"},
},
},
"/redfish/v1/Managers/1": map[string]interface{}{
"Actions": map[string]interface{}{
"Oem": map[string]interface{}{
"#PublicManager.ExportConfFile": map[string]interface{}{
"target": "/redfish/v1/Managers/1/Actions/Oem/Public/ExportConfFile",
},
},
},
},
"/redfish/v1/UpdateService/FirmwareInventory": map[string]interface{}{
"Members": []interface{}{
map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/Front_HDD_CPLD0"},
map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/SCMCPLD"},
map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU0_PVCCD_HV_VR"},
map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU1_PVCCIN_VR"},
},
},
}
signals := CollectSignalsFromTree(tree)
match := MatchProfiles(signals)
if match.Mode != ModeMatched {
t.Fatalf("expected matched mode, got %q", match.Mode)
}
assertProfileSelected(t, match, "inspur-group-oem-platforms")
}
func TestCollectSignalsFromTree_InspurGroupOEMPlatformsDoesNotFalsePositiveOnExampleRawExports(t *testing.T) {
examples := []string{
"2026-03-18 (G5500 V7) - 210619KUGGXGS2000015.zip",
"2026-03-11 (SYS-821GE-TNHR) - A514359X5C08846.zip",
"2026-03-15 (CG480-S5063) - P5T0006091.zip",
"2026-03-18 (CG290-S3063) - PAT0011258.zip",
"2024-04-25 (AS -4124GQ-TNMI) - S490387X4418273.zip",
}
for _, name := range examples {
t.Run(name, func(t *testing.T) {
tree := loadRawExportTreeFromExampleZip(t, name)
match := MatchProfiles(CollectSignalsFromTree(tree))
assertProfileNotSelected(t, match, "inspur-group-oem-platforms")
})
}
}
func loadRawExportTreeFromExampleZip(t *testing.T, name string) map[string]interface{} {
t.Helper()
path := filepath.Join("..", "..", "..", "example", name)
f, err := os.Open(path)
if err != nil {
t.Fatalf("open example zip %s: %v", path, err)
}
defer f.Close()
info, err := f.Stat()
if err != nil {
t.Fatalf("stat example zip %s: %v", path, err)
}
zr, err := zip.NewReader(f, info.Size())
if err != nil {
t.Fatalf("read example zip %s: %v", path, err)
}
for _, file := range zr.File {
if file.Name != "raw_export.json" {
continue
}
rc, err := file.Open()
if err != nil {
t.Fatalf("open %s in %s: %v", file.Name, path, err)
}
defer rc.Close()
var payload struct {
Source struct {
RawPayloads struct {
RedfishTree map[string]interface{} `json:"redfish_tree"`
} `json:"raw_payloads"`
} `json:"source"`
}
if err := json.NewDecoder(rc).Decode(&payload); err != nil {
t.Fatalf("decode raw_export.json from %s: %v", path, err)
}
if len(payload.Source.RawPayloads.RedfishTree) == 0 {
t.Fatalf("example %s has empty redfish_tree", path)
}
return payload.Source.RawPayloads.RedfishTree
}
t.Fatalf("raw_export.json not found in %s", path)
return nil
}


@@ -55,6 +55,8 @@ func BuiltinProfiles() []Profile {
msiProfile(),
supermicroProfile(),
dellProfile(),
hpeProfile(),
inspurGroupOEMPlatformsProfile(),
hgxProfile(),
xfusionProfile(),
}


@@ -2,7 +2,14 @@ package redfishprofile
import "strings"
-func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string]interface{}, resourceHints []string) MatchSignals {
+func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string]interface{}, resourceHints []string, hintDocs ...map[string]interface{}) MatchSignals {
resourceHints = append([]string{}, resourceHints...)
docHints := make([]string, 0)
for _, doc := range append([]map[string]interface{}{serviceRootDoc, systemDoc, chassisDoc, managerDoc}, hintDocs...) {
embeddedPaths, embeddedHints := collectDocSignalHints(doc)
resourceHints = append(resourceHints, embeddedPaths...)
docHints = append(docHints, embeddedHints...)
}
signals := MatchSignals{
ServiceRootVendor: lookupString(serviceRootDoc, "Vendor"),
ServiceRootProduct: lookupString(serviceRootDoc, "Product"),
@@ -13,6 +20,7 @@ func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string
ChassisModel: lookupString(chassisDoc, "Model"),
ManagerManufacturer: lookupString(managerDoc, "Manufacturer"),
ResourceHints: resourceHints,
DocHints: docHints,
}
signals.OEMNamespaces = dedupeSorted(append(
oemNamespaces(serviceRootDoc),
@@ -50,6 +58,7 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
managerPath := memberPath("/redfish/v1/Managers", "/redfish/v1/Managers/1")
resourceHints := make([]string, 0, len(tree))
hintDocs := make([]map[string]interface{}, 0, len(tree))
for path := range tree {
path = strings.TrimSpace(path)
if path == "" {
@@ -57,6 +66,13 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
}
resourceHints = append(resourceHints, path)
}
for _, v := range tree {
doc, ok := v.(map[string]interface{})
if !ok {
continue
}
hintDocs = append(hintDocs, doc)
}
return CollectSignals(
getDoc("/redfish/v1"),
@@ -64,9 +80,72 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
getDoc(chassisPath),
getDoc(managerPath),
resourceHints,
hintDocs...,
)
}
func collectDocSignalHints(doc map[string]interface{}) ([]string, []string) {
if len(doc) == 0 {
return nil, nil
}
paths := make([]string, 0)
hints := make([]string, 0)
var walk func(any)
walk = func(v any) {
switch x := v.(type) {
case map[string]interface{}:
for rawKey, child := range x {
key := strings.TrimSpace(rawKey)
if key != "" {
hints = append(hints, key)
}
if s, ok := child.(string); ok {
s = strings.TrimSpace(s)
if s != "" {
switch key {
case "@odata.id", "target":
paths = append(paths, s)
case "@odata.type":
hints = append(hints, s)
default:
if isInterestingSignalString(s) {
hints = append(hints, s)
if strings.HasPrefix(s, "/") {
paths = append(paths, s)
}
}
}
}
}
walk(child)
}
case []interface{}:
for _, child := range x {
walk(child)
}
}
}
walk(doc)
return paths, hints
}
func isInterestingSignalString(s string) bool {
switch {
case strings.HasPrefix(s, "/"):
return true
case strings.HasPrefix(s, "#"):
return true
case strings.Contains(s, "COMMONb"):
return true
case strings.Contains(s, "EnvironmentMetrcs"):
return true
case strings.Contains(s, "GetServerAllUSBStatus"):
return true
default:
return false
}
}
func lookupString(doc map[string]interface{}, key string) string {
if len(doc) == 0 {
return ""


@@ -17,6 +17,7 @@ type MatchSignals struct {
ManagerManufacturer string
OEMNamespaces []string
ResourceHints []string
DocHints []string
}
type AcquisitionPlan struct {
@@ -110,12 +111,12 @@ type AnalysisDirectives struct {
}
type ResolvedAnalysisPlan struct {
-Match MatchResult
-Directives AnalysisDirectives
-Notes []string
-ProcessorGPUChassisLookupModes []string
-KnownStorageDriveCollections []string
-KnownStorageVolumeCollections []string
+Match MatchResult
+Directives AnalysisDirectives
+Notes []string
+ProcessorGPUChassisLookupModes []string
+KnownStorageDriveCollections []string
+KnownStorageVolumeCollections []string
}
type Profile interface {
@@ -146,6 +147,7 @@ type ProfileScore struct {
func normalizeSignals(signals MatchSignals) MatchSignals {
signals.OEMNamespaces = dedupeSorted(signals.OEMNamespaces)
signals.ResourceHints = dedupeSorted(signals.ResourceHints)
signals.DocHints = dedupeSorted(signals.DocHints)
return signals
}


@@ -15,9 +15,8 @@ type Request struct {
Password string
Token string
TLSMode string
-PowerOnIfHostOff bool
-StopHostAfterCollect bool
-DebugPayloads bool
+DebugPayloads bool
+SkipHungCh <-chan struct{}
}
type Progress struct {
@@ -65,10 +64,9 @@ type PhaseTelemetry struct {
type ProbeResult struct {
Reachable bool
Protocol string
-HostPowerState string
-HostPoweredOn bool
-PowerControlAvailable bool
-SystemPath string
+HostPowerState string
+HostPoweredOn bool
+SystemPath string
}
type Connector interface {


@@ -43,13 +43,13 @@ func ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, erro
TargetHost: targetHost,
CollectedAt: collectedAt,
Hardware: ReanimatorHardware{
-Board: convertBoard(result.Hardware.BoardInfo),
-Firmware: dedupeFirmware(convertFirmware(result.Hardware.Firmware)),
-CPUs: dedupeCPUs(convertCPUsFromDevices(devices, collectedAt, result.Hardware.BoardInfo.SerialNumber, buildCPUMicrocodeBySocket(result.Hardware.Firmware))),
-Memory: dedupeMemory(convertMemoryFromDevices(devices, collectedAt)),
-Storage: dedupeStorage(convertStorageFromDevices(devices, collectedAt)),
-PCIeDevices: dedupePCIe(convertPCIeFromDevices(devices, collectedAt)),
-PowerSupplies: dedupePSUs(convertPSUsFromDevices(devices, collectedAt)),
+Board: convertBoard(result.Hardware.BoardInfo),
+Firmware: dedupeFirmware(convertFirmware(result.Hardware.Firmware)),
+CPUs: dedupeCPUs(convertCPUsFromDevices(devices, collectedAt, result.Hardware.BoardInfo.SerialNumber, buildCPUMicrocodeBySocket(result.Hardware.Firmware))),
+Memory: dedupeMemory(convertMemoryFromDevices(devices, collectedAt)),
+Storage: dedupeStorage(convertStorageFromDevices(devices, collectedAt)),
+PCIeDevices: dedupePCIe(convertPCIeFromDevices(devices, collectedAt)),
+PowerSupplies: dedupePSUs(convertPSUsFromDevices(devices, collectedAt)),
Sensors: convertSensors(result.Sensors),
EventLogs: convertEventLogs(result.Events, collectedAt),
},
@@ -358,10 +358,12 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
prev.score = canonicalScore(prev.item)
byKey[key] = prev
}
-// Secondary pass: for items without serial/BDF (noKey), try to merge into an
-// existing keyed entry with the same model+manufacturer. This handles the case
-// where a device appears both in PCIeDevices (with BDF) and NetworkAdapters
-// (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard with the same model.
+// Secondary pass: for PCIe-class items without serial/BDF (noKey), try to merge
+// into an existing keyed entry with the same model+manufacturer. This handles
+// the case where a device appears both in PCIeDevices (with BDF) and
+// NetworkAdapters (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard
+// with the same model. Do not apply this to storage: repeated NVMe slots often
+// share the same model string and would collapse incorrectly.
// deviceIdentity returns the best available model name for secondary matching,
// preferring Model over DeviceClass (which may hold a resolved device name).
deviceIdentity := func(d models.HardwareDevice) string {
@@ -377,6 +379,10 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
var unmatched []models.HardwareDevice
for _, item := range noKey {
mergeKind := canonicalMergeKind(item.Kind)
if mergeKind != "pcie-class" {
unmatched = append(unmatched, item)
continue
}
identity := deviceIdentity(item)
mfr := strings.ToLower(strings.TrimSpace(item.Manufacturer))
if identity == "" {
@@ -669,7 +675,17 @@ func convertMemoryFromDevices(devices []models.HardwareDevice, collectedAt strin
}
present := boolFromPresentPtr(d.Present, true)
status := normalizeStatus(d.Status, true)
-if !present || d.SizeMB == 0 || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
+mem := models.MemoryDIMM{
+Present: present,
+SizeMB: d.SizeMB,
+Type: d.Type,
+Description: stringFromDetailMap(d.Details, "description"),
+Manufacturer: d.Manufacturer,
+SerialNumber: d.SerialNumber,
+PartNumber: d.PartNumber,
+Status: d.Status,
+}
+if !mem.IsInstalledInventory() || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
continue
}
meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
@@ -711,18 +727,16 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
if isVirtualExportStorageDevice(d) {
continue
}
-if strings.TrimSpace(d.SerialNumber) == "" {
-continue
-}
-present := d.Present == nil || *d.Present
-if !present {
+if !shouldExportStorageDevice(d) {
continue
}
+present := boolFromPresentPtr(d.Present, true)
status := inferStorageStatus(models.Storage{Present: present})
if strings.TrimSpace(d.Status) != "" {
-status = normalizeStatus(d.Status, false)
+status = normalizeStatus(d.Status, !present)
}
meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
presentValue := present
result = append(result, ReanimatorStorage{
Slot: d.Slot,
Type: d.Type,
@@ -732,6 +746,7 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
Manufacturer: d.Manufacturer,
Firmware: d.Firmware,
Interface: d.Interface,
Present: &presentValue,
TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
PowerCycles: int64FromDetailMap(d.Details, "power_cycles"),
@@ -1334,7 +1349,7 @@ func convertMemory(memory []models.MemoryDIMM, collectedAt string) []ReanimatorM
result := make([]ReanimatorMemory, 0, len(memory))
for _, mem := range memory {
-if !mem.Present || mem.SizeMB == 0 || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
+if !mem.IsInstalledInventory() || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
continue
}
status := normalizeStatus(mem.Status, true)
@@ -1376,14 +1391,16 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
result := make([]ReanimatorStorage, 0, len(storage))
for _, stor := range storage {
-// Skip storage without serial number
-if stor.SerialNumber == "" {
+if isVirtualLegacyStorageDevice(stor) {
continue
}
+if !shouldExportLegacyStorage(stor) {
+continue
+}
status := inferStorageStatus(stor)
if strings.TrimSpace(stor.Status) != "" {
-status = normalizeStatus(stor.Status, false)
+status = normalizeStatus(stor.Status, !stor.Present)
}
meta := buildStatusMeta(
status,
@@ -1393,6 +1410,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
stor.ErrorDescription,
collectedAt,
)
present := stor.Present
result = append(result, ReanimatorStorage{
Slot: stor.Slot,
@@ -1403,6 +1421,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
Manufacturer: stor.Manufacturer,
Firmware: stor.Firmware,
Interface: stor.Interface,
Present: &present,
RemainingEndurancePct: stor.RemainingEndurancePct,
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
@@ -1414,6 +1433,53 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
return result
}
func shouldExportStorageDevice(d models.HardwareDevice) bool {
if normalizedSerial(d.SerialNumber) != "" {
return true
}
if strings.TrimSpace(d.Slot) != "" {
return true
}
if hasMeaningfulExporterText(d.Model) {
return true
}
if hasMeaningfulExporterText(d.Type) || hasMeaningfulExporterText(d.Interface) {
return true
}
if d.SizeGB > 0 {
return true
}
return d.Present != nil
}
func shouldExportLegacyStorage(stor models.Storage) bool {
if normalizedSerial(stor.SerialNumber) != "" {
return true
}
if strings.TrimSpace(stor.Slot) != "" {
return true
}
if hasMeaningfulExporterText(stor.Model) {
return true
}
if hasMeaningfulExporterText(stor.Type) || hasMeaningfulExporterText(stor.Interface) {
return true
}
if stor.SizeGB > 0 {
return true
}
return stor.Present
}
func isVirtualLegacyStorageDevice(stor models.Storage) bool {
return isVirtualExportStorageDevice(models.HardwareDevice{
Kind: models.DeviceKindStorage,
Slot: stor.Slot,
Model: stor.Model,
Manufacturer: stor.Manufacturer,
})
}
// convertPCIeDevices converts PCIe devices, GPUs, and network adapters to Reanimator format
func convertPCIeDevices(hw *models.HardwareConfig, collectedAt string) []ReanimatorPCIe {
result := make([]ReanimatorPCIe, 0)
@@ -2180,10 +2246,8 @@ func normalizePCIeDeviceClass(d models.HardwareDevice) string {
func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
switch strings.ToLower(strings.TrimSpace(deviceClass)) {
-case "", "network", "network controller", "networkcontroller":
+case "", "network", "network controller", "networkcontroller", "ethernet", "ethernet controller", "ethernetcontroller":
return "NetworkController"
-case "ethernet", "ethernet controller", "ethernetcontroller":
-return "EthernetController"
case "fibre channel", "fibre channel controller", "fibrechannelcontroller", "fc":
return "FibreChannelController"
case "display", "displaycontroller", "display controller", "vga":
@@ -2204,8 +2268,6 @@ func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
func normalizeNetworkDeviceClass(portType, model, description string) string {
joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{portType, model, description}, " ")))
switch {
-case strings.Contains(joined, "ethernet"):
-return "EthernetController"
case strings.Contains(joined, "fibre channel") || strings.Contains(joined, " fibrechannel") || strings.Contains(joined, "fc "):
return "FibreChannelController"
default:


@@ -259,6 +259,29 @@ func TestConvertMemory(t *testing.T) {
}
}
func TestConvertMemory_KeepsInstalledDIMMWithUnknownSize(t *testing.T) {
memory := []models.MemoryDIMM{
{
Slot: "PROC 1 DIMM 3",
Present: true,
SizeMB: 0,
Manufacturer: "Hynix",
PartNumber: "HMCG88AEBRA115N",
SerialNumber: "2B5F92C6",
Status: "OK",
},
}
result := convertMemory(memory, "2026-03-30T10:00:00Z")
if len(result) != 1 {
t.Fatalf("expected 1 inventory-only DIMM, got %d", len(result))
}
if result[0].PartNumber != "HMCG88AEBRA115N" || result[0].SerialNumber != "2B5F92C6" || result[0].SizeMB != 0 {
t.Fatalf("unexpected converted memory: %+v", result[0])
}
}
func TestConvertToReanimator_CPUSerialIsNotSynthesizedAndSocketIsDeduped(t *testing.T) {
input := &models.AnalysisResult{
Filename: "cpu-dedupe.json",
@@ -424,20 +447,26 @@ func TestConvertStorage(t *testing.T) {
Slot: "OB02",
Type: "NVMe",
Model: "INTEL SSDPF2KX076T1",
-SerialNumber: "", // No serial - should be skipped
+SerialNumber: "",
Present: true,
},
}
result := convertStorage(storage, "2026-02-10T15:30:00Z")
-if len(result) != 1 {
-t.Fatalf("expected 1 storage device (skipped one without serial), got %d", len(result))
+if len(result) != 2 {
+t.Fatalf("expected both inventory slots to be exported, got %d", len(result))
}
if result[0].Status != "Unknown" {
t.Errorf("expected Unknown status, got %q", result[0].Status)
}
if result[1].SerialNumber != "" {
t.Errorf("expected empty serial for second storage slot, got %q", result[1].SerialNumber)
}
if result[1].Present == nil || !*result[1].Present {
t.Fatalf("expected present=true to be preserved for populated slot without serial")
}
}
func TestConvertToReanimator_SkipsAMIVirtualStorageDevices(t *testing.T) {
@@ -971,6 +1000,52 @@ func TestConvertToReanimator_StatusFallbackUsesCollectedAt(t *testing.T) {
}
}
func TestConvertToReanimator_ExportsStorageInventoryWithoutSerial(t *testing.T) {
collectedAt := time.Date(2026, 4, 1, 9, 0, 0, 0, time.UTC)
input := &models.AnalysisResult{
Filename: "nvme-inventory.json",
CollectedAt: collectedAt,
Hardware: &models.HardwareConfig{
BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
Storage: []models.Storage{
{
Slot: "OB01",
Type: "NVMe",
Model: "PM9A3",
SerialNumber: "SSD-001",
Present: true,
},
{
Slot: "OB02",
Type: "NVMe",
Model: "PM9A3",
Present: true,
},
{
Slot: "OB03",
Type: "NVMe",
Model: "PM9A3",
Present: false,
},
},
},
}
out, err := ConvertToReanimator(input)
if err != nil {
t.Fatalf("ConvertToReanimator() failed: %v", err)
}
if len(out.Hardware.Storage) != 3 {
t.Fatalf("expected 3 storage entries including inventory slots without serial, got %d", len(out.Hardware.Storage))
}
if out.Hardware.Storage[1].Slot != "OB02" || out.Hardware.Storage[1].SerialNumber != "" {
t.Fatalf("expected OB02 storage slot without serial to survive export, got %#v", out.Hardware.Storage[1])
}
if out.Hardware.Storage[2].Present == nil || *out.Hardware.Storage[2].Present {
t.Fatalf("expected OB03 to preserve present=false, got %#v", out.Hardware.Storage[2])
}
}
func TestConvertToReanimator_FirmwareExcludesDeviceBoundEntries(t *testing.T) {
input := &models.AnalysisResult{
Filename: "fw-filter-test.json",
@@ -1658,6 +1733,43 @@ func TestConvertToReanimator_ExportsContractV24Telemetry(t *testing.T) {
}
}
func TestConvertToReanimator_UnifiesEthernetAndNetworkControllers(t *testing.T) {
input := &models.AnalysisResult{
Hardware: &models.HardwareConfig{
BoardInfo: models.BoardInfo{SerialNumber: "BOARD-123"},
Devices: []models.HardwareDevice{
{
Kind: models.DeviceKindPCIe,
Slot: "PCIe1",
DeviceClass: "EthernetController",
Present: boolPtr(true),
SerialNumber: "ETH-001",
},
{
Kind: models.DeviceKindNetwork,
Slot: "NIC1",
Model: "Ethernet Adapter",
Present: boolPtr(true),
SerialNumber: "NIC-001",
},
},
},
}
out, err := ConvertToReanimator(input)
if err != nil {
t.Fatalf("ConvertToReanimator() failed: %v", err)
}
if len(out.Hardware.PCIeDevices) != 2 {
t.Fatalf("expected two pcie-class exports, got %d", len(out.Hardware.PCIeDevices))
}
for _, dev := range out.Hardware.PCIeDevices {
if dev.DeviceClass != "NetworkController" {
t.Fatalf("expected unified NetworkController class, got %+v", dev)
}
}
}
func TestConvertToReanimator_PreservesLegacyStorageAndPSUDetails(t *testing.T) {
input := &models.AnalysisResult{
Filename: "legacy-details.json",

internal/models/memory.go

@@ -0,0 +1,29 @@
package models
import "strings"
// HasInventoryIdentity reports whether the DIMM has enough identifying
// inventory data to treat it as a populated module even when size is unknown.
func (m MemoryDIMM) HasInventoryIdentity() bool {
return strings.TrimSpace(m.SerialNumber) != "" ||
strings.TrimSpace(m.PartNumber) != "" ||
strings.TrimSpace(m.Type) != "" ||
strings.TrimSpace(m.Technology) != "" ||
strings.TrimSpace(m.Description) != ""
}
// IsInstalledInventory reports whether the DIMM represents an installed module
// that should be kept in canonical inventory and exports.
func (m MemoryDIMM) IsInstalledInventory() bool {
if !m.Present {
return false
}
status := strings.ToLower(strings.TrimSpace(m.Status))
switch status {
case "empty", "absent", "not installed":
return false
}
return m.SizeMB > 0 || m.HasInventoryIdentity()
}


@@ -19,6 +19,7 @@ const maxZipArchiveSize = 50 * 1024 * 1024
const maxGzipDecompressedSize = 50 * 1024 * 1024
var supportedArchiveExt = map[string]struct{}{
".ahs": {},
".gz": {},
".tgz": {},
".tar": {},
@@ -45,6 +46,8 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
ext := strings.ToLower(filepath.Ext(archivePath))
switch ext {
case ".ahs":
return extractSingleFile(archivePath)
case ".gz", ".tgz":
return extractTarGz(archivePath)
case ".tar", ".sds":
@@ -66,6 +69,8 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
ext := strings.ToLower(filepath.Ext(filename))
switch ext {
case ".ahs":
return extractSingleFileFromReader(r, filename)
case ".gz", ".tgz":
return extractTarGzFromReader(r, filename)
case ".tar", ".sds":


@@ -76,6 +76,7 @@ func TestIsSupportedArchiveFilename(t *testing.T) {
name string
want bool
}{
{name: "HPE_CZ2D1X0GS3_20260330.ahs", want: true},
{name: "dump.tar.gz", want: true},
{name: "nvidia-bug-report-1651124000923.log.gz", want: true},
{name: "snapshot.zip", want: true},
@@ -124,3 +125,20 @@ func TestExtractArchiveFromReaderSDS(t *testing.T) {
t.Fatalf("expected bmc/pack.info, got %q", files[0].Path)
}
}
func TestExtractArchiveFromReaderAHS(t *testing.T) {
payload := []byte("ABJRtest")
files, err := ExtractArchiveFromReader(bytes.NewReader(payload), "sample.ahs")
if err != nil {
t.Fatalf("extract ahs from reader: %v", err)
}
if len(files) != 1 {
t.Fatalf("expected 1 extracted file, got %d", len(files))
}
if files[0].Path != "sample.ahs" {
t.Fatalf("expected sample.ahs, got %q", files[0].Path)
}
if string(files[0].Content) != string(payload) {
t.Fatalf("content mismatch")
}
}


@@ -0,0 +1,601 @@
package easy_bee
import (
"encoding/json"
"fmt"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
const parserVersion = "1.0"
func init() {
parser.Register(&Parser{})
}
// Parser imports support bundles produced by reanimator-easy-bee.
// These archives embed a ready-to-use hardware snapshot in export/bee-audit.json.
type Parser struct{}
func (p *Parser) Name() string {
return "Reanimator Easy Bee Parser"
}
func (p *Parser) Vendor() string {
return "easy_bee"
}
func (p *Parser) Version() string {
return parserVersion
}
func (p *Parser) Detect(files []parser.ExtractedFile) int {
confidence := 0
hasManifest := false
hasBeeAudit := false
hasRuntimeHealth := false
hasTechdump := false
hasBundlePrefix := false
for _, f := range files {
path := strings.ToLower(strings.TrimSpace(f.Path))
content := strings.ToLower(string(f.Content))
if !hasBundlePrefix && strings.Contains(path, "bee-support-") {
hasBundlePrefix = true
confidence += 5
}
if (strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") &&
strings.Contains(content, "bee_version=") {
hasManifest = true
confidence += 35
if strings.Contains(content, "export_dir=") {
confidence += 10
}
}
if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
hasBeeAudit = true
confidence += 55
}
if hasBundlePrefix && (strings.HasSuffix(path, "/export/runtime-health.json") || path == "runtime-health.json") {
hasRuntimeHealth = true
confidence += 10
}
if hasBundlePrefix && !hasTechdump && strings.Contains(path, "/export/techdump/") {
hasTechdump = true
confidence += 10
}
}
if hasManifest && hasBeeAudit {
return 100
}
if hasBeeAudit && (hasRuntimeHealth || hasTechdump) {
confidence += 10
}
if confidence > 100 {
return 100
}
return confidence
}
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
snapshotFile := findSnapshotFile(files)
if snapshotFile == nil {
return nil, fmt.Errorf("easy-bee snapshot not found")
}
var snapshot beeSnapshot
if err := json.Unmarshal(snapshotFile.Content, &snapshot); err != nil {
return nil, fmt.Errorf("decode %s: %w", snapshotFile.Path, err)
}
manifest := parseManifest(files)
result := &models.AnalysisResult{
SourceType: strings.TrimSpace(snapshot.SourceType),
Protocol: strings.TrimSpace(snapshot.Protocol),
TargetHost: firstNonEmpty(snapshot.TargetHost, manifest.Host),
SourceTimezone: strings.TrimSpace(snapshot.SourceTimezone),
CollectedAt: chooseCollectedAt(snapshot, manifest),
InventoryLastModifiedAt: snapshot.InventoryLastModifiedAt,
RawPayloads: snapshot.RawPayloads,
Events: make([]models.Event, 0),
FRU: append([]models.FRUInfo(nil), snapshot.FRU...),
Sensors: make([]models.SensorReading, 0),
Hardware: &models.HardwareConfig{
Firmware: append([]models.FirmwareInfo(nil), snapshot.Hardware.Firmware...),
BoardInfo: snapshot.Hardware.Board,
Devices: append([]models.HardwareDevice(nil), snapshot.Hardware.Devices...),
CPUs: append([]models.CPU(nil), snapshot.Hardware.CPUs...),
Memory: append([]models.MemoryDIMM(nil), snapshot.Hardware.Memory...),
Storage: append([]models.Storage(nil), snapshot.Hardware.Storage...),
Volumes: append([]models.StorageVolume(nil), snapshot.Hardware.Volumes...),
PCIeDevices: normalizePCIeDevices(snapshot.Hardware.PCIeDevices),
GPUs: append([]models.GPU(nil), snapshot.Hardware.GPUs...),
NetworkCards: append([]models.NIC(nil), snapshot.Hardware.NetworkCards...),
NetworkAdapters: normalizeNetworkAdapters(snapshot.Hardware.NetworkAdapters),
PowerSupply: append([]models.PSU(nil), snapshot.Hardware.PowerSupply...),
},
}
result.Events = append(result.Events, snapshot.Events...)
result.Events = append(result.Events, convertRuntimeToEvents(snapshot.Runtime, result.CollectedAt)...)
result.Events = append(result.Events, convertEventLogs(snapshot.Hardware.EventLogs)...)
result.Sensors = append(result.Sensors, snapshot.Sensors...)
result.Sensors = append(result.Sensors, flattenSensorGroups(snapshot.Hardware.Sensors)...)
if len(result.FRU) == 0 {
if boardFRU, ok := buildBoardFRU(snapshot.Hardware.Board); ok {
result.FRU = append(result.FRU, boardFRU)
}
}
if result.Hardware == nil || (result.Hardware.BoardInfo.SerialNumber == "" &&
len(result.Hardware.CPUs) == 0 &&
len(result.Hardware.Memory) == 0 &&
len(result.Hardware.Storage) == 0 &&
len(result.Hardware.PCIeDevices) == 0 &&
len(result.Hardware.Devices) == 0) {
return nil, fmt.Errorf("unsupported easy-bee snapshot format")
}
return result, nil
}
// beeSnapshot mirrors the top-level layout of the bee-audit.json export.
type beeSnapshot struct {
SourceType string `json:"source_type,omitempty"`
Protocol string `json:"protocol,omitempty"`
TargetHost string `json:"target_host,omitempty"`
SourceTimezone string `json:"source_timezone,omitempty"`
CollectedAt time.Time `json:"collected_at,omitempty"`
InventoryLastModifiedAt time.Time `json:"inventory_last_modified_at,omitempty"`
RawPayloads map[string]any `json:"raw_payloads,omitempty"`
Events []models.Event `json:"events,omitempty"`
FRU []models.FRUInfo `json:"fru,omitempty"`
Sensors []models.SensorReading `json:"sensors,omitempty"`
Hardware beeHardware `json:"hardware"`
Runtime beeRuntime `json:"runtime,omitempty"`
}
type beeHardware struct {
Board models.BoardInfo `json:"board"`
Firmware []models.FirmwareInfo `json:"firmware,omitempty"`
Devices []models.HardwareDevice `json:"devices,omitempty"`
CPUs []models.CPU `json:"cpus,omitempty"`
Memory []models.MemoryDIMM `json:"memory,omitempty"`
Storage []models.Storage `json:"storage,omitempty"`
Volumes []models.StorageVolume `json:"volumes,omitempty"`
PCIeDevices []models.PCIeDevice `json:"pcie_devices,omitempty"`
GPUs []models.GPU `json:"gpus,omitempty"`
NetworkCards []models.NIC `json:"network_cards,omitempty"`
NetworkAdapters []models.NetworkAdapter `json:"network_adapters,omitempty"`
PowerSupply []models.PSU `json:"power_supplies,omitempty"`
Sensors beeSensorGroups `json:"sensors,omitempty"`
EventLogs []beeEventLog `json:"event_logs,omitempty"`
}
type beeSensorGroups struct {
Fans []beeFanSensor `json:"fans,omitempty"`
Power []beePowerSensor `json:"power,omitempty"`
Temperatures []beeTemperatureSensor `json:"temperatures,omitempty"`
Other []beeOtherSensor `json:"other,omitempty"`
}
type beeFanSensor struct {
Name string `json:"name"`
Location string `json:"location,omitempty"`
RPM int `json:"rpm,omitempty"`
Status string `json:"status,omitempty"`
}
type beePowerSensor struct {
Name string `json:"name"`
Location string `json:"location,omitempty"`
VoltageV float64 `json:"voltage_v,omitempty"`
CurrentA float64 `json:"current_a,omitempty"`
PowerW float64 `json:"power_w,omitempty"`
Status string `json:"status,omitempty"`
}
type beeTemperatureSensor struct {
Name string `json:"name"`
Location string `json:"location,omitempty"`
Celsius float64 `json:"celsius,omitempty"`
ThresholdWarningCelsius float64 `json:"threshold_warning_celsius,omitempty"`
ThresholdCriticalCelsius float64 `json:"threshold_critical_celsius,omitempty"`
Status string `json:"status,omitempty"`
}
type beeOtherSensor struct {
Name string `json:"name"`
Location string `json:"location,omitempty"`
Value float64 `json:"value,omitempty"`
Unit string `json:"unit,omitempty"`
Status string `json:"status,omitempty"`
}
type beeRuntime struct {
Status string `json:"status,omitempty"`
CheckedAt time.Time `json:"checked_at,omitempty"`
NetworkStatus string `json:"network_status,omitempty"`
Issues []beeRuntimeIssue `json:"issues,omitempty"`
Services []beeRuntimeStatus `json:"services,omitempty"`
Interfaces []beeInterface `json:"interfaces,omitempty"`
}
type beeRuntimeIssue struct {
Code string `json:"code,omitempty"`
Severity string `json:"severity,omitempty"`
Description string `json:"description,omitempty"`
}
type beeRuntimeStatus struct {
Name string `json:"name,omitempty"`
Status string `json:"status,omitempty"`
}
type beeInterface struct {
Name string `json:"name,omitempty"`
State string `json:"state,omitempty"`
IPv4 []string `json:"ipv4,omitempty"`
Outcome string `json:"outcome,omitempty"`
}
type beeEventLog struct {
Source string `json:"source,omitempty"`
EventTime string `json:"event_time,omitempty"`
Severity string `json:"severity,omitempty"`
MessageID string `json:"message_id,omitempty"`
Message string `json:"message,omitempty"`
RawPayload map[string]any `json:"raw_payload,omitempty"`
}
type manifestMetadata struct {
Host string
GeneratedAtUTC time.Time
}
// findSnapshotFile locates the bee-audit.json export, falling back to any
// JSON file whose path mentions "reanimator".
func findSnapshotFile(files []parser.ExtractedFile) *parser.ExtractedFile {
for i := range files {
path := strings.ToLower(strings.TrimSpace(files[i].Path))
if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
return &files[i]
}
}
for i := range files {
path := strings.ToLower(strings.TrimSpace(files[i].Path))
if strings.HasSuffix(path, ".json") && strings.Contains(path, "reanimator") {
return &files[i]
}
}
return nil
}
// parseManifest scans the first manifest.txt for key=value lines, keeping
// the host name and the RFC 3339 generated_at_utc timestamp.
func parseManifest(files []parser.ExtractedFile) manifestMetadata {
var meta manifestMetadata
for _, f := range files {
path := strings.ToLower(strings.TrimSpace(f.Path))
if !(strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") {
continue
}
lines := strings.Split(string(f.Content), "\n")
for _, line := range lines {
key, value, ok := strings.Cut(strings.TrimSpace(line), "=")
if !ok {
continue
}
switch strings.TrimSpace(key) {
case "host":
meta.Host = strings.TrimSpace(value)
case "generated_at_utc":
if ts, err := time.Parse(time.RFC3339, strings.TrimSpace(value)); err == nil {
meta.GeneratedAtUTC = ts.UTC()
}
}
}
break
}
return meta
}
// chooseCollectedAt picks the collection timestamp by precedence: snapshot
// collected_at, then runtime checked_at, then manifest generated_at_utc;
// zero time if none is set.
func chooseCollectedAt(snapshot beeSnapshot, manifest manifestMetadata) time.Time {
switch {
case !snapshot.CollectedAt.IsZero():
return snapshot.CollectedAt.UTC()
case !snapshot.Runtime.CheckedAt.IsZero():
return snapshot.Runtime.CheckedAt.UTC()
case !manifest.GeneratedAtUTC.IsZero():
return manifest.GeneratedAtUTC.UTC()
default:
return time.Time{}
}
}
// convertRuntimeToEvents turns runtime status, issues, non-active services,
// and degraded interfaces into events, timestamped with checked_at or the
// fallback collection time.
func convertRuntimeToEvents(runtime beeRuntime, fallback time.Time) []models.Event {
events := make([]models.Event, 0)
ts := runtime.CheckedAt
if ts.IsZero() {
ts = fallback
}
if status := strings.TrimSpace(runtime.Status); status != "" {
desc := "Bee runtime status: " + status
if networkStatus := strings.TrimSpace(runtime.NetworkStatus); networkStatus != "" {
desc += " (network: " + networkStatus + ")"
}
events = append(events, models.Event{
Timestamp: ts,
Source: "Bee Runtime",
EventType: "Runtime Status",
Severity: mapSeverity(status),
Description: desc,
})
}
for _, issue := range runtime.Issues {
desc := strings.TrimSpace(issue.Description)
if desc == "" {
desc = "Bee runtime issue"
}
events = append(events, models.Event{
Timestamp: ts,
Source: "Bee Runtime",
EventType: "Runtime Issue",
Severity: mapSeverity(issue.Severity),
Description: desc,
RawData: strings.TrimSpace(issue.Code),
})
}
for _, svc := range runtime.Services {
status := strings.TrimSpace(svc.Status)
if status == "" || strings.EqualFold(status, "active") {
continue
}
events = append(events, models.Event{
Timestamp: ts,
Source: "systemd",
EventType: "Service Status",
Severity: mapSeverity(status),
Description: fmt.Sprintf("%s is %s", strings.TrimSpace(svc.Name), status),
})
}
for _, iface := range runtime.Interfaces {
state := strings.TrimSpace(iface.State)
outcome := strings.TrimSpace(iface.Outcome)
if state == "" && outcome == "" {
continue
}
if strings.EqualFold(state, "up") && strings.EqualFold(outcome, "lease_acquired") {
continue
}
desc := fmt.Sprintf("interface %s state=%s outcome=%s", strings.TrimSpace(iface.Name), state, outcome)
events = append(events, models.Event{
Timestamp: ts,
Source: "network",
EventType: "Interface Status",
Severity: models.SeverityWarning,
Description: strings.TrimSpace(desc),
})
}
return events
}
// convertEventLogs maps hardware event-log entries to events, skipping
// entries without a message.
func convertEventLogs(items []beeEventLog) []models.Event {
events := make([]models.Event, 0, len(items))
for _, item := range items {
message := strings.TrimSpace(item.Message)
if message == "" {
continue
}
ts := parseEventTime(item.EventTime)
rawData := strings.TrimSpace(item.MessageID)
events = append(events, models.Event{
Timestamp: ts,
Source: firstNonEmpty(strings.TrimSpace(item.Source), "Reanimator"),
EventType: "Event Log",
Severity: mapSeverity(item.Severity),
Description: message,
RawData: rawData,
})
}
return events
}
func parseEventTime(raw string) time.Time {
raw = strings.TrimSpace(raw)
if raw == "" {
return time.Time{}
}
layouts := []string{time.RFC3339Nano, time.RFC3339}
for _, layout := range layouts {
if ts, err := time.Parse(layout, raw); err == nil {
return ts.UTC()
}
}
return time.Time{}
}
// flattenSensorGroups expands the grouped fan/power/temperature/other
// sensors into flat SensorReading values; a power sensor can yield up to
// three readings (W, V, A).
func flattenSensorGroups(groups beeSensorGroups) []models.SensorReading {
result := make([]models.SensorReading, 0, len(groups.Fans)+len(groups.Power)+len(groups.Temperatures)+len(groups.Other))
for _, fan := range groups.Fans {
result = append(result, models.SensorReading{
Name: sensorName(fan.Name, fan.Location),
Type: "fan",
Value: float64(fan.RPM),
Unit: "RPM",
Status: strings.TrimSpace(fan.Status),
})
}
for _, power := range groups.Power {
name := sensorName(power.Name, power.Location)
status := strings.TrimSpace(power.Status)
if power.PowerW != 0 {
result = append(result, models.SensorReading{
Name: name,
Type: "power",
Value: power.PowerW,
Unit: "W",
Status: status,
})
}
if power.VoltageV != 0 {
result = append(result, models.SensorReading{
Name: name + " Voltage",
Type: "voltage",
Value: power.VoltageV,
Unit: "V",
Status: status,
})
}
if power.CurrentA != 0 {
result = append(result, models.SensorReading{
Name: name + " Current",
Type: "current",
Value: power.CurrentA,
Unit: "A",
Status: status,
})
}
}
for _, temp := range groups.Temperatures {
result = append(result, models.SensorReading{
Name: sensorName(temp.Name, temp.Location),
Type: "temperature",
Value: temp.Celsius,
Unit: "C",
Status: strings.TrimSpace(temp.Status),
})
}
for _, other := range groups.Other {
result = append(result, models.SensorReading{
Name: sensorName(other.Name, other.Location),
Type: "other",
Value: other.Value,
Unit: strings.TrimSpace(other.Unit),
Status: strings.TrimSpace(other.Status),
})
}
return result
}
func sensorName(name, location string) string {
name = strings.TrimSpace(name)
location = strings.TrimSpace(location)
if name == "" {
return location
}
if location == "" {
return name
}
return name + " [" + location + "]"
}
// normalizePCIeDevices copies the slice and mirrors Slot/BDF: a slot that
// looks like a PCI address fills an empty BDF, and vice versa.
func normalizePCIeDevices(items []models.PCIeDevice) []models.PCIeDevice {
out := append([]models.PCIeDevice(nil), items...)
for i := range out {
slot := strings.TrimSpace(out[i].Slot)
if out[i].BDF == "" && looksLikeBDF(slot) {
out[i].BDF = slot
}
if out[i].Slot == "" && out[i].BDF != "" {
out[i].Slot = out[i].BDF
}
}
return out
}
// normalizeNetworkAdapters applies the same Slot/BDF mirroring as
// normalizePCIeDevices to network adapters.
func normalizeNetworkAdapters(items []models.NetworkAdapter) []models.NetworkAdapter {
out := append([]models.NetworkAdapter(nil), items...)
for i := range out {
slot := strings.TrimSpace(out[i].Slot)
if out[i].BDF == "" && looksLikeBDF(slot) {
out[i].BDF = slot
}
if out[i].Slot == "" && out[i].BDF != "" {
out[i].Slot = out[i].BDF
}
}
return out
}
// looksLikeBDF reports whether value is an extended PCI address in
// "domain:bus:device.function" form, e.g. "0000:05:00.0".
func looksLikeBDF(value string) bool {
value = strings.TrimSpace(value)
if len(value) != len("0000:00:00.0") {
return false
}
for i, r := range value {
switch i {
case 4, 7:
if r != ':' {
return false
}
case 10:
if r != '.' {
return false
}
default:
if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
return false
}
}
}
return true
}
// buildBoardFRU synthesizes a "System Board" FRU record from board info
// when the snapshot carries no FRU entries of its own.
func buildBoardFRU(board models.BoardInfo) (models.FRUInfo, bool) {
if strings.TrimSpace(board.SerialNumber) == "" &&
strings.TrimSpace(board.Manufacturer) == "" &&
strings.TrimSpace(board.ProductName) == "" &&
strings.TrimSpace(board.PartNumber) == "" {
return models.FRUInfo{}, false
}
return models.FRUInfo{
Description: "System Board",
Manufacturer: strings.TrimSpace(board.Manufacturer),
ProductName: strings.TrimSpace(board.ProductName),
SerialNumber: strings.TrimSpace(board.SerialNumber),
PartNumber: strings.TrimSpace(board.PartNumber),
}, true
}
// mapSeverity normalizes vendor status strings to a models.Severity,
// defaulting to Info.
func mapSeverity(raw string) models.Severity {
switch strings.ToLower(strings.TrimSpace(raw)) {
case "critical", "crit", "error", "failed", "failure":
return models.SeverityCritical
case "warning", "warn", "partial", "degraded", "inactive", "activating", "deactivating":
return models.SeverityWarning
default:
return models.SeverityInfo
}
}
func firstNonEmpty(values ...string) string {
for _, value := range values {
value = strings.TrimSpace(value)
if value != "" {
return value
}
}
return ""
}

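The BDF shape check used by the Slot/BDF normalizers above can be exercised as a standalone sketch; the function body is copied verbatim from the parser, only the imports and main are added.

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeBDF is copied from the parser above: an extended PCI address
// ("domain:bus:device.function", e.g. "0000:05:00.0") is exactly twelve
// characters with ':' at offsets 4 and 7, '.' at offset 10, and hex
// digits everywhere else.
func looksLikeBDF(value string) bool {
	value = strings.TrimSpace(value)
	if len(value) != len("0000:00:00.0") {
		return false
	}
	for i, r := range value {
		switch i {
		case 4, 7:
			if r != ':' {
				return false
			}
		case 10:
			if r != '.' {
				return false
			}
		default:
			if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeBDF("0000:05:00.0")) // true
	fmt.Println(looksLikeBDF("Slot 1"))       // false
}
```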

@@ -0,0 +1,219 @@
package easy_bee
import (
"testing"
"time"
"git.mchus.pro/mchus/logpile/internal/parser"
)
func TestDetectBeeSupportArchive(t *testing.T) {
p := &Parser{}
files := []parser.ExtractedFile{
{
Path: "bee-support-debian-20260325-162030/manifest.txt",
Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
},
{
Path: "bee-support-debian-20260325-162030/export/bee-audit.json",
Content: []byte(`{"hardware":{"board":{"serial_number":"SN-BEE-001"}}}`),
},
{
Path: "bee-support-debian-20260325-162030/export/runtime-health.json",
Content: []byte(`{"status":"PARTIAL"}`),
},
}
if got := p.Detect(files); got < 90 {
t.Fatalf("expected high confidence detect score, got %d", got)
}
}
func TestDetectRejectsNonBeeArchive(t *testing.T) {
p := &Parser{}
files := []parser.ExtractedFile{
{
Path: "random/manifest.txt",
Content: []byte("host=test\n"),
},
{
Path: "random/export/runtime-health.json",
Content: []byte(`{"status":"OK"}`),
},
}
if got := p.Detect(files); got != 0 {
t.Fatalf("expected detect score 0, got %d", got)
}
}
func TestParseBeeAuditSnapshot(t *testing.T) {
p := &Parser{}
files := []parser.ExtractedFile{
{
Path: "bee-support-debian-20260325-162030/manifest.txt",
Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
},
{
Path: "bee-support-debian-20260325-162030/export/bee-audit.json",
Content: []byte(`{
"source_type": "manual",
"target_host": "debian",
"collected_at": "2026-03-25T16:08:09Z",
"runtime": {
"status": "PARTIAL",
"checked_at": "2026-03-25T16:07:56Z",
"network_status": "OK",
"issues": [
{
"code": "nvidia_kernel_module_missing",
"severity": "warning",
"description": "NVIDIA kernel module is not loaded."
}
],
"services": [
{
"name": "bee-web",
"status": "inactive"
}
]
},
"hardware": {
"board": {
"manufacturer": "Supermicro",
"product_name": "AS-4124GQ-TNMI",
"serial_number": "S490387X4418273",
"part_number": "H12DGQ-NT6",
"uuid": "d868ae00-a61f-11ee-8000-7cc255e10309"
},
"firmware": [
{
"device_name": "BIOS",
"version": "2.8"
}
],
"cpus": [
{
"status": "OK",
"status_checked_at": "2026-03-25T16:08:09Z",
"socket": 1,
"model": "AMD EPYC 7763 64-Core Processor",
"cores": 64,
"threads": 128,
"frequency_mhz": 2450,
"max_frequency_mhz": 3525
}
],
"memory": [
{
"status": "OK",
"status_checked_at": "2026-03-25T16:08:09Z",
"slot": "P1-DIMMA1",
"location": "P0_Node0_Channel0_Dimm0",
"present": true,
"size_mb": 32768,
"type": "DDR4",
"max_speed_mhz": 3200,
"current_speed_mhz": 2933,
"manufacturer": "SK Hynix",
"serial_number": "80AD01224887286666",
"part_number": "HMA84GR7DJR4N-XN"
}
],
"storage": [
{
"status": "Unknown",
"status_checked_at": "2026-03-25T16:08:09Z",
"slot": "nvme0n1",
"type": "NVMe",
"model": "KCD6XLUL960G",
"serial_number": "2470A00XT5M8",
"interface": "NVMe",
"present": true
}
],
"pcie_devices": [
{
"status": "OK",
"status_checked_at": "2026-03-25T16:08:09Z",
"slot": "0000:05:00.0",
"vendor_id": 5555,
"device_id": 4123,
"device_class": "EthernetController",
"manufacturer": "Mellanox Technologies",
"model": "MT28908 Family [ConnectX-6]",
"link_width": 16,
"link_speed": "Gen4",
"max_link_width": 16,
"max_link_speed": "Gen4",
"mac_addresses": ["94:6d:ae:9a:75:4a"],
"present": true
}
],
"sensors": {
"power": [
{
"name": "PPT",
"location": "amdgpu-pci-1100",
"power_w": 95
}
],
"temperatures": [
{
"name": "Composite",
"location": "nvme-pci-0600",
"celsius": 28.85,
"threshold_warning_celsius": 72.85,
"threshold_critical_celsius": 81.85,
"status": "OK"
}
]
}
}
}`),
},
}
result, err := p.Parse(files)
if err != nil {
t.Fatalf("parse failed: %v", err)
}
if result.Hardware == nil {
t.Fatal("expected hardware to be populated")
}
if result.TargetHost != "debian" {
t.Fatalf("expected target host debian, got %q", result.TargetHost)
}
wantCollectedAt := time.Date(2026, 3, 25, 16, 8, 9, 0, time.UTC)
if !result.CollectedAt.Equal(wantCollectedAt) {
t.Fatalf("expected collected_at %s, got %s", wantCollectedAt, result.CollectedAt)
}
if result.Hardware.BoardInfo.SerialNumber != "S490387X4418273" {
t.Fatalf("unexpected board serial %q", result.Hardware.BoardInfo.SerialNumber)
}
if len(result.Hardware.CPUs) != 1 {
t.Fatalf("expected 1 cpu, got %d", len(result.Hardware.CPUs))
}
if len(result.Hardware.Memory) != 1 {
t.Fatalf("expected 1 dimm, got %d", len(result.Hardware.Memory))
}
if len(result.Hardware.Storage) != 1 {
t.Fatalf("expected 1 storage device, got %d", len(result.Hardware.Storage))
}
if len(result.Hardware.PCIeDevices) != 1 {
t.Fatalf("expected 1 pcie device, got %d", len(result.Hardware.PCIeDevices))
}
if result.Hardware.PCIeDevices[0].BDF != "0000:05:00.0" {
t.Fatalf("expected BDF to be normalized from slot, got %q", result.Hardware.PCIeDevices[0].BDF)
}
if len(result.Sensors) != 2 {
t.Fatalf("expected 2 flattened sensors, got %d", len(result.Sensors))
}
if len(result.Events) < 3 {
t.Fatalf("expected runtime events to be created, got %d", len(result.Events))
}
if len(result.FRU) == 0 {
t.Fatal("expected board FRU fallback to be populated")
}
}

File diff suppressed because it is too large


@@ -0,0 +1,316 @@
package hpe_ilo_ahs
import (
"bytes"
"compress/gzip"
"encoding/binary"
"os"
"path/filepath"
"testing"
"git.mchus.pro/mchus/logpile/internal/parser"
)
func TestDetectAHS(t *testing.T) {
p := &Parser{}
score := p.Detect([]parser.ExtractedFile{{
Path: "HPE_CZ2D1X0GS3_20260330.ahs",
Content: makeAHSArchive(t, []ahsTestEntry{{Name: "CUST_INFO.DAT", Payload: []byte("x")}}),
}})
if score < 80 {
t.Fatalf("expected high confidence detect, got %d", score)
}
}
func TestParseAHSInventory(t *testing.T) {
p := &Parser{}
content := makeAHSArchive(t, []ahsTestEntry{
{Name: "CUST_INFO.DAT", Payload: make([]byte, 16)},
{Name: "0000088-2026-03-30.zbb", Payload: gzipBytes(t, []byte(sampleInventoryBlob()))},
{Name: "bcert.pkg", Payload: []byte(sampleBCertBlob())},
})
result, err := p.Parse([]parser.ExtractedFile{{
Path: "HPE_CZ2D1X0GS3_20260330.ahs",
Content: content,
}})
if err != nil {
t.Fatalf("parse failed: %v", err)
}
if result.Hardware == nil {
t.Fatalf("expected hardware section")
}
board := result.Hardware.BoardInfo
if board.Manufacturer != "HPE" {
t.Fatalf("unexpected board manufacturer: %q", board.Manufacturer)
}
if board.ProductName != "ProLiant DL380 Gen11" {
t.Fatalf("unexpected board product: %q", board.ProductName)
}
if board.SerialNumber != "CZ2D1X0GS3" {
t.Fatalf("unexpected board serial: %q", board.SerialNumber)
}
if board.PartNumber != "P52560-421" {
t.Fatalf("unexpected board part number: %q", board.PartNumber)
}
if len(result.Hardware.CPUs) != 1 || result.Hardware.CPUs[0].Model != "Intel(R) Xeon(R) Gold 6444Y" {
t.Fatalf("unexpected CPUs: %+v", result.Hardware.CPUs)
}
if len(result.Hardware.Memory) != 1 {
t.Fatalf("expected one DIMM, got %d", len(result.Hardware.Memory))
}
if result.Hardware.Memory[0].PartNumber != "HMCG88AEBRA115N" {
t.Fatalf("unexpected DIMM part number: %q", result.Hardware.Memory[0].PartNumber)
}
if len(result.Hardware.NetworkAdapters) != 2 {
t.Fatalf("expected two network adapters, got %d", len(result.Hardware.NetworkAdapters))
}
if len(result.Hardware.PowerSupply) != 1 {
t.Fatalf("expected one PSU, got %d", len(result.Hardware.PowerSupply))
}
if result.Hardware.PowerSupply[0].SerialNumber != "5XUWB0C4DJG4BV" {
t.Fatalf("unexpected PSU serial: %q", result.Hardware.PowerSupply[0].SerialNumber)
}
if result.Hardware.PowerSupply[0].Firmware != "2.00" {
t.Fatalf("unexpected PSU firmware: %q", result.Hardware.PowerSupply[0].Firmware)
}
if len(result.Hardware.Storage) != 1 {
t.Fatalf("expected one physical drive, got %d", len(result.Hardware.Storage))
}
drive := result.Hardware.Storage[0]
if drive.Model != "SAMSUNGMZ7L3480HCHQ-00A07" {
t.Fatalf("unexpected drive model: %q", drive.Model)
}
if drive.SerialNumber != "S664NC0Y502720" {
t.Fatalf("unexpected drive serial: %q", drive.SerialNumber)
}
if drive.SizeGB != 480 {
t.Fatalf("unexpected drive size: %d", drive.SizeGB)
}
if len(result.Hardware.Firmware) == 0 {
t.Fatalf("expected firmware inventory")
}
foundILO := false
foundControllerFW := false
foundNICFW := false
foundBackplaneFW := false
for _, item := range result.Hardware.Firmware {
if item.DeviceName == "iLO 6" && item.Version == "v1.63p20" {
foundILO = true
}
if item.DeviceName == "HPE MR408i-o Gen11" && item.Version == "52.26.3-5379" {
foundControllerFW = true
}
if item.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && item.Version == "20.28.41" {
foundNICFW = true
}
if item.DeviceName == "8 SFF 24G x1NVMe/SAS UBM3 BC BP" && item.Version == "1.24" {
foundBackplaneFW = true
}
}
if !foundILO {
t.Fatalf("expected iLO firmware entry")
}
if !foundControllerFW {
t.Fatalf("expected controller firmware entry")
}
if !foundNICFW {
t.Fatalf("expected broadcom firmware entry")
}
if !foundBackplaneFW {
t.Fatalf("expected backplane firmware entry")
}
broadcomFound := false
backplaneFound := false
for _, nic := range result.Hardware.NetworkAdapters {
if nic.SerialNumber == "1CH0150001" && nic.Firmware == "20.28.41" {
broadcomFound = true
}
}
for _, dev := range result.Hardware.Devices {
if dev.DeviceClass == "storage_backplane" && dev.Firmware == "1.24" {
backplaneFound = true
}
}
if !broadcomFound {
t.Fatalf("expected broadcom adapter firmware to be enriched")
}
if !backplaneFound {
t.Fatalf("expected backplane canonical device")
}
if len(result.Hardware.Devices) < 6 {
t.Fatalf("expected canonical devices, got %d", len(result.Hardware.Devices))
}
if len(result.Events) == 0 {
t.Fatalf("expected parsed events")
}
}
func TestParseExampleAHS(t *testing.T) {
path := filepath.Join("..", "..", "..", "..", "example", "HPE_CZ2D1X0GS3_20260330.ahs")
content, err := os.ReadFile(path)
if err != nil {
t.Skipf("example fixture unavailable: %v", err)
}
p := &Parser{}
result, err := p.Parse([]parser.ExtractedFile{{
Path: filepath.Base(path),
Content: content,
}})
if err != nil {
t.Fatalf("parse example failed: %v", err)
}
if result.Hardware == nil {
t.Fatalf("expected hardware section")
}
board := result.Hardware.BoardInfo
if board.ProductName != "ProLiant DL380 Gen11" {
t.Fatalf("unexpected board product: %q", board.ProductName)
}
if board.SerialNumber != "CZ2D1X0GS3" {
t.Fatalf("unexpected board serial: %q", board.SerialNumber)
}
if len(result.Hardware.Storage) < 2 {
t.Fatalf("expected at least two drives, got %d", len(result.Hardware.Storage))
}
if len(result.Hardware.PowerSupply) != 2 {
t.Fatalf("expected exactly two PSUs, got %d: %+v", len(result.Hardware.PowerSupply), result.Hardware.PowerSupply)
}
foundController := false
foundBackplaneFW := false
foundNICFW := false
for _, device := range result.Hardware.Devices {
if device.Model == "HPE MR408i-o Gen11" && device.SerialNumber == "PXSFQ0BBIJY3B3" {
foundController = true
}
if device.DeviceClass == "storage_backplane" && device.Firmware == "1.24" {
foundBackplaneFW = true
}
}
if !foundController {
t.Fatalf("expected MR408i-o controller in canonical devices")
}
for _, fw := range result.Hardware.Firmware {
if fw.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && fw.Version == "20.28.41" {
foundNICFW = true
}
}
if !foundBackplaneFW {
t.Fatalf("expected backplane device in canonical devices")
}
if !foundNICFW {
t.Fatalf("expected broadcom firmware from bcert/pkg lockdown")
}
}
type ahsTestEntry struct {
Name string
Payload []byte
Flag uint32
}
func makeAHSArchive(t *testing.T, entries []ahsTestEntry) []byte {
t.Helper()
var buf bytes.Buffer
for _, entry := range entries {
header := make([]byte, ahsHeaderSize)
copy(header[:4], []byte("ABJR"))
binary.LittleEndian.PutUint16(header[4:6], 0x0300)
binary.LittleEndian.PutUint16(header[6:8], 0x0002)
binary.LittleEndian.PutUint32(header[8:12], uint32(len(entry.Payload)))
flag := entry.Flag
if flag == 0 {
flag = 0x80000002
if len(entry.Payload) >= 2 && entry.Payload[0] == 0x1f && entry.Payload[1] == 0x8b {
flag = 0x80000001
}
}
binary.LittleEndian.PutUint32(header[16:20], flag)
copy(header[20:52], []byte(entry.Name))
buf.Write(header)
buf.Write(entry.Payload)
}
return buf.Bytes()
}
func gzipBytes(t *testing.T, payload []byte) []byte {
t.Helper()
var buf bytes.Buffer
zw := gzip.NewWriter(&buf)
if _, err := zw.Write(payload); err != nil {
t.Fatalf("gzip payload: %v", err)
}
if err := zw.Close(); err != nil {
t.Fatalf("close gzip writer: %v", err)
}
return buf.Bytes()
}
func sampleInventoryBlob() string {
return stringsJoin(
"iLO 6 v1.63p20 built on Sep 13 2024",
"HPE",
"ProLiant DL380 Gen11",
"CZ2D1X0GS3",
"P52560-421",
"Proc 1",
"Intel(R) Corporation",
"Intel(R) Xeon(R) Gold 6444Y",
"PROC 1 DIMM 3",
"Hynix",
"HMCG88AEBRA115N",
"2B5F92C6",
"Power Supply 1",
"5XUWB0C4DJG4BV",
"P03178-B21",
"PciRoot(0x1)/Pci(0x5,0x0)/Pci(0x0,0x0)",
"NIC.Slot.1.1",
"Network Controller",
"Slot 1",
"MCX512A-ACAT",
"MT2230478382",
"PciRoot(0x3)/Pci(0x1,0x0)/Pci(0x0,0x0)",
"OCP.Slot.15.1",
"Broadcom NetXtreme Gigabit Ethernet - NIC",
"OCP Slot 15",
"P51183-001",
"1CH0150001",
"20.28.41",
"System ROM",
"v2.22 (06/19/2024)",
"03/30/2026 09:47:33",
"iLO network link down.",
`{"@odata.id":"/redfish/v1/Systems/1/Storage/DE00A000/Controllers/0","@odata.type":"#StorageController.v1_7_0.StorageController","Id":"0","Name":"HPE MR408i-o Gen11","FirmwareVersion":"52.26.3-5379","Manufacturer":"HPE","Model":"HPE MR408i-o Gen11","PartNumber":"P58543-001","SKU":"P58335-B21","SerialNumber":"PXSFQ0BBIJY3B3","Status":{"State":"Enabled","Health":"OK"},"Location":{"PartLocation":{"ServiceLabel":"Slot=14","LocationType":"Slot","LocationOrdinalValue":14}},"PCIeInterface":{"PCIeType":"Gen4","LanesInUse":8}}`,
`{"@odata.id":"/redfish/v1/Fabrics/DE00A000","@odata.type":"#Fabric.v1_3_0.Fabric","Id":"DE00A000","Name":"8 SFF 24G x1NVMe/SAS UBM3 BC BP","FabricType":"MultiProtocol"}`,
`{"@odata.id":"/redfish/v1/Fabrics/DE00A000/Switches/1","@odata.type":"#Switch.v1_9_1.Switch","Id":"1","Name":"Direct Attached","Model":"UBM3","FirmwareVersion":"1.24","SupportedProtocols":["SAS","SATA","NVMe"],"SwitchType":"MultiProtocol","Status":{"State":"Enabled","Health":"OK"}}`,
`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/0","@odata.type":"#Drive.v1_17_0.Drive","Id":"0","Name":"480GB 6G SATA SSD","Status":{"State":"StandbyOffline","Health":"OK"},"PhysicalLocation":{"PartLocation":{"ServiceLabel":"Slot=14:Port=1:Box=3:Bay=1","LocationType":"Bay","LocationOrdinalValue":1}},"CapacityBytes":480103981056,"MediaType":"SSD","Model":"SAMSUNGMZ7L3480HCHQ-00A07","Protocol":"SATA","Revision":"JXTC604Q","SerialNumber":"S664NC0Y502720","PredictedMediaLifeLeftPercent":100}`,
`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/64515","@odata.type":"#Drive.v1_17_0.Drive","Id":"64515","Name":"Empty Bay","Status":{"State":"Absent","Health":"OK"}}`,
)
}
func sampleBCertBlob() string {
return `<BC><MfgRecord><PowerSupplySlot id="0"><Present>Yes</Present><SerialNumber>5XUWB0C4DJG4BV</SerialNumber><FirmwareVersion>2.00</FirmwareVersion><SparePartNumber>P44412-001</SparePartNumber></PowerSupplySlot><FirmwareLockdown><SystemProgrammableLogicDevice>0x12</SystemProgrammableLogicDevice><ServerPlatformServicesSPSFirmware>6.1.4.47</ServerPlatformServicesSPSFirmware><STMicroGen11TPM>1.512</STMicroGen11TPM><HPEMR408i-oGen11>52.26.3-5379</HPEMR408i-oGen11><UBM3>UBM3/1.24</UBM3><BCM57191Gb4pBASE-TOCP3>20.28.41</BCM57191Gb4pBASE-TOCP3></FirmwareLockdown></MfgRecord></BC>`
}
func stringsJoin(parts ...string) string {
return string(bytes.Join(func() [][]byte {
out := make([][]byte, 0, len(parts))
for _, part := range parts {
out = append(out, []byte(part))
}
return out
}(), []byte{0}))
}

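The entry-flag heuristic in makeAHSArchive — gzip payloads are tagged differently from raw ones — can be shown in isolation:

```go
package main

import "fmt"

// entryFlag mirrors the heuristic in makeAHSArchive: payloads that start
// with the gzip magic bytes 0x1f 0x8b are tagged 0x80000001, everything
// else 0x80000002.
func entryFlag(payload []byte) uint32 {
	if len(payload) >= 2 && payload[0] == 0x1f && payload[1] == 0x8b {
		return 0x80000001
	}
	return 0x80000002
}

func main() {
	fmt.Printf("%#x\n", entryFlag([]byte{0x1f, 0x8b, 0x08})) // 0x80000001
	fmt.Printf("%#x\n", entryFlag([]byte("plain text")))     // 0x80000002
}
```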

@@ -5,7 +5,9 @@ package vendors
import (
// Import vendor modules to trigger their init() registration
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
+_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/easy_bee"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/h3c"
+_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe_ilo_ahs"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"

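The blank imports above work because each vendor package registers itself in an init function; a minimal single-file sketch of the pattern follows. Register, Parser, and Name here are simplified stand-ins for the real internal/parser API, which the diff does not show.

```go
package main

import "fmt"

// Parser and Register are illustrative stand-ins for the real registry
// in internal/parser; the names are assumptions for this sketch only.
type Parser interface{ Name() string }

var registry []Parser

func Register(p Parser) { registry = append(registry, p) }

type easyBee struct{}

func (easyBee) Name() string { return "easy_bee" }

// In the real tree this init lives in the vendor's own package; the
// blank import in vendors.go exists solely to make it run.
func init() { Register(easyBee{}) }

func main() {
	for _, p := range registry {
		fmt.Println(p.Name()) // prints "easy_bee"
	}
}
```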

@@ -10,6 +10,33 @@ import (
"git.mchus.pro/mchus/logpile/internal/parser"
)
type xfusionNICCard struct {
Slot string
Model string
ProductName string
Vendor string
VendorID int
DeviceID int
BDF string
SerialNumber string
PartNumber string
}
type xfusionNetcardPort struct {
BDF string
MAC string
ActualMAC string
}
type xfusionNetcardSnapshot struct {
Timestamp time.Time
Slot string
ProductName string
Manufacturer string
Firmware string
Ports []xfusionNetcardPort
}
// ── FRU ──────────────────────────────────────────────────────────────────────
// parseFRUInfo parses fruinfo.txt and populates result.FRU and result.Hardware.BoardInfo.
@@ -232,15 +259,15 @@ func parseCPUInfo(content []byte) []models.CPU {
}
cpus = append(cpus, models.CPU{
-Socket: socketNum,
-Model: model,
-Cores: cores,
-Threads: threads,
-L1CacheKB: l1,
-L2CacheKB: l2,
-L3CacheKB: l3,
-Status: "ok",
+Socket: socketNum,
+Model: model,
+Cores: cores,
+Threads: threads,
+L1CacheKB: l1,
+L2CacheKB: l2,
+L3CacheKB: l3,
+SerialNumber: sn,
+Status: "ok",
})
}
return cpus
@@ -338,9 +365,9 @@ func parseMemInfo(content []byte) []models.MemoryDIMM {
// ── Card Info (GPU + NIC) ─────────────────────────────────────────────────────
-// parseCardInfo parses card_info file, extracting GPU and NIC entries.
-func parseCardInfo(content []byte) (gpus []models.GPU, nics []models.NIC) {
+// parseCardInfo parses card_info file, extracting GPU and OCP NIC card inventory.
+// The file has named sections ("GPU Card Info", "OCP Card Info", etc.) each with a pipe-table.
+func parseCardInfo(content []byte) (gpus []models.GPU, nicCards []xfusionNICCard) {
sections := splitPipeSections(content)
// Build BDF and VendorID/DeviceID map from PCIe Card Info: slot → info
@@ -396,17 +423,22 @@ func parseCardInfo(content []byte) (gpus []models.GPU, nics []models.NIC) {
}
// OCP Card Info: NIC cards
-for i, row := range sections["ocp card info"] {
-desc := strings.TrimSpace(row["card desc"])
-sn := strings.TrimSpace(row["serialnumber"])
-nics = append(nics, models.NIC{
-Name: fmt.Sprintf("OCP%d", i+1),
-Model: desc,
-SerialNumber: sn,
+for _, row := range sections["ocp card info"] {
+slot := strings.TrimSpace(row["slot"])
+pcie := slotPCIe[slot]
+nicCards = append(nicCards, xfusionNICCard{
+Slot: slot,
+Model: strings.TrimSpace(row["card desc"]),
+ProductName: strings.TrimSpace(row["card desc"]),
+VendorID: parseHexInt(row["vender id"]),
+DeviceID: parseHexInt(row["device id"]),
+BDF: pcie.bdf,
+SerialNumber: strings.TrimSpace(row["serialnumber"]),
+PartNumber: strings.TrimSpace(row["partnum"]),
})
}
-return gpus, nics
+return gpus, nicCards
}
// splitPipeSections parses a multi-section file where each section starts with a
@@ -462,6 +494,301 @@ func parseHexInt(s string) int {
return int(n)
}
// parseNetcardInfo parses the netcard_info dump: repeated timestamped
// snapshots of NIC cards and their ports. For each slot only the richest
// snapshot is kept, preserving input order.
func parseNetcardInfo(content []byte) []xfusionNetcardSnapshot {
if len(content) == 0 {
return nil
}
var snapshots []xfusionNetcardSnapshot
var current *xfusionNetcardSnapshot
var currentPort *xfusionNetcardPort
flushPort := func() {
if current == nil || currentPort == nil {
return
}
current.Ports = append(current.Ports, *currentPort)
currentPort = nil
}
flushSnapshot := func() {
if current == nil || !current.hasData() {
return
}
flushPort()
snapshots = append(snapshots, *current)
current = nil
}
for _, rawLine := range strings.Split(string(content), "\n") {
line := strings.TrimSpace(rawLine)
if line == "" {
flushPort()
continue
}
if ts, ok := parseXFusionUTCTimestamp(line); ok {
if current == nil {
current = &xfusionNetcardSnapshot{Timestamp: ts}
continue
}
if current.hasData() {
flushSnapshot()
current = &xfusionNetcardSnapshot{Timestamp: ts}
continue
}
current.Timestamp = ts
continue
}
if current == nil {
current = &xfusionNetcardSnapshot{}
}
if port := parseNetcardPortHeader(line); port != nil {
flushPort()
currentPort = port
continue
}
if currentPort != nil {
if value, ok := parseSimpleKV(line, "MacAddr"); ok {
currentPort.MAC = value
continue
}
if value, ok := parseSimpleKV(line, "ActualMac"); ok {
currentPort.ActualMAC = value
continue
}
}
if value, ok := parseSimpleKV(line, "ProductName"); ok {
current.ProductName = value
continue
}
if value, ok := parseSimpleKV(line, "Manufacture"); ok {
current.Manufacturer = value
continue
}
if value, ok := parseSimpleKV(line, "FirmwareVersion"); ok {
current.Firmware = value
continue
}
if value, ok := parseSimpleKV(line, "SlotId"); ok {
current.Slot = value
}
}
flushSnapshot()
bestIndexBySlot := make(map[string]int)
for i, snapshot := range snapshots {
slot := strings.TrimSpace(snapshot.Slot)
if slot == "" {
continue
}
prevIdx, exists := bestIndexBySlot[slot]
if !exists || snapshot.isBetterThan(snapshots[prevIdx]) {
bestIndexBySlot[slot] = i
}
}
ordered := make([]xfusionNetcardSnapshot, 0, len(bestIndexBySlot))
for i, snapshot := range snapshots {
slot := strings.TrimSpace(snapshot.Slot)
bestIdx, ok := bestIndexBySlot[slot]
if !ok || bestIdx != i {
continue
}
ordered = append(ordered, snapshot)
delete(bestIndexBySlot, slot)
}
return ordered
}
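The tail of parseNetcardInfo keeps the single best snapshot per slot. A condensed, runnable sketch of that selection — the `snap` type and its flat `Score` field are hypothetical stand-ins for the real snapshot struct and its score() heuristic (ports + firmware + MACs):

```go
package main

import "fmt"

// snap is a stand-in for xfusionNetcardSnapshot; Score replaces score().
type snap struct {
	Slot  string
	Score int
}

// bestPerSlot keeps the highest-scoring snapshot per slot. Output order
// follows the position of each winning snapshot in the input, and the
// delete ensures each slot is emitted at most once.
func bestPerSlot(snaps []snap) []snap {
	best := make(map[string]int)
	for i, s := range snaps {
		if prev, ok := best[s.Slot]; !ok || s.Score > snaps[prev].Score {
			best[s.Slot] = i
		}
	}
	out := make([]snap, 0, len(best))
	for i, s := range snaps {
		if idx, ok := best[s.Slot]; ok && idx == i {
			out = append(out, s)
			delete(best, s.Slot)
		}
	}
	return out
}

func main() {
	// Slot "1" appears twice: the later, richer snapshot wins.
	fmt.Println(bestPerSlot([]snap{{"1", 2}, {"2", 5}, {"1", 14}}))
}
```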
func mergeNetworkAdapters(cards []xfusionNICCard, snapshots []xfusionNetcardSnapshot) ([]models.NetworkAdapter, []models.NIC) {
bySlotCard := make(map[string]xfusionNICCard, len(cards))
bySlotSnapshot := make(map[string]xfusionNetcardSnapshot, len(snapshots))
orderedSlots := make([]string, 0, len(cards)+len(snapshots))
seenSlots := make(map[string]struct{}, len(cards)+len(snapshots))
for _, card := range cards {
slot := strings.TrimSpace(card.Slot)
if slot == "" {
continue
}
bySlotCard[slot] = card
if _, seen := seenSlots[slot]; !seen {
orderedSlots = append(orderedSlots, slot)
seenSlots[slot] = struct{}{}
}
}
for _, snapshot := range snapshots {
slot := strings.TrimSpace(snapshot.Slot)
if slot == "" {
continue
}
bySlotSnapshot[slot] = snapshot
if _, seen := seenSlots[slot]; !seen {
orderedSlots = append(orderedSlots, slot)
seenSlots[slot] = struct{}{}
}
}
adapters := make([]models.NetworkAdapter, 0, len(orderedSlots))
legacyNICs := make([]models.NIC, 0, len(orderedSlots))
for _, slot := range orderedSlots {
card := bySlotCard[slot]
snapshot := bySlotSnapshot[slot]
model := firstNonEmpty(card.Model, snapshot.ProductName)
description := ""
if !strings.EqualFold(strings.TrimSpace(model), strings.TrimSpace(snapshot.ProductName)) {
description = strings.TrimSpace(snapshot.ProductName)
}
macs := snapshot.macAddresses()
bdf := firstNonEmpty(snapshot.primaryBDF(), card.BDF)
firmware := normalizeXFusionValue(snapshot.Firmware)
manufacturer := firstNonEmpty(snapshot.Manufacturer, card.Vendor)
portCount := len(snapshot.Ports)
if portCount == 0 && len(macs) > 0 {
portCount = len(macs)
}
if portCount == 0 {
portCount = 1
}
adapters = append(adapters, models.NetworkAdapter{
Slot: slot,
Location: "OCP",
Present: true,
BDF: bdf,
Model: model,
Description: description,
Vendor: manufacturer,
VendorID: card.VendorID,
DeviceID: card.DeviceID,
SerialNumber: card.SerialNumber,
PartNumber: card.PartNumber,
Firmware: firmware,
PortCount: portCount,
PortType: "ethernet",
MACAddresses: macs,
Status: "ok",
})
legacyNICs = append(legacyNICs, models.NIC{
Name: fmt.Sprintf("OCP%s", slot),
Model: model,
Description: description,
MACAddress: firstNonEmpty(macs...),
SerialNumber: card.SerialNumber,
})
}
return adapters, legacyNICs
}
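mergeNetworkAdapters joins two sources keyed by slot: card_info (identity — serial, part number, vendor/device IDs) and netcard_info (runtime — firmware, MACs, per-port BDFs), with slots ordered by first appearance across cards then snapshots. A minimal sketch of that union-and-precedence pattern, with made-up map data:

```go
package main

import "fmt"

// firstNonEmpty is the precedence helper the merge uses throughout:
// the first non-blank value wins.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	// Hypothetical per-slot views of the two files: card_info knows the
	// marketing model, netcard_info knows the board product name.
	cardModel := map[string]string{"1": "MT2894 Family [ConnectX-6 Lx]"}
	snapModel := map[string]string{"1": "XC385", "2": "XC385"}

	// Union of slots, first-seen order: cards first, then snapshots.
	var slots []string
	seen := map[string]bool{}
	for _, src := range [][]string{{"1"}, {"1", "2"}} {
		for _, slot := range src {
			if !seen[slot] {
				seen[slot] = true
				slots = append(slots, slot)
			}
		}
	}
	for _, slot := range slots {
		fmt.Println(slot, firstNonEmpty(cardModel[slot], snapModel[slot]))
	}
}
```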
func parseXFusionUTCTimestamp(line string) (time.Time, bool) {
ts, err := time.Parse("2006-01-02 15:04:05 MST", strings.TrimSpace(line))
if err != nil {
return time.Time{}, false
}
return ts, true
}
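Go layouts use the fixed reference time "Mon Jan 2 15:04:05 MST 2006", so the layout above matches snapshot header lines like "2026-02-04 03:54:06 UTC". A self-contained check:

```go
package main

import (
	"fmt"
	"time"
)

// parseUTCStamp mirrors parseXFusionUTCTimestamp: any non-matching line
// (a key:value pair, a port header) simply returns ok=false.
func parseUTCStamp(line string) (time.Time, bool) {
	ts, err := time.Parse("2006-01-02 15:04:05 MST", line)
	if err != nil {
		return time.Time{}, false
	}
	return ts, true
}

func main() {
	ts, ok := parseUTCStamp("2026-02-04 03:54:06 UTC")
	fmt.Println(ok, ts.UTC().Format(time.RFC3339))
	_, ok = parseUTCStamp("Port0 BDF:0000:27:00.0")
	fmt.Println(ok)
}
```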
func parseNetcardPortHeader(line string) *xfusionNetcardPort {
fields := strings.Fields(strings.TrimSpace(line))
if len(fields) < 2 || !strings.HasPrefix(strings.ToLower(fields[0]), "port") {
return nil
}
joined := strings.Join(fields[1:], " ")
if !strings.HasPrefix(strings.ToLower(joined), "bdf:") {
return nil
}
return &xfusionNetcardPort{BDF: strings.TrimSpace(joined[len("BDF:"):])}
}
func parseSimpleKV(line, key string) (string, bool) {
idx := strings.Index(line, ":")
if idx < 0 {
return "", false
}
gotKey := strings.TrimSpace(line[:idx])
if !strings.EqualFold(gotKey, key) {
return "", false
}
return strings.TrimSpace(line[idx+1:]), true
}
func normalizeXFusionValue(value string) string {
value = strings.TrimSpace(value)
switch strings.ToUpper(value) {
case "", "N/A", "NA", "UNKNOWN":
return ""
default:
return value
}
}
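parseSimpleKV splits on the first colon only, which is what lets MAC values like "MacAddr:44:1A:4C:16:E8:03" survive intact, and normalizeXFusionValue maps BMC placeholder strings to empty. A sketch of both helpers together:

```go
package main

import (
	"fmt"
	"strings"
)

// kv splits a "Key :Value" line on the first colon and matches the key
// case-insensitively, as parseSimpleKV above does.
func kv(line, key string) (string, bool) {
	idx := strings.Index(line, ":")
	if idx < 0 {
		return "", false
	}
	if !strings.EqualFold(strings.TrimSpace(line[:idx]), key) {
		return "", false
	}
	return strings.TrimSpace(line[idx+1:]), true
}

// normalize maps xFusion placeholder values to the empty string.
func normalize(v string) string {
	v = strings.TrimSpace(v)
	switch strings.ToUpper(v) {
	case "", "N/A", "NA", "UNKNOWN":
		return ""
	}
	return v
}

func main() {
	v, ok := kv("MacAddr:44:1A:4C:16:E8:03", "macaddr")
	fmt.Println(ok, v) // only the FIRST colon splits; the MAC keeps its colons
	fmt.Printf("%q\n", normalize(" N/A "))
}
```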
func (s xfusionNetcardSnapshot) hasData() bool {
return strings.TrimSpace(s.Slot) != "" ||
strings.TrimSpace(s.ProductName) != "" ||
strings.TrimSpace(s.Manufacturer) != "" ||
strings.TrimSpace(s.Firmware) != "" ||
len(s.Ports) > 0
}
func (s xfusionNetcardSnapshot) score() int {
score := len(s.Ports)
if normalizeXFusionValue(s.Firmware) != "" {
score += 10
}
score += len(s.macAddresses()) * 2
return score
}
func (s xfusionNetcardSnapshot) isBetterThan(other xfusionNetcardSnapshot) bool {
if s.score() != other.score() {
return s.score() > other.score()
}
if !s.Timestamp.Equal(other.Timestamp) {
return s.Timestamp.After(other.Timestamp)
}
return len(s.Ports) > len(other.Ports)
}
func (s xfusionNetcardSnapshot) primaryBDF() string {
for _, port := range s.Ports {
if bdf := strings.TrimSpace(port.BDF); bdf != "" {
return bdf
}
}
return ""
}
func (s xfusionNetcardSnapshot) macAddresses() []string {
out := make([]string, 0, len(s.Ports))
seen := make(map[string]struct{}, len(s.Ports))
for _, port := range s.Ports {
for _, candidate := range []string{port.ActualMAC, port.MAC} {
mac := normalizeMAC(candidate)
if mac == "" {
continue
}
if _, exists := seen[mac]; exists {
continue
}
seen[mac] = struct{}{}
out = append(out, mac)
break
}
}
return out
}
func normalizeMAC(value string) string {
value = strings.ToUpper(strings.TrimSpace(value))
switch value {
case "", "N/A", "NA", "UNKNOWN", "00:00:00:00:00:00":
return ""
default:
return value
}
}
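Per port, macAddresses prefers ActualMac over MacAddr — as the test fixture below shows, iBMC can report an all-zero MacAddr while ActualMac holds the burned-in address — then deduplicates across ports. An isolated sketch with a stand-in `port` type:

```go
package main

import (
	"fmt"
	"strings"
)

type port struct{ MAC, ActualMAC string }

// normalizeMAC drops placeholder and all-zero MACs, uppercasing the rest.
func normalizeMAC(v string) string {
	v = strings.ToUpper(strings.TrimSpace(v))
	switch v {
	case "", "N/A", "NA", "UNKNOWN", "00:00:00:00:00:00":
		return ""
	}
	return v
}

// macAddresses takes the first usable candidate per port (ActualMac,
// then MacAddr) and skips MACs already seen on earlier ports.
func macAddresses(ports []port) []string {
	out := make([]string, 0, len(ports))
	seen := make(map[string]struct{})
	for _, p := range ports {
		for _, cand := range []string{p.ActualMAC, p.MAC} {
			mac := normalizeMAC(cand)
			if mac == "" {
				continue
			}
			if _, ok := seen[mac]; ok {
				continue
			}
			seen[mac] = struct{}{}
			out = append(out, mac)
			break
		}
	}
	return out
}

func main() {
	fmt.Println(macAddresses([]port{
		{MAC: "44:1a:4c:16:e8:03", ActualMAC: "44:1A:4C:16:E8:03"},
		{MAC: "00:00:00:00:00:00", ActualMAC: "44:1A:4C:16:E8:04"},
	}))
}
```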
// ── PSU ───────────────────────────────────────────────────────────────────────
// parsePSUInfo parses the pipe-delimited psu_info.txt.
@@ -525,6 +852,11 @@ func parsePSUInfo(content []byte) []models.PSU {
func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
// File may contain multiple controller blocks; parse key:value pairs from each.
// We only look at the first occurrence of each key (first controller).
seen := make(map[string]struct{}, len(result.Hardware.Firmware))
for _, fw := range result.Hardware.Firmware {
key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
seen[key] = struct{}{}
}
text := string(content)
blocks := strings.Split(text, "RAID Controller #")
for _, block := range blocks[1:] { // skip pre-block preamble
@@ -532,7 +864,7 @@ func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
name := firstNonEmpty(fields["Component Name"], fields["Controller Name"], fields["Controller Type"])
firmware := fields["Firmware Version"]
if name != "" && firmware != "" {
result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
appendXFusionFirmware(result, seen, models.FirmwareInfo{
DeviceName: name,
Description: fields["Controller Name"],
Version: firmware,
@@ -541,6 +873,86 @@ func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
}
}
func parseAppRevision(content []byte, result *models.AnalysisResult) {
type firmwareLine struct {
deviceName string
description string
buildKey string
}
known := map[string]firmwareLine{
"Active iBMC Version": {deviceName: "iBMC", description: "active iBMC", buildKey: "Active iBMC Built"},
"Active BIOS Version": {deviceName: "BIOS", description: "active BIOS", buildKey: "Active BIOS Built"},
"CPLD Version": {deviceName: "CPLD", description: "mainboard CPLD"},
"SDK Version": {deviceName: "SDK", description: "iBMC SDK", buildKey: "SDK Built"},
"Active Uboot Version": {deviceName: "U-Boot", description: "active U-Boot"},
"Active Secure Bootloader Version": {deviceName: "Secure Bootloader", description: "active secure bootloader"},
"Active Secure Firmware Version": {deviceName: "Secure Firmware", description: "active secure firmware"},
}
values := parseAlignedKeyValues(content)
if result.Hardware.BoardInfo.ProductName == "" {
if productName := values["Product Name"]; productName != "" {
result.Hardware.BoardInfo.ProductName = productName
}
}
seen := make(map[string]struct{}, len(result.Hardware.Firmware))
for _, fw := range result.Hardware.Firmware {
key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
seen[key] = struct{}{}
}
for key, meta := range known {
version := normalizeXFusionValue(values[key])
if version == "" {
continue
}
appendXFusionFirmware(result, seen, models.FirmwareInfo{
DeviceName: meta.deviceName,
Description: meta.description,
Version: version,
BuildTime: normalizeXFusionValue(values[meta.buildKey]),
})
}
}
func parseAlignedKeyValues(content []byte) map[string]string {
values := make(map[string]string)
for _, rawLine := range strings.Split(string(content), "\n") {
line := strings.TrimRight(rawLine, "\r")
if !strings.Contains(line, ":") {
continue
}
idx := strings.Index(line, ":")
if idx < 0 {
continue
}
key := strings.TrimRight(line[:idx], " \t")
value := strings.TrimSpace(line[idx+1:])
if key == "" || value == "" || values[key] != "" {
continue
}
values[key] = value
}
return values
}
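parseAlignedKeyValues handles the column-aligned "Key:   Value" layout of app_revision.txt, trimming trailing padding from keys and keeping only the first occurrence of each key. A compact copy of that logic:

```go
package main

import (
	"fmt"
	"strings"
)

// alignedKV parses "Key:   Value" lines; later duplicates of a key are
// ignored, so the first occurrence wins.
func alignedKV(content string) map[string]string {
	values := make(map[string]string)
	for _, raw := range strings.Split(content, "\n") {
		line := strings.TrimRight(raw, "\r")
		idx := strings.Index(line, ":")
		if idx < 0 {
			continue
		}
		key := strings.TrimRight(line[:idx], " \t")
		val := strings.TrimSpace(line[idx+1:])
		if key == "" || val == "" || values[key] != "" {
			continue
		}
		values[key] = val
	}
	return values
}

func main() {
	sample := "Active iBMC Version: (U68)3.08.05.85\nProduct Name: G5500 V7\nProduct Name: duplicate"
	v := alignedKV(sample)
	fmt.Println(v["Active iBMC Version"], "/", v["Product Name"])
}
```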
func appendXFusionFirmware(result *models.AnalysisResult, seen map[string]struct{}, fw models.FirmwareInfo) {
if result == nil || result.Hardware == nil {
return
}
key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
if key == "" {
return
}
if _, exists := seen[key]; exists {
return
}
seen[key] = struct{}{}
result.Hardware.Firmware = append(result.Hardware.Firmware, fw)
}
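The dedup key joins DeviceName, Version, and Description with NUL bytes, so field boundaries cannot blur ("a"+"bc" never collides with "ab"+"c"), and lowercases the result for case-insensitive matching. A sketch of just the key function, with a local stand-in for models.FirmwareInfo:

```go
package main

import (
	"fmt"
	"strings"
)

type fwInfo struct{ DeviceName, Version, Description string }

// fwKey builds the same dedup key appendXFusionFirmware uses.
func fwKey(fw fwInfo) string {
	return strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
}

func main() {
	a := fwInfo{"iBMC", "3.08.05.85", "active iBMC"}
	b := fwInfo{"IBMC", "3.08.05.85", "Active iBMC"}
	fmt.Println(fwKey(a) == fwKey(b)) // case-insensitive match
}
```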
// parseDiskInfo parses a single PhysicalDrivesInfo/DiskN/disk_info file.
func parseDiskInfo(content []byte) *models.Storage {
fields := parseKeyValueBlock(content)

View File

@@ -13,7 +13,7 @@ import (
"git.mchus.pro/mchus/logpile/internal/parser"
)
-const parserVersion = "1.0"
+const parserVersion = "1.1"
func init() {
parser.Register(&Parser{})
@@ -34,11 +34,15 @@ func (p *Parser) Detect(files []parser.ExtractedFile) int {
path := strings.ToLower(f.Path)
switch {
case strings.Contains(path, "appdump/frudata/fruinfo.txt"):
-confidence += 60
+confidence += 50
case strings.Contains(path, "rtosdump/versioninfo/app_revision.txt"):
confidence += 30
case strings.Contains(path, "appdump/sensor_alarm/sensor_info.txt"):
-confidence += 20
+confidence += 10
case strings.Contains(path, "appdump/card_manage/card_info"):
confidence += 20
case strings.Contains(path, "logdump/netcard/netcard_info.txt"):
confidence += 20
}
if confidence >= 100 {
return 100
@@ -54,17 +58,21 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
FRU: make([]models.FRUInfo, 0),
Sensors: make([]models.SensorReading, 0),
Hardware: &models.HardwareConfig{
-CPUs: make([]models.CPU, 0),
-Memory: make([]models.MemoryDIMM, 0),
-Storage: make([]models.Storage, 0),
-GPUs: make([]models.GPU, 0),
-NetworkCards: make([]models.NIC, 0),
-PowerSupply: make([]models.PSU, 0),
-Firmware: make([]models.FirmwareInfo, 0),
+Firmware: make([]models.FirmwareInfo, 0),
+Devices: make([]models.HardwareDevice, 0),
+CPUs: make([]models.CPU, 0),
+Memory: make([]models.MemoryDIMM, 0),
+Storage: make([]models.Storage, 0),
+Volumes: make([]models.StorageVolume, 0),
+PCIeDevices: make([]models.PCIeDevice, 0),
+GPUs: make([]models.GPU, 0),
+NetworkCards: make([]models.NIC, 0),
+NetworkAdapters: make([]models.NetworkAdapter, 0),
+PowerSupply: make([]models.PSU, 0),
},
}
-if f := findByPath(files, "appdump/frudata/fruinfo.txt"); f != nil {
+if f := findByAnyPath(files, "appdump/frudata/fruinfo.txt", "rtosdump/versioninfo/fruinfo.txt"); f != nil {
parseFRUInfo(f.Content, result)
}
if f := findByPath(files, "appdump/sensor_alarm/sensor_info.txt"); f != nil {
@@ -76,10 +84,20 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
if f := findByPath(files, "appdump/cpumem/mem_info"); f != nil {
result.Hardware.Memory = parseMemInfo(f.Content)
}
var nicCards []xfusionNICCard
if f := findByPath(files, "appdump/card_manage/card_info"); f != nil {
-gpus, nics := parseCardInfo(f.Content)
+gpus, cards := parseCardInfo(f.Content)
result.Hardware.GPUs = gpus
-result.Hardware.NetworkCards = nics
+nicCards = cards
}
if f := findByPath(files, "logdump/netcard/netcard_info.txt"); f != nil || len(nicCards) > 0 {
var content []byte
if f != nil {
content = f.Content
}
adapters, legacyNICs := mergeNetworkAdapters(nicCards, parseNetcardInfo(content))
result.Hardware.NetworkAdapters = adapters
result.Hardware.NetworkCards = legacyNICs
}
if f := findByPath(files, "appdump/bmc/psu_info.txt"); f != nil {
result.Hardware.PowerSupply = parsePSUInfo(f.Content)
@@ -87,6 +105,9 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
if f := findByPath(files, "appdump/storagemgnt/raid_controller_info.txt"); f != nil {
parseStorageControllerInfo(f.Content, result)
}
if f := findByPath(files, "rtosdump/versioninfo/app_revision.txt"); f != nil {
parseAppRevision(f.Content, result)
}
for _, f := range findDiskInfoFiles(files) {
disk := parseDiskInfo(f.Content)
if disk != nil {
@@ -99,6 +120,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
result.Protocol = "ipmi"
result.SourceType = models.SourceTypeArchive
parser.ApplyManufacturedYearWeekFromFRU(result.FRU, result.Hardware)
return result, nil
}
@@ -113,6 +135,15 @@ func findByPath(files []parser.ExtractedFile, substring string) *parser.Extracte
return nil
}
func findByAnyPath(files []parser.ExtractedFile, substrings ...string) *parser.ExtractedFile {
for _, substring := range substrings {
if f := findByPath(files, substring); f != nil {
return f
}
}
return nil
}
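findByAnyPath tries each candidate substring in priority order, so the primary AppDump FRU path wins over the RTOSDump fallback when both are present. A self-contained sketch — the `file` type stands in for parser.ExtractedFile, and the findByPath body assumes the case-insensitive substring match the surrounding Detect code suggests:

```go
package main

import (
	"fmt"
	"strings"
)

// file stands in for parser.ExtractedFile in this sketch.
type file struct{ Path string }

// findByPath: case-insensitive substring match on the archive path
// (an assumption matching how Detect lowercases paths).
func findByPath(files []file, substring string) *file {
	for i := range files {
		if strings.Contains(strings.ToLower(files[i].Path), substring) {
			return &files[i]
		}
	}
	return nil
}

// findByAnyPath tries each substring in order; first hit wins.
func findByAnyPath(files []file, substrings ...string) *file {
	for _, s := range substrings {
		if f := findByPath(files, s); f != nil {
			return f
		}
	}
	return nil
}

func main() {
	files := []file{{Path: "dump_info/RTOSDump/versioninfo/fruinfo.txt"}}
	f := findByAnyPath(files, "appdump/frudata/fruinfo.txt", "rtosdump/versioninfo/fruinfo.txt")
	fmt.Println(f != nil, f.Path) // falls through to the RTOSDump path
}
```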
// findDiskInfoFiles returns all PhysicalDrivesInfo disk_info files.
func findDiskInfoFiles(files []parser.ExtractedFile) []parser.ExtractedFile {
var out []parser.ExtractedFile

View File

@@ -1,8 +1,10 @@
package xfusion
import (
"strings"
"testing"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
@@ -26,6 +28,29 @@ func TestDetect_G5500V7(t *testing.T) {
}
}
func TestDetect_ServerFileExportMarkers(t *testing.T) {
p := &Parser{}
score := p.Detect([]parser.ExtractedFile{
{Path: "dump_info/RTOSDump/versioninfo/app_revision.txt", Content: []byte("Product Name: G5500 V7")},
{Path: "dump_info/LogDump/netcard/netcard_info.txt", Content: []byte("2026-02-04 03:54:06 UTC")},
{Path: "dump_info/AppDump/card_manage/card_info", Content: []byte("OCP Card Info")},
})
if score < 70 {
t.Fatalf("expected Detect score >= 70 for xFusion file export markers, got %d", score)
}
}
func TestDetect_Negative(t *testing.T) {
p := &Parser{}
score := p.Detect([]parser.ExtractedFile{
{Path: "logs/messages.txt", Content: []byte("plain text")},
{Path: "inventory.json", Content: []byte(`{"vendor":"other"}`)},
})
if score != 0 {
t.Fatalf("expected Detect score 0 for non-xFusion input, got %d", score)
}
}
func TestParse_G5500V7_BoardInfo(t *testing.T) {
files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
p := &Parser{}
@@ -126,6 +151,94 @@ func TestParse_G5500V7_NICs(t *testing.T) {
}
}
func TestParse_ServerFileExport_NetworkAdaptersAndFirmware(t *testing.T) {
p := &Parser{}
files := []parser.ExtractedFile{
{
Path: "dump_info/AppDump/card_manage/card_info",
Content: []byte(strings.TrimSpace(`
Pcie Card Info
Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |
OCP Card Info
Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |
`)),
},
{
Path: "dump_info/LogDump/netcard/netcard_info.txt",
Content: []byte(strings.TrimSpace(`
2026-02-04 03:54:06 UTC
ProductName :XC385
Manufacture :XFUSION
FirmwareVersion :26.39.2048
SlotId :1
Port0 BDF:0000:27:00.0
MacAddr:44:1A:4C:16:E8:03
ActualMac:44:1A:4C:16:E8:03
Port1 BDF:0000:27:00.1
MacAddr:00:00:00:00:00:00
ActualMac:44:1A:4C:16:E8:04
`)),
},
{
Path: "dump_info/RTOSDump/versioninfo/app_revision.txt",
Content: []byte(strings.TrimSpace(`
------------------- iBMC INFO -------------------
Active iBMC Version: (U68)3.08.05.85
Active iBMC Built: 16:46:26 Jan 4 2026
SDK Version: 13.16.30.16
SDK Built: 07:55:18 Dec 12 2025
Active BIOS Version: (U6216)01.02.08.17
Active BIOS Built: 00:00:00 Jan 05 2026
Product Name: G5500 V7
`)),
},
}
result, err := p.Parse(files)
if err != nil {
t.Fatalf("Parse: %v", err)
}
if result.Protocol != "ipmi" || result.SourceType != models.SourceTypeArchive {
t.Fatalf("unexpected source metadata: protocol=%q source_type=%q", result.Protocol, result.SourceType)
}
if result.Hardware == nil {
t.Fatal("Hardware is nil")
}
if len(result.Hardware.NetworkAdapters) != 1 {
t.Fatalf("expected 1 network adapter, got %d", len(result.Hardware.NetworkAdapters))
}
adapter := result.Hardware.NetworkAdapters[0]
if adapter.BDF != "0000:27:00.0" {
t.Fatalf("expected network adapter BDF 0000:27:00.0, got %q", adapter.BDF)
}
if adapter.Firmware != "26.39.2048" {
t.Fatalf("expected network adapter firmware 26.39.2048, got %q", adapter.Firmware)
}
if adapter.SerialNumber != "02Y238X6RC000058" {
t.Fatalf("expected network adapter serial from card_info, got %q", adapter.SerialNumber)
}
if len(adapter.MACAddresses) != 2 || adapter.MACAddresses[0] != "44:1A:4C:16:E8:03" || adapter.MACAddresses[1] != "44:1A:4C:16:E8:04" {
t.Fatalf("unexpected MAC addresses: %#v", adapter.MACAddresses)
}
fwByDevice := make(map[string]models.FirmwareInfo)
for _, fw := range result.Hardware.Firmware {
fwByDevice[fw.DeviceName] = fw
}
if fwByDevice["iBMC"].Version != "(U68)3.08.05.85" {
t.Fatalf("expected iBMC firmware from app_revision.txt, got %#v", fwByDevice["iBMC"])
}
if fwByDevice["BIOS"].Version != "(U6216)01.02.08.17" {
t.Fatalf("expected BIOS firmware from app_revision.txt, got %#v", fwByDevice["BIOS"])
}
if result.Hardware.BoardInfo.ProductName != "G5500 V7" {
t.Fatalf("expected board product fallback from app_revision.txt, got %q", result.Hardware.BoardInfo.ProductName)
}
}
func TestParse_G5500V7_PSUs(t *testing.T) {
files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
p := &Parser{}

View File

@@ -44,6 +44,9 @@ func TestParserParseExample(t *testing.T) {
examplePath := filepath.Join("..", "..", "..", "..", "example", "xigmanas.txt")
raw, err := os.ReadFile(examplePath)
if err != nil {
if os.IsNotExist(err) {
t.Skipf("example file %s not present", examplePath)
}
t.Fatalf("read example file: %v", err)
}

View File

@@ -3,6 +3,8 @@ package server
import (
"bytes"
"encoding/json"
"fmt"
"net"
"net/http"
"net/http/httptest"
"strings"
@@ -22,6 +24,7 @@ func newCollectTestServer() (*Server, *httptest.Server) {
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
return s, httptest.NewServer(mux)
}
@@ -29,7 +32,17 @@ func TestCollectProbe(t *testing.T) {
_, ts := newCollectTestServer()
defer ts.Close()
-body := `{"host":"bmc-off.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
+ln, err := net.Listen("tcp", "127.0.0.1:0")
+if err != nil {
+t.Fatalf("listen probe target: %v", err)
+}
+defer ln.Close()
+addr, ok := ln.Addr().(*net.TCPAddr)
+if !ok {
+t.Fatalf("unexpected listener address type: %T", ln.Addr())
+}
+body := fmt.Sprintf(`{"host":"127.0.0.1","protocol":"redfish","port":%d,"username":"admin-off","auth_type":"password","password":"secret","tls_mode":"strict"}`, addr.Port)
resp, err := http.Post(ts.URL+"/api/collect/probe", "application/json", bytes.NewBufferString(body))
if err != nil {
t.Fatalf("post collect probe failed: %v", err)
@@ -53,9 +66,6 @@ func TestCollectProbe(t *testing.T) {
if payload.HostPowerState != "Off" {
t.Fatalf("expected host power state Off, got %q", payload.HostPowerState)
}
-if !payload.PowerControlAvailable {
-t.Fatalf("expected power control to be available")
-}
}
func TestCollectLifecycleToTerminal(t *testing.T) {

View File

@@ -21,13 +21,16 @@ func (c *mockConnector) Probe(ctx context.Context, req collector.Request) (*coll
if strings.Contains(strings.ToLower(req.Host), "fail") {
return nil, context.DeadlineExceeded
}
hostPoweredOn := true
if strings.Contains(strings.ToLower(req.Host), "off") || strings.Contains(strings.ToLower(req.Username), "off") {
hostPoweredOn = false
}
return &collector.ProbeResult{
-Reachable: true,
-Protocol: c.protocol,
-HostPowerState: map[bool]string{true: "On", false: "Off"}[!strings.Contains(strings.ToLower(req.Host), "off")],
-HostPoweredOn: !strings.Contains(strings.ToLower(req.Host), "off"),
-PowerControlAvailable: true,
-SystemPath: "/redfish/v1/Systems/1",
+Reachable: true,
+Protocol: c.protocol,
+HostPowerState: map[bool]string{true: "On", false: "Off"}[hostPoweredOn],
+HostPoweredOn: hostPoweredOn,
+SystemPath: "/redfish/v1/Systems/1",
}, nil
}

View File

@@ -19,18 +19,15 @@ type CollectRequest struct {
Password string `json:"password,omitempty"`
Token string `json:"token,omitempty"`
TLSMode string `json:"tls_mode"`
-PowerOnIfHostOff bool `json:"power_on_if_host_off,omitempty"`
-StopHostAfterCollect bool `json:"stop_host_after_collect,omitempty"`
-DebugPayloads bool `json:"debug_payloads,omitempty"`
+DebugPayloads bool `json:"debug_payloads,omitempty"`
}
type CollectProbeResponse struct {
Reachable bool `json:"reachable"`
Protocol string `json:"protocol,omitempty"`
-HostPowerState string `json:"host_power_state,omitempty"`
-HostPoweredOn bool `json:"host_powered_on"`
-PowerControlAvailable bool `json:"power_control_available"`
-Message string `json:"message,omitempty"`
+HostPowerState string `json:"host_power_state,omitempty"`
+HostPoweredOn bool `json:"host_powered_on"`
+Message string `json:"message,omitempty"`
}
type CollectJobResponse struct {
@@ -78,7 +75,8 @@ type Job struct {
CreatedAt time.Time
UpdatedAt time.Time
RequestMeta CollectRequestMeta
-cancel func()
+cancel func()
+skipFn func()
}
type CollectModuleStatus struct {

View File

@@ -81,7 +81,7 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
}
for _, mem := range hw.Memory {
-if !mem.Present || mem.SizeMB == 0 {
+if !mem.IsInstalledInventory() {
continue
}
present := mem.Present
@@ -243,6 +243,8 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
Source: "network_adapters",
Slot: nic.Slot,
Location: nic.Location,
BDF: nic.BDF,
DeviceClass: "NetworkController",
VendorID: nic.VendorID,
DeviceID: nic.DeviceID,
Model: nic.Model,
@@ -253,6 +255,11 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
PortCount: nic.PortCount,
PortType: nic.PortType,
MACAddresses: nic.MACAddresses,
LinkWidth: nic.LinkWidth,
LinkSpeed: nic.LinkSpeed,
MaxLinkWidth: nic.MaxLinkWidth,
MaxLinkSpeed: nic.MaxLinkSpeed,
NUMANode: nic.NUMANode,
Present: &present,
Status: nic.Status,
StatusCheckedAt: nic.StatusCheckedAt,

View File

@@ -90,6 +90,98 @@ func TestBuildHardwareDevices_MemorySameSerialDifferentSlots_NotDeduped(t *testi
}
}
func TestBuildHardwareDevices_ZeroSizeMemoryWithInventoryIsIncluded(t *testing.T) {
hw := &models.HardwareConfig{
Memory: []models.MemoryDIMM{
{
Slot: "PROC 1 DIMM 3",
Location: "PROC 1 DIMM 3",
Present: true,
SizeMB: 0,
Manufacturer: "Hynix",
SerialNumber: "2B5F92C6",
PartNumber: "HMCG88AEBRA115N",
Status: "ok",
},
},
}
devices := BuildHardwareDevices(hw)
memoryCount := 0
for _, d := range devices {
if d.Kind != models.DeviceKindMemory {
continue
}
memoryCount++
if d.Slot != "PROC 1 DIMM 3" || d.PartNumber != "HMCG88AEBRA115N" || d.SerialNumber != "2B5F92C6" {
t.Fatalf("unexpected memory device: %+v", d)
}
}
if memoryCount != 1 {
t.Fatalf("expected 1 installed zero-size memory record, got %d", memoryCount)
}
}
func TestBuildHardwareDevices_NetworkAdapterPreservesPCIeMetadata(t *testing.T) {
hw := &models.HardwareConfig{
NetworkAdapters: []models.NetworkAdapter{
{
Slot: "1",
Location: "OCP",
Present: true,
BDF: "0000:27:00.0",
Model: "ConnectX-6 Lx",
VendorID: 0x15b3,
DeviceID: 0x101f,
SerialNumber: "NIC-001",
Firmware: "26.39.2048",
MACAddresses: []string{"44:1A:4C:16:E8:03", "44:1A:4C:16:E8:04"},
LinkWidth: 16,
LinkSpeed: "32 GT/s",
NUMANode: 1,
Status: "ok",
},
},
}
devices := BuildHardwareDevices(hw)
for _, d := range devices {
if d.Kind != models.DeviceKindNetwork {
continue
}
if d.BDF != "0000:27:00.0" || d.LinkWidth != 16 || d.LinkSpeed != "32 GT/s" || d.NUMANode != 1 {
t.Fatalf("expected network PCIe metadata to be preserved, got %+v", d)
}
return
}
t.Fatal("expected network device in canonical inventory")
}
func TestBuildSpecification_ZeroSizeMemoryWithInventoryIsShown(t *testing.T) {
hw := &models.HardwareConfig{
Memory: []models.MemoryDIMM{
{
Slot: "PROC 1 DIMM 3",
Present: true,
SizeMB: 0,
Manufacturer: "Hynix",
PartNumber: "HMCG88AEBRA115N",
SerialNumber: "2B5F92C6",
Status: "ok",
},
},
}
spec := buildSpecification(hw)
for _, line := range spec {
if line.Category == "Память" && line.Name == "Hynix HMCG88AEBRA115N (size unknown)" && line.Quantity == 1 {
return
}
}
t.Fatalf("expected memory spec line for zero-size identified DIMM, got %+v", spec)
}
func TestBuildHardwareDevices_DuplicateSerials_AreAnnotated(t *testing.T) {
hw := &models.HardwareConfig{
Memory: []models.MemoryDIMM{
@@ -166,6 +258,31 @@ func TestBuildHardwareDevices_SkipsFirmwareOnlyNumericSlots(t *testing.T) {
}
}
func TestBuildHardwareDevices_NetworkDevicesUseUnifiedControllerClass(t *testing.T) {
hw := &models.HardwareConfig{
NetworkAdapters: []models.NetworkAdapter{
{
Slot: "NIC1",
Model: "Ethernet Adapter",
Vendor: "Intel",
Present: true,
},
},
}
devices := BuildHardwareDevices(hw)
for _, d := range devices {
if d.Kind != models.DeviceKindNetwork {
continue
}
if d.DeviceClass != "NetworkController" {
t.Fatalf("expected unified network controller class, got %+v", d)
}
return
}
t.Fatalf("expected one canonical network device")
}
func TestHandleGetConfig_ReturnsCanonicalHardware(t *testing.T) {
srv := &Server{}
srv.SetResult(&models.AnalysisResult{

View File

@@ -13,12 +13,13 @@ import (
"net"
"net/http"
"os"
-"sync/atomic"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
+"sync"
+"sync/atomic"
"time"
"git.mchus.pro/mchus/logpile/internal/collector"
@@ -530,11 +531,21 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
continue
}
present := mem.Present != nil && *mem.Present
-// Skip empty slots (not present or 0 size)
-if !present || mem.SizeMB == 0 {
+if !present {
continue
}
// Include frequency if available
if mem.SizeMB == 0 {
name := strings.TrimSpace(strings.Join(nonEmptyStrings(mem.Manufacturer, mem.PartNumber, mem.Type), " "))
if name == "" {
name = "Installed DIMM (size unknown)"
} else {
name += " (size unknown)"
}
memGroups[name]++
continue
}
key := ""
currentSpeed := intFromDetails(mem.Details, "current_speed_mhz")
if currentSpeed > 0 {
@@ -626,6 +637,18 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
return spec
}
func nonEmptyStrings(values ...string) []string {
out := make([]string, 0, len(values))
for _, value := range values {
value = strings.TrimSpace(value)
if value == "" {
continue
}
out = append(out, value)
}
return out
}
func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
if result == nil {
@@ -717,6 +740,19 @@ func hasUsableSerial(serial string) bool {
}
}
func hasUsableFirmwareVersion(version string) bool {
v := strings.TrimSpace(version)
if v == "" {
return false
}
switch strings.ToUpper(v) {
case "N/A", "NA", "NONE", "NULL", "UNKNOWN", "-":
return false
default:
return true
}
}
func (s *Server) handleGetFirmware(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
if result == nil || result.Hardware == nil {
@@ -944,7 +980,7 @@ func buildFirmwareEntries(hw *models.HardwareConfig) []firmwareEntry {
component = strings.TrimSpace(component)
model = strings.TrimSpace(model)
version = strings.TrimSpace(version)
-if component == "" || version == "" {
+if component == "" || !hasUsableFirmwareVersion(version) {
return
}
if model == "" {
@@ -1639,34 +1675,28 @@ func (s *Server) handleCollectProbe(w http.ResponseWriter, r *http.Request) {
message := "Связь с BMC установлена"
if result != nil {
-switch {
-case !result.HostPoweredOn && result.PowerControlAvailable:
-message = "Связь с BMC установлена, host выключен. Можно включить перед сбором."
-case !result.HostPoweredOn:
-message = "Связь с BMC установлена, host выключен."
-default:
-message = "Связь с BMC установлена, host включен."
+if result.HostPoweredOn {
+message = "Связь с BMC установлена, host включён."
+} else {
+message = "Связь с BMC установлена, host выключен. Данные инвентаря могут быть неполными."
+}
}
hostPowerState := ""
hostPoweredOn := false
-powerControlAvailable := false
reachable := false
if result != nil {
reachable = result.Reachable
hostPowerState = strings.TrimSpace(result.HostPowerState)
hostPoweredOn = result.HostPoweredOn
-powerControlAvailable = result.PowerControlAvailable
}
jsonResponse(w, CollectProbeResponse{
-Reachable: reachable,
-Protocol: req.Protocol,
-HostPowerState: hostPowerState,
-HostPoweredOn: hostPoweredOn,
-PowerControlAvailable: powerControlAvailable,
-Message: message,
+Reachable: reachable,
+Protocol: req.Protocol,
+HostPowerState: hostPowerState,
+HostPoweredOn: hostPoweredOn,
+Message: message,
})
}
@@ -1702,6 +1732,22 @@ func (s *Server) handleCollectCancel(w http.ResponseWriter, r *http.Request) {
jsonResponse(w, job.toStatusResponse())
}
func (s *Server) handleCollectSkip(w http.ResponseWriter, r *http.Request) {
jobID := strings.TrimSpace(r.PathValue("id"))
if !isValidCollectJobID(jobID) {
jsonError(w, "Invalid collect job id", http.StatusBadRequest)
return
}
job, ok := s.jobManager.SkipJob(jobID)
if !ok {
jsonError(w, "Collect job not found", http.StatusNotFound)
return
}
jsonResponse(w, job.toStatusResponse())
}
func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
ctx, cancel := context.WithCancel(context.Background())
if attached := s.jobManager.AttachJobCancel(jobID, cancel); !attached {
@@ -1709,6 +1755,11 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
return
}
skipCh := make(chan struct{})
var skipOnce sync.Once
skipFn := func() { skipOnce.Do(func() { close(skipCh) }) }
s.jobManager.AttachJobSkip(jobID, skipFn)
go func() {
connector, ok := s.getCollector(req.Protocol)
if !ok {
@@ -1776,7 +1827,9 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
}
}
collectorReq := toCollectorRequest(req)
collectorReq.SkipHungCh = skipCh
result, err := connector.Collect(ctx, collectorReq, emitProgress)
if err != nil {
if ctx.Err() != nil {
return
@@ -1992,17 +2045,15 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectReques
func toCollectorRequest(req CollectRequest) collector.Request {
return collector.Request{
Host: req.Host,
Protocol: req.Protocol,
Port: req.Port,
Username: req.Username,
AuthType: req.AuthType,
Password: req.Password,
Token: req.Token,
TLSMode: req.TLSMode,
DebugPayloads: req.DebugPayloads,
}
}


@@ -62,3 +62,22 @@ func TestBuildFirmwareEntries_IncludesGPUFirmwareFallback(t *testing.T) {
t.Fatalf("expected GPU firmware entry from hardware.gpus fallback")
}
}
func TestBuildFirmwareEntries_SkipsPlaceholderVersions(t *testing.T) {
hw := &models.HardwareConfig{
Firmware: []models.FirmwareInfo{
{DeviceName: "BMC", Version: "3.13.42P13"},
{DeviceName: "Front_BP_1", Version: "NA"},
{DeviceName: "Rear_BP_0", Version: "N/A"},
{DeviceName: "HDD_BP", Version: "-"},
},
}
entries := buildFirmwareEntries(hw)
if len(entries) != 1 {
t.Fatalf("expected only usable firmware entries, got %#v", entries)
}
if entries[0].Component != "BMC" || entries[0].Version != "3.13.42P13" {
t.Fatalf("unexpected remaining firmware entry: %#v", entries[0])
}
}
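The test above implies that `buildFirmwareEntries` drops versions carrying no real information ("NA", "N/A", "-"). A minimal sketch of such a predicate — `isPlaceholderVersion` is a hypothetical helper name, and the placeholder set simply mirrors the fixtures in the test, not necessarily the project's full list:

```go
package main

import (
	"fmt"
	"strings"
)

// isPlaceholderVersion reports whether a firmware version string is a
// BMC-supplied placeholder rather than an actual version.
func isPlaceholderVersion(version string) bool {
	switch strings.ToUpper(strings.TrimSpace(version)) {
	case "", "NA", "N/A", "-":
		return true
	}
	return false
}

func main() {
	for _, v := range []string{"3.13.42P13", "NA", "N/A", "-"} {
		fmt.Printf("%q -> skip=%v\n", v, isPlaceholderVersion(v))
	}
}
```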


@@ -175,6 +175,43 @@ func (m *JobManager) UpdateJobDebugInfo(id string, info *CollectDebugInfo) (*Job
return cloned, true
}
func (m *JobManager) AttachJobSkip(id string, skipFn func()) bool {
m.mu.Lock()
defer m.mu.Unlock()
job, ok := m.jobs[id]
if !ok || job == nil || isTerminalCollectStatus(job.Status) {
return false
}
job.skipFn = skipFn
return true
}
func (m *JobManager) SkipJob(id string) (*Job, bool) {
m.mu.Lock()
job, ok := m.jobs[id]
if !ok || job == nil {
m.mu.Unlock()
return nil, false
}
if isTerminalCollectStatus(job.Status) {
cloned := cloneJob(job)
m.mu.Unlock()
return cloned, true
}
skipFn := job.skipFn
job.skipFn = nil
job.UpdatedAt = time.Now().UTC()
job.Logs = append(job.Logs, formatCollectLogLine(job.UpdatedAt, "Пропуск зависших запросов по команде пользователя"))
cloned := cloneJob(job)
m.mu.Unlock()
if skipFn != nil {
skipFn()
}
return cloned, true
}
func (m *JobManager) AttachJobCancel(id string, cancelFn context.CancelFunc) bool {
m.mu.Lock()
defer m.mu.Unlock()
@@ -229,5 +266,6 @@ func cloneJob(job *Job) *Job {
cloned.CurrentPhase = job.CurrentPhase
cloned.ETASeconds = job.ETASeconds
cloned.cancel = nil
cloned.skipFn = nil
return &cloned
}
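`SkipJob` clears `job.skipFn` before invoking it, and the skip function itself is guarded by `sync.Once` on the server side — closing an already-closed channel panics, and the endpoint can be hit repeatedly from the UI. A minimal sketch of that idempotent-close pattern (the `newSkipSignal` helper is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// newSkipSignal returns a channel that is closed at most once,
// no matter how many times skip() is called.
func newSkipSignal() (skipCh chan struct{}, skip func()) {
	skipCh = make(chan struct{})
	var once sync.Once
	skip = func() { once.Do(func() { close(skipCh) }) }
	return skipCh, skip
}

func main() {
	skipCh, skip := newSkipSignal()
	skip()
	skip() // safe: the second call is a no-op
	_, open := <-skipCh
	fmt.Println(open) // false — the channel is closed
}
```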


@@ -99,6 +99,7 @@ func (s *Server) setupRoutes() {
s.mux.HandleFunc("POST /api/collect/probe", s.handleCollectProbe)
s.mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
s.mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
s.mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
}
func (s *Server) Run() error {


@@ -24,6 +24,7 @@ func newFlowTestServer() (*Server, *httptest.Server) {
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
return s, httptest.NewServer(mux)
}


@@ -211,8 +211,6 @@ main {
}
#api-connect-btn,
#convert-folder-btn,
#convert-run-btn,
#cancel-job-btn,
@@ -229,8 +227,6 @@ main {
}
#api-connect-btn:hover,
#convert-folder-btn:hover,
#convert-run-btn:hover,
#cancel-job-btn:hover,
@@ -241,8 +237,6 @@ main {
#convert-run-btn:disabled,
#convert-folder-btn:disabled,
#api-connect-btn:disabled,
#cancel-job-btn:disabled,
.upload-area button:disabled {
opacity: 0.6;
@@ -311,64 +305,19 @@ main {
border-top: 1px solid #e2e8f0;
}
.api-host-off-warning {
display: flex;
align-items: center;
gap: 0.4rem;
padding: 0.5rem 0.75rem;
background: #fef3c7;
border: 1px solid #f59e0b;
border-radius: 6px;
font-size: 0.875rem;
color: #92400e;
font-weight: 500;
}
.api-connect-status {
margin-top: 0.75rem;
@@ -445,6 +394,33 @@ main {
cursor: default;
}
.job-status-actions {
display: flex;
gap: 0.5rem;
align-items: center;
}
#skip-hung-btn {
background: #f59e0b;
color: #fff;
border: none;
border-radius: 6px;
padding: 0.5rem 0.9rem;
font-size: 0.875rem;
font-weight: 600;
cursor: pointer;
transition: background-color 0.2s ease, opacity 0.2s ease;
}
#skip-hung-btn:hover {
background: #d97706;
}
#skip-hung-btn:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.job-status-meta {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(230px, 1fr));


@@ -91,9 +91,9 @@ function initApiSource() {
}
const cancelJobButton = document.getElementById('cancel-job-btn');
const skipHungButton = document.getElementById('skip-hung-btn');
const connectButton = document.getElementById('api-connect-btn');
const collectButton = document.getElementById('api-collect-btn');
const fieldNames = ['host', 'port', 'username', 'password'];
apiForm.addEventListener('submit', (event) => {
@@ -110,6 +110,11 @@ function initApiSource() {
cancelCollectionJob();
});
}
if (skipHungButton) {
skipHungButton.addEventListener('click', () => {
skipHungCollectionJob();
});
}
if (connectButton) {
connectButton.addEventListener('click', () => {
startApiProbe();
@@ -120,22 +125,6 @@ function initApiSource() {
startCollectionWithOptions();
});
}
fieldNames.forEach((fieldName) => {
const field = apiForm.elements.namedItem(fieldName);
if (!field) {
@@ -163,36 +152,6 @@ function initApiSource() {
renderCollectionJob();
}
function startApiProbe() {
const { isValid, payload, errors } = validateCollectForm();
@@ -255,11 +214,7 @@ function startCollectionWithOptions() {
return;
}
const debugPayloads = document.getElementById('api-debug-payloads');
payload.debug_payloads = debugPayloads ? debugPayloads.checked : false;
startCollectionJob(payload);
}
@@ -268,8 +223,6 @@ function renderApiProbeState() {
const connectButton = document.getElementById('api-connect-btn');
const probeOptions = document.getElementById('api-probe-options');
const status = document.getElementById('api-connect-status');
if (!connectButton || !probeOptions || !status) {
return;
}
@@ -283,7 +236,6 @@ function renderApiProbeState() {
}
const hostOn = apiProbeResult.host_powered_on;
if (hostOn) {
status.textContent = apiProbeResult.message || 'Связь с BMC есть, host включён.';
@@ -295,25 +247,15 @@ function renderApiProbeState() {
probeOptions.classList.remove('hidden');
const hostOffWarning = document.getElementById('api-host-off-warning');
if (hostOffWarning) {
if (hostOn) {
hostOffWarning.classList.add('hidden');
} else {
hostOffWarning.classList.remove('hidden');
}
}
connectButton.textContent = 'Переподключиться';
}
@@ -535,6 +477,36 @@ function pollCollectionJobStatus() {
});
}
function skipHungCollectionJob() {
if (!collectionJob || isCollectionJobTerminal(collectionJob.status)) {
return;
}
const btn = document.getElementById('skip-hung-btn');
if (btn) {
btn.disabled = true;
btn.textContent = 'Пропуск...';
}
fetch(`/api/collect/${encodeURIComponent(collectionJob.id)}/skip`, {
method: 'POST'
})
.then(async (response) => {
const body = await response.json().catch(() => ({}));
if (!response.ok) {
throw new Error(body.error || 'Не удалось пропустить зависшие запросы');
}
syncServerLogs(body.logs);
renderCollectionJob();
})
.catch((err) => {
appendJobLog(`Ошибка пропуска: ${err.message}`);
if (btn) {
btn.disabled = false;
btn.textContent = 'Пропустить зависшие';
}
renderCollectionJob();
});
}
function cancelCollectionJob() {
if (!collectionJob || isCollectionJobTerminal(collectionJob.status)) {
return;
@@ -671,6 +643,19 @@ function renderCollectionJob() {
)).join('');
cancelButton.disabled = isTerminal;
const skipBtn = document.getElementById('skip-hung-btn');
if (skipBtn) {
const isCollecting = !isTerminal && collectionJob.status === 'running';
if (isCollecting) {
skipBtn.classList.remove('hidden');
} else {
skipBtn.classList.add('hidden');
skipBtn.disabled = false;
skipBtn.textContent = 'Пропустить зависшие';
}
}
setApiFormBlocked(!isTerminal);
}


@@ -36,9 +36,9 @@
<div id="archive-source-content">
<div class="upload-area" id="drop-zone">
<p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
<p class="hint">Поддерживаемые форматы: ahs, tar.gz, tar, tgz, sds, zip, json, txt, log</p>
</div>
<div id="upload-status"></div>
<div id="parsers-info" class="parsers-info"></div>
@@ -80,15 +80,9 @@
</div>
<div id="api-connect-status" class="api-connect-status"></div>
<div id="api-probe-options" class="api-probe-options hidden">
<div id="api-host-off-warning" class="api-host-off-warning hidden">
&#9888; Host выключен — данные инвентаря могут быть неполными
</div>
<label class="api-form-checkbox" for="api-debug-payloads">
<input id="api-debug-payloads" name="debug_payloads" type="checkbox">
<span>Сбор расширенных метрик для отладки</span>
@@ -102,7 +96,10 @@
<section id="api-job-status" class="job-status hidden" aria-live="polite">
<div class="job-status-header">
<h4>Статус задачи сбора</h4>
<div class="job-status-actions">
<button id="skip-hung-btn" type="button" class="hidden" title="Прервать зависшие запросы и перейти к анализу собранных данных">Пропустить зависшие</button>
<button id="cancel-job-btn" type="button">Отменить</button>
</div>
</div>
<div class="job-status-meta">
<div><span class="meta-label">jobId:</span> <code id="job-id-value">-</code></div>