Compare commits
30 Commits
| SHA1 |
|---|
| ca457ac72b |
| 78d0e26fd0 |
| 88e4e8dd49 |
| cf9cf5d0cf |
| aba7a54990 |
| 835df2676c |
| b86d51c921 |
| a82fb227e5 |
| c9969fc3da |
| 89b6701f43 |
| b04877549a |
| 8ca173c99b |
| f19a3454fa |
| becdca1d7e |
| e10440ae32 |
| 5c2a21aff1 |
| 9df13327aa |
| 7e9af89c46 |
| db74df9994 |
| bb82387d48 |
| 475f6ac472 |
| 93ce676f04 |
| c47c34fd11 |
| d8c3256e41 |
| 1b2d978d29 |
| 0f310d57c4 |
| 3547ef9083 |
| 99f0d6217c |
| 8acbba3cc9 |
| 8942991f0c |
Submodule bible updated: 52444350c1...d2600f1279
@@ -27,11 +27,14 @@ All modes converge on the same normalized hardware model and exporter pipeline.

## Current vendor coverage

- Dell TSR
- Reanimator Easy Bee support bundles
- H3C SDS G5/G6
- Inspur / Kaytus
- HPE iLO AHS
- NVIDIA HGX Field Diagnostics
- NVIDIA Bug Report
- Unraid
- xFusion iBMC dump / file export
- XigmaNAS
- Generic fallback parser
@@ -58,6 +58,7 @@ Responses:

Optional request fields:

- `power_on_if_host_off`: when `true`, Redfish collection may power on the host before collection if preflight found it powered off
- `debug_payloads`: when `true`, the collector keeps extra diagnostic payloads and enables extended plan-B retries for slow HGX component inventory branches (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`)

### `POST /api/collect/probe`
@@ -27,6 +27,7 @@ Request fields passed from the server:

- credential field (`password` or token)
- `tls_mode`
- optional `power_on_if_host_off`
- optional `debug_payloads` for extended diagnostics

### Core rule
@@ -35,18 +36,38 @@ If the collector adds a fallback, probe, or normalization rule, replay must mirr

### Preflight and host power

- `Probe()` may be used before collection to verify API connectivity and current host `PowerState`
- if the host is off and the user chose power-on, the collector may issue `ComputerSystem.Reset` with `ResetType=On`
- power-on attempts are bounded and logged
- after a successful power-on, the collector waits an extra stabilization window, then checks `PowerState` again and only starts collection if the host is still on
- if the collector powered on the host itself for collection, it must attempt to power it back off after collection completes
- if the host was already on before collection, the collector must not power it off afterward
- if power-on fails, collection still continues against the powered-off host
- all power-control decisions and attempts must be visible in the collection log so they are preserved in raw-export bundles
- `Probe()` is used before collection to verify API connectivity and report current host `PowerState`
- if the host is off, the collector logs a warning and proceeds with collection; inventory data may be incomplete when the host is powered off
- power-on and power-off are not performed by the collector
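The preflight step described above reduces to one lightweight Redfish GET that reads `PowerState` before any crawling starts. A minimal sketch, assuming a hypothetical `probePowerState` helper and a fixed `/redfish/v1/Systems/1` path (real collectors discover system paths from the Systems collection, and this is not the project's actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// parsePowerState extracts PowerState from a Redfish ComputerSystem
// document. The field name follows the DMTF Redfish schema.
func parsePowerState(doc []byte) string {
	var sys struct {
		PowerState string `json:"PowerState"`
	}
	if err := json.Unmarshal(doc, &sys); err != nil {
		return ""
	}
	return sys.PowerState
}

// probePowerState performs the lightweight preflight GET.
// The system path is hard-coded for illustration only.
func probePowerState(client *http.Client, baseURL string) (string, error) {
	resp, err := client.Get(baseURL + "/redfish/v1/Systems/1")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return parsePowerState(body), nil
}

func main() {
	// Fake BMC endpoint for demonstration.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"PowerState": "Off"}`)
	}))
	defer srv.Close()

	state, err := probePowerState(srv.Client(), srv.URL)
	fmt.Println(state, err)
}
```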
### Skip hung requests

Redfish collection uses a two-level context model:

- `ctx` — job lifetime context, cancelled only on explicit job cancel
- `collectCtx` — collection phase context, derived from `ctx`; covers snapshot, prefetch, and plan-B

`collectCtx` is cancelled when the user presses "Пропустить зависшие" (skip hung).
On skip, all in-flight HTTP requests in the current phase are aborted immediately via context
cancellation, the crawler and plan-B loops exit, and execution proceeds to the replay phase using
whatever was collected in `rawTree`. The result is partial but valid.

The skip signal travels: UI button → `POST /api/collect/{id}/skip` → `JobManager.SkipJob()` →
closes `skipCh` → goroutine in `Collect()` → `cancelCollect()`.

The skip button is visible during `running` state and hidden once the job reaches a terminal state.
### Extended diagnostics toggle

The live collect form exposes a user-facing checkbox for extended diagnostics.

- default collection prioritizes inventory completeness and bounded runtime
- when extended diagnostics is off, heavy HGX component-chassis critical plan-B retries (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`) are skipped
- when extended diagnostics is on, those retries are allowed and extra debug payloads are collected

This toggle is intended for operator-driven deep diagnostics on problematic hosts, not for the default path.
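The gating rule above can be expressed as a single predicate. The branch names come from this document; the function and its signature are an illustrative sketch, not the collector's real identifiers:

```go
package main

import "fmt"

// heavyHGXBranches lists the component-chassis plan-B branches that
// are gated behind the extended-diagnostics toggle.
var heavyHGXBranches = map[string]bool{
	"Assembly":        true,
	"Accelerators":    true,
	"Drives":          true,
	"NetworkAdapters": true,
	"PCIeDevices":     true,
}

// shouldRetryBranch sketches the gate: heavy branches retry only when
// extended diagnostics (debug_payloads) is enabled; light branches
// always retry.
func shouldRetryBranch(branch string, debugPayloads bool) bool {
	if heavyHGXBranches[branch] {
		return debugPayloads
	}
	return true
}

func main() {
	fmt.Println(shouldRetryBranch("Assembly", false)) // skipped by default
	fmt.Println(shouldRetryBranch("Assembly", true))  // allowed when opted in
	fmt.Println(shouldRetryBranch("Thermal", false))  // light branch, always retried
}
```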
### Discovery model

@@ -100,6 +121,13 @@ Live Redfish collection must expose profile-match diagnostics:

- the collect page should render active modules as chips from structured status data, not by parsing log lines

Profile matching may use stable platform grammar signals in addition to vendor strings:

- discovered member/resource naming from lightweight discovery collections
- firmware inventory member IDs
- OEM action names and linked target paths embedded in discovery documents
- replay-only snapshot hints such as OEM assembly/type markers when they are present in `raw_payloads.redfish_tree`

On replay, profile-derived analysis directives may enable vendor-specific inventory linking
helpers such as processor-GPU fallback, chassis-ID alias resolution, and bounded storage recovery.
Replay should now resolve a structured analysis plan inside `redfishprofile/`, analogous to the
@@ -152,3 +180,10 @@ When changing collection logic:

Status: mock scaffold only.

It remains registered for protocol completeness, but it is not a real collection path.
The project is Redfish-first for live collection:

- Redfish already covers the current product goals for inventory, sensors, and hardware event logs
- the live architecture depends on replayable `raw_payloads.redfish_tree`
- a generic IPMI collector would require a separate raw snapshot and replay contract

IPMI should be reconsidered only as a narrow fallback for real field cases where Redfish is
missing or unreliable for a specific capability such as SEL, FRU, or sensors.
@@ -50,12 +50,16 @@ When `vendor_id` and `device_id` are known but the model name is missing or gene

| Vendor ID | Input family | Notes |
|-----------|--------------|-------|
| `dell` | TSR ZIP archives | Broad hardware, firmware, sensors, lifecycle events |
| `easy_bee` | `bee-support-*.tar.gz` | Imports embedded `export/bee-audit.json` snapshot from reanimator-easy-bee bundles |
| `h3c_g5` | H3C SDS G5 bundles | INI/XML/CSV-driven hardware and event parsing |
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `lenovo_xcc` | Lenovo XCC mini-log ZIP archives | JSON inventory + platform event logs |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |
| `xfusion` | xFusion iBMC `tar.gz` dump / file export | AppDump + RTOSDump + LogDump merge for hardware and firmware |
| `xigmanas` | XigmaNAS plain logs | FreeBSD/NAS-oriented inventory |
| `generic` | fallback | Low-confidence text fallback when nothing else matches |
@@ -120,6 +124,55 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

---
### HPE iLO AHS (`hpe_ilo_ahs`)

**Status:** Ready (v1.0.0). Tested on HPE ProLiant Gen11 `.ahs` export from iLO 6.

**Archive format:** `.ahs` single-file Active Health System export.

**Detection:** Single-file input with `ABJR` container header and HPE AHS member names
such as `CUST_INFO.DAT`, `*.zbb`, `ilo_boot_support.zbb`.

**Extracted data (current):**
- System board identity (manufacturer, model, serial, part number)
- iLO / System ROM / SPS top-level firmware
- CPU inventory (model-level)
- Memory DIMM inventory for populated slots
- PSU inventory
- PCIe / OCP NIC inventory from SMBIOS-style slot records
- Storage controller and physical drives from embedded Redfish JSON inside `zbb` members
- Basic iLO event log entries with timestamps when present

**Implementation note:** The format is proprietary. Parser support is intentionally hybrid:
container parsing (`ABJR` + gzip) plus structured extraction from embedded Redfish objects and
printable SMBIOS/FRU payloads. This is sufficient for inventory-grade parsing without decoding the
entire internal `zbb` schema.

---
### xFusion iBMC Dump / File Export (`xfusion`)

**Status:** Ready (v1.1.0). Tested on xFusion G5500 V7 `tar.gz` exports.

**Archive format:** `tar.gz` dump exported from the iBMC UI, including `AppDump/`, `RTOSDump/`,
and `LogDump/` trees.

**Detection:** `AppDump/FruData/fruinfo.txt`, `AppDump/card_manage/card_info`,
`RTOSDump/versioninfo/app_revision.txt`, and `LogDump/netcard/netcard_info.txt`.

**Extracted data (current):**
- Board / FRU inventory from `fruinfo.txt`
- CPU inventory from `CpuMem/cpu_info`
- Memory DIMM inventory from `CpuMem/mem_info`
- GPU inventory from `card_info`
- OCP NIC inventory by merging `card_info` with `LogDump/netcard/netcard_info.txt`
- PSU inventory from `BMC/psu_info.txt`
- Physical storage from `StorageMgnt/PhysicalDrivesInfo/*/disk_info`
- System firmware entries from `RTOSDump/versioninfo/app_revision.txt`
- Maintenance events from `LogDump/maintenance_log`

---
### Generic text fallback (`generic`)

**Status:** Ready (v1.0.0).
@@ -139,10 +192,14 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

| Vendor | ID | Status | Tested on |
|--------|----|--------|-----------|
| Dell TSR | `dell` | Ready | TSR nested zip archives |
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| Lenovo XCC mini-log | `lenovo_xcc` | Ready | ThinkSystem SR650 V3 XCC mini-log ZIP |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |
| xFusion iBMC dump | `xfusion` | Ready | G5500 V7 file-export `tar.gz` bundles |
| XigmaNAS | `xigmanas` | Ready | FreeBSD NAS logs |
| H3C SDS G5 | `h3c_g5` | Ready | H3C UniServer R4900 G5 SDS archives |
| H3C SDS G6 | `h3c_g6` | Ready | H3C UniServer R4700 G6 SDS archives |
@@ -57,6 +57,11 @@ Current behavior:

7. Packages any already-present binaries from `bin/`
8. Generates `SHA256SUMS.txt`

Release tag format:
- project release tags use `vN.M`
- do not create `vN.M.P` tags for LOGPile releases
- release artifacts and `main.version` inherit the exact git tag string

Important limitation:
- `scripts/release.sh` does not run `make build-all` for you
- if you want Linux or additional macOS archives in the release directory, build them before running the script
@@ -258,6 +258,9 @@ at parse time before storing in any model struct. Use the regex

**Date:** 2026-03-12
**Context:** `shouldAdaptiveNVMeProbe` was introduced in `2fa4a12` to recover NVMe drives on
Supermicro BMCs that expose empty `Drives` collections but serve disks at direct `Disk.Bay.N`
paths. The function returns `true` for any chassis with an empty `Members` array. On
Supermicro HGX systems (SYS-A21GE-NBRT and similar) ~35 sub-chassis (GPU, NVSwitch,
PCIeRetimer, ERoT, IRoT, BMC, FPGA) all carry `ChassisType=Module/Component/Zone` and
@@ -918,3 +921,280 @@ hardware change.

- Hardware event history (last 7 days) visible in Reanimator `EventLogs` section.
- No impact on existing inventory pipeline or offline archive replay (archives without `redfish_log_entries` key silently skip parsing).
- Adds extra HTTP requests during live collection (sequential, after tree-walk completes).

---
## ADL-036 — Redfish profile matching may use platform grammar hints beyond vendor strings

**Date:** 2026-03-25
**Context:**
Some BMCs expose unusable `Manufacturer` / `Model` values (`NULL`, placeholders, or generic SoC
names) while still exposing a stable platform-specific Redfish grammar: repeated member names,
firmware inventory IDs, OEM action names, and target-path quirks. Matching only on vendor
strings forced such systems into fallback mode even when the platform shape was consistent.

**Decision:**
- Extend `redfishprofile.MatchSignals` with doc-derived hint tokens collected from discovery docs
  and replay snapshots.
- Allow profile matchers to score on stable platform grammar such as:
  - collection member naming (`outboardPCIeCard*`, drive slot grammars)
  - firmware inventory member IDs
  - OEM action/type markers and linked target paths
- During live collection, gather only lightweight extra hint collections needed for matching
  (`NetworkInterfaces`, `NetworkAdapters`, `Drives`, `UpdateService/FirmwareInventory`), not slow
  deep inventory branches.
- Keep such profiles out of fallback aggregation unless they are proven safe as broad additive
  hints.

**Consequences:**
- Platform-family profiles can activate even when vendor strings are absent or set to `NULL`.
- Matching logic becomes more robust for OEM BMC implementations that differ mainly by Redfish
  grammar rather than by explicit vendor strings.
- Live collection gains a small amount of extra discovery I/O to harvest stable member IDs, but
  avoids slow deep probes such as `Assembly` just for profile selection.

---
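The grammar-hint scoring idea in ADL-036 can be sketched as counting which profile tokens appear among discovered member IDs. The token strings come from the examples in this ADL; the scoring scheme and function name are illustrative, not the real `redfishprofile` matcher:

```go
package main

import (
	"fmt"
	"strings"
)

// scoreGrammarHints counts how many profile tokens are present among
// the discovered member/resource IDs: each token found in at least
// one ID adds one point. A real matcher would weight tokens and
// combine this with vendor-string signals.
func scoreGrammarHints(memberIDs, profileTokens []string) int {
	score := 0
	for _, tok := range profileTokens {
		for _, id := range memberIDs {
			if strings.Contains(id, tok) {
				score++
				break
			}
		}
	}
	return score
}

func main() {
	// Member IDs harvested from lightweight discovery collections.
	discovered := []string{"outboardPCIeCard1", "outboardPCIeCard2", "BMC_Firmware"}
	// Stable grammar tokens a hypothetical platform profile expects.
	tokens := []string{"outboardPCIeCard", "BMC_Firmware", "DriveSlot"}
	fmt.Println(scoreGrammarHints(discovered, tokens))
}
```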
## ADL-037 — easy-bee archives are parsed from the embedded bee-audit snapshot

**Date:** 2026-03-25
**Context:**
`reanimator-easy-bee` support bundles already contain a normalized hardware snapshot in
`export/bee-audit.json` plus supporting logs and techdump files. Rebuilding the same inventory
from raw `techdump/` files inside LOGPile would duplicate parser logic and create drift between
the producer utility and archive importer.

**Decision:**
- Add a dedicated `easy_bee` vendor parser for `bee-support-*.tar.gz` bundles.
- Detect the bundle by `manifest.txt` (`bee_version=...`) plus `export/bee-audit.json`.
- Parse the archive from the embedded snapshot first; treat `techdump/` and runtime files as
  secondary context only.
- Normalize snapshot-only fields needed by LOGPile, notably:
  - flatten `hardware.sensors` groups into `[]SensorReading`
  - turn runtime issues/status into `[]Event`
  - synthesize a board FRU entry when the snapshot does not include FRU data

**Consequences:**
- LOGPile stays aligned with the schema emitted by `reanimator-easy-bee`.
- Adding support required only a thin archive adapter instead of a full hardware parser.
- If the upstream utility changes the embedded snapshot schema, the `easy_bee` adapter is the
  only place that must be updated.

---
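The "flatten `hardware.sensors` groups into `[]SensorReading`" step in ADL-037 is a simple group-map walk. A minimal sketch, assuming an illustrative group-to-readings map shape rather than the exact bee-audit schema:

```go
package main

import "fmt"

// SensorReading is the flat target shape; field names here are
// illustrative, not the exact LOGPile struct.
type SensorReading struct {
	Group string
	Name  string
	Value float64
}

// flattenSensors walks grouped sensor readings (e.g. "temperature",
// "fan") and emits one flat entry per sensor.
func flattenSensors(groups map[string]map[string]float64) []SensorReading {
	var out []SensorReading
	for group, sensors := range groups {
		for name, value := range sensors {
			out = append(out, SensorReading{Group: group, Name: name, Value: value})
		}
	}
	return out
}

func main() {
	groups := map[string]map[string]float64{
		"temperature": {"CPU1": 42.0, "CPU2": 44.5},
		"fan":         {"FAN1": 8200},
	}
	fmt.Println(len(flattenSensors(groups)))
}
```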
## ADL-038 — HPE AHS parser uses hybrid extraction instead of full `zbb` schema decoding

**Date:** 2026-03-30
**Context:** HPE iLO Active Health System exports (`.ahs`) are proprietary `ABJR` containers
with gzip-compressed `zbb` payloads. The sample inventory data contains two practical signal
families: printable SMBIOS/FRU-style strings and embedded Redfish JSON subtrees, especially for
storage controllers and drives. Full `zbb` binary schema decoding is not documented and would add
significant complexity before proving user value.

**Decision:** Support HPE AHS with a hybrid parser:
- decode the outer `ABJR` container
- gunzip embedded members when applicable
- extract inventory from printable SMBIOS/FRU payloads
- extract storage/controller/backplane details from embedded Redfish JSON objects
- enrich firmware and PSU inventory from auxiliary package payloads such as `bcert.pkg`
- do not attempt complete semantic decoding of the internal `zbb` record format

**Consequences:**
- Parser reaches inventory-grade usefulness quickly for HPE `.ahs` uploads.
- Storage inventory is stronger than text-only parsing because it reuses structured Redfish data when present.
- Auxiliary package payloads can supply missing firmware/PSU fields even when the main SMBIOS-like blob is incomplete.
- Future deeper `zbb` decoding can be added incrementally without replacing the current parser contract.

---
## ADL-039 — Canonical inventory keeps DIMMs with unknown capacity when identity is known

**Date:** 2026-03-30
**Context:** Some sources, notably HPE iLO AHS SMBIOS-like blobs, expose installed DIMM identity
(slot, serial, part number, manufacturer) but do not include capacity. The parser already extracts
those modules into `Hardware.Memory`, but canonical device building and export previously dropped
them because `size_mb == 0`.

**Decision:** Treat a DIMM as installed inventory when `present=true` and it has identifying
memory fields such as serial number or part number, even if `size_mb` is unknown.

**Consequences:**
- HPE AHS uploads now show real installed memory modules instead of hiding them.
- Empty slots still stay filtered because they lack inventory identity or are marked absent.
- Specification/export can include "size unknown" memory entries without inventing capacity data.

---
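The ADL-039 decision is essentially one predicate. A minimal sketch with illustrative struct and field names (the real model struct may differ):

```go
package main

import "fmt"

// MemoryModule carries only the fields relevant to the rule.
type MemoryModule struct {
	Present    bool
	SizeMB     int
	Serial     string
	PartNumber string
}

// isInstalledDIMM keeps present modules with known capacity, and
// also present modules whose capacity is unknown but that carry
// identity (serial or part number), per ADL-039.
func isInstalledDIMM(m MemoryModule) bool {
	if !m.Present {
		return false
	}
	if m.SizeMB > 0 {
		return true
	}
	return m.Serial != "" || m.PartNumber != ""
}

func main() {
	unknownSize := MemoryModule{Present: true, Serial: "ABC123"} // kept: identity known
	emptySlot := MemoryModule{Present: false}                    // filtered: not installed
	fmt.Println(isInstalledDIMM(unknownSize), isInstalledDIMM(emptySlot))
}
```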
## ADL-040 — HPE Redfish normalization prefers chassis `Devices/*` over generic PCIe topology labels

**Date:** 2026-03-30
**Context:** HPE ProLiant Gen11 Redfish snapshots expose parallel inventory trees. `Chassis/*/PCIeDevices/*`
is good for topology presence, but often reports only generic `DeviceType` values such as
`SingleFunction`. `Chassis/*/Devices/*` carries the concrete slot label, richer device type, and
product-vs-spare part identifiers for the same physical NIC/controller. Replay fallback over empty
storage volume collections can also discover `Volumes/Capabilities` children, which are not real
logical volumes.

**Decision:**
- Treat Redfish `SKU` as a valid fallback for `hardware.board.part_number` when `PartNumber` is empty.
- Ignore `Volumes/Capabilities` documents during logical-volume parsing.
- Enrich `Chassis/*/PCIeDevices/*` entries with matching `Chassis/*/Devices/*` documents by
  serial/name/part identity.
- Keep `pcie.device_class` semantic; do not replace it with model or part-number strings when
  Redfish exposes only generic topology labels.

**Consequences:**
- HPE Redfish imports now keep the server SKU in `hardware.board.part_number`.
- Empty volume collections no longer produce fake `Capabilities` volume records.
- HPE PCIe inventory gets better slot labels like `OCP 3.0 Slot 15` plus concrete classes such as
  `LOM/NIC` or `SAS/SATA Storage Controller`.
- `part_number` remains available separately for model identity, without polluting the class field.

---
## ADL-041 — Redfish replay drops topology-only PCIe noise classes from canonical inventory

**Date:** 2026-04-01
**Context:** Some Redfish BMCs, especially MSI/AMI GPU systems, expose a very wide PCIe topology
tree under `Chassis/*/PCIeDevices/*`. Besides real endpoint devices, the replay sees bridge stages,
CPU-side helper functions, IMC/mesh signal-processing nodes, USB/SPI side controllers, and GPU
display-function duplicates reported as generic `Display Device`. Keeping all of them in
`hardware.pcie_devices` pollutes downstream exports such as Reanimator and hides the actual
endpoint inventory signal.

**Decision:**
- Filter topology-only PCIe records during Redfish replay, not in the UI layer.
- Drop PCIe entries with replay-resolved classes:
  - `Bridge`
  - `Processor`
  - `SignalProcessingController`
  - `SerialBusController`
- Drop `DisplayController` entries when the source Redfish PCIe document is the generic MSI-style
  `Description: "Display Device"` duplicate.
- Drop PCIe network endpoints when their PCIe functions already link to `NetworkDeviceFunctions`,
  because those devices are represented canonically in `hardware.network_adapters`.
- When `Systems/*/NetworkInterfaces/*` links back to a chassis `NetworkAdapter`, match against the
  fully enriched chassis NIC identity to avoid creating a second ghost NIC row with the raw
  `NetworkAdapter_*` slot/name.
- Treat generic Redfish object names such as `NetworkAdapter_*` and `PCIeDevice_*` as placeholder
  models and replace them from PCI IDs when a concrete vendor/device match exists.
- Drop MSI-style storage service PCIe endpoints whose resolved device names are only
  `Volume Management Device NVMe RAID Controller` or `PCIe Switch management endpoint`; storage
  inventory already comes from the Redfish storage tree.
- Normalize Ethernet-class NICs into the single exported class `NetworkController`; do not split
  `EthernetController` into a separate top-level inventory section.
- Keep endpoint classes such as `NetworkController`, `MassStorageController`, and dedicated GPU
  inventory coming from `hardware.gpus`.

**Consequences:**
- `hardware.pcie_devices` becomes closer to real endpoint inventory instead of raw PCIe topology.
- Reanimator exports stop showing MSI bridge/processor/display duplicate noise.
- Reanimator exports no longer duplicate the same MSI NIC as both `PCIeDevice_*` and
  `NetworkAdapter_*`.
- Replay no longer creates extra NIC rows from `Systems/NetworkInterfaces` when the same adapter
  was already normalized from `Chassis/NetworkAdapters`.
- MSI VMD / PCIe switch storage service endpoints no longer pollute PCIe inventory.
- UI/Reanimator group all Ethernet NICs under the same `NETWORKCONTROLLER` section.
- Canonical NIC inventory prefers resolved PCI product names over generic Redfish placeholder names.
- The raw Redfish snapshot still remains available in `raw_payloads.redfish_tree` for low-level
  troubleshooting if topology details are ever needed.

---
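The class-based part of the ADL-041 filter can be sketched as a small predicate. The class and description strings come from this ADL; the function name and signature are illustrative, and the real replay filter also applies the link-based rules that this sketch omits:

```go
package main

import "fmt"

// topologyOnlyClasses are the replay-resolved classes ADL-041 drops
// unconditionally.
var topologyOnlyClasses = map[string]bool{
	"Bridge":                     true,
	"Processor":                  true,
	"SignalProcessingController": true,
	"SerialBusController":        true,
}

// dropPCIeEntry reports whether a replayed PCIe record should be
// removed from canonical inventory: topology-only classes always,
// and DisplayController only for the generic MSI-style
// "Display Device" duplicate.
func dropPCIeEntry(class, description string) bool {
	if topologyOnlyClasses[class] {
		return true
	}
	if class == "DisplayController" && description == "Display Device" {
		return true
	}
	return false
}

func main() {
	fmt.Println(dropPCIeEntry("Bridge", ""))                          // topology noise
	fmt.Println(dropPCIeEntry("DisplayController", "Display Device")) // GPU display duplicate
	fmt.Println(dropPCIeEntry("NetworkController", ""))               // real endpoint, kept
}
```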
## ADL-042 — xFusion file-export archives merge AppDump inventory with RTOS/Log snapshots

**Date:** 2026-04-04
**Context:** xFusion iBMC `tar.gz` exports expose the base inventory in `AppDump/`, but the most
useful NIC and firmware details live elsewhere: NIC firmware/MAC snapshots in
`LogDump/netcard/netcard_info.txt` and system firmware versions in
`RTOSDump/versioninfo/app_revision.txt`. Parsing only `AppDump/` left xFusion uploads detectable but
incomplete for UI and Reanimator consumers.

**Decision:**
- Treat xFusion file-export `tar.gz` bundles as a first-class archive parser input.
- Merge OCP NIC identity from `AppDump/card_manage/card_info` with the latest per-slot snapshot
  from `LogDump/netcard/netcard_info.txt` to produce `hardware.network_adapters`.
- Import system-level firmware from `RTOSDump/versioninfo/app_revision.txt` into
  `hardware.firmware`.
- Allow FRU fallback from `RTOSDump/versioninfo/fruinfo.txt` when `AppDump/FruData/fruinfo.txt`
  is absent.

**Consequences:**
- xFusion uploads now preserve NIC BDF, MAC, firmware, and serial identity in normalized output.
- System firmware such as BIOS and iBMC versions survives xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI
  cases.

---
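The per-slot NIC merge in ADL-042 amounts to joining two maps on the slot key: identity from `card_info`, firmware/MAC from the netcard snapshot. A minimal sketch with hypothetical struct and parameter names (the real parser works on parsed file contents, not pre-built maps):

```go
package main

import "fmt"

// NIC is the merged output shape; field names are illustrative.
type NIC struct {
	Slot     string
	Model    string
	MAC      string
	Firmware string
}

// netcardSnap holds the per-slot values taken from the latest
// netcard_info.txt snapshot.
type netcardSnap struct {
	MAC      string
	Firmware string
}

// mergeNICs keeps every card_info identity and enriches it with the
// matching netcard snapshot when one exists for the same slot.
func mergeNICs(cardInfo map[string]string, netcard map[string]netcardSnap) []NIC {
	var out []NIC
	for slot, model := range cardInfo {
		n := NIC{Slot: slot, Model: model}
		if snap, ok := netcard[slot]; ok {
			n.MAC, n.Firmware = snap.MAC, snap.Firmware
		}
		out = append(out, n)
	}
	return out
}

func main() {
	cards := map[string]string{"OCP1": "MZ731A"}
	snaps := map[string]netcardSnap{
		"OCP1": {MAC: "aa:bb:cc:dd:ee:ff", Firmware: "2.41"},
	}
	fmt.Println(mergeNICs(cards, snaps))
}
```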
## ADL-043 — Extended HGX diagnostic plan-B is opt-in from the live collect form

**Date:** 2026-04-13
**Context:** Some Supermicro HGX Redfish targets expose slow or hanging component-chassis inventory
collections during critical plan-B, especially under `Chassis/HGX_*` for `Assembly`,
`Accelerators`, `Drives`, `NetworkAdapters`, and `PCIeDevices`. Default collection should not
block operators on deep diagnostic retries that are useful mainly for troubleshooting.

**Decision:** Keep the normal snapshot/replay path unchanged, but gate those heavy HGX
component-chassis critical plan-B retries behind the existing live-collect `debug_payloads` flag,
presented in the UI as "Сбор расширенных данных для диагностики" (collect extended diagnostic data).

**Consequences:**
- Default live collection skips those heavy diagnostic plan-B retries and reaches replay faster.
- Operators can explicitly opt into the slower diagnostic path when they need deeper collection.
- The same user-facing toggle continues to enable extra debug payload capture for troubleshooting.

---
## ADL-044 — LOGPile project release tags use `vN.M`

**Date:** 2026-04-13
**Context:** The repository accumulated release tags in `vN.M.P` form, while the shared module
versioning contract in `bible/rules/patterns/module-versioning/contract.md` standardizes version
shape as `N.M`. Release tooling reads the git tag verbatim into build metadata and release
artifacts, so inconsistent tag shape leaks directly into packaged versions.

**Decision:** Use `vN.M` for LOGPile project release tags going forward. Do not create new
`vN.M.P` tags for repository releases. Build metadata, release directory names, and release notes
continue to inherit the exact git tag string from `git describe --tags`.

**Consequences:**
- Future project releases have a two-component version string such as `v1.12`.
- Release artifacts and `--version` output stay aligned with the tag shape without extra mapping.
- Existing historical `vN.M.P` tags remain as-is unless explicitly rewritten.

---
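The `vN.M` tag shape from ADL-044 is easy to enforce mechanically, for example in a release-script preflight. A minimal sketch; the validator function is hypothetical, not part of `scripts/release.sh`:

```go
package main

import (
	"fmt"
	"regexp"
)

// releaseTagRe encodes the vN.M shape: a leading "v", exactly two
// numeric components, and no patch part.
var releaseTagRe = regexp.MustCompile(`^v\d+\.\d+$`)

// isValidReleaseTag reports whether a tag matches the project
// release tag contract.
func isValidReleaseTag(tag string) bool {
	return releaseTagRe.MatchString(tag)
}

func main() {
	fmt.Println(isValidReleaseTag("v1.12"))   // allowed shape
	fmt.Println(isValidReleaseTag("v1.12.3")) // forbidden vN.M.P shape
}
```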
## ADL-045 — Generic live IPMI collector is deferred; Redfish remains the only production live path

**Date:** 2026-04-22
**Context:** Sprint issue `#12` proposed a generic IPMI collector for SEL/FRU/sensors. By this
point LOGPile already has a production Redfish pipeline with replayable raw snapshots,
profile-driven acquisition, and normalized event/sensor/inventory extraction. Redfish also already
covers the current product goals better than IPMI for live collection: richer inventory, structured
resource relationships, and vendor log access via `LogServices`, including SEL-style logs on many
implementations.

**Decision:** Do not build a generic live IPMI collector now. Keep `ipmi_mock.go` only as a
protocol placeholder in the registry and UI/API contract. Treat Redfish as the only production
live collection path. Revisit IPMI only if real field evidence shows that a specific target class
cannot provide required data over Redfish. If revisited, prefer a narrow fallback scope such as
`IPMI SEL fallback`, `IPMI FRU fallback`, or `IPMI sensor fallback` rather than a second full
collector architecture.

**Consequences:**
- Issue `#12` is closed as deferred/not planned, not as implemented.
- Live collection architecture stays centered on replayable `raw_payloads.redfish_tree`.
- The codebase avoids introducing a second generic live-ingest/replay contract for IPMI data.
- Future IPMI work must be justified by concrete Redfish gaps on real hardware, not by protocol
  symmetry alone.

---
## ADL-046 — The web shell delegates report rendering to `internal/chart`

**Date:** 2026-04-22
**Context:** The frontend had two competing report paths: the embedded `internal/chart` viewer and
an older client-side renderer in `web/static/js/app.js` for config, firmware, sensors, serials,
events, and parse errors. That duplication left dead controls in the shell and made the report
source of truth ambiguous.

**Decision:** The `web/` frontend shell is responsible only for data intake, job control, and
top-level actions. The report itself must be rendered exclusively through `internal/chart`.
Do not keep parallel report sections, filters, or table renderers in shell JavaScript.

**Consequences:**
- The browser UI has a single report rendering path: `/chart/current` inside the embedded viewer.
- Report-level filtering or extra report sections must be implemented in `internal/chart`, not in
  `web/static/js/app.js`.
- Removing legacy DOM renderers from the shell is a correctness fix, not a behavior regression.
@@ -8,6 +8,7 @@ import (
	"os"
	"os/exec"
	"runtime"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/parser"

@@ -38,10 +39,11 @@ func main() {
	server.WebFS = web.FS

	cfg := server.Config{
		Port:         *port,
		PreloadFile:  *file,
		AppVersion:   version,
		AppCommit:    commit,
		ChartVersion: detectChartVersion(),
	}

	srv := server.New(cfg)

@@ -92,6 +94,15 @@ func openBrowser(url string) {
	}
}

func detectChartVersion() string {
	cmd := exec.Command("git", "-C", "internal/chart", "describe", "--tags", "--always", "--dirty", "--abbrev=7")
	out, err := cmd.Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func maybeWaitForCrashInput(enabled bool) {
	if !enabled || !isInteractiveConsole() {
		return
Submodule internal/chart updated: c025ae0477...2a15bc87f1
File diff suppressed because it is too large
@@ -50,11 +50,15 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
 	}

 	for _, systemPath := range systemPaths {
-		collectFrom(joinPath(systemPath, "/LogServices"), isHardwareLogService)
+		for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, systemPath, "LogServices") {
+			collectFrom(logServicesPath, isHardwareLogService)
+		}
 	}
 	// Managers hold the IPMI SEL on AMI/MSI BMCs — include only the "SEL" service.
 	for _, managerPath := range managerPaths {
-		collectFrom(joinPath(managerPath, "/LogServices"), isManagerSELService)
+		for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, managerPath, "LogServices") {
+			collectFrom(logServicesPath, isManagerSELService)
+		}
 	}

 	if len(out) > 0 {
@@ -63,6 +67,42 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
 	return out
 }

+func (c *RedfishConnector) redfishLinkedCollectionPaths(
+	ctx context.Context,
+	client *http.Client,
+	req Request,
+	baseURL, resourcePath, linkKey string,
+) []string {
+	resourcePath = normalizeRedfishPath(resourcePath)
+	if resourcePath == "" || strings.TrimSpace(linkKey) == "" {
+		return nil
+	}
+
+	seen := make(map[string]struct{}, 2)
+	var out []string
+	add := func(path string) {
+		path = normalizeRedfishPath(path)
+		if path == "" {
+			return
+		}
+		if _, ok := seen[path]; ok {
+			return
+		}
+		seen[path] = struct{}{}
+		out = append(out, path)
+	}
+
+	add(joinPath(resourcePath, "/"+strings.TrimSpace(linkKey)))
+
+	resourceDoc, err := c.getJSON(ctx, client, req, baseURL, resourcePath)
+	if err == nil {
+		if linked := redfishLinkedPath(resourceDoc, linkKey); linked != "" {
+			add(linked)
+		}
+	}
+	return out
+}
+
 // fetchRedfishLogEntriesWithPaging fetches entries from a LogEntry collection,
 // following nextLink pages. Stops early when entries older than cutoff are encountered
 // (assumes BMC returns entries newest-first, which is typical).
@@ -182,7 +222,7 @@ func redfishLogServiceEntriesPath(svc map[string]interface{}) string {
 // Audit, authentication, and session events are excluded.
 func isHardwareLogEntry(entry map[string]interface{}) bool {
 	entryType := strings.TrimSpace(asString(entry["EntryType"]))
-	if strings.EqualFold(entryType, "Oem") {
+	if strings.EqualFold(entryType, "Oem") && !strings.EqualFold(strings.TrimSpace(asString(entry["OemRecordFormat"])), "Lenovo") {
 		return false
 	}

@@ -362,6 +402,9 @@ func parseIPMIDumpKV(message string) map[string]string {
 // AMI/MSI BMCs often set Severity="OK" on all SEL records regardless of content,
 // so we fall back to inferring severity from SensorType when the explicit field is unhelpful.
 func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
+	if redfishLogEntryLooksLikeWarning(entry) {
+		return models.SeverityWarning
+	}
 	// Newer Redfish uses MessageSeverity; older uses Severity.
 	raw := strings.ToLower(firstNonEmpty(
 		strings.TrimSpace(asString(entry["MessageSeverity"])),
@@ -380,6 +423,16 @@ func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
 	}
 }

+func redfishLogEntryLooksLikeWarning(entry map[string]interface{}) bool {
+	joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
+		asString(entry["Message"]),
+		asString(entry["Name"]),
+		asString(entry["SensorType"]),
+		asString(entry["EntryCode"]),
+	}, " ")))
+	return strings.Contains(joined, "unqualified dimm")
+}
+
 // redfishSeverityFromSensorType infers event severity from the IPMI/Redfish SensorType string.
 func redfishSeverityFromSensorType(sensorType string) models.Severity {
 	switch strings.ToLower(sensorType) {
internal/collector/redfish_logentries_test.go (new file, 125 lines)
@@ -0,0 +1,125 @@
package collector

import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestCollectRedfishLogEntries_UsesLinkedManagerLogServicesPath(t *testing.T) {
	mux := http.NewServeMux()
	register := func(path string, payload interface{}) {
		mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
			w.Header().Set("Content-Type", "application/json")
			_ = json.NewEncoder(w).Encode(payload)
		})
	}

	register("/redfish/v1/Managers/1", map[string]interface{}{
		"Id": "1",
		"LogServices": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1/LogServices",
		},
	})
	register("/redfish/v1/Systems/1/LogServices", map[string]interface{}{
		"Members": []map[string]string{
			{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL"},
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL", map[string]interface{}{
		"Id": "SEL",
		"Entries": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries",
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL/Entries", map[string]interface{}{
		"Members": []map[string]string{
			{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries/1"},
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL/Entries/1", map[string]interface{}{
		"Id":              "1",
		"Created":         time.Now().UTC().Format(time.RFC3339),
		"Message":         "System found Unqualified DIMM in slot DIMM A1",
		"MessageSeverity": "OK",
		"SensorType":      "Memory",
		"EntryType":       "Event",
	})

	ts := httptest.NewServer(mux)
	defer ts.Close()

	c := NewRedfishConnector()
	got := c.collectRedfishLogEntries(context.Background(), ts.Client(), Request{
		Host:     ts.URL,
		Port:     443,
		Protocol: "redfish",
		Username: "admin",
		AuthType: "password",
		Password: "secret",
		TLSMode:  "strict",
	}, ts.URL, nil, []string{"/redfish/v1/Managers/1"})

	if len(got) != 1 {
		t.Fatalf("expected 1 collected log entry, got %d", len(got))
	}
	if got[0]["Message"] != "System found Unqualified DIMM in slot DIMM A1" {
		t.Fatalf("unexpected collected message: %#v", got[0]["Message"])
	}
}

func TestParseRedfishLogEntries_UnqualifiedDIMMBecomesWarning(t *testing.T) {
	rawPayloads := map[string]any{
		"redfish_log_entries": []any{
			map[string]any{
				"Id":              "sel-1",
				"Created":         "2026-04-13T12:00:00Z",
				"Message":         "System found Unqualified DIMM in slot DIMM A1",
				"MessageSeverity": "OK",
				"SensorType":      "Memory",
				"EntryType":       "Event",
			},
		},
	}

	events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
	if len(events) != 1 {
		t.Fatalf("expected 1 event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
	if events[0].Description != "System found Unqualified DIMM in slot DIMM A1" {
		t.Fatalf("unexpected description: %q", events[0].Description)
	}
}

func TestParseRedfishLogEntries_LenovoOEMEntryIsKept(t *testing.T) {
	rawPayloads := map[string]any{
		"redfish_log_entries": []any{
			map[string]any{
				"Id":              "plat-55",
				"Created":         "2026-04-13T12:00:00Z",
				"Message":         "DIMM A1 is unqualified",
				"MessageSeverity": "Warning",
				"SensorType":      "Memory",
				"EntryType":       "Oem",
				"OemRecordFormat": "Lenovo",
				"EntryCode":       "Assert",
			},
		},
	}

	events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
	if len(events) != 1 {
		t.Fatalf("expected 1 Lenovo OEM event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
}
internal/collector/redfish_planb_test.go (new file, 57 lines)
@@ -0,0 +1,57 @@
package collector

import "testing"

func TestShouldIncludeCriticalPlanBPath(t *testing.T) {
	tests := []struct {
		name string
		req  Request
		path string
		want bool
	}{
		{
			name: "skip hgx erot pcie without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
			want: false,
		},
		{
			name: "skip hgx chassis assembly without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/HGX_Chassis_0/Assembly",
			want: false,
		},
		{
			name: "keep standard chassis inventory without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/1/PCIeDevices",
			want: true,
		},
		{
			name: "keep nvme storage backplane drives without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/NVMeSSD.0.Group.0.StorageBackplane/Drives",
			want: true,
		},
		{
			name: "keep system processors without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Systems/HGX_Baseboard_0/Processors",
			want: true,
		},
		{
			name: "include hgx erot pcie when extended diagnostics enabled",
			req:  Request{DebugPayloads: true},
			path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
			want: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := shouldIncludeCriticalPlanBPath(tt.req, tt.path); got != tt.want {
				t.Fatalf("shouldIncludeCriticalPlanBPath(%q) = %v, want %v", tt.path, got, tt.want)
			}
		})
	}
}
@@ -31,8 +31,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
 	if emit != nil {
 		emit(Progress{Status: "running", Progress: 10, Message: "Redfish snapshot: replay service root..."})
 	}
-	serviceRootDoc, err := r.getJSON("/redfish/v1")
-	if err != nil {
+	if _, err := r.getJSON("/redfish/v1"); err != nil {
 		log.Printf("redfish replay: service root /redfish/v1 missing from snapshot, continuing with defaults: %v", err)
 	}

@@ -61,8 +60,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
 		fruDoc = chassisFRUDoc
 	}
 	boardFallbackDocs := r.collectBoardFallbackDocs(systemPaths, chassisPaths)
-	resourceHints := append(append([]string{}, systemPaths...), append(chassisPaths, managerPaths...)...)
-	profileSignals := redfishprofile.CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc, resourceHints)
+	profileSignals := redfishprofile.CollectSignalsFromTree(tree)
 	profileMatch := redfishprofile.MatchProfiles(profileSignals)
 	analysisPlan := redfishprofile.ResolveAnalysisPlan(profileMatch, tree, redfishprofile.DiscoveredResources{
 		SystemPaths: systemPaths,
@@ -98,20 +96,27 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
 	networkProtocolDoc, _ := r.getJSON(joinPath(primaryManager, "/NetworkProtocol"))
 	firmware := parseFirmware(systemDoc, biosDoc, managerDoc, networkProtocolDoc)
 	firmware = dedupeFirmwareInfo(append(firmware, r.collectFirmwareInventory()...))
-	boardInfo.BMCMACAddress = r.collectBMCMAC(managerPaths)
+	firmware = filterStorageDriveFirmware(firmware, storageDevices)
+	bmcManagementSummary := r.collectBMCManagementSummary(managerPaths)
+	boardInfo.BMCMACAddress = strings.TrimSpace(firstNonEmpty(
+		asString(bmcManagementSummary["mac_address"]),
+		r.collectBMCMAC(managerPaths),
+	))
 	assemblyFRU := r.collectAssemblyFRU(chassisPaths)
 	collectedAt, sourceTimezone := inferRedfishCollectionTime(managerDoc, rawPayloads)
 	inventoryLastModifiedAt := inferInventoryLastModifiedTime(r.tree)
 	logEntryEvents := parseRedfishLogEntries(rawPayloads, collectedAt)
+	sensorHintSummary, sensorHintEvents := r.collectSensorsListHints(chassisPaths, collectedAt)
+	bmcManagementEvent := buildBMCManagementSummaryEvent(bmcManagementSummary, collectedAt)

 	result := &models.AnalysisResult{
 		CollectedAt:             collectedAt,
 		InventoryLastModifiedAt: inventoryLastModifiedAt,
-		SourceTimezone:          sourceTimezone,
-		Events:                  append(append(append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+len(driveFetchWarningEvents)+len(logEntryEvents)+1), healthEvents...), discreteEvents...), driveFetchWarningEvents...), logEntryEvents...),
-		FRU:                     assemblyFRU,
-		Sensors:                 dedupeSensorReadings(append(append(thresholdSensors, thermalSensors...), powerSensors...)),
-		RawPayloads:             cloneRawPayloads(rawPayloads),
+		SourceTimezone:          sourceTimezone,
+		Events:                  append(append(append(append(append(append(make([]models.Event, 0, len(discreteEvents)+len(healthEvents)+len(driveFetchWarningEvents)+len(logEntryEvents)+len(sensorHintEvents)+2), healthEvents...), discreteEvents...), driveFetchWarningEvents...), logEntryEvents...), sensorHintEvents...), bmcManagementEvent...),
+		FRU:                     assemblyFRU,
+		Sensors:                 dedupeSensorReadings(append(append(thresholdSensors, thermalSensors...), powerSensors...)),
+		RawPayloads:             cloneRawPayloads(rawPayloads),
 		Hardware: &models.HardwareConfig{
 			BoardInfo: boardInfo,
 			CPUs:      processors,
@@ -123,7 +128,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
 			PowerSupply:     psus,
 			NetworkAdapters: nics,
 			Firmware:        firmware,
 		},
-		},
+	}
 	match := profileMatch
 	for _, profile := range match.Profiles {
@@ -157,6 +162,12 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
 	if strings.TrimSpace(sourceTimezone) != "" {
 		result.RawPayloads["source_timezone"] = sourceTimezone
 	}
+	if len(sensorHintSummary) > 0 {
+		result.RawPayloads["redfish_sensor_hints"] = sensorHintSummary
+	}
+	if len(bmcManagementSummary) > 0 {
+		result.RawPayloads["redfish_bmc_network_summary"] = bmcManagementSummary
+	}
 	appendMissingServerModelWarning(result, systemDoc, joinPath(primarySystem, "/Oem/Public/FRU"), joinPath(primaryChassis, "/Oem/Public/FRU"))
 	return result, nil
 }
@@ -277,7 +288,6 @@ func redfishFetchErrorsFromRawPayloads(rawPayloads map[string]any) map[string]st
 	}
 }

-
 func buildDriveFetchWarningEvents(rawPayloads map[string]any) []models.Event {
 	errs := redfishFetchErrorsFromRawPayloads(rawPayloads)
 	if len(errs) == 0 {
@@ -327,6 +337,153 @@ func buildDriveFetchWarningEvents(rawPayloads map[string]any) []models.Event {
 	}
 }

+func (r redfishSnapshotReader) collectSensorsListHints(chassisPaths []string, collectedAt time.Time) (map[string]any, []models.Event) {
+	summary := make(map[string]any)
+	var events []models.Event
+	var presentDIMMs []string
+	dimmTotal := 0
+	dimmPresent := 0
+	physicalDriveSlots := 0
+	activePhysicalDriveSlots := 0
+	logicalDriveStatus := ""
+
+	for _, chassisPath := range chassisPaths {
+		doc, err := r.getJSON(joinPath(chassisPath, "/SensorsList"))
+		if err != nil || len(doc) == 0 {
+			continue
+		}
+		sensors, ok := doc["SensorsList"].([]interface{})
+		if !ok {
+			continue
+		}
+		for _, item := range sensors {
+			sensor, ok := item.(map[string]interface{})
+			if !ok {
+				continue
+			}
+			name := strings.TrimSpace(asString(sensor["SensorName"]))
+			sensorType := strings.TrimSpace(asString(sensor["SensorType"]))
+			status := strings.TrimSpace(asString(sensor["Status"]))
+			switch {
+			case strings.HasPrefix(name, "DIMM") && strings.HasSuffix(name, "_Status") && strings.EqualFold(sensorType, "Memory"):
+				dimmTotal++
+				if redfishSlotStatusLooksPresent(status) {
+					dimmPresent++
+					presentDIMMs = append(presentDIMMs, strings.TrimSuffix(name, "_Status"))
+				}
+			case strings.EqualFold(sensorType, "Drive Slot"):
+				if strings.EqualFold(name, "Logical_Drive") {
+					logicalDriveStatus = firstNonEmpty(logicalDriveStatus, status)
+					continue
+				}
+				physicalDriveSlots++
+				if redfishSlotStatusLooksPresent(status) {
+					activePhysicalDriveSlots++
+				}
+			}
+		}
+	}
+
+	if dimmTotal > 0 {
+		sort.Strings(presentDIMMs)
+		summary["memory_slots"] = map[string]any{
+			"total":         dimmTotal,
+			"present_count": dimmPresent,
+			"present_slots": presentDIMMs,
+			"source":        "SensorsList",
+		}
+		events = append(events, models.Event{
+			Timestamp:   replayEventTimestamp(collectedAt),
+			Source:      "Redfish",
+			EventType:   "Collection Info",
+			Severity:    models.SeverityInfo,
+			Description: fmt.Sprintf("Memory slot sensors report %d populated positions out of %d", dimmPresent, dimmTotal),
+			RawData:     firstNonEmpty(strings.Join(presentDIMMs, ", "), "no populated DIMM slots reported"),
+		})
+	}
+	if physicalDriveSlots > 0 || logicalDriveStatus != "" {
+		summary["drive_slots"] = map[string]any{
+			"physical_total":        physicalDriveSlots,
+			"physical_active_count": activePhysicalDriveSlots,
+			"logical_drive_status":  logicalDriveStatus,
+			"source":                "SensorsList",
+		}
+		rawParts := []string{
+			fmt.Sprintf("physical_active=%d/%d", activePhysicalDriveSlots, physicalDriveSlots),
+		}
+		if logicalDriveStatus != "" {
+			rawParts = append(rawParts, "logical_drive="+logicalDriveStatus)
+		}
+		events = append(events, models.Event{
+			Timestamp:   replayEventTimestamp(collectedAt),
+			Source:      "Redfish",
+			EventType:   "Collection Info",
+			Severity:    models.SeverityInfo,
+			Description: fmt.Sprintf("Drive slot sensors report %d active physical slots out of %d", activePhysicalDriveSlots, physicalDriveSlots),
+			RawData:     strings.Join(rawParts, "; "),
+		})
+	}
+
+	return summary, events
+}
+
+func buildBMCManagementSummaryEvent(summary map[string]any, collectedAt time.Time) []models.Event {
+	if len(summary) == 0 {
+		return nil
+	}
+	desc := fmt.Sprintf(
+		"BMC management interface %s link=%s ip=%s",
+		firstNonEmpty(asString(summary["interface_id"]), "unknown"),
+		firstNonEmpty(asString(summary["link_status"]), "unknown"),
+		firstNonEmpty(asString(summary["ipv4_address"]), "n/a"),
+	)
+	rawParts := make([]string, 0, 8)
+	for _, part := range []string{
+		"mac_address=" + strings.TrimSpace(asString(summary["mac_address"])),
+		"speed_mbps=" + strings.TrimSpace(asString(summary["speed_mbps"])),
+		"lldp_chassis_name=" + strings.TrimSpace(asString(summary["lldp_chassis_name"])),
+		"lldp_port_desc=" + strings.TrimSpace(asString(summary["lldp_port_desc"])),
+		"lldp_port_id=" + strings.TrimSpace(asString(summary["lldp_port_id"])),
+		"ipv4_gateway=" + strings.TrimSpace(asString(summary["ipv4_gateway"])),
+	} {
+		if !strings.HasSuffix(part, "=") {
+			rawParts = append(rawParts, part)
+		}
+	}
+	if vlan := asInt(summary["lldp_vlan_id"]); vlan > 0 {
+		rawParts = append(rawParts, fmt.Sprintf("lldp_vlan_id=%d", vlan))
+	}
+	if asBool(summary["ncsi_enabled"]) {
+		rawParts = append(rawParts, "ncsi_enabled=true")
+	}
+	return []models.Event{
+		{
+			Timestamp:   replayEventTimestamp(collectedAt),
+			Source:      "Redfish",
+			EventType:   "Collection Info",
+			Severity:    models.SeverityInfo,
+			Description: desc,
+			RawData:     strings.Join(rawParts, "; "),
+		},
+	}
+}
+
+func redfishSlotStatusLooksPresent(status string) bool {
+	switch strings.ToLower(strings.TrimSpace(status)) {
+	case "ok", "enabled", "present", "warning", "critical":
+		return true
+	default:
+		return false
+	}
+}
+
+func replayEventTimestamp(collectedAt time.Time) time.Time {
+	if !collectedAt.IsZero() {
+		return collectedAt
+	}
+	return time.Now()
+}
+
 func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo {
 	docs, err := r.getCollectionMembers("/redfish/v1/UpdateService/FirmwareInventory")
 	if err != nil || len(docs) == 0 {
@@ -342,6 +499,10 @@ func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo
 		if strings.TrimSpace(version) == "" {
 			continue
 		}
+		// Skip placeholder version strings that carry no useful information.
+		if strings.EqualFold(strings.TrimSpace(version), "N/A") {
+			continue
+		}
 		name := firmwareInventoryDeviceName(doc)
 		name = strings.TrimSpace(name)
 		if name == "" {
@@ -394,6 +555,32 @@ func dedupeFirmwareInfo(items []models.FirmwareInfo) []models.FirmwareInfo {
 	return out
 }

+// filterStorageDriveFirmware removes from fw any entries whose DeviceName+Version
+// already appear as a storage drive's Model+Firmware. Drive firmware is already
+// represented in the Storage section and should not be duplicated in the general
+// firmware list.
+func filterStorageDriveFirmware(fw []models.FirmwareInfo, storage []models.Storage) []models.FirmwareInfo {
+	if len(storage) == 0 {
+		return fw
+	}
+	driveFW := make(map[string]struct{}, len(storage))
+	for _, d := range storage {
+		model := strings.ToLower(strings.TrimSpace(d.Model))
+		rev := strings.ToLower(strings.TrimSpace(d.Firmware))
+		if model != "" && rev != "" {
+			driveFW[model+"|"+rev] = struct{}{}
+		}
+	}
+	out := fw[:0:0]
+	for _, f := range fw {
+		key := strings.ToLower(strings.TrimSpace(f.DeviceName)) + "|" + strings.ToLower(strings.TrimSpace(f.Version))
+		if _, skip := driveFW[key]; !skip {
+			out = append(out, f)
+		}
+	}
+	return out
+}
+
 func (r redfishSnapshotReader) collectThresholdSensors(chassisPaths []string) []models.SensorReading {
 	out := make([]models.SensorReading, 0)
 	seen := make(map[string]struct{})
@@ -859,6 +1046,9 @@ func (r redfishSnapshotReader) fallbackCollectionMembers(collectionPath string,
 		if err != nil {
 			continue
 		}
+		if redfishFallbackMemberLooksLikePlaceholder(collectionPath, doc) {
+			continue
+		}
 		if strings.TrimSpace(asString(doc["@odata.id"])) == "" {
 			doc["@odata.id"] = normalizeRedfishPath(p)
 		}
@@ -867,6 +1057,135 @@ func (r redfishSnapshotReader) fallbackCollectionMembers(collectionPath string,
 	return out, nil
 }

+func redfishFallbackMemberLooksLikePlaceholder(collectionPath string, doc map[string]interface{}) bool {
+	if len(doc) == 0 {
+		return true
+	}
+	path := normalizeRedfishPath(collectionPath)
+	switch {
+	case strings.HasSuffix(path, "/NetworkAdapters"):
+		return redfishNetworkAdapterPlaceholderDoc(doc)
+	case strings.HasSuffix(path, "/PCIeDevices"):
+		return redfishPCIePlaceholderDoc(doc)
+	case strings.Contains(path, "/Storage"):
+		return redfishStoragePlaceholderDoc(doc)
+	default:
+		return false
+	}
+}
+
+func redfishNetworkAdapterPlaceholderDoc(doc map[string]interface{}) bool {
+	if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
+		asHexOrInt(doc["VendorId"]) != 0 ||
+		asHexOrInt(doc["DeviceId"]) != 0 {
+		return false
+	}
+	return redfishDocHasOnlyAllowedKeys(doc,
+		"@odata.context",
+		"@odata.id",
+		"@odata.type",
+		"Id",
+		"Name",
+	)
+}
+
+func redfishPCIePlaceholderDoc(doc map[string]interface{}) bool {
+	if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
+		asHexOrInt(doc["VendorId"]) != 0 ||
+		asHexOrInt(doc["DeviceId"]) != 0 {
+		return false
+	}
+	return redfishDocHasOnlyAllowedKeys(doc,
+		"@odata.context",
+		"@odata.id",
+		"@odata.type",
+		"Id",
+		"Name",
+	)
+}
+
+func redfishStoragePlaceholderDoc(doc map[string]interface{}) bool {
+	if normalizeRedfishIdentityField(asString(doc["Model"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["Manufacturer"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["SerialNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["PartNumber"])) != "" ||
+		normalizeRedfishIdentityField(asString(doc["BDF"])) != "" ||
+		asHexOrInt(doc["VendorId"]) != 0 ||
+		asHexOrInt(doc["DeviceId"]) != 0 {
+		return false
+	}
+	if !redfishDocHasOnlyAllowedKeys(doc,
+		"@odata.id",
+		"@odata.type",
+		"Drives",
+		"Drives@odata.count",
+		"LogicalDisk",
+		"PhysicalDisk",
+		"Name",
+	) {
+		return false
+	}
+	return redfishFieldIsEmptyCollection(doc["Drives"]) &&
+		redfishFieldIsZeroLike(doc["Drives@odata.count"]) &&
+		redfishFieldIsEmptyCollection(doc["LogicalDisk"]) &&
+		redfishFieldIsEmptyCollection(doc["PhysicalDisk"])
+}
+
+func redfishDocHasOnlyAllowedKeys(doc map[string]interface{}, allowed ...string) bool {
+	if len(doc) == 0 {
+		return false
+	}
+	allowedSet := make(map[string]struct{}, len(allowed))
+	for _, key := range allowed {
+		allowedSet[key] = struct{}{}
+	}
+	for key := range doc {
+		if _, ok := allowedSet[key]; !ok {
+			return false
+		}
+	}
+	return true
+}
+
+func redfishFieldIsEmptyCollection(v any) bool {
+	switch x := v.(type) {
+	case nil:
+		return true
+	case []interface{}:
+		return len(x) == 0
+	default:
+		return false
+	}
+}
+
+func redfishFieldIsZeroLike(v any) bool {
+	switch x := v.(type) {
+	case nil:
+		return true
+	case int:
+		return x == 0
+	case int32:
+		return x == 0
+	case int64:
+		return x == 0
+	case float64:
+		return x == 0
+	case string:
+		x = strings.TrimSpace(x)
+		return x == "" || x == "0"
+	default:
+		return false
+	}
+}
+
 func cloneRawPayloads(src map[string]any) map[string]any {
 	if len(src) == 0 {
 		return nil
@@ -925,6 +1244,15 @@ func (r redfishSnapshotReader) getLinkedPCIeFunctions(doc map[string]interface{}
 			}
 			return out
 		}
+		if ref, ok := links["PCIeFunction"].(map[string]interface{}); ok {
+			memberPath := asString(ref["@odata.id"])
+			if memberPath != "" {
+				memberDoc, err := r.getJSON(memberPath)
+				if err == nil {
+					return []map[string]interface{}{memberDoc}
+				}
+			}
+		}
 	}
 	if pcieFunctions, ok := doc["PCIeFunctions"].(map[string]interface{}); ok {
 		if collectionPath := asString(pcieFunctions["@odata.id"]); collectionPath != "" {
@@ -937,6 +1265,33 @@ func (r redfishSnapshotReader) getLinkedPCIeFunctions(doc map[string]interface{}
 	return nil
 }

+func dedupeJSONDocsByPath(docs []map[string]interface{}) []map[string]interface{} {
+	if len(docs) == 0 {
+		return nil
+	}
+	seen := make(map[string]struct{}, len(docs))
+	out := make([]map[string]interface{}, 0, len(docs))
+	for _, doc := range docs {
+		if len(doc) == 0 {
+			continue
+		}
+		key := normalizeRedfishPath(asString(doc["@odata.id"]))
+		if key == "" {
+			payload, err := json.Marshal(doc)
+			if err != nil {
+				continue
+			}
+			key = string(payload)
+		}
+		if _, ok := seen[key]; ok {
+			continue
+		}
+		seen[key] = struct{}{}
+		out = append(out, doc)
+	}
+	return out
+}
+
 func (r redfishSnapshotReader) getLinkedSupplementalDocs(doc map[string]interface{}, keys ...string) []map[string]interface{} {
 	if len(doc) == 0 || len(keys) == 0 {
 		return nil
@@ -973,6 +1328,12 @@ func (r redfishSnapshotReader) collectProcessors(systemPath string) []models.CPU
 			!strings.EqualFold(pt, "CPU") && !strings.EqualFold(pt, "General") {
 			continue
 		}
+		// Skip absent processor sockets — empty slots with no CPU installed.
+		if status, ok := doc["Status"].(map[string]interface{}); ok {
+			if strings.EqualFold(asString(status["State"]), "Absent") {
+				continue
+			}
+		}
 		cpu := parseCPUs([]map[string]interface{}{doc})[0]
 		if cpu.Socket == 0 && socketIdx > 0 && strings.TrimSpace(asString(doc["Socket"])) == "" {
 			cpu.Socket = socketIdx
@@ -999,6 +1360,10 @@ func (r redfishSnapshotReader) collectMemory(systemPath string) []models.MemoryD
 	out := make([]models.MemoryDIMM, 0, len(memberDocs))
 	for _, doc := range memberDocs {
 		dimm := parseMemory([]map[string]interface{}{doc})[0]
+		// Skip empty DIMM slots — no installed memory.
+		if !dimm.Present {
+			continue
+		}
 		supplementalDocs := r.getLinkedSupplementalDocs(doc, "MemoryMetrics", "EnvironmentMetrics", "Metrics")
 		if len(supplementalDocs) > 0 {
 			dimm.Details = mergeGenericDetails(dimm.Details, redfishMemoryDetailsAcrossDocs(doc, supplementalDocs...))
@@ -31,7 +31,7 @@ func (r redfishSnapshotReader) enrichNICsFromNetworkInterfaces(nics *[]models.Ne
 		// the real NIC that came from Chassis/NetworkAdapters (e.g. "RISER 5
 		// slot 1 (7)"). Try to find the real NIC via the Links.NetworkAdapter
 		// cross-reference before creating a ghost entry.
-		if linkedIdx := r.findNICIndexByLinkedNetworkAdapter(iface, bySlot); linkedIdx >= 0 {
+		if linkedIdx := r.findNICIndexByLinkedNetworkAdapter(iface, *nics, bySlot); linkedIdx >= 0 {
 			idx = linkedIdx
 			ok = true
 		}
@@ -75,28 +75,53 @@ func (r redfishSnapshotReader) collectNICs(chassisPaths []string) []models.Netwo
            continue
        }
        for _, doc := range adapterDocs {
            nic := parseNIC(doc)
            for _, pciePath := range networkAdapterPCIeDevicePaths(doc) {
                pcieDoc, err := r.getJSON(pciePath)
                if err != nil {
                    continue
                }
                functionDocs := r.getLinkedPCIeFunctions(pcieDoc)
                supplementalDocs := r.getLinkedSupplementalDocs(pcieDoc, "EnvironmentMetrics", "Metrics")
                for _, fn := range functionDocs {
                    supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
                }
                enrichNICFromPCIe(&nic, pcieDoc, functionDocs, supplementalDocs)
            }
            if len(nic.MACAddresses) == 0 {
                r.enrichNICMACsFromNetworkDeviceFunctions(&nic, doc)
            }
            nics = append(nics, nic)
            nics = append(nics, r.buildNICFromAdapterDoc(doc))
        }
    }
    return dedupeNetworkAdapters(nics)
}

func (r redfishSnapshotReader) buildNICFromAdapterDoc(adapterDoc map[string]interface{}) models.NetworkAdapter {
    nic := parseNIC(adapterDoc)
    adapterFunctionDocs := r.getNetworkAdapterFunctionDocs(adapterDoc)
    for _, pciePath := range networkAdapterPCIeDevicePaths(adapterDoc) {
        pcieDoc, err := r.getJSON(pciePath)
        if err != nil {
            continue
        }
        functionDocs := r.getLinkedPCIeFunctions(pcieDoc)
        for _, adapterFnDoc := range adapterFunctionDocs {
            functionDocs = append(functionDocs, r.getLinkedPCIeFunctions(adapterFnDoc)...)
        }
        functionDocs = dedupeJSONDocsByPath(functionDocs)
        supplementalDocs := r.getLinkedSupplementalDocs(pcieDoc, "EnvironmentMetrics", "Metrics")
        for _, fn := range functionDocs {
            supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
        }
        enrichNICFromPCIe(&nic, pcieDoc, functionDocs, supplementalDocs)
    }
    if len(nic.MACAddresses) == 0 {
        r.enrichNICMACsFromNetworkDeviceFunctions(&nic, adapterDoc)
    }
    return nic
}

func (r redfishSnapshotReader) getNetworkAdapterFunctionDocs(adapterDoc map[string]interface{}) []map[string]interface{} {
    ndfCol, ok := adapterDoc["NetworkDeviceFunctions"].(map[string]interface{})
    if !ok {
        return nil
    }
    colPath := asString(ndfCol["@odata.id"])
    if colPath == "" {
        return nil
    }
    funcDocs, err := r.getCollectionMembers(colPath)
    if err != nil {
        return nil
    }
    return funcDocs
}

func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []string) []models.PCIeDevice {
    collections := make([]string, 0, len(systemPaths)+len(chassisPaths))
    for _, systemPath := range systemPaths {
@@ -116,13 +141,16 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
        if looksLikeGPU(doc, functionDocs) {
            continue
        }
        if replayPCIeDeviceBackedByCanonicalNIC(doc, functionDocs) {
            continue
        }
        supplementalDocs := r.getLinkedSupplementalDocs(doc, "EnvironmentMetrics", "Metrics")
        supplementalDocs = append(supplementalDocs, r.getChassisScopedPCIeSupplementalDocs(doc)...)
        for _, fn := range functionDocs {
            supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
        }
        dev := parsePCIeDeviceWithSupplementalDocs(doc, functionDocs, supplementalDocs)
        if isUnidentifiablePCIeDevice(dev) {
        if shouldSkipReplayPCIeDevice(doc, dev) {
            continue
        }
        out = append(out, dev)
@@ -136,41 +164,185 @@ func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []st
        for idx, fn := range functionDocs {
            supplementalDocs := r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")
            dev := parsePCIeFunctionWithSupplementalDocs(fn, supplementalDocs, idx+1)
            if shouldSkipReplayPCIeDevice(fn, dev) {
                continue
            }
            out = append(out, dev)
        }
    }
    return dedupePCIeDevices(out)
}

func (r redfishSnapshotReader) getChassisScopedPCIeSupplementalDocs(doc map[string]interface{}) []map[string]interface{} {
    if !looksLikeNVSwitchPCIeDoc(doc) {
        return nil
func shouldSkipReplayPCIeDevice(doc map[string]interface{}, dev models.PCIeDevice) bool {
    if isUnidentifiablePCIeDevice(dev) {
        return true
    }
    if replayNetworkFunctionBackedByCanonicalNIC(doc, dev) {
        return true
    }
    if isReplayStorageServiceEndpoint(doc, dev) {
        return true
    }
    if isReplayNoisePCIeClass(dev.DeviceClass) {
        return true
    }
    if isReplayDisplayDeviceDuplicate(doc, dev) {
        return true
    }
    return false
}

func replayPCIeDeviceBackedByCanonicalNIC(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
    if !looksLikeReplayNetworkPCIeDevice(doc, functionDocs) {
        return false
    }
    for _, fn := range functionDocs {
        if hasRedfishLinkedMember(fn, "NetworkDeviceFunctions") {
            return true
        }
    }
    return false
}

func replayNetworkFunctionBackedByCanonicalNIC(doc map[string]interface{}, dev models.PCIeDevice) bool {
    if !looksLikeReplayNetworkClass(dev.DeviceClass) {
        return false
    }
    return hasRedfishLinkedMember(doc, "NetworkDeviceFunctions")
}

func looksLikeReplayNetworkPCIeDevice(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
    for _, fn := range functionDocs {
        if looksLikeReplayNetworkClass(asString(fn["DeviceClass"])) {
            return true
        }
    }
    joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
        asString(doc["DeviceType"]),
        asString(doc["Description"]),
        asString(doc["Name"]),
        asString(doc["Model"]),
    }, " ")))
    return strings.Contains(joined, "network")
}

func looksLikeReplayNetworkClass(class string) bool {
    class = strings.ToLower(strings.TrimSpace(class))
    return strings.Contains(class, "network") || strings.Contains(class, "ethernet")
}

func isReplayStorageServiceEndpoint(doc map[string]interface{}, dev models.PCIeDevice) bool {
    class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
    if class != "massstoragecontroller" && class != "mass storage controller" {
        return false
    }
    name := strings.ToLower(strings.TrimSpace(firstNonEmpty(
        dev.PartNumber,
        asString(doc["PartNumber"]),
        asString(doc["Description"]),
    )))
    if strings.Contains(name, "pcie switch management endpoint") {
        return true
    }
    if strings.Contains(name, "volume management device nvme raid controller") {
        return true
    }
    return false
}

func hasRedfishLinkedMember(doc map[string]interface{}, key string) bool {
    links, ok := doc["Links"].(map[string]interface{})
    if !ok {
        return false
    }
    if asInt(links[key+"@odata.count"]) > 0 {
        return true
    }
    linked, ok := links[key]
    if !ok {
        return false
    }
    switch v := linked.(type) {
    case []interface{}:
        return len(v) > 0
    case map[string]interface{}:
        if asString(v["@odata.id"]) != "" {
            return true
        }
        return len(v) > 0
    default:
        return false
    }
}

func isReplayNoisePCIeClass(class string) bool {
    switch strings.ToLower(strings.TrimSpace(class)) {
    case "bridge", "processor", "signalprocessingcontroller", "signal processing controller", "serialbuscontroller", "serial bus controller":
        return true
    default:
        return false
    }
}

func isReplayDisplayDeviceDuplicate(doc map[string]interface{}, dev models.PCIeDevice) bool {
    class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
    if class != "displaycontroller" && class != "display controller" {
        return false
    }
    return strings.EqualFold(strings.TrimSpace(asString(doc["Description"])), "Display Device")
}

func (r redfishSnapshotReader) getChassisScopedPCIeSupplementalDocs(doc map[string]interface{}) []map[string]interface{} {
    docPath := normalizeRedfishPath(asString(doc["@odata.id"]))
    chassisPath := chassisPathForPCIeDoc(docPath)
    if chassisPath == "" {
        return nil
    }
    out := make([]map[string]interface{}, 0, 4)
    for _, path := range []string{
        joinPath(chassisPath, "/EnvironmentMetrics"),
        joinPath(chassisPath, "/ThermalSubsystem/ThermalMetrics"),
    } {
        supplementalDoc, err := r.getJSON(path)
        if err != nil || len(supplementalDoc) == 0 {
            continue

    out := make([]map[string]interface{}, 0, 6)
    if looksLikeNVSwitchPCIeDoc(doc) {
        for _, path := range []string{
            joinPath(chassisPath, "/EnvironmentMetrics"),
            joinPath(chassisPath, "/ThermalSubsystem/ThermalMetrics"),
        } {
            supplementalDoc, err := r.getJSON(path)
            if err != nil || len(supplementalDoc) == 0 {
                continue
            }
            out = append(out, supplementalDoc)
        }
    }
    deviceDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/Devices"))
    if err == nil {
        for _, deviceDoc := range deviceDocs {
            if !redfishPCIeMatchesChassisDeviceDoc(doc, deviceDoc) {
                continue
            }
            out = append(out, deviceDoc)
        }
        out = append(out, supplementalDoc)
    }
    return out
}

// collectBMCMAC returns the MAC address of the first active BMC management
// interface found in Managers/*/EthernetInterfaces. Returns empty string if
// no MAC is available.
// collectBMCMAC returns the MAC address of the best BMC management interface
// found in Managers/*/EthernetInterfaces. Prefer an active link with an IP
// address over a passive sideband interface.
func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
    summary := r.collectBMCManagementSummary(managerPaths)
    if len(summary) == 0 {
        return ""
    }
    return strings.ToUpper(strings.TrimSpace(asString(summary["mac_address"])))
}

func (r redfishSnapshotReader) collectBMCManagementSummary(managerPaths []string) map[string]any {
    bestScore := -1
    var best map[string]any
    for _, managerPath := range managerPaths {
        members, err := r.getCollectionMembers(joinPath(managerPath, "/EthernetInterfaces"))
        collectionPath := joinPath(managerPath, "/EthernetInterfaces")
        collectionDoc, _ := r.getJSON(collectionPath)
        ncsiEnabled, lldpMode, lldpByEth := redfishManagerEthernetCollectionHints(collectionDoc)
        members, err := r.getCollectionMembers(collectionPath)
        if err != nil || len(members) == 0 {
            continue
        }
@@ -182,16 +354,146 @@ func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
            if mac == "" || strings.EqualFold(mac, "00:00:00:00:00:00") {
                continue
            }
            return strings.ToUpper(mac)
            ifaceID := strings.TrimSpace(firstNonEmpty(asString(doc["Id"]), asString(doc["Name"])))
            summary := map[string]any{
                "manager_path":   managerPath,
                "interface_id":   ifaceID,
                "hostname":       strings.TrimSpace(asString(doc["HostName"])),
                "fqdn":           strings.TrimSpace(asString(doc["FQDN"])),
                "mac_address":    strings.ToUpper(mac),
                "link_status":    strings.TrimSpace(asString(doc["LinkStatus"])),
                "speed_mbps":     asInt(doc["SpeedMbps"]),
                "interface_name": strings.TrimSpace(asString(doc["Name"])),
                "interface_desc": strings.TrimSpace(asString(doc["Description"])),
                "ncsi_enabled":   ncsiEnabled,
                "lldp_mode":      lldpMode,
                "ipv4_address":   redfishManagerIPv4Field(doc, "Address"),
                "ipv4_gateway":   redfishManagerIPv4Field(doc, "Gateway"),
                "ipv4_subnet":    redfishManagerIPv4Field(doc, "SubnetMask"),
                "ipv6_address":   redfishManagerIPv6Field(doc, "Address"),
                "link_is_active": strings.EqualFold(strings.TrimSpace(asString(doc["LinkStatus"])), "LinkActive"),
                "interface_score": 0,
            }
            if lldp, ok := lldpByEth[strings.ToLower(ifaceID)]; ok {
                summary["lldp_chassis_name"] = lldp["ChassisName"]
                summary["lldp_port_desc"] = lldp["PortDesc"]
                summary["lldp_port_id"] = lldp["PortId"]
                if vlan := asInt(lldp["VlanId"]); vlan > 0 {
                    summary["lldp_vlan_id"] = vlan
                }
            }
            score := redfishManagerInterfaceScore(summary)
            summary["interface_score"] = score
            if score > bestScore {
                bestScore = score
                best = summary
            }
        }
    }
    return best
}

func redfishManagerEthernetCollectionHints(collectionDoc map[string]interface{}) (bool, string, map[string]map[string]interface{}) {
    lldpByEth := make(map[string]map[string]interface{})
    if len(collectionDoc) == 0 {
        return false, "", lldpByEth
    }
    oem, _ := collectionDoc["Oem"].(map[string]interface{})
    public, _ := oem["Public"].(map[string]interface{})
    ncsiEnabled := asBool(public["NcsiEnabled"])
    lldp, _ := public["LLDP"].(map[string]interface{})
    lldpMode := strings.TrimSpace(asString(lldp["LLDPMode"]))
    if members, ok := lldp["Members"].([]interface{}); ok {
        for _, item := range members {
            member, ok := item.(map[string]interface{})
            if !ok {
                continue
            }
            ethIndex := strings.ToLower(strings.TrimSpace(asString(member["EthIndex"])))
            if ethIndex == "" {
                continue
            }
            lldpByEth[ethIndex] = member
        }
    }
    return ncsiEnabled, lldpMode, lldpByEth
}

func redfishManagerIPv4Field(doc map[string]interface{}, key string) string {
    if len(doc) == 0 {
        return ""
    }
    for _, field := range []string{"IPv4Addresses", "IPv4StaticAddresses"} {
        list, ok := doc[field].([]interface{})
        if !ok {
            continue
        }
        for _, item := range list {
            entry, ok := item.(map[string]interface{})
            if !ok {
                continue
            }
            value := strings.TrimSpace(asString(entry[key]))
            if value != "" {
                return value
            }
        }
    }
    return ""
}

func redfishManagerIPv6Field(doc map[string]interface{}, key string) string {
    if len(doc) == 0 {
        return ""
    }
    list, ok := doc["IPv6Addresses"].([]interface{})
    if !ok {
        return ""
    }
    for _, item := range list {
        entry, ok := item.(map[string]interface{})
        if !ok {
            continue
        }
        value := strings.TrimSpace(asString(entry[key]))
        if value != "" {
            return value
        }
    }
    return ""
}

func redfishManagerInterfaceScore(summary map[string]any) int {
    score := 0
    if strings.EqualFold(strings.TrimSpace(asString(summary["link_status"])), "LinkActive") {
        score += 100
    }
    if strings.TrimSpace(asString(summary["ipv4_address"])) != "" {
        score += 40
    }
    if strings.TrimSpace(asString(summary["ipv6_address"])) != "" {
        score += 10
    }
    if strings.TrimSpace(asString(summary["mac_address"])) != "" {
        score += 10
    }
    if asInt(summary["speed_mbps"]) > 0 {
        score += 5
    }
    if ifaceID := strings.ToLower(strings.TrimSpace(asString(summary["interface_id"]))); ifaceID != "" && !strings.HasPrefix(ifaceID, "usb") {
        score += 3
    }
    if asBool(summary["ncsi_enabled"]) {
        score += 1
    }
    return score
}

// findNICIndexByLinkedNetworkAdapter resolves a NetworkInterface document to an
// existing NIC in bySlot by following Links.NetworkAdapter → the Chassis
// NetworkAdapter doc → its slot label. Returns -1 if no match is found.
func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[string]interface{}, bySlot map[string]int) int {
// NetworkAdapter doc and reconstructing the canonical NIC identity. Returns -1
// if no match is found.
func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[string]interface{}, existing []models.NetworkAdapter, bySlot map[string]int) int {
    links, ok := iface["Links"].(map[string]interface{})
    if !ok {
        return -1
@@ -208,15 +510,58 @@ func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[stri
    if err != nil || len(adapterDoc) == 0 {
        return -1
    }
    adapterNIC := parseNIC(adapterDoc)
    adapterNIC := r.buildNICFromAdapterDoc(adapterDoc)
    if serial := normalizeRedfishIdentityField(adapterNIC.SerialNumber); serial != "" {
        for idx, nic := range existing {
            if strings.EqualFold(normalizeRedfishIdentityField(nic.SerialNumber), serial) {
                return idx
            }
        }
    }
    if bdf := strings.TrimSpace(adapterNIC.BDF); bdf != "" {
        for idx, nic := range existing {
            if strings.EqualFold(strings.TrimSpace(nic.BDF), bdf) {
                return idx
            }
        }
    }
    if slot := strings.ToLower(strings.TrimSpace(adapterNIC.Slot)); slot != "" {
        if idx, ok := bySlot[slot]; ok {
            return idx
        }
    }
    for idx, nic := range existing {
        if networkAdaptersShareMACs(nic, adapterNIC) {
            return idx
        }
    }
    return -1
}

func networkAdaptersShareMACs(a, b models.NetworkAdapter) bool {
    if len(a.MACAddresses) == 0 || len(b.MACAddresses) == 0 {
        return false
    }
    seen := make(map[string]struct{}, len(a.MACAddresses))
    for _, mac := range a.MACAddresses {
        normalized := strings.ToUpper(strings.TrimSpace(mac))
        if normalized == "" {
            continue
        }
        seen[normalized] = struct{}{}
    }
    for _, mac := range b.MACAddresses {
        normalized := strings.ToUpper(strings.TrimSpace(mac))
        if normalized == "" {
            continue
        }
        if _, ok := seen[normalized]; ok {
            return true
        }
    }
    return false
}

// enrichNICMACsFromNetworkDeviceFunctions reads the NetworkDeviceFunctions
// collection linked from a NetworkAdapter document and populates the NIC's
// MACAddresses from each function's Ethernet.PermanentMACAddress / MACAddress.

@@ -14,13 +14,16 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
    driveDocs, err := r.getCollectionMembers(driveCollectionPath)
    if err == nil {
        for _, driveDoc := range driveDocs {
            if !isVirtualStorageDrive(driveDoc) {
            if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
                supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
                out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
            }
        }
        if len(driveDocs) == 0 {
            for _, driveDoc := range r.probeDirectDiskBayChildren(driveCollectionPath) {
                if isAbsentDriveDoc(driveDoc) {
                    continue
                }
                supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
                out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
            }
@@ -43,7 +46,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
        if err != nil {
            continue
        }
        if !isVirtualStorageDrive(driveDoc) {
        if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
            supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
            out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
        }
@@ -51,7 +54,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
            continue
        }
        if looksLikeDrive(member) {
            if isVirtualStorageDrive(member) {
            if isAbsentDriveDoc(member) || isVirtualStorageDrive(member) {
                continue
            }
            supplementalDocs := r.getLinkedSupplementalDocs(member, "DriveMetrics", "EnvironmentMetrics", "Metrics")
@@ -63,14 +66,14 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
        driveDocs, err := r.getCollectionMembers(joinPath(enclosurePath, "/Drives"))
        if err == nil {
            for _, driveDoc := range driveDocs {
                if looksLikeDrive(driveDoc) && !isVirtualStorageDrive(driveDoc) {
                if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
                    supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
                    out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
                }
            }
            if len(driveDocs) == 0 {
                for _, driveDoc := range r.probeDirectDiskBayChildren(joinPath(enclosurePath, "/Drives")) {
                    if isVirtualStorageDrive(driveDoc) {
                    if isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
                        continue
                    }
                    out = append(out, parseDrive(driveDoc))
@@ -83,7 +86,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro

    if len(plan.KnownStorageDriveCollections) > 0 {
        for _, driveDoc := range r.collectKnownStorageMembers(systemPath, plan.KnownStorageDriveCollections) {
            if looksLikeDrive(driveDoc) && !isVirtualStorageDrive(driveDoc) {
            if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
                supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
                out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
            }
@@ -98,7 +101,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
    }
    for _, devAny := range devices {
        devDoc, ok := devAny.(map[string]interface{})
        if !ok || !looksLikeDrive(devDoc) || isVirtualStorageDrive(devDoc) {
        if !ok || !looksLikeDrive(devDoc) || isAbsentDriveDoc(devDoc) || isVirtualStorageDrive(devDoc) {
            continue
        }
        out = append(out, parseDrive(devDoc))
@@ -112,7 +115,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
        continue
    }
    for _, driveDoc := range driveDocs {
        if !looksLikeDrive(driveDoc) || isVirtualStorageDrive(driveDoc) {
        if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
            continue
        }
        out = append(out, parseDrive(driveDoc))
@@ -124,7 +127,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
        continue
    }
    for _, driveDoc := range r.probeSupermicroNVMeDiskBays(chassisPath) {
        if !looksLikeDrive(driveDoc) || isVirtualStorageDrive(driveDoc) {
        if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
            continue
        }
        out = append(out, parseDrive(driveDoc))

File diff suppressed because it is too large
@@ -326,6 +326,95 @@ func TestBuildAnalysisDirectives_SupermicroEnablesStorageRecovery(t *testing.T)
    }
}

func TestMatchProfiles_LenovoXCCSelectsMatchedModeAndExcludesSensors(t *testing.T) {
    match := MatchProfiles(MatchSignals{
        SystemManufacturer:  "Lenovo",
        ChassisManufacturer: "Lenovo",
        OEMNamespaces:       []string{"Lenovo"},
    })
    if match.Mode != ModeMatched {
        t.Fatalf("expected matched mode, got %q", match.Mode)
    }
    found := false
    for _, profile := range match.Profiles {
        if profile.Name() == "lenovo" {
            found = true
            break
        }
    }
    if !found {
        t.Fatal("expected lenovo profile to be selected")
    }

    // Verify the acquisition plan excludes noisy Lenovo-specific snapshot paths.
    plan := BuildAcquisitionPlan(MatchSignals{
        SystemManufacturer:  "Lenovo",
        ChassisManufacturer: "Lenovo",
        OEMNamespaces:       []string{"Lenovo"},
    })
    wantExcluded := []string{
        "/Sensors/",
        "/Oem/Lenovo/LEDs/",
        "/Oem/Lenovo/Slots/",
        "/Oem/Lenovo/Configuration",
        "/NetworkProtocol/Oem/Lenovo/",
        "/VirtualMedia/",
        "/ThermalSubsystem/Fans/",
    }
    for _, want := range wantExcluded {
        found := false
        for _, ex := range plan.Tuning.SnapshotExcludeContains {
            if ex == want {
                found = true
                break
            }
        }
        if !found {
            t.Errorf("expected SnapshotExcludeContains to include %q, got %v", want, plan.Tuning.SnapshotExcludeContains)
        }
    }
}

func TestResolveAcquisitionPlan_LenovoFiltersNonInventoryChassisBranches(t *testing.T) {
    signals := MatchSignals{
        SystemManufacturer:  "Lenovo",
        ChassisManufacturer: "Lenovo",
        OEMNamespaces:       []string{"Lenovo"},
        ResourceHints: []string{
            "/redfish/v1/Chassis/1/Power",
            "/redfish/v1/Chassis/1/Thermal",
            "/redfish/v1/Chassis/1/NetworkAdapters",
            "/redfish/v1/Chassis/3",
            "/redfish/v1/Chassis/IO_Board",
        },
    }
    match := MatchProfiles(signals)
    plan := BuildAcquisitionPlan(signals)
    resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
        ChassisPaths: []string{
            "/redfish/v1/Chassis/1",
            "/redfish/v1/Chassis/3",
            "/redfish/v1/Chassis/IO_Board",
        },
    }, signals)

    if !containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/1/Power") {
        t.Fatal("expected primary Lenovo chassis power path to remain critical")
    }
    if containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/3/Power") {
        t.Fatal("did not expect non-inventory Lenovo backplane chassis power path")
    }
    if containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/IO_Board/Assembly") {
        t.Fatal("did not expect IO board assembly path without inventory hints")
    }
    if containsString(resolved.Plan.PlanBPaths, "/redfish/v1/Chassis/3/Assembly") {
        t.Fatal("did not expect non-inventory Lenovo chassis plan-b target")
    }
    if !containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/3") {
        t.Fatal("expected chassis root to remain discoverable even when suffixes are filtered")
    }
}

func TestMatchProfiles_OrderingIsDeterministic(t *testing.T) {
    signals := MatchSignals{
        SystemManufacturer: "Micro-Star International Co., Ltd.",

67	internal/collector/redfishprofile/profile_hpe.go	Normal file
@@ -0,0 +1,67 @@
package redfishprofile

func hpeProfile() Profile {
    return staticProfile{
        name:            "hpe",
        priority:        20,
        safeForFallback: true,
        matchFn: func(s MatchSignals) int {
            score := 0
            if containsFold(s.SystemManufacturer, "hpe") ||
                containsFold(s.SystemManufacturer, "hewlett packard") ||
                containsFold(s.ChassisManufacturer, "hpe") ||
                containsFold(s.ChassisManufacturer, "hewlett packard") {
                score += 80
            }
            for _, ns := range s.OEMNamespaces {
                if containsFold(ns, "hpe") {
                    score += 30
                    break
                }
            }
            if containsFold(s.ServiceRootProduct, "ilo") {
                score += 30
            }
            if containsFold(s.ManagerManufacturer, "hpe") || containsFold(s.ManagerManufacturer, "ilo") {
                score += 20
            }
            return min(score, 100)
        },
        extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
            // HPE ProLiant SmartStorage RAID controller inventory is not reachable
            // via standard Redfish Storage paths — it requires the HPE OEM SmartStorage tree.
            ensureScopedPathPolicy(plan, AcquisitionScopedPathPolicy{
                SystemCriticalSuffixes: []string{
                    "/SmartStorage",
                    "/SmartStorageConfig",
                },
                ManagerCriticalSuffixes: []string{
                    "/LicenseService",
                },
            })
            // HPE iLO responds more slowly than average BMCs under load; give the
            // ETA estimator a realistic baseline so progress reports are accurate.
            ensureETABaseline(plan, AcquisitionETABaseline{
                DiscoverySeconds:     12,
                SnapshotSeconds:      180,
                PrefetchSeconds:      30,
                CriticalPlanBSeconds: 40,
                ProfilePlanBSeconds:  25,
            })
            ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
                EnableProfilePlanB: true,
            })
            // HPE iLO starts throttling under high request rates. Setting a higher
            // latency tolerance prevents the adaptive throttler from treating normal
            // iLO slowness as a reason to stall the collection.
            ensureRatePolicy(plan, AcquisitionRatePolicy{
                TargetP95LatencyMS:      1200,
                ThrottleP95LatencyMS:    2500,
                MinSnapshotWorkers:      2,
                MinPrefetchWorkers:      1,
                DisablePrefetchOnErrors: true,
            })
            addPlanNote(plan, "hpe ilo acquisition extensions enabled")
        },
    }
}
@@ -0,0 +1,149 @@
package redfishprofile

import (
    "regexp"
    "strings"
)

var (
    outboardCardHintRe = regexp.MustCompile(`/outboardPCIeCard\d+(?:/|$)`)
    obDriveHintRe      = regexp.MustCompile(`/Drives/OB\d+$`)
    fpDriveHintRe      = regexp.MustCompile(`/Drives/FP00HDD\d+$`)
    vrFirmwareHintRe   = regexp.MustCompile(`^CPU\d+_PVCC.*_VR$`)
)

var inspurGroupOEMFirmwareHints = map[string]struct{}{
    "Front_HDD_CPLD0": {},
    "MainBoard0CPLD":  {},
    "MainBoardCPLD":   {},
    "PDBBoardCPLD":    {},
    "SCMCPLD":         {},
    "SWBoardCPLD":     {},
}

func inspurGroupOEMPlatformsProfile() Profile {
    return staticProfile{
        name:            "inspur-group-oem-platforms",
        priority:        25,
        safeForFallback: false,
        matchFn: func(s MatchSignals) int {
            topologyScore := 0
            boardScore := 0
            chassisOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Chassis/", outboardCardHintRe)
            systemOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Systems/", outboardCardHintRe)
            obDrives := matchedPathTokens(s.ResourceHints, "", obDriveHintRe)
            fpDrives := matchedPathTokens(s.ResourceHints, "", fpDriveHintRe)
            firmwareNames, vrFirmwareNames := inspurGroupOEMFirmwareMatches(s.ResourceHints)

            if len(chassisOutboard) > 0 {
                topologyScore += 20
            }
            if len(systemOutboard) > 0 {
                topologyScore += 10
            }
            switch {
            case len(obDrives) > 0 && len(fpDrives) > 0:
                topologyScore += 15
            }
            switch {
            case len(firmwareNames) >= 2:
                boardScore += 15
            }
            switch {
            case len(vrFirmwareNames) >= 2:
                boardScore += 10
            }
            if anySignalContains(s, "COMMONbAssembly") {
                boardScore += 12
            }
            if anySignalContains(s, "EnvironmentMetrcs") {
                boardScore += 8
            }
            if anySignalContains(s, "GetServerAllUSBStatus") {
                boardScore += 8
            }
            if topologyScore == 0 || boardScore == 0 {
                return 0
            }
            return min(topologyScore+boardScore, 100)
        },
        extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
            addPlanNote(plan, "Inspur Group OEM platform fingerprint matched")
        },
        applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
            d.EnableGenericGraphicsControllerDedup = true
        },
    }
}

func matchedPathTokens(paths []string, requiredPrefix string, re *regexp.Regexp) []string {
    seen := make(map[string]struct{})
    for _, rawPath := range paths {
        path := normalizePath(rawPath)
        if path == "" || (requiredPrefix != "" && !strings.HasPrefix(path, requiredPrefix)) {
            continue
        }
        token := re.FindString(path)
        if token == "" {
            continue
        }
        token = strings.Trim(token, "/")
        if token == "" {
            continue
        }
        seen[token] = struct{}{}
    }
    out := make([]string, 0, len(seen))
    for token := range seen {
        out = append(out, token)
    }
    return dedupeSorted(out)
}

func inspurGroupOEMFirmwareMatches(paths []string) ([]string, []string) {
    firmwareNames := make(map[string]struct{})
    vrNames := make(map[string]struct{})
    for _, rawPath := range paths {
        path := normalizePath(rawPath)
        if !strings.HasPrefix(path, "/redfish/v1/UpdateService/FirmwareInventory/") {
            continue
        }
        name := strings.TrimSpace(path[strings.LastIndex(path, "/")+1:])
        if name == "" {
            continue
        }
        if _, ok := inspurGroupOEMFirmwareHints[name]; ok {
            firmwareNames[name] = struct{}{}
        }
        if vrFirmwareHintRe.MatchString(name) {
            vrNames[name] = struct{}{}
        }
    }
    return mapKeysSorted(firmwareNames), mapKeysSorted(vrNames)
}

func anySignalContains(signals MatchSignals, needle string) bool {
    needle = strings.TrimSpace(needle)
|
||||
if needle == "" {
|
||||
return false
|
||||
}
|
||||
for _, signal := range signals.ResourceHints {
|
||||
if strings.Contains(signal, needle) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
for _, signal := range signals.DocHints {
|
||||
if strings.Contains(signal, needle) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func mapKeysSorted(items map[string]struct{}) []string {
|
||||
out := make([]string, 0, len(items))
|
||||
for item := range items {
|
||||
out = append(out, item)
|
||||
}
|
||||
return dedupeSorted(out)
|
||||
}
|
||||
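The matchFn above deliberately refuses to match on one class of evidence alone: a nonzero score requires both a topology signal (outboard cards, OB/FP drive naming) and a board signal (CPLD/VR firmware names, OEM actions), and the sum is capped at 100. A minimal standalone sketch of that gating rule (hypothetical helper, not part of the package):

```go
package main

import "fmt"

// score mirrors the profile's two-factor gate: if either evidence class is
// absent the profile must not match at all; otherwise the combined score is
// capped at 100 so a single vendor fingerprint cannot dominate arbitration.
func score(topologyScore, boardScore int) int {
	if topologyScore == 0 || boardScore == 0 {
		return 0
	}
	if total := topologyScore + boardScore; total < 100 {
		return total
	}
	return 100
}

func main() {
	fmt.Println(score(45, 0))  // topology alone never matches: 0
	fmt.Println(score(45, 43)) // combined evidence below the cap: 88
	fmt.Println(score(60, 60)) // cap applies: 100
}
```

The gate is what keeps the `assertProfileNotSelected` regression tests below passing against unrelated vendors' raw exports: other BMCs may expose some of these strings, but rarely both classes at once.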
@@ -0,0 +1,182 @@
package redfishprofile

import (
	"archive/zip"
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
)

func TestCollectSignalsFromTree_InspurGroupOEMPlatformsSelectsMatchedMode(t *testing.T) {
	tree := map[string]interface{}{
		"/redfish/v1": map[string]interface{}{
			"@odata.id": "/redfish/v1",
		},
		"/redfish/v1/Systems": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1"},
			},
		},
		"/redfish/v1/Systems/1": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1",
			"Oem": map[string]interface{}{
				"Public": map[string]interface{}{
					"USB": map[string]interface{}{
						"@odata.id": "/redfish/v1/Systems/1/Oem/Public/GetServerAllUSBStatus",
					},
				},
			},
			"NetworkInterfaces": map[string]interface{}{
				"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces",
			},
		},
		"/redfish/v1/Systems/1/NetworkInterfaces": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard1"},
			},
		},
		"/redfish/v1/Chassis": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1"},
			},
		},
		"/redfish/v1/Chassis/1": map[string]interface{}{
			"@odata.id": "/redfish/v1/Chassis/1",
			"Actions": map[string]interface{}{
				"Oem": map[string]interface{}{
					"Public": map[string]interface{}{
						"NvGpuPowerLimitWatts": map[string]interface{}{
							"target": "/redfish/v1/Chassis/1/GPU/EnvironmentMetrcs",
						},
					},
				},
			},
			"Drives": map[string]interface{}{
				"@odata.id": "/redfish/v1/Chassis/1/Drives",
			},
			"NetworkAdapters": map[string]interface{}{
				"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters",
			},
		},
		"/redfish/v1/Chassis/1/Drives": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/OB01"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/FP00HDD00"},
			},
		},
		"/redfish/v1/Chassis/1/NetworkAdapters": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard1"},
			},
		},
		"/redfish/v1/Chassis/1/Assembly": map[string]interface{}{
			"Assemblies": []interface{}{
				map[string]interface{}{
					"Oem": map[string]interface{}{
						"COMMONb": map[string]interface{}{
							"COMMONbAssembly": map[string]interface{}{
								"@odata.type": "#COMMONbAssembly.v1_0_0.COMMONbAssembly",
							},
						},
					},
				},
			},
		},
		"/redfish/v1/Managers": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Managers/1"},
			},
		},
		"/redfish/v1/Managers/1": map[string]interface{}{
			"Actions": map[string]interface{}{
				"Oem": map[string]interface{}{
					"#PublicManager.ExportConfFile": map[string]interface{}{
						"target": "/redfish/v1/Managers/1/Actions/Oem/Public/ExportConfFile",
					},
				},
			},
		},
		"/redfish/v1/UpdateService/FirmwareInventory": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/Front_HDD_CPLD0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/SCMCPLD"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU0_PVCCD_HV_VR"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU1_PVCCIN_VR"},
			},
		},
	}

	signals := CollectSignalsFromTree(tree)
	match := MatchProfiles(signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "inspur-group-oem-platforms")
}

func TestCollectSignalsFromTree_InspurGroupOEMPlatformsDoesNotFalsePositiveOnExampleRawExports(t *testing.T) {
	examples := []string{
		"2026-03-18 (G5500 V7) - 210619KUGGXGS2000015.zip",
		"2026-03-11 (SYS-821GE-TNHR) - A514359X5C08846.zip",
		"2026-03-15 (CG480-S5063) - P5T0006091.zip",
		"2026-03-18 (CG290-S3063) - PAT0011258.zip",
		"2024-04-25 (AS -4124GQ-TNMI) - S490387X4418273.zip",
	}
	for _, name := range examples {
		t.Run(name, func(t *testing.T) {
			tree := loadRawExportTreeFromExampleZip(t, name)
			match := MatchProfiles(CollectSignalsFromTree(tree))
			assertProfileNotSelected(t, match, "inspur-group-oem-platforms")
		})
	}
}

func loadRawExportTreeFromExampleZip(t *testing.T, name string) map[string]interface{} {
	t.Helper()
	path := filepath.Join("..", "..", "..", "example", name)
	f, err := os.Open(path)
	if err != nil {
		t.Fatalf("open example zip %s: %v", path, err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		t.Fatalf("stat example zip %s: %v", path, err)
	}

	zr, err := zip.NewReader(f, info.Size())
	if err != nil {
		t.Fatalf("read example zip %s: %v", path, err)
	}
	for _, file := range zr.File {
		if file.Name != "raw_export.json" {
			continue
		}
		rc, err := file.Open()
		if err != nil {
			t.Fatalf("open %s in %s: %v", file.Name, path, err)
		}
		defer rc.Close()
		var payload struct {
			Source struct {
				RawPayloads struct {
					RedfishTree map[string]interface{} `json:"redfish_tree"`
				} `json:"raw_payloads"`
			} `json:"source"`
		}
		if err := json.NewDecoder(rc).Decode(&payload); err != nil {
			t.Fatalf("decode raw_export.json from %s: %v", path, err)
		}
		if len(payload.Source.RawPayloads.RedfishTree) == 0 {
			t.Fatalf("example %s has empty redfish_tree", path)
		}
		return payload.Source.RawPayloads.RedfishTree
	}
	t.Fatalf("raw_export.json not found in %s", path)
	return nil
}
internal/collector/redfishprofile/profile_lenovo.go · 175 lines · Normal file
@@ -0,0 +1,175 @@
package redfishprofile

import "strings"

func lenovoProfile() Profile {
	return staticProfile{
		name:            "lenovo",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "lenovo") ||
				containsFold(s.ChassisManufacturer, "lenovo") {
				score += 80
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "lenovo") {
					score += 30
					break
				}
			}
			// Lenovo XClarity Controller (XCC) is the BMC product line.
			if containsFold(s.ServiceRootProduct, "xclarity") ||
				containsFold(s.ServiceRootProduct, "xcc") {
				score += 30
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			// Lenovo XCC BMC exposes Chassis/1/Sensors with hundreds of individual
			// sensor member documents (e.g. Chassis/1/Sensors/101L1). These are
			// not used by any LOGPile parser — thermal/power data is read from
			// the aggregate Chassis/*/Thermal and Chassis/*/Power endpoints. On
			// a real server they largely return errors, wasting many minutes.
			// Lenovo OEM subtrees under Oem/Lenovo/LEDs and Oem/Lenovo/Slots also
			// enumerate dozens of individual documents not relevant to inventory.
			ensureSnapshotExcludeContains(plan,
				"/Sensors/",                             // individual sensor docs (Chassis/1/Sensors/NNN)
				"/Oem/Lenovo/LEDs/",                     // individual LED status entries (~47 per server)
				"/Oem/Lenovo/Slots/",                    // individual slot detail entries (~26 per server)
				"/Oem/Lenovo/Metrics/",                  // operational metrics, not inventory
				"/Oem/Lenovo/History",                   // historical telemetry
				"/Oem/Lenovo/Configuration",             // BMC config service, not inventory
				"/Oem/Lenovo/DateTimeService",           // BMC time service config
				"/Oem/Lenovo/GroupService",              // XCC fleet/group management state
				"/Oem/Lenovo/Recipients",                // alert recipient config
				"/Oem/Lenovo/RemoteControl",             // remote-media/session management
				"/Oem/Lenovo/RemoteMap",                 // remote-media mapping config
				"/Oem/Lenovo/SecureKeyLifecycleService", // key lifecycle/cert config
				"/Oem/Lenovo/ServerProfile",             // profile export/import config
				"/Oem/Lenovo/ServiceData",               // support/service metadata
				"/Oem/Lenovo/SsoCertificates",           // SSO certificate config
				"/Oem/Lenovo/SystemGuard",               // snapshot/history service
				"/Oem/Lenovo/Watchdogs",                 // watchdog config
				"/Oem/Lenovo/ScheduledPower",            // power scheduling config
				"/Oem/Lenovo/BootSettings/BootOrder",    // individual boot order lists
				"/NetworkProtocol/Oem/Lenovo/",          // DNS/LDAP/SMTP/SNMP manager config
				"/PortForwardingMap/",                   // network port forwarding config
				"/VirtualMedia/",                        // virtual media inventory/config, not hardware
				"/Boot/Certificates",                    // secure boot certificate stores, not inventory
				"/ThermalSubsystem/Fans/",               // per-fan member docs; replay uses aggregate Thermal only
			)
			// Lenovo XCC BMC is typically slow (p95 latency often 3-5s even under
			// normal load). Set rate thresholds that don't over-throttle on the
			// first few requests, and give the ETA estimator a realistic baseline.
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      2000,
				ThrottleP95LatencyMS:    4000,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     15,
				SnapshotSeconds:      120,
				PrefetchSeconds:      30,
				CriticalPlanBSeconds: 40,
				ProfilePlanBSeconds:  20,
			})
			addPlanNote(plan, "lenovo xcc acquisition extensions enabled: noisy sensor/oem paths excluded from snapshot")
		},
		refineAcquisition: func(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, signals MatchSignals) {
			allowedChassis := lenovoAllowedInventoryChassis(discovered.ChassisPaths, signals.ResourceHints)
			resolved.SeedPaths = filterLenovoChassisInventoryPaths(resolved.SeedPaths, allowedChassis)
			resolved.CriticalPaths = filterLenovoChassisInventoryPaths(resolved.CriticalPaths, allowedChassis)
			resolved.Plan.SeedPaths = filterLenovoChassisInventoryPaths(resolved.Plan.SeedPaths, allowedChassis)
			resolved.Plan.CriticalPaths = filterLenovoChassisInventoryPaths(resolved.Plan.CriticalPaths, allowedChassis)
			resolved.Plan.PlanBPaths = filterLenovoChassisInventoryPaths(resolved.Plan.PlanBPaths, allowedChassis)
		},
	}
}

func lenovoAllowedInventoryChassis(chassisPaths, resourceHints []string) map[string]struct{} {
	allowed := make(map[string]struct{}, len(chassisPaths))
	for _, chassisPath := range chassisPaths {
		normalized := normalizePath(chassisPath)
		if normalized == "" {
			continue
		}
		if normalized == "/redfish/v1/Chassis/1" {
			allowed[normalized] = struct{}{}
			continue
		}
		for _, hint := range resourceHints {
			hint = normalizePath(hint)
			if !strings.HasPrefix(hint, normalized+"/") {
				continue
			}
			if lenovoHintLooksLikeChassisInventory(hint) {
				allowed[normalized] = struct{}{}
				break
			}
		}
	}
	return allowed
}

func lenovoHintLooksLikeChassisInventory(path string) bool {
	for _, suffix := range []string{
		"/Power",
		"/PowerSubsystem",
		"/PowerSubsystem/PowerSupplies",
		"/Thermal",
		"/ThresholdSensors",
		"/DiscreteSensors",
		"/SensorsList",
		"/NetworkAdapters",
		"/PCIeDevices",
		"/Drives",
		"/Assembly",
	} {
		if strings.HasSuffix(path, suffix) || strings.Contains(path, suffix+"/") {
			return true
		}
	}
	return false
}

func filterLenovoChassisInventoryPaths(paths []string, allowedChassis map[string]struct{}) []string {
	if len(paths) == 0 {
		return nil
	}
	out := make([]string, 0, len(paths))
	for _, path := range paths {
		normalized := normalizePath(path)
		chassis := lenovoPathChassisRoot(normalized)
		if chassis == "" {
			out = append(out, normalized)
			continue
		}
		if normalized == chassis {
			out = append(out, normalized)
			continue
		}
		if _, ok := allowedChassis[chassis]; ok {
			out = append(out, normalized)
		}
	}
	return dedupeSorted(out)
}

func lenovoPathChassisRoot(path string) string {
	const prefix = "/redfish/v1/Chassis/"
	if !strings.HasPrefix(path, prefix) {
		return ""
	}
	rest := strings.TrimPrefix(path, prefix)
	if rest == "" {
		return ""
	}
	if idx := strings.IndexByte(rest, '/'); idx >= 0 {
		return prefix + rest[:idx]
	}
	return prefix + rest
}
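The chassis filter keys everything off `lenovoPathChassisRoot`, which trims any Redfish path down to its `/redfish/v1/Chassis/<id>` root so sub-resources can be attributed to a chassis. A self-contained sketch of the same trimming logic (standalone copy for illustration, outside the package):

```go
package main

import (
	"fmt"
	"strings"
)

// chassisRoot mirrors lenovoPathChassisRoot: it returns the
// /redfish/v1/Chassis/<id> root of a path, or "" when the path is not
// under the Chassis collection at all (such paths pass the filter as-is).
func chassisRoot(path string) string {
	const prefix = "/redfish/v1/Chassis/"
	if !strings.HasPrefix(path, prefix) {
		return ""
	}
	rest := strings.TrimPrefix(path, prefix)
	if rest == "" {
		return ""
	}
	// Keep only the first segment after the prefix.
	if idx := strings.IndexByte(rest, '/'); idx >= 0 {
		return prefix + rest[:idx]
	}
	return prefix + rest
}

func main() {
	fmt.Println(chassisRoot("/redfish/v1/Chassis/1/Thermal"))  // /redfish/v1/Chassis/1
	fmt.Println(chassisRoot("/redfish/v1/Chassis/GPU_Board"))  // /redfish/v1/Chassis/GPU_Board
	fmt.Println(chassisRoot("/redfish/v1/Systems/1") == "")    // true
}
```

Note that a bare chassis document (no trailing segment) is its own root, which is why `filterLenovoChassisInventoryPaths` always keeps `normalized == chassis` entries.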
@@ -55,6 +55,9 @@ func BuiltinProfiles() []Profile {
		msiProfile(),
		supermicroProfile(),
		dellProfile(),
		hpeProfile(),
		lenovoProfile(),
		inspurGroupOEMPlatformsProfile(),
		hgxProfile(),
		xfusionProfile(),
	}
@@ -224,6 +227,10 @@ func ensurePrefetchPolicy(plan *AcquisitionPlan, policy AcquisitionPrefetchPolic
	addPlanPaths(&plan.Tuning.PrefetchPolicy.ExcludeContains, policy.ExcludeContains...)
}

func ensureSnapshotExcludeContains(plan *AcquisitionPlan, patterns ...string) {
	addPlanPaths(&plan.Tuning.SnapshotExcludeContains, patterns...)
}

func min(a, b int) int {
	if a < b {
		return a
@@ -2,7 +2,14 @@ package redfishprofile

import "strings"

func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string]interface{}, resourceHints []string) MatchSignals {
func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string]interface{}, resourceHints []string, hintDocs ...map[string]interface{}) MatchSignals {
	resourceHints = append([]string{}, resourceHints...)
	docHints := make([]string, 0)
	for _, doc := range append([]map[string]interface{}{serviceRootDoc, systemDoc, chassisDoc, managerDoc}, hintDocs...) {
		embeddedPaths, embeddedHints := collectDocSignalHints(doc)
		resourceHints = append(resourceHints, embeddedPaths...)
		docHints = append(docHints, embeddedHints...)
	}
	signals := MatchSignals{
		ServiceRootVendor:  lookupString(serviceRootDoc, "Vendor"),
		ServiceRootProduct: lookupString(serviceRootDoc, "Product"),
@@ -13,6 +20,7 @@ func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string
		ChassisModel:        lookupString(chassisDoc, "Model"),
		ManagerManufacturer: lookupString(managerDoc, "Manufacturer"),
		ResourceHints:       resourceHints,
		DocHints:            docHints,
	}
	signals.OEMNamespaces = dedupeSorted(append(
		oemNamespaces(serviceRootDoc),
@@ -50,6 +58,7 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
	managerPath := memberPath("/redfish/v1/Managers", "/redfish/v1/Managers/1")

	resourceHints := make([]string, 0, len(tree))
	hintDocs := make([]map[string]interface{}, 0, len(tree))
	for path := range tree {
		path = strings.TrimSpace(path)
		if path == "" {
@@ -57,6 +66,13 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
		}
		resourceHints = append(resourceHints, path)
	}
	for _, v := range tree {
		doc, ok := v.(map[string]interface{})
		if !ok {
			continue
		}
		hintDocs = append(hintDocs, doc)
	}

	return CollectSignals(
		getDoc("/redfish/v1"),
@@ -64,9 +80,72 @@ func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
		getDoc(chassisPath),
		getDoc(managerPath),
		resourceHints,
		hintDocs...,
	)
}

func collectDocSignalHints(doc map[string]interface{}) ([]string, []string) {
	if len(doc) == 0 {
		return nil, nil
	}
	paths := make([]string, 0)
	hints := make([]string, 0)
	var walk func(any)
	walk = func(v any) {
		switch x := v.(type) {
		case map[string]interface{}:
			for rawKey, child := range x {
				key := strings.TrimSpace(rawKey)
				if key != "" {
					hints = append(hints, key)
				}
				if s, ok := child.(string); ok {
					s = strings.TrimSpace(s)
					if s != "" {
						switch key {
						case "@odata.id", "target":
							paths = append(paths, s)
						case "@odata.type":
							hints = append(hints, s)
						default:
							if isInterestingSignalString(s) {
								hints = append(hints, s)
								if strings.HasPrefix(s, "/") {
									paths = append(paths, s)
								}
							}
						}
					}
				}
				walk(child)
			}
		case []interface{}:
			for _, child := range x {
				walk(child)
			}
		}
	}
	walk(doc)
	return paths, hints
}

func isInterestingSignalString(s string) bool {
	switch {
	case strings.HasPrefix(s, "/"):
		return true
	case strings.HasPrefix(s, "#"):
		return true
	case strings.Contains(s, "COMMONb"):
		return true
	case strings.Contains(s, "EnvironmentMetrcs"):
		return true
	case strings.Contains(s, "GetServerAllUSBStatus"):
		return true
	default:
		return false
	}
}

func lookupString(doc map[string]interface{}, key string) string {
	if len(doc) == 0 {
		return ""
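`collectDocSignalHints` works by recursively walking every map and slice in a Redfish document and harvesting strings by key. A minimal sketch of that walk, reduced to the path-gathering part (standalone illustration, not the package function):

```go
package main

import "fmt"

// collectPaths descends maps and slices and gathers the values of
// "@odata.id" and "target" keys — the mechanism by which embedded resource
// references become ResourceHints for profile matching.
func collectPaths(v interface{}) []string {
	var out []string
	switch x := v.(type) {
	case map[string]interface{}:
		for key, child := range x {
			if s, ok := child.(string); ok && (key == "@odata.id" || key == "target") {
				out = append(out, s)
			}
			out = append(out, collectPaths(child)...)
		}
	case []interface{}:
		for _, child := range x {
			out = append(out, collectPaths(child)...)
		}
	}
	return out
}

func main() {
	doc := map[string]interface{}{
		"@odata.id": "/redfish/v1/Systems/1",
		"Actions": map[string]interface{}{
			"#ComputerSystem.Reset": map[string]interface{}{
				"target": "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
			},
		},
	}
	fmt.Println(len(collectPaths(doc))) // 2
}
```

This is why the Inspur profile can match on deeply nested OEM action targets (e.g. `NvGpuPowerLimitWatts` → `target`) without those documents ever being top-level tree keys.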
@@ -17,6 +17,7 @@ type MatchSignals struct {
	ManagerManufacturer string
	OEMNamespaces       []string
	ResourceHints       []string
	DocHints            []string
}

type AcquisitionPlan struct {
@@ -52,16 +53,17 @@ type AcquisitionScopedPathPolicy struct {
}

type AcquisitionTuning struct {
	SnapshotMaxDocuments int
	SnapshotWorkers      int
	PrefetchEnabled      *bool
	PrefetchWorkers      int
	NVMePostProbeEnabled *bool
	RatePolicy           AcquisitionRatePolicy
	ETABaseline          AcquisitionETABaseline
	PostProbePolicy      AcquisitionPostProbePolicy
	RecoveryPolicy       AcquisitionRecoveryPolicy
	PrefetchPolicy       AcquisitionPrefetchPolicy
	SnapshotMaxDocuments    int
	SnapshotWorkers         int
	SnapshotExcludeContains []string
	PrefetchEnabled         *bool
	PrefetchWorkers         int
	NVMePostProbeEnabled    *bool
	RatePolicy              AcquisitionRatePolicy
	ETABaseline             AcquisitionETABaseline
	PostProbePolicy         AcquisitionPostProbePolicy
	RecoveryPolicy          AcquisitionRecoveryPolicy
	PrefetchPolicy          AcquisitionPrefetchPolicy
}

type AcquisitionRatePolicy struct {
@@ -110,12 +112,12 @@ type AnalysisDirectives struct {
}

type ResolvedAnalysisPlan struct {
	Match                          MatchResult
	Directives                     AnalysisDirectives
	Notes                          []string
	ProcessorGPUChassisLookupModes []string
	KnownStorageDriveCollections   []string
	KnownStorageVolumeCollections  []string
}

type Profile interface {
@@ -146,6 +148,7 @@ type ProfileScore struct {
func normalizeSignals(signals MatchSignals) MatchSignals {
	signals.OEMNamespaces = dedupeSorted(signals.OEMNamespaces)
	signals.ResourceHints = dedupeSorted(signals.ResourceHints)
	signals.DocHints = dedupeSorted(signals.DocHints)
	return signals
}
@@ -15,9 +15,8 @@ type Request struct {
	Password             string
	Token                string
	TLSMode              string
	PowerOnIfHostOff     bool
	StopHostAfterCollect bool
	DebugPayloads        bool
	SkipHungCh           <-chan struct{}
}

type Progress struct {
@@ -65,10 +64,9 @@ type PhaseTelemetry struct {
type ProbeResult struct {
	Reachable             bool
	Protocol              string
	HostPowerState        string
	HostPoweredOn         bool
	PowerControlAvailable bool
	SystemPath            string
	HostPowerState string
	HostPoweredOn  bool
	SystemPath     string
}

type Connector interface {
@@ -43,13 +43,13 @@ func ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, erro
|
||||
TargetHost: targetHost,
|
||||
CollectedAt: collectedAt,
|
||||
Hardware: ReanimatorHardware{
|
||||
Board: convertBoard(result.Hardware.BoardInfo),
|
||||
Firmware: dedupeFirmware(convertFirmware(result.Hardware.Firmware)),
|
||||
CPUs: dedupeCPUs(convertCPUsFromDevices(devices, collectedAt, result.Hardware.BoardInfo.SerialNumber, buildCPUMicrocodeBySocket(result.Hardware.Firmware))),
|
||||
Memory: dedupeMemory(convertMemoryFromDevices(devices, collectedAt)),
|
||||
Storage: dedupeStorage(convertStorageFromDevices(devices, collectedAt)),
|
||||
PCIeDevices: dedupePCIe(convertPCIeFromDevices(devices, collectedAt)),
|
||||
PowerSupplies: dedupePSUs(convertPSUsFromDevices(devices, collectedAt)),
|
||||
Board: convertBoard(result.Hardware.BoardInfo),
|
||||
Firmware: dedupeFirmware(convertFirmware(result.Hardware.Firmware)),
|
||||
CPUs: dedupeCPUs(convertCPUsFromDevices(devices, collectedAt, result.Hardware.BoardInfo.SerialNumber, buildCPUMicrocodeBySocket(result.Hardware.Firmware))),
|
||||
Memory: dedupeMemory(convertMemoryFromDevices(devices, collectedAt)),
|
||||
Storage: dedupeStorage(convertStorageFromDevices(devices, collectedAt)),
|
||||
PCIeDevices: dedupePCIe(convertPCIeFromDevices(devices, collectedAt)),
|
||||
PowerSupplies: dedupePSUs(convertPSUsFromDevices(devices, collectedAt)),
|
||||
Sensors: convertSensors(result.Sensors),
|
||||
EventLogs: convertEventLogs(result.Events, collectedAt),
|
||||
},
|
||||
@@ -159,6 +159,16 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
|
||||
}
|
||||
for _, stor := range hw.Storage {
|
||||
present := stor.Present
|
||||
storDetails := mergeDetailMaps(nil, stor.Details)
|
||||
if stor.LogicalBlockSizeBytes != 0 {
|
||||
storDetails = mergeDetailMaps(storDetails, map[string]any{"logical_block_size_bytes": stor.LogicalBlockSizeBytes})
|
||||
}
|
||||
if stor.PhysicalBlockSizeBytes != 0 {
|
||||
storDetails = mergeDetailMaps(storDetails, map[string]any{"physical_block_size_bytes": stor.PhysicalBlockSizeBytes})
|
||||
}
|
||||
if stor.MetadataBytesPerBlock != 0 {
|
||||
storDetails = mergeDetailMaps(storDetails, map[string]any{"metadata_bytes_per_block": stor.MetadataBytesPerBlock})
|
||||
}
|
||||
appendDevice(models.HardwareDevice{
|
||||
Kind: models.DeviceKindStorage,
|
||||
Slot: stor.Slot,
|
||||
@@ -177,27 +187,41 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
|
||||
StatusAtCollect: stor.StatusAtCollect,
|
||||
StatusHistory: stor.StatusHistory,
|
||||
ErrorDescription: stor.ErrorDescription,
|
||||
Details: mergeDetailMaps(nil, stor.Details),
|
||||
Details: storDetails,
|
||||
})
|
||||
}
|
||||
for _, pcie := range hw.PCIeDevices {
|
||||
// Use PartNumber as model when available; fall back to chip description.
|
||||
// Description contains the chip/product name (e.g. "BCM57414 NetXtreme-E …")
|
||||
// while PartNumber is a part/product code. Prefer PartNumber when set.
|
||||
pcieModel := pcie.PartNumber
|
||||
if pcieModel == "" {
|
||||
pcieModel = pcie.Description
|
||||
}
|
||||
// Priority: PartNumber (vendor P/N) > Model (product name) > Description (chip label).
|
||||
pcieModel := firstNonEmptyString(pcie.PartNumber, pcie.Model, pcie.Description)
|
||||
details := mergeDetailMaps(nil, pcie.Details)
|
||||
pcieFirmware := stringFromDetailMap(details, "firmware")
|
||||
// Firmware: prefer direct field, fall back to details, then NVSwitch lookup.
|
||||
pcieFirmware := firstNonEmptyString(pcie.Firmware, stringFromDetailMap(details, "firmware"))
|
||||
if pcieFirmware == "" && isNVSwitchPCIeDevice(pcie) {
|
||||
pcieFirmware = nvswitchFirmwareBySlot[normalizeNVSwitchSlotForLookup(pcie.Slot)]
|
||||
if pcieFirmware != "" {
|
||||
details = mergeDetailMaps(details, map[string]any{
|
||||
"firmware": pcieFirmware,
|
||||
})
|
||||
}
|
||||
}
|
||||
if pcieFirmware != "" {
|
||||
details = mergeDetailMaps(details, map[string]any{"firmware": pcieFirmware})
|
||||
}
|
||||
// Telemetry fields: put into details so convertPCIeFromDevices can pick them up.
|
||||
if pcie.TemperatureC != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"temperature_c": *pcie.TemperatureC})
|
||||
}
|
||||
if pcie.PowerW != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"power_w": *pcie.PowerW})
|
||||
}
|
||||
if pcie.ECCCorrectedTotal != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"ecc_corrected_total": *pcie.ECCCorrectedTotal})
|
||||
}
|
||||
if pcie.ECCUncorrectedTotal != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"ecc_uncorrected_total": *pcie.ECCUncorrectedTotal})
|
||||
}
|
||||
if pcie.HWSlowdown != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"hw_slowdown": *pcie.HWSlowdown})
|
||||
}
|
||||
if pcie.IOMMUGroup != nil {
|
||||
details = mergeDetailMaps(details, map[string]any{"iommu_group": *pcie.IOMMUGroup})
|
||||
}
|
||||
present := pcie.Present
|
||||
appendDevice(models.HardwareDevice{
|
||||
Kind: models.DeviceKindPCIe,
|
||||
Slot: pcie.Slot,
|
||||
@@ -209,11 +233,13 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
|
||||
PartNumber: pcie.PartNumber,
|
||||
Manufacturer: pcie.Manufacturer,
|
||||
SerialNumber: pcie.SerialNumber,
|
||||
MACAddresses: append([]string(nil), pcie.MACAddresses...),
|
||||
LinkWidth: pcie.LinkWidth,
|
||||
LinkSpeed: pcie.LinkSpeed,
|
||||
MaxLinkWidth: pcie.MaxLinkWidth,
|
||||
MaxLinkSpeed: pcie.MaxLinkSpeed,
|
||||
NUMANode: pcie.NUMANode,
|
||||
Present: present,
|
||||
Status: pcie.Status,
|
||||
StatusCheckedAt: pcie.StatusCheckedAt,
|
||||
StatusChangedAt: pcie.StatusChangedAt,
|
||||
@@ -358,10 +384,12 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
|
||||
prev.score = canonicalScore(prev.item)
|
||||
byKey[key] = prev
|
||||
}
|
||||
// Secondary pass: for items without serial/BDF (noKey), try to merge into an
|
||||
// existing keyed entry with the same model+manufacturer. This handles the case
|
||||
// where a device appears both in PCIeDevices (with BDF) and NetworkAdapters
|
||||
// (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard with the same model.
|
||||
// Secondary pass: for PCIe-class items without serial/BDF (noKey), try to merge
|
||||
// into an existing keyed entry with the same model+manufacturer. This handles
|
||||
// the case where a device appears both in PCIeDevices (with BDF) and
|
||||
// NetworkAdapters (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard
|
||||
// with the same model. Do not apply this to storage: repeated NVMe slots often
|
||||
// share the same model string and would collapse incorrectly.
|
||||
// deviceIdentity returns the best available model name for secondary matching,
|
||||
// preferring Model over DeviceClass (which may hold a resolved device name).
|
||||
deviceIdentity := func(d models.HardwareDevice) string {
|
||||
@@ -377,6 +405,10 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
|
||||
var unmatched []models.HardwareDevice
|
||||
for _, item := range noKey {
|
||||
mergeKind := canonicalMergeKind(item.Kind)
|
||||
if mergeKind != "pcie-class" {
|
||||
unmatched = append(unmatched, item)
|
||||
continue
|
||||
}
|
||||
identity := deviceIdentity(item)
|
||||
mfr := strings.ToLower(strings.TrimSpace(item.Manufacturer))
|
||||
if identity == "" {
|
||||
@@ -669,7 +701,17 @@ func convertMemoryFromDevices(devices []models.HardwareDevice, collectedAt strin
}
present := boolFromPresentPtr(d.Present, true)
status := normalizeStatus(d.Status, true)
if !present || d.SizeMB == 0 || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
mem := models.MemoryDIMM{
Present: present,
SizeMB: d.SizeMB,
Type: d.Type,
Description: stringFromDetailMap(d.Details, "description"),
Manufacturer: d.Manufacturer,
SerialNumber: d.SerialNumber,
PartNumber: d.PartNumber,
Status: d.Status,
}
if !mem.IsInstalledInventory() || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
continue
}
meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
@@ -711,48 +753,50 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
if isVirtualExportStorageDevice(d) {
continue
}
if strings.TrimSpace(d.SerialNumber) == "" {
continue
}
present := d.Present == nil || *d.Present
if !present {
if !shouldExportStorageDevice(d) {
continue
}
present := boolFromPresentPtr(d.Present, true)
status := inferStorageStatus(models.Storage{Present: present})
if strings.TrimSpace(d.Status) != "" {
status = normalizeStatus(d.Status, false)
status = normalizeStatus(d.Status, !present)
}
meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
presentValue := present
result = append(result, ReanimatorStorage{
Slot: d.Slot,
Type: d.Type,
Model: d.Model,
SizeGB: d.SizeGB,
SerialNumber: d.SerialNumber,
Manufacturer: d.Manufacturer,
Firmware: d.Firmware,
Interface: d.Interface,
TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
PowerCycles: int64FromDetailMap(d.Details, "power_cycles"),
UnsafeShutdowns: int64FromDetailMap(d.Details, "unsafe_shutdowns"),
MediaErrors: int64FromDetailMap(d.Details, "media_errors"),
ErrorLogEntries: int64FromDetailMap(d.Details, "error_log_entries"),
WrittenBytes: int64FromDetailMap(d.Details, "written_bytes"),
ReadBytes: int64FromDetailMap(d.Details, "read_bytes"),
LifeUsedPct: floatFromDetailMap(d.Details, "life_used_pct"),
RemainingEndurancePct: d.RemainingEndurancePct,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
AvailableSparePct: floatFromDetailMap(d.Details, "available_spare_pct"),
ReallocatedSectors: int64FromDetailMap(d.Details, "reallocated_sectors"),
CurrentPendingSectors: int64FromDetailMap(d.Details, "current_pending_sectors"),
OfflineUncorrectable: int64FromDetailMap(d.Details, "offline_uncorrectable"),
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
StatusChangedAt: meta.StatusChangedAt,
ManufacturedYearWeek: manufacturedYearWeekFromDetails(d.Details),
StatusHistory: meta.StatusHistory,
ErrorDescription: meta.ErrorDescription,
Slot: d.Slot,
Type: d.Type,
Model: d.Model,
SizeGB: d.SizeGB,
SerialNumber: d.SerialNumber,
Manufacturer: d.Manufacturer,
Firmware: d.Firmware,
Interface: d.Interface,
Present: &presentValue,
LogicalBlockSizeBytes: int64FromDetailMap(d.Details, "logical_block_size_bytes"),
PhysicalBlockSizeBytes: int64FromDetailMap(d.Details, "physical_block_size_bytes"),
MetadataBytesPerBlock: int64FromDetailMap(d.Details, "metadata_bytes_per_block"),
TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
PowerCycles: int64FromDetailMap(d.Details, "power_cycles"),
UnsafeShutdowns: int64FromDetailMap(d.Details, "unsafe_shutdowns"),
MediaErrors: int64FromDetailMap(d.Details, "media_errors"),
ErrorLogEntries: int64FromDetailMap(d.Details, "error_log_entries"),
WrittenBytes: int64FromDetailMap(d.Details, "written_bytes"),
ReadBytes: int64FromDetailMap(d.Details, "read_bytes"),
LifeUsedPct: floatFromDetailMap(d.Details, "life_used_pct"),
RemainingEndurancePct: d.RemainingEndurancePct,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
AvailableSparePct: floatFromDetailMap(d.Details, "available_spare_pct"),
ReallocatedSectors: int64FromDetailMap(d.Details, "reallocated_sectors"),
CurrentPendingSectors: int64FromDetailMap(d.Details, "current_pending_sectors"),
OfflineUncorrectable: int64FromDetailMap(d.Details, "offline_uncorrectable"),
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
StatusChangedAt: meta.StatusChangedAt,
ManufacturedYearWeek: manufacturedYearWeekFromDetails(d.Details),
StatusHistory: meta.StatusHistory,
ErrorDescription: meta.ErrorDescription,
})
}
return result
@@ -803,6 +847,7 @@ func convertPCIeFromDevices(devices []models.HardwareDevice, collectedAt string)
VendorID: d.VendorID,
DeviceID: d.DeviceID,
NUMANode: d.NUMANode,
IOMMUGroup: intPtrFromDetailMap(d.Details, "iommu_group"),
TemperatureC: temperatureC,
PowerW: powerW,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
@@ -1334,7 +1379,7 @@ func convertMemory(memory []models.MemoryDIMM, collectedAt string) []ReanimatorM

result := make([]ReanimatorMemory, 0, len(memory))
for _, mem := range memory {
if !mem.Present || mem.SizeMB == 0 || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
if !mem.IsInstalledInventory() || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
continue
}
status := normalizeStatus(mem.Status, true)
@@ -1376,14 +1421,16 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt

result := make([]ReanimatorStorage, 0, len(storage))
for _, stor := range storage {
// Skip storage without serial number
if stor.SerialNumber == "" {
if isVirtualLegacyStorageDevice(stor) {
continue
}
if !shouldExportLegacyStorage(stor) {
continue
}

status := inferStorageStatus(stor)
if strings.TrimSpace(stor.Status) != "" {
status = normalizeStatus(stor.Status, false)
status = normalizeStatus(stor.Status, !stor.Present)
}
meta := buildStatusMeta(
status,
@@ -1393,6 +1440,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
stor.ErrorDescription,
collectedAt,
)
present := stor.Present

result = append(result, ReanimatorStorage{
Slot: stor.Slot,
@@ -1403,6 +1451,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
Manufacturer: stor.Manufacturer,
Firmware: stor.Firmware,
Interface: stor.Interface,
Present: &present,
RemainingEndurancePct: stor.RemainingEndurancePct,
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
@@ -1414,6 +1463,53 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
return result
}

func shouldExportStorageDevice(d models.HardwareDevice) bool {
if normalizedSerial(d.SerialNumber) != "" {
return true
}
if strings.TrimSpace(d.Slot) != "" {
return true
}
if hasMeaningfulExporterText(d.Model) {
return true
}
if hasMeaningfulExporterText(d.Type) || hasMeaningfulExporterText(d.Interface) {
return true
}
if d.SizeGB > 0 {
return true
}
return d.Present != nil
}

func shouldExportLegacyStorage(stor models.Storage) bool {
if normalizedSerial(stor.SerialNumber) != "" {
return true
}
if strings.TrimSpace(stor.Slot) != "" {
return true
}
if hasMeaningfulExporterText(stor.Model) {
return true
}
if hasMeaningfulExporterText(stor.Type) || hasMeaningfulExporterText(stor.Interface) {
return true
}
if stor.SizeGB > 0 {
return true
}
return stor.Present
}

func isVirtualLegacyStorageDevice(stor models.Storage) bool {
return isVirtualExportStorageDevice(models.HardwareDevice{
Kind: models.DeviceKindStorage,
Slot: stor.Slot,
Model: stor.Model,
Manufacturer: stor.Manufacturer,
})
}

// convertPCIeDevices converts PCIe devices, GPUs, and network adapters to Reanimator format
func convertPCIeDevices(hw *models.HardwareConfig, collectedAt string) []ReanimatorPCIe {
result := make([]ReanimatorPCIe, 0)
@@ -1895,7 +1991,10 @@ func pcieDedupKey(item ReanimatorPCIe) string {
slot := strings.ToLower(strings.TrimSpace(item.Slot))
serial := strings.ToLower(strings.TrimSpace(item.SerialNumber))
bdf := strings.ToLower(strings.TrimSpace(item.BDF))
if slot != "" {
// Generic slot names (e.g. "PCIe Device" from HGX BMC) are not unique
// hardware positions — multiple distinct devices share the same name.
// Fall through to serial/BDF so they are not incorrectly collapsed.
if slot != "" && !isGenericPCIeSlotName(slot) {
return "slot:" + slot
}
if serial != "" {
@@ -1904,9 +2003,22 @@ func pcieDedupKey(item ReanimatorPCIe) string {
if bdf != "" {
return "bdf:" + bdf
}
if slot != "" {
return "slot:" + slot
}
return strings.ToLower(strings.TrimSpace(item.DeviceClass)) + "|" + strings.ToLower(strings.TrimSpace(item.Model))
}

// isGenericPCIeSlotName reports whether slot is a generic device-type label
// rather than a unique hardware position identifier.
func isGenericPCIeSlotName(slot string) bool {
switch slot {
case "pcie device", "pcie slot", "pcie":
return true
}
return false
}

func pcieQualityScore(item ReanimatorPCIe) int {
score := 0
if strings.TrimSpace(item.SerialNumber) != "" {
@@ -2011,6 +2123,17 @@ func parseSocketFromSlot(slot string) int {
return v
}

func intPtrFromDetailMap(details map[string]any, key string) *int {
if details == nil {
return nil
}
if _, ok := details[key]; !ok {
return nil
}
v := intFromDetailMap(details, key)
return &v
}

func intFromDetailMap(details map[string]any, key string) int {
if details == nil {
return 0
@@ -2180,10 +2303,8 @@ func normalizePCIeDeviceClass(d models.HardwareDevice) string {

func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
switch strings.ToLower(strings.TrimSpace(deviceClass)) {
case "", "network", "network controller", "networkcontroller":
case "", "network", "network controller", "networkcontroller", "ethernet", "ethernet controller", "ethernetcontroller":
return "NetworkController"
case "ethernet", "ethernet controller", "ethernetcontroller":
return "EthernetController"
case "fibre channel", "fibre channel controller", "fibrechannelcontroller", "fc":
return "FibreChannelController"
case "display", "displaycontroller", "display controller", "vga":
@@ -2204,8 +2325,6 @@ func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
func normalizeNetworkDeviceClass(portType, model, description string) string {
joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{portType, model, description}, " ")))
switch {
case strings.Contains(joined, "ethernet"):
return "EthernetController"
case strings.Contains(joined, "fibre channel") || strings.Contains(joined, " fibrechannel") || strings.Contains(joined, "fc "):
return "FibreChannelController"
default:

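The dedup rule in pcieDedupKey above, prefer a real slot but treat generic labels like "PCIe Device" as non-unique and fall back to serial then BDF, can be sketched standalone. The struct and helper names below are illustrative stand-ins, not the exporter's real types:

```go
package main

import (
	"fmt"
	"strings"
)

// item is an illustrative stand-in for the exported PCIe record.
type item struct {
	Slot, SerialNumber, BDF string
}

// genericSlot mirrors the idea of isGenericPCIeSlotName: labels that are
// device-type names, not unique hardware positions.
func genericSlot(slot string) bool {
	switch slot {
	case "pcie device", "pcie slot", "pcie":
		return true
	}
	return false
}

// dedupKey prefers a unique slot, then serial, then BDF, and only falls
// back to the (non-unique) slot label when nothing better exists.
func dedupKey(it item) string {
	slot := strings.ToLower(strings.TrimSpace(it.Slot))
	if slot != "" && !genericSlot(slot) {
		return "slot:" + slot
	}
	if serial := strings.ToLower(strings.TrimSpace(it.SerialNumber)); serial != "" {
		return "serial:" + serial
	}
	if bdf := strings.ToLower(strings.TrimSpace(it.BDF)); bdf != "" {
		return "bdf:" + bdf
	}
	return "slot:" + slot
}

func main() {
	a := item{Slot: "PCIe Device", SerialNumber: "GPU-1"}
	b := item{Slot: "PCIe Device", SerialNumber: "GPU-2"}
	// Generic slot name: keys stay distinct, so the two GPUs survive dedup.
	fmt.Println(dedupKey(a) != dedupKey(b)) // true
}
```

With this ordering, two HGX GPUs that both report Slot "PCIe Device" get `serial:`-based keys and are never collapsed into one entry.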
@@ -259,6 +259,29 @@ func TestConvertMemory(t *testing.T) {
}
}

func TestConvertMemory_KeepsInstalledDIMMWithUnknownSize(t *testing.T) {
memory := []models.MemoryDIMM{
{
Slot: "PROC 1 DIMM 3",
Present: true,
SizeMB: 0,
Manufacturer: "Hynix",
PartNumber: "HMCG88AEBRA115N",
SerialNumber: "2B5F92C6",
Status: "OK",
},
}

result := convertMemory(memory, "2026-03-30T10:00:00Z")

if len(result) != 1 {
t.Fatalf("expected 1 inventory-only DIMM, got %d", len(result))
}
if result[0].PartNumber != "HMCG88AEBRA115N" || result[0].SerialNumber != "2B5F92C6" || result[0].SizeMB != 0 {
t.Fatalf("unexpected converted memory: %+v", result[0])
}
}

func TestConvertToReanimator_CPUSerialIsNotSynthesizedAndSocketIsDeduped(t *testing.T) {
input := &models.AnalysisResult{
Filename: "cpu-dedupe.json",
@@ -424,20 +447,26 @@ func TestConvertStorage(t *testing.T) {
Slot: "OB02",
Type: "NVMe",
Model: "INTEL SSDPF2KX076T1",
SerialNumber: "", // No serial - should be skipped
SerialNumber: "",
Present: true,
},
}

result := convertStorage(storage, "2026-02-10T15:30:00Z")

if len(result) != 1 {
t.Fatalf("expected 1 storage device (skipped one without serial), got %d", len(result))
if len(result) != 2 {
t.Fatalf("expected both inventory slots to be exported, got %d", len(result))
}

if result[0].Status != "Unknown" {
t.Errorf("expected Unknown status, got %q", result[0].Status)
}
if result[1].SerialNumber != "" {
t.Errorf("expected empty serial for second storage slot, got %q", result[1].SerialNumber)
}
if result[1].Present == nil || !*result[1].Present {
t.Fatalf("expected present=true to be preserved for populated slot without serial")
}
}

func TestConvertToReanimator_SkipsAMIVirtualStorageDevices(t *testing.T) {
@@ -704,6 +733,42 @@ func TestConvertPCIeDevices_SkipsDisplayControllerDuplicates(t *testing.T) {
}
}

func TestConvertPCIeDevices_PreservesAllGPUsWithGenericSlot(t *testing.T) {
// Supermicro HGX BMC reports all GPU PCIe devices with Name "PCIe Device" —
// a generic label that is not a unique hardware position. All 8 GPUs must
// be preserved; dedup by generic slot name must not collapse them into one.
gpus := make([]models.GPU, 8)
serials := []string{
"1654925165720", "1654925166160", "1654925165942", "1654925165271",
"1654925165719", "1654925165252", "1654925165304", "1654925165587",
}
for i, sn := range serials {
gpus[i] = models.GPU{
Slot: "PCIe Device",
Model: "B200 180GB HBM3e",
Manufacturer: "NVIDIA",
SerialNumber: sn,
PartNumber: "2901-886-A1",
Status: "OK",
}
}
hw := &models.HardwareConfig{GPUs: gpus}
result := convertPCIeDevices(hw, "2026-04-13T10:00:00Z")
if len(result) != 8 {
t.Fatalf("expected 8 GPU entries (one per serial), got %d", len(result))
}
seen := make(map[string]bool)
for _, r := range result {
if seen[r.SerialNumber] {
t.Fatalf("duplicate serial %q in PCIe result", r.SerialNumber)
}
seen[r.SerialNumber] = true
if r.DeviceClass != "VideoController" {
t.Fatalf("expected VideoController device class, got %q", r.DeviceClass)
}
}
}

func TestConvertPCIeDevices_MapsGPUStatusHistory(t *testing.T) {
hw := &models.HardwareConfig{
GPUs: []models.GPU{
@@ -971,6 +1036,52 @@ func TestConvertToReanimator_StatusFallbackUsesCollectedAt(t *testing.T) {
}
}

func TestConvertToReanimator_ExportsStorageInventoryWithoutSerial(t *testing.T) {
collectedAt := time.Date(2026, 4, 1, 9, 0, 0, 0, time.UTC)
input := &models.AnalysisResult{
Filename: "nvme-inventory.json",
CollectedAt: collectedAt,
Hardware: &models.HardwareConfig{
BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
Storage: []models.Storage{
{
Slot: "OB01",
Type: "NVMe",
Model: "PM9A3",
SerialNumber: "SSD-001",
Present: true,
},
{
Slot: "OB02",
Type: "NVMe",
Model: "PM9A3",
Present: true,
},
{
Slot: "OB03",
Type: "NVMe",
Model: "PM9A3",
Present: false,
},
},
},
}

out, err := ConvertToReanimator(input)
if err != nil {
t.Fatalf("ConvertToReanimator() failed: %v", err)
}
if len(out.Hardware.Storage) != 3 {
t.Fatalf("expected 3 storage entries including inventory slots without serial, got %d", len(out.Hardware.Storage))
}
if out.Hardware.Storage[1].Slot != "OB02" || out.Hardware.Storage[1].SerialNumber != "" {
t.Fatalf("expected OB02 storage slot without serial to survive export, got %#v", out.Hardware.Storage[1])
}
if out.Hardware.Storage[2].Present == nil || *out.Hardware.Storage[2].Present {
t.Fatalf("expected OB03 to preserve present=false, got %#v", out.Hardware.Storage[2])
}
}

func TestConvertToReanimator_FirmwareExcludesDeviceBoundEntries(t *testing.T) {
input := &models.AnalysisResult{
Filename: "fw-filter-test.json",
@@ -1658,6 +1769,43 @@ func TestConvertToReanimator_ExportsContractV24Telemetry(t *testing.T) {
}
}

func TestConvertToReanimator_UnifiesEthernetAndNetworkControllers(t *testing.T) {
input := &models.AnalysisResult{
Hardware: &models.HardwareConfig{
BoardInfo: models.BoardInfo{SerialNumber: "BOARD-123"},
Devices: []models.HardwareDevice{
{
Kind: models.DeviceKindPCIe,
Slot: "PCIe1",
DeviceClass: "EthernetController",
Present: boolPtr(true),
SerialNumber: "ETH-001",
},
{
Kind: models.DeviceKindNetwork,
Slot: "NIC1",
Model: "Ethernet Adapter",
Present: boolPtr(true),
SerialNumber: "NIC-001",
},
},
},
}

out, err := ConvertToReanimator(input)
if err != nil {
t.Fatalf("ConvertToReanimator() failed: %v", err)
}
if len(out.Hardware.PCIeDevices) != 2 {
t.Fatalf("expected two pcie-class exports, got %d", len(out.Hardware.PCIeDevices))
}
for _, dev := range out.Hardware.PCIeDevices {
if dev.DeviceClass != "NetworkController" {
t.Fatalf("expected unified NetworkController class, got %+v", dev)
}
}
}

func TestConvertToReanimator_PreservesLegacyStorageAndPSUDetails(t *testing.T) {
input := &models.AnalysisResult{
Filename: "legacy-details.json",

@@ -12,15 +12,16 @@ type ReanimatorExport struct {

// ReanimatorHardware contains all hardware components
type ReanimatorHardware struct {
Board ReanimatorBoard `json:"board"`
Firmware []ReanimatorFirmware `json:"firmware,omitempty"`
CPUs []ReanimatorCPU `json:"cpus,omitempty"`
Memory []ReanimatorMemory `json:"memory,omitempty"`
Storage []ReanimatorStorage `json:"storage,omitempty"`
PCIeDevices []ReanimatorPCIe `json:"pcie_devices,omitempty"`
PowerSupplies []ReanimatorPSU `json:"power_supplies,omitempty"`
Sensors *ReanimatorSensors `json:"sensors,omitempty"`
EventLogs []ReanimatorEventLog `json:"event_logs,omitempty"`
Board ReanimatorBoard `json:"board"`
Firmware []ReanimatorFirmware `json:"firmware,omitempty"`
CPUs []ReanimatorCPU `json:"cpus,omitempty"`
Memory []ReanimatorMemory `json:"memory,omitempty"`
Storage []ReanimatorStorage `json:"storage,omitempty"`
PCIeDevices []ReanimatorPCIe `json:"pcie_devices,omitempty"`
PowerSupplies []ReanimatorPSU `json:"power_supplies,omitempty"`
Sensors *ReanimatorSensors `json:"sensors,omitempty"`
EventLogs []ReanimatorEventLog `json:"event_logs,omitempty"`
PlatformConfig map[string]any `json:"platform_config,omitempty"`
}

// ReanimatorBoard represents motherboard/server information
@@ -101,17 +102,20 @@ type ReanimatorMemory struct {

// ReanimatorStorage represents a storage device
type ReanimatorStorage struct {
Slot string `json:"slot"`
Type string `json:"type,omitempty"`
Model string `json:"model"`
SizeGB int `json:"size_gb,omitempty"`
SerialNumber string `json:"serial_number"`
Manufacturer string `json:"manufacturer,omitempty"`
Firmware string `json:"firmware,omitempty"`
Interface string `json:"interface,omitempty"`
Present *bool `json:"present,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerOnHours int64 `json:"power_on_hours,omitempty"`
Slot string `json:"slot"`
Type string `json:"type,omitempty"`
Model string `json:"model"`
SizeGB int `json:"size_gb,omitempty"`
SerialNumber string `json:"serial_number"`
Manufacturer string `json:"manufacturer,omitempty"`
Firmware string `json:"firmware,omitempty"`
Interface string `json:"interface,omitempty"`
Present *bool `json:"present,omitempty"`
LogicalBlockSizeBytes int64 `json:"logical_block_size_bytes,omitempty"`
PhysicalBlockSizeBytes int64 `json:"physical_block_size_bytes,omitempty"`
MetadataBytesPerBlock int64 `json:"metadata_bytes_per_block,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerOnHours int64 `json:"power_on_hours,omitempty"`
PowerCycles int64 `json:"power_cycles,omitempty"`
UnsafeShutdowns int64 `json:"unsafe_shutdowns,omitempty"`
MediaErrors int64 `json:"media_errors,omitempty"`
@@ -139,6 +143,7 @@ type ReanimatorPCIe struct {
VendorID int `json:"vendor_id,omitempty"`
DeviceID int `json:"device_id,omitempty"`
NUMANode int `json:"numa_node,omitempty"`
IOMMUGroup *int `json:"iommu_group,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerW float64 `json:"power_w,omitempty"`
LifeRemainingPct float64 `json:"life_remaining_pct,omitempty"`

29
internal/models/memory.go
Normal file
@@ -0,0 +1,29 @@
package models

import "strings"

// HasInventoryIdentity reports whether the DIMM has enough identifying
// inventory data to treat it as a populated module even when size is unknown.
func (m MemoryDIMM) HasInventoryIdentity() bool {
return strings.TrimSpace(m.SerialNumber) != "" ||
strings.TrimSpace(m.PartNumber) != "" ||
strings.TrimSpace(m.Type) != "" ||
strings.TrimSpace(m.Technology) != "" ||
strings.TrimSpace(m.Description) != ""
}

// IsInstalledInventory reports whether the DIMM represents an installed module
// that should be kept in canonical inventory and exports.
func (m MemoryDIMM) IsInstalledInventory() bool {
if !m.Present {
return false
}

status := strings.ToLower(strings.TrimSpace(m.Status))
switch status {
case "empty", "absent", "not installed":
return false
}

return m.SizeMB > 0 || m.HasInventoryIdentity()
}
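The installed-inventory rule above can be exercised standalone. This sketch uses a local struct with only the fields the predicate touches (the real MemoryDIMM has more):

```go
package main

import (
	"fmt"
	"strings"
)

// dimm carries just the fields the installed-inventory rule inspects here.
type dimm struct {
	Present      bool
	SizeMB       int
	Status       string
	SerialNumber string
	PartNumber   string
}

// installed mirrors IsInstalledInventory: a DIMM counts as installed when it
// is present, not explicitly empty, and has either a known size or enough
// identity data (serial/part number) to treat it as a populated module.
func installed(m dimm) bool {
	if !m.Present {
		return false
	}
	switch strings.ToLower(strings.TrimSpace(m.Status)) {
	case "empty", "absent", "not installed":
		return false
	}
	hasIdentity := strings.TrimSpace(m.SerialNumber) != "" || strings.TrimSpace(m.PartNumber) != ""
	return m.SizeMB > 0 || hasIdentity
}

func main() {
	// Size unknown but part and serial known: still installed inventory.
	fmt.Println(installed(dimm{Present: true, PartNumber: "HMCG88AEBRA115N", SerialNumber: "2B5F92C6", Status: "OK"})) // true
	// Present flag set but status says Empty: filtered out.
	fmt.Println(installed(dimm{Present: true, SizeMB: 32768, Status: "Empty"})) // false
}
```

This is exactly the case the new TestConvertMemory_KeepsInstalledDIMMWithUnknownSize covers: a DIMM with SizeMB 0 but real part/serial data survives export.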
@@ -245,6 +245,9 @@ type Storage struct {
Location string `json:"location,omitempty"` // Front/Rear
BackplaneID int `json:"backplane_id,omitempty"`
RemainingEndurancePct *int `json:"remaining_endurance_pct,omitempty"` // 0-100 %; nil = not reported
LogicalBlockSizeBytes int64 `json:"logical_block_size_bytes,omitempty"`
PhysicalBlockSizeBytes int64 `json:"physical_block_size_bytes,omitempty"`
MetadataBytesPerBlock int64 `json:"metadata_bytes_per_block,omitempty"`
Status string `json:"status,omitempty"`
Details map[string]any `json:"details,omitempty"`

@@ -257,15 +260,16 @@ type Storage struct {

// StorageVolume represents a logical storage volume (RAID/VROC/etc.).
type StorageVolume struct {
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Controller string `json:"controller,omitempty"`
RAIDLevel string `json:"raid_level,omitempty"`
SizeGB int `json:"size_gb,omitempty"`
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
Status string `json:"status,omitempty"`
Bootable bool `json:"bootable,omitempty"`
Encrypted bool `json:"encrypted,omitempty"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Controller string `json:"controller,omitempty"`
RAIDLevel string `json:"raid_level,omitempty"`
SizeGB int `json:"size_gb,omitempty"`
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
Status string `json:"status,omitempty"`
Bootable bool `json:"bootable,omitempty"`
Encrypted bool `json:"encrypted,omitempty"`
Drives []string `json:"drives,omitempty"` // member drive names/labels
}

// PCIeDevice represents a PCIe device
@@ -277,6 +281,8 @@ type PCIeDevice struct {
BDF string `json:"bdf"`
DeviceClass string `json:"device_class"`
Manufacturer string `json:"manufacturer,omitempty"`
Model string `json:"model,omitempty"`
Firmware string `json:"firmware,omitempty"`
LinkWidth int `json:"link_width"`
LinkSpeed string `json:"link_speed"`
MaxLinkWidth int `json:"max_link_width"`
@@ -285,8 +291,17 @@ type PCIeDevice struct {
SerialNumber string `json:"serial_number,omitempty"`
MACAddresses []string `json:"mac_addresses,omitempty"`
NUMANode int `json:"numa_node,omitempty"` // 0 = not reported/N/A
Present *bool `json:"present,omitempty"`
IOMMUGroup *int `json:"iommu_group,omitempty"`
Status string `json:"status,omitempty"`

// GPU telemetry fields (populated by bee audit for GPU devices)
TemperatureC *float64 `json:"temperature_c,omitempty"`
PowerW *float64 `json:"power_w,omitempty"`
ECCCorrectedTotal *int64 `json:"ecc_corrected_total,omitempty"`
ECCUncorrectedTotal *int64 `json:"ecc_uncorrected_total,omitempty"`
HWSlowdown *bool `json:"hw_slowdown,omitempty"`

StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`

@@ -19,6 +19,7 @@ const maxZipArchiveSize = 50 * 1024 * 1024
const maxGzipDecompressedSize = 50 * 1024 * 1024

var supportedArchiveExt = map[string]struct{}{
".ahs": {},
".gz": {},
".tgz": {},
".tar": {},
@@ -45,6 +46,8 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
ext := strings.ToLower(filepath.Ext(archivePath))

switch ext {
case ".ahs":
return extractSingleFile(archivePath)
case ".gz", ".tgz":
return extractTarGz(archivePath)
case ".tar", ".sds":
@@ -66,6 +69,8 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
ext := strings.ToLower(filepath.Ext(filename))

switch ext {
case ".ahs":
return extractSingleFileFromReader(r, filename)
case ".gz", ".tgz":
return extractTarGzFromReader(r, filename)
case ".tar", ".sds":

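The dispatch above keys on filepath.Ext, which returns only the final extension, so "dump.tar.gz" routes through the ".gz" branch. A minimal sketch of the same pattern (the kind labels here are placeholders, not the extractor's real handler names):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// kindForArchive mirrors the switch in ExtractArchive: route on the
// lowercased final extension of the filename.
func kindForArchive(name string) string {
	switch strings.ToLower(filepath.Ext(name)) {
	case ".ahs":
		return "single-file" // HPE iLO AHS: treated as one opaque blob
	case ".gz", ".tgz":
		return "tar.gz"
	case ".tar", ".sds":
		return "tar"
	case ".zip":
		return "zip"
	}
	return "unsupported"
}

func main() {
	fmt.Println(kindForArchive("HPE_CZ2D1X0GS3_20260330.ahs")) // single-file
	fmt.Println(kindForArchive("dump.tar.gz"))                 // tar.gz (Ext sees only ".gz")
}
```

Because only the last suffix matters, a gzipped single file and a gzipped tarball hit the same branch; the handler behind it has to distinguish them by content.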
@@ -76,6 +76,7 @@ func TestIsSupportedArchiveFilename(t *testing.T) {
name string
want bool
}{
{name: "HPE_CZ2D1X0GS3_20260330.ahs", want: true},
{name: "dump.tar.gz", want: true},
{name: "nvidia-bug-report-1651124000923.log.gz", want: true},
{name: "snapshot.zip", want: true},
@@ -124,3 +125,20 @@ func TestExtractArchiveFromReaderSDS(t *testing.T) {
t.Fatalf("expected bmc/pack.info, got %q", files[0].Path)
}
}

func TestExtractArchiveFromReaderAHS(t *testing.T) {
payload := []byte("ABJRtest")
files, err := ExtractArchiveFromReader(bytes.NewReader(payload), "sample.ahs")
if err != nil {
t.Fatalf("extract ahs from reader: %v", err)
}
if len(files) != 1 {
t.Fatalf("expected 1 extracted file, got %d", len(files))
}
if files[0].Path != "sample.ahs" {
t.Fatalf("expected sample.ahs, got %q", files[0].Path)
}
if string(files[0].Content) != string(payload) {
t.Fatalf("content mismatch")
}
}

601
internal/parser/vendors/easy_bee/parser.go
vendored
Normal file
@@ -0,0 +1,601 @@
package easy_bee

import (
"encoding/json"
"fmt"
"strings"
"time"

"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)

const parserVersion = "1.0"

func init() {
parser.Register(&Parser{})
}

// Parser imports support bundles produced by reanimator-easy-bee.
// These archives embed a ready-to-use hardware snapshot in export/bee-audit.json.
type Parser struct{}

func (p *Parser) Name() string {
return "Reanimator Easy Bee Parser"
}

func (p *Parser) Vendor() string {
return "easy_bee"
}

func (p *Parser) Version() string {
return parserVersion
}

func (p *Parser) Detect(files []parser.ExtractedFile) int {
confidence := 0
hasManifest := false
hasBeeAudit := false
hasRuntimeHealth := false
hasTechdump := false
hasBundlePrefix := false

for _, f := range files {
path := strings.ToLower(strings.TrimSpace(f.Path))
content := strings.ToLower(string(f.Content))

if !hasBundlePrefix && strings.Contains(path, "bee-support-") {
hasBundlePrefix = true
confidence += 5
}

if (strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") &&
strings.Contains(content, "bee_version=") {
hasManifest = true
confidence += 35
if strings.Contains(content, "export_dir=") {
confidence += 10
}
}

if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
hasBeeAudit = true
confidence += 55
}

if hasBundlePrefix && (strings.HasSuffix(path, "/export/runtime-health.json") || path == "runtime-health.json") {
hasRuntimeHealth = true
confidence += 10
}

if hasBundlePrefix && !hasTechdump && strings.Contains(path, "/export/techdump/") {
hasTechdump = true
confidence += 10
}
}

if hasManifest && hasBeeAudit {
return 100
}
if hasBeeAudit && (hasRuntimeHealth || hasTechdump) {
confidence += 10
}
if confidence > 100 {
return 100
}
return confidence
}
|
||||
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	snapshotFile := findSnapshotFile(files)
	if snapshotFile == nil {
		return nil, fmt.Errorf("easy-bee snapshot not found")
	}

	var snapshot beeSnapshot
	if err := json.Unmarshal(snapshotFile.Content, &snapshot); err != nil {
		return nil, fmt.Errorf("decode %s: %w", snapshotFile.Path, err)
	}

	manifest := parseManifest(files)

	result := &models.AnalysisResult{
		SourceType:              strings.TrimSpace(snapshot.SourceType),
		Protocol:                strings.TrimSpace(snapshot.Protocol),
		TargetHost:              firstNonEmpty(snapshot.TargetHost, manifest.Host),
		SourceTimezone:          strings.TrimSpace(snapshot.SourceTimezone),
		CollectedAt:             chooseCollectedAt(snapshot, manifest),
		InventoryLastModifiedAt: snapshot.InventoryLastModifiedAt,
		RawPayloads:             snapshot.RawPayloads,
		Events:                  make([]models.Event, 0),
		FRU:                     append([]models.FRUInfo(nil), snapshot.FRU...),
		Sensors:                 make([]models.SensorReading, 0),
		Hardware: &models.HardwareConfig{
			Firmware:        append([]models.FirmwareInfo(nil), snapshot.Hardware.Firmware...),
			BoardInfo:       snapshot.Hardware.Board,
			Devices:         append([]models.HardwareDevice(nil), snapshot.Hardware.Devices...),
			CPUs:            append([]models.CPU(nil), snapshot.Hardware.CPUs...),
			Memory:          append([]models.MemoryDIMM(nil), snapshot.Hardware.Memory...),
			Storage:         append([]models.Storage(nil), snapshot.Hardware.Storage...),
			Volumes:         append([]models.StorageVolume(nil), snapshot.Hardware.Volumes...),
			PCIeDevices:     normalizePCIeDevices(snapshot.Hardware.PCIeDevices),
			GPUs:            append([]models.GPU(nil), snapshot.Hardware.GPUs...),
			NetworkCards:    append([]models.NIC(nil), snapshot.Hardware.NetworkCards...),
			NetworkAdapters: normalizeNetworkAdapters(snapshot.Hardware.NetworkAdapters),
			PowerSupply:     append([]models.PSU(nil), snapshot.Hardware.PowerSupply...),
		},
	}

	result.Events = append(result.Events, snapshot.Events...)
	result.Events = append(result.Events, convertRuntimeToEvents(snapshot.Runtime, result.CollectedAt)...)
	result.Events = append(result.Events, convertEventLogs(snapshot.Hardware.EventLogs)...)

	result.Sensors = append(result.Sensors, snapshot.Sensors...)
	result.Sensors = append(result.Sensors, flattenSensorGroups(snapshot.Hardware.Sensors)...)

	if len(result.FRU) == 0 {
		if boardFRU, ok := buildBoardFRU(snapshot.Hardware.Board); ok {
			result.FRU = append(result.FRU, boardFRU)
		}
	}

	if result.Hardware == nil || (result.Hardware.BoardInfo.SerialNumber == "" &&
		len(result.Hardware.CPUs) == 0 &&
		len(result.Hardware.Memory) == 0 &&
		len(result.Hardware.Storage) == 0 &&
		len(result.Hardware.PCIeDevices) == 0 &&
		len(result.Hardware.Devices) == 0) {
		return nil, fmt.Errorf("unsupported easy-bee snapshot format")
	}

	return result, nil
}

type beeSnapshot struct {
	SourceType              string                 `json:"source_type,omitempty"`
	Protocol                string                 `json:"protocol,omitempty"`
	TargetHost              string                 `json:"target_host,omitempty"`
	SourceTimezone          string                 `json:"source_timezone,omitempty"`
	CollectedAt             time.Time              `json:"collected_at,omitempty"`
	InventoryLastModifiedAt time.Time              `json:"inventory_last_modified_at,omitempty"`
	RawPayloads             map[string]any         `json:"raw_payloads,omitempty"`
	Events                  []models.Event         `json:"events,omitempty"`
	FRU                     []models.FRUInfo       `json:"fru,omitempty"`
	Sensors                 []models.SensorReading `json:"sensors,omitempty"`
	Hardware                beeHardware            `json:"hardware"`
	Runtime                 beeRuntime             `json:"runtime,omitempty"`
}

type beeHardware struct {
	Board           models.BoardInfo        `json:"board"`
	Firmware        []models.FirmwareInfo   `json:"firmware,omitempty"`
	Devices         []models.HardwareDevice `json:"devices,omitempty"`
	CPUs            []models.CPU            `json:"cpus,omitempty"`
	Memory          []models.MemoryDIMM     `json:"memory,omitempty"`
	Storage         []models.Storage        `json:"storage,omitempty"`
	Volumes         []models.StorageVolume  `json:"volumes,omitempty"`
	PCIeDevices     []models.PCIeDevice     `json:"pcie_devices,omitempty"`
	GPUs            []models.GPU            `json:"gpus,omitempty"`
	NetworkCards    []models.NIC            `json:"network_cards,omitempty"`
	NetworkAdapters []models.NetworkAdapter `json:"network_adapters,omitempty"`
	PowerSupply     []models.PSU            `json:"power_supplies,omitempty"`
	Sensors         beeSensorGroups         `json:"sensors,omitempty"`
	EventLogs       []beeEventLog           `json:"event_logs,omitempty"`
}

type beeSensorGroups struct {
	Fans         []beeFanSensor         `json:"fans,omitempty"`
	Power        []beePowerSensor       `json:"power,omitempty"`
	Temperatures []beeTemperatureSensor `json:"temperatures,omitempty"`
	Other        []beeOtherSensor       `json:"other,omitempty"`
}

type beeFanSensor struct {
	Name     string `json:"name"`
	Location string `json:"location,omitempty"`
	RPM      int    `json:"rpm,omitempty"`
	Status   string `json:"status,omitempty"`
}

type beePowerSensor struct {
	Name     string  `json:"name"`
	Location string  `json:"location,omitempty"`
	VoltageV float64 `json:"voltage_v,omitempty"`
	CurrentA float64 `json:"current_a,omitempty"`
	PowerW   float64 `json:"power_w,omitempty"`
	Status   string  `json:"status,omitempty"`
}

type beeTemperatureSensor struct {
	Name                     string  `json:"name"`
	Location                 string  `json:"location,omitempty"`
	Celsius                  float64 `json:"celsius,omitempty"`
	ThresholdWarningCelsius  float64 `json:"threshold_warning_celsius,omitempty"`
	ThresholdCriticalCelsius float64 `json:"threshold_critical_celsius,omitempty"`
	Status                   string  `json:"status,omitempty"`
}

type beeOtherSensor struct {
	Name     string  `json:"name"`
	Location string  `json:"location,omitempty"`
	Value    float64 `json:"value,omitempty"`
	Unit     string  `json:"unit,omitempty"`
	Status   string  `json:"status,omitempty"`
}

type beeRuntime struct {
	Status        string             `json:"status,omitempty"`
	CheckedAt     time.Time          `json:"checked_at,omitempty"`
	NetworkStatus string             `json:"network_status,omitempty"`
	Issues        []beeRuntimeIssue  `json:"issues,omitempty"`
	Services      []beeRuntimeStatus `json:"services,omitempty"`
	Interfaces    []beeInterface     `json:"interfaces,omitempty"`
}

type beeRuntimeIssue struct {
	Code        string `json:"code,omitempty"`
	Severity    string `json:"severity,omitempty"`
	Description string `json:"description,omitempty"`
}

type beeRuntimeStatus struct {
	Name   string `json:"name,omitempty"`
	Status string `json:"status,omitempty"`
}

type beeInterface struct {
	Name    string   `json:"name,omitempty"`
	State   string   `json:"state,omitempty"`
	IPv4    []string `json:"ipv4,omitempty"`
	Outcome string   `json:"outcome,omitempty"`
}

type beeEventLog struct {
	Source     string         `json:"source,omitempty"`
	EventTime  string         `json:"event_time,omitempty"`
	Severity   string         `json:"severity,omitempty"`
	MessageID  string         `json:"message_id,omitempty"`
	Message    string         `json:"message,omitempty"`
	RawPayload map[string]any `json:"raw_payload,omitempty"`
}

type manifestMetadata struct {
	Host           string
	GeneratedAtUTC time.Time
}

func findSnapshotFile(files []parser.ExtractedFile) *parser.ExtractedFile {
	for i := range files {
		path := strings.ToLower(strings.TrimSpace(files[i].Path))
		if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
			return &files[i]
		}
	}
	for i := range files {
		path := strings.ToLower(strings.TrimSpace(files[i].Path))
		if strings.HasSuffix(path, ".json") && strings.Contains(path, "reanimator") {
			return &files[i]
		}
	}
	return nil
}

func parseManifest(files []parser.ExtractedFile) manifestMetadata {
	var meta manifestMetadata

	for _, f := range files {
		path := strings.ToLower(strings.TrimSpace(f.Path))
		if !(strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") {
			continue
		}

		lines := strings.Split(string(f.Content), "\n")
		for _, line := range lines {
			key, value, ok := strings.Cut(strings.TrimSpace(line), "=")
			if !ok {
				continue
			}
			switch strings.TrimSpace(key) {
			case "host":
				meta.Host = strings.TrimSpace(value)
			case "generated_at_utc":
				if ts, err := time.Parse(time.RFC3339, strings.TrimSpace(value)); err == nil {
					meta.GeneratedAtUTC = ts.UTC()
				}
			}
		}
		break
	}

	return meta
}

func chooseCollectedAt(snapshot beeSnapshot, manifest manifestMetadata) time.Time {
	switch {
	case !snapshot.CollectedAt.IsZero():
		return snapshot.CollectedAt.UTC()
	case !snapshot.Runtime.CheckedAt.IsZero():
		return snapshot.Runtime.CheckedAt.UTC()
	case !manifest.GeneratedAtUTC.IsZero():
		return manifest.GeneratedAtUTC.UTC()
	default:
		return time.Time{}
	}
}

func convertRuntimeToEvents(runtime beeRuntime, fallback time.Time) []models.Event {
	events := make([]models.Event, 0)
	ts := runtime.CheckedAt
	if ts.IsZero() {
		ts = fallback
	}

	if status := strings.TrimSpace(runtime.Status); status != "" {
		desc := "Bee runtime status: " + status
		if networkStatus := strings.TrimSpace(runtime.NetworkStatus); networkStatus != "" {
			desc += " (network: " + networkStatus + ")"
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "Bee Runtime",
			EventType:   "Runtime Status",
			Severity:    mapSeverity(status),
			Description: desc,
		})
	}

	for _, issue := range runtime.Issues {
		desc := strings.TrimSpace(issue.Description)
		if desc == "" {
			desc = "Bee runtime issue"
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "Bee Runtime",
			EventType:   "Runtime Issue",
			Severity:    mapSeverity(issue.Severity),
			Description: desc,
			RawData:     strings.TrimSpace(issue.Code),
		})
	}

	for _, svc := range runtime.Services {
		status := strings.TrimSpace(svc.Status)
		if status == "" || strings.EqualFold(status, "active") {
			continue
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "systemd",
			EventType:   "Service Status",
			Severity:    mapSeverity(status),
			Description: fmt.Sprintf("%s is %s", strings.TrimSpace(svc.Name), status),
		})
	}

	for _, iface := range runtime.Interfaces {
		state := strings.TrimSpace(iface.State)
		outcome := strings.TrimSpace(iface.Outcome)
		if state == "" && outcome == "" {
			continue
		}
		if strings.EqualFold(state, "up") && strings.EqualFold(outcome, "lease_acquired") {
			continue
		}
		desc := fmt.Sprintf("interface %s state=%s outcome=%s", strings.TrimSpace(iface.Name), state, outcome)
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "network",
			EventType:   "Interface Status",
			Severity:    models.SeverityWarning,
			Description: strings.TrimSpace(desc),
		})
	}

	return events
}

func convertEventLogs(items []beeEventLog) []models.Event {
	events := make([]models.Event, 0, len(items))
	for _, item := range items {
		message := strings.TrimSpace(item.Message)
		if message == "" {
			continue
		}
		ts := parseEventTime(item.EventTime)
		rawData := strings.TrimSpace(item.MessageID)
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      firstNonEmpty(strings.TrimSpace(item.Source), "Reanimator"),
			EventType:   "Event Log",
			Severity:    mapSeverity(item.Severity),
			Description: message,
			RawData:     rawData,
		})
	}
	return events
}

func parseEventTime(raw string) time.Time {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return time.Time{}
	}
	layouts := []string{time.RFC3339Nano, time.RFC3339}
	for _, layout := range layouts {
		if ts, err := time.Parse(layout, raw); err == nil {
			return ts.UTC()
		}
	}
	return time.Time{}
}

func flattenSensorGroups(groups beeSensorGroups) []models.SensorReading {
	result := make([]models.SensorReading, 0, len(groups.Fans)+len(groups.Power)+len(groups.Temperatures)+len(groups.Other))

	for _, fan := range groups.Fans {
		result = append(result, models.SensorReading{
			Name:   sensorName(fan.Name, fan.Location),
			Type:   "fan",
			Value:  float64(fan.RPM),
			Unit:   "RPM",
			Status: strings.TrimSpace(fan.Status),
		})
	}

	for _, power := range groups.Power {
		name := sensorName(power.Name, power.Location)
		status := strings.TrimSpace(power.Status)
		if power.PowerW != 0 {
			result = append(result, models.SensorReading{
				Name:   name,
				Type:   "power",
				Value:  power.PowerW,
				Unit:   "W",
				Status: status,
			})
		}
		if power.VoltageV != 0 {
			result = append(result, models.SensorReading{
				Name:   name + " Voltage",
				Type:   "voltage",
				Value:  power.VoltageV,
				Unit:   "V",
				Status: status,
			})
		}
		if power.CurrentA != 0 {
			result = append(result, models.SensorReading{
				Name:   name + " Current",
				Type:   "current",
				Value:  power.CurrentA,
				Unit:   "A",
				Status: status,
			})
		}
	}

	for _, temp := range groups.Temperatures {
		result = append(result, models.SensorReading{
			Name:   sensorName(temp.Name, temp.Location),
			Type:   "temperature",
			Value:  temp.Celsius,
			Unit:   "C",
			Status: strings.TrimSpace(temp.Status),
		})
	}

	for _, other := range groups.Other {
		result = append(result, models.SensorReading{
			Name:   sensorName(other.Name, other.Location),
			Type:   "other",
			Value:  other.Value,
			Unit:   strings.TrimSpace(other.Unit),
			Status: strings.TrimSpace(other.Status),
		})
	}

	return result
}

func sensorName(name, location string) string {
	name = strings.TrimSpace(name)
	location = strings.TrimSpace(location)
	if name == "" {
		return location
	}
	if location == "" {
		return name
	}
	return name + " [" + location + "]"
}

func normalizePCIeDevices(items []models.PCIeDevice) []models.PCIeDevice {
	out := append([]models.PCIeDevice(nil), items...)
	for i := range out {
		slot := strings.TrimSpace(out[i].Slot)
		if out[i].BDF == "" && looksLikeBDF(slot) {
			out[i].BDF = slot
		}
		if out[i].Slot == "" && out[i].BDF != "" {
			out[i].Slot = out[i].BDF
		}
	}
	return out
}

func normalizeNetworkAdapters(items []models.NetworkAdapter) []models.NetworkAdapter {
	out := append([]models.NetworkAdapter(nil), items...)
	for i := range out {
		slot := strings.TrimSpace(out[i].Slot)
		if out[i].BDF == "" && looksLikeBDF(slot) {
			out[i].BDF = slot
		}
		if out[i].Slot == "" && out[i].BDF != "" {
			out[i].Slot = out[i].BDF
		}
	}
	return out
}

func looksLikeBDF(value string) bool {
	value = strings.TrimSpace(value)
	if len(value) != len("0000:00:00.0") {
		return false
	}
	for i, r := range value {
		switch i {
		case 4, 7:
			if r != ':' {
				return false
			}
		case 10:
			if r != '.' {
				return false
			}
		default:
			if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
				return false
			}
		}
	}
	return true
}

func buildBoardFRU(board models.BoardInfo) (models.FRUInfo, bool) {
	if strings.TrimSpace(board.SerialNumber) == "" &&
		strings.TrimSpace(board.Manufacturer) == "" &&
		strings.TrimSpace(board.ProductName) == "" &&
		strings.TrimSpace(board.PartNumber) == "" {
		return models.FRUInfo{}, false
	}

	return models.FRUInfo{
		Description:  "System Board",
		Manufacturer: strings.TrimSpace(board.Manufacturer),
		ProductName:  strings.TrimSpace(board.ProductName),
		SerialNumber: strings.TrimSpace(board.SerialNumber),
		PartNumber:   strings.TrimSpace(board.PartNumber),
	}, true
}

func mapSeverity(raw string) models.Severity {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "critical", "crit", "error", "failed", "failure":
		return models.SeverityCritical
	case "warning", "warn", "partial", "degraded", "inactive", "activating", "deactivating":
		return models.SeverityWarning
	default:
		return models.SeverityInfo
	}
}

func firstNonEmpty(values ...string) string {
	for _, value := range values {
		value = strings.TrimSpace(value)
		if value != "" {
			return value
		}
	}
	return ""
}
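The `looksLikeBDF` helper above is what lets the parser accept a PCI address in the `slot` field and promote it to the `BDF` field. A minimal standalone sketch of that check, with the same character-position logic copied out so it can be run on its own (the `main` demo inputs are illustrative, not from the bundle format):

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeBDF mirrors the parser's helper: a PCI address such as
// "0000:05:00.0" is exactly 12 characters, with ':' at offsets 4 and 7,
// '.' at offset 10, and hex digits everywhere else.
func looksLikeBDF(value string) bool {
	value = strings.TrimSpace(value)
	if len(value) != len("0000:00:00.0") {
		return false
	}
	for i, r := range value {
		switch i {
		case 4, 7:
			if r != ':' {
				return false
			}
		case 10:
			if r != '.' {
				return false
			}
		default:
			if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
				return false
			}
		}
	}
	return true
}

func main() {
	// A real BDF passes; a human-readable slot label or a truncated
	// address does not, so normalize*Devices leaves BDF empty for those.
	for _, v := range []string{"0000:05:00.0", "Slot 3", "0000:05:00"} {
		fmt.Printf("%q -> %v\n", v, looksLikeBDF(v))
	}
}
```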
219
internal/parser/vendors/easy_bee/parser_test.go
vendored
Normal file
@@ -0,0 +1,219 @@
package easy_bee

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectBeeSupportArchive(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "bee-support-debian-20260325-162030/manifest.txt",
			Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
		},
		{
			Path:    "bee-support-debian-20260325-162030/export/bee-audit.json",
			Content: []byte(`{"hardware":{"board":{"serial_number":"SN-BEE-001"}}}`),
		},
		{
			Path:    "bee-support-debian-20260325-162030/export/runtime-health.json",
			Content: []byte(`{"status":"PARTIAL"}`),
		},
	}

	if got := p.Detect(files); got < 90 {
		t.Fatalf("expected high confidence detect score, got %d", got)
	}
}

func TestDetectRejectsNonBeeArchive(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "random/manifest.txt",
			Content: []byte("host=test\n"),
		},
		{
			Path:    "random/export/runtime-health.json",
			Content: []byte(`{"status":"OK"}`),
		},
	}

	if got := p.Detect(files); got != 0 {
		t.Fatalf("expected detect score 0, got %d", got)
	}
}

func TestParseBeeAuditSnapshot(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "bee-support-debian-20260325-162030/manifest.txt",
			Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
		},
		{
			Path: "bee-support-debian-20260325-162030/export/bee-audit.json",
			Content: []byte(`{
				"source_type": "manual",
				"target_host": "debian",
				"collected_at": "2026-03-25T16:08:09Z",
				"runtime": {
					"status": "PARTIAL",
					"checked_at": "2026-03-25T16:07:56Z",
					"network_status": "OK",
					"issues": [
						{
							"code": "nvidia_kernel_module_missing",
							"severity": "warning",
							"description": "NVIDIA kernel module is not loaded."
						}
					],
					"services": [
						{
							"name": "bee-web",
							"status": "inactive"
						}
					]
				},
				"hardware": {
					"board": {
						"manufacturer": "Supermicro",
						"product_name": "AS-4124GQ-TNMI",
						"serial_number": "S490387X4418273",
						"part_number": "H12DGQ-NT6",
						"uuid": "d868ae00-a61f-11ee-8000-7cc255e10309"
					},
					"firmware": [
						{
							"device_name": "BIOS",
							"version": "2.8"
						}
					],
					"cpus": [
						{
							"status": "OK",
							"status_checked_at": "2026-03-25T16:08:09Z",
							"socket": 1,
							"model": "AMD EPYC 7763 64-Core Processor",
							"cores": 64,
							"threads": 128,
							"frequency_mhz": 2450,
							"max_frequency_mhz": 3525
						}
					],
					"memory": [
						{
							"status": "OK",
							"status_checked_at": "2026-03-25T16:08:09Z",
							"slot": "P1-DIMMA1",
							"location": "P0_Node0_Channel0_Dimm0",
							"present": true,
							"size_mb": 32768,
							"type": "DDR4",
							"max_speed_mhz": 3200,
							"current_speed_mhz": 2933,
							"manufacturer": "SK Hynix",
							"serial_number": "80AD01224887286666",
							"part_number": "HMA84GR7DJR4N-XN"
						}
					],
					"storage": [
						{
							"status": "Unknown",
							"status_checked_at": "2026-03-25T16:08:09Z",
							"slot": "nvme0n1",
							"type": "NVMe",
							"model": "KCD6XLUL960G",
							"serial_number": "2470A00XT5M8",
							"interface": "NVMe",
							"present": true
						}
					],
					"pcie_devices": [
						{
							"status": "OK",
							"status_checked_at": "2026-03-25T16:08:09Z",
							"slot": "0000:05:00.0",
							"vendor_id": 5555,
							"device_id": 4123,
							"device_class": "EthernetController",
							"manufacturer": "Mellanox Technologies",
							"model": "MT28908 Family [ConnectX-6]",
							"link_width": 16,
							"link_speed": "Gen4",
							"max_link_width": 16,
							"max_link_speed": "Gen4",
							"mac_addresses": ["94:6d:ae:9a:75:4a"],
							"present": true
						}
					],
					"sensors": {
						"power": [
							{
								"name": "PPT",
								"location": "amdgpu-pci-1100",
								"power_w": 95
							}
						],
						"temperatures": [
							{
								"name": "Composite",
								"location": "nvme-pci-0600",
								"celsius": 28.85,
								"threshold_warning_celsius": 72.85,
								"threshold_critical_celsius": 81.85,
								"status": "OK"
							}
						]
					}
				}
			}`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}

	if result.Hardware == nil {
		t.Fatal("expected hardware to be populated")
	}
	if result.TargetHost != "debian" {
		t.Fatalf("expected target host debian, got %q", result.TargetHost)
	}
	wantCollectedAt := time.Date(2026, 3, 25, 16, 8, 9, 0, time.UTC)
	if !result.CollectedAt.Equal(wantCollectedAt) {
		t.Fatalf("expected collected_at %s, got %s", wantCollectedAt, result.CollectedAt)
	}
	if result.Hardware.BoardInfo.SerialNumber != "S490387X4418273" {
		t.Fatalf("unexpected board serial %q", result.Hardware.BoardInfo.SerialNumber)
	}
	if len(result.Hardware.CPUs) != 1 {
		t.Fatalf("expected 1 cpu, got %d", len(result.Hardware.CPUs))
	}
	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected 1 dimm, got %d", len(result.Hardware.Memory))
	}
	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected 1 storage device, got %d", len(result.Hardware.Storage))
	}
	if len(result.Hardware.PCIeDevices) != 1 {
		t.Fatalf("expected 1 pcie device, got %d", len(result.Hardware.PCIeDevices))
	}
	if result.Hardware.PCIeDevices[0].BDF != "0000:05:00.0" {
		t.Fatalf("expected BDF to be normalized from slot, got %q", result.Hardware.PCIeDevices[0].BDF)
	}
	if len(result.Sensors) != 2 {
		t.Fatalf("expected 2 flattened sensors, got %d", len(result.Sensors))
	}
	if len(result.Events) < 3 {
		t.Fatalf("expected runtime events to be created, got %d", len(result.Events))
	}
	if len(result.FRU) == 0 {
		t.Fatal("expected board FRU fallback to be populated")
	}
}
1706
internal/parser/vendors/hpe_ilo_ahs/parser.go
vendored
Normal file
File diff suppressed because it is too large
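The AHS parser itself is suppressed in this diff, but the record layout it reads can be inferred from the `makeAHSArchive` test helper below: each entry starts with an `"ABJR"` magic, two little-endian version words, the payload length at bytes 8..12, a flag word at bytes 16..20 (`0x80000001` appears to mark a gzip payload, `0x80000002` a raw one), and a NUL-padded entry name at bytes 20..52. A standalone round-trip sketch of that layout, assuming a 52-byte header (the real `ahsHeaderSize` constant is defined in the suppressed parser.go, so the exact size is an assumption):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// headerSize is an assumption inferred from the test helper; the real
// ahsHeaderSize constant lives in the suppressed parser.go.
const headerSize = 52

// encodeRecord lays out one AHS entry per the inferred format.
func encodeRecord(name string, payload []byte, flag uint32) []byte {
	header := make([]byte, headerSize)
	copy(header[:4], "ABJR")                           // record magic
	binary.LittleEndian.PutUint16(header[4:6], 0x0300) // version words as in the tests
	binary.LittleEndian.PutUint16(header[6:8], 0x0002)
	binary.LittleEndian.PutUint32(header[8:12], uint32(len(payload)))
	binary.LittleEndian.PutUint32(header[16:20], flag) // 0x80000001 gzip, 0x80000002 raw
	copy(header[20:52], name)                          // NUL-padded entry name
	return append(header, payload...)
}

// decodeRecord reverses the layout and returns name, flag and payload.
func decodeRecord(b []byte) (name string, flag uint32, payload []byte, ok bool) {
	if len(b) < headerSize || !bytes.Equal(b[:4], []byte("ABJR")) {
		return "", 0, nil, false
	}
	size := binary.LittleEndian.Uint32(b[8:12])
	if len(b)-headerSize < int(size) {
		return "", 0, nil, false
	}
	flag = binary.LittleEndian.Uint32(b[16:20])
	name = string(bytes.TrimRight(b[20:52], "\x00"))
	return name, flag, b[headerSize : headerSize+int(size)], true
}

func main() {
	raw := encodeRecord("CUST_INFO.DAT", []byte("x"), 0x80000002)
	name, flag, payload, ok := decodeRecord(raw)
	fmt.Println(ok, name, fmt.Sprintf("0x%08x", flag), string(payload))
}
```

This is only a sketch of the container framing; field meanings beyond what the tests exercise (the version words, reserved bytes 12..16) are not confirmed by this diff.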
316
internal/parser/vendors/hpe_ilo_ahs/parser_test.go
vendored
Normal file
@@ -0,0 +1,316 @@
package hpe_ilo_ahs

import (
	"bytes"
	"compress/gzip"
	"encoding/binary"
	"os"
	"path/filepath"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectAHS(t *testing.T) {
	p := &Parser{}
	score := p.Detect([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: makeAHSArchive(t, []ahsTestEntry{{Name: "CUST_INFO.DAT", Payload: []byte("x")}}),
	}})
	if score < 80 {
		t.Fatalf("expected high confidence detect, got %d", score)
	}
}

func TestParseAHSInventory(t *testing.T) {
	p := &Parser{}
	content := makeAHSArchive(t, []ahsTestEntry{
		{Name: "CUST_INFO.DAT", Payload: make([]byte, 16)},
		{Name: "0000088-2026-03-30.zbb", Payload: gzipBytes(t, []byte(sampleInventoryBlob()))},
		{Name: "bcert.pkg", Payload: []byte(sampleBCertBlob())},
	})

	result, err := p.Parse([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: content,
	}})
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	board := result.Hardware.BoardInfo
	if board.Manufacturer != "HPE" {
		t.Fatalf("unexpected board manufacturer: %q", board.Manufacturer)
	}
	if board.ProductName != "ProLiant DL380 Gen11" {
		t.Fatalf("unexpected board product: %q", board.ProductName)
	}
	if board.SerialNumber != "CZ2D1X0GS3" {
		t.Fatalf("unexpected board serial: %q", board.SerialNumber)
	}
	if board.PartNumber != "P52560-421" {
		t.Fatalf("unexpected board part number: %q", board.PartNumber)
	}

	if len(result.Hardware.CPUs) != 1 || result.Hardware.CPUs[0].Model != "Intel(R) Xeon(R) Gold 6444Y" {
		t.Fatalf("unexpected CPUs: %+v", result.Hardware.CPUs)
	}
	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected one DIMM, got %d", len(result.Hardware.Memory))
	}
	if result.Hardware.Memory[0].PartNumber != "HMCG88AEBRA115N" {
		t.Fatalf("unexpected DIMM part number: %q", result.Hardware.Memory[0].PartNumber)
	}

	if len(result.Hardware.NetworkAdapters) != 2 {
		t.Fatalf("expected two network adapters, got %d", len(result.Hardware.NetworkAdapters))
	}
	if len(result.Hardware.PowerSupply) != 1 {
		t.Fatalf("expected one PSU, got %d", len(result.Hardware.PowerSupply))
	}
	if result.Hardware.PowerSupply[0].SerialNumber != "5XUWB0C4DJG4BV" {
		t.Fatalf("unexpected PSU serial: %q", result.Hardware.PowerSupply[0].SerialNumber)
	}
	if result.Hardware.PowerSupply[0].Firmware != "2.00" {
		t.Fatalf("unexpected PSU firmware: %q", result.Hardware.PowerSupply[0].Firmware)
	}

	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected one physical drive, got %d", len(result.Hardware.Storage))
	}
	drive := result.Hardware.Storage[0]
	if drive.Model != "SAMSUNGMZ7L3480HCHQ-00A07" {
		t.Fatalf("unexpected drive model: %q", drive.Model)
	}
	if drive.SerialNumber != "S664NC0Y502720" {
		t.Fatalf("unexpected drive serial: %q", drive.SerialNumber)
	}
	if drive.SizeGB != 480 {
		t.Fatalf("unexpected drive size: %d", drive.SizeGB)
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Fatalf("expected firmware inventory")
	}
	foundILO := false
	foundControllerFW := false
	foundNICFW := false
	foundBackplaneFW := false
	for _, item := range result.Hardware.Firmware {
		if item.DeviceName == "iLO 6" && item.Version == "v1.63p20" {
			foundILO = true
		}
		if item.DeviceName == "HPE MR408i-o Gen11" && item.Version == "52.26.3-5379" {
			foundControllerFW = true
		}
		if item.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && item.Version == "20.28.41" {
			foundNICFW = true
		}
		if item.DeviceName == "8 SFF 24G x1NVMe/SAS UBM3 BC BP" && item.Version == "1.24" {
			foundBackplaneFW = true
		}
	}
	if !foundILO {
		t.Fatalf("expected iLO firmware entry")
	}
	if !foundControllerFW {
		t.Fatalf("expected controller firmware entry")
	}
	if !foundNICFW {
		t.Fatalf("expected broadcom firmware entry")
	}
	if !foundBackplaneFW {
		t.Fatalf("expected backplane firmware entry")
	}

	broadcomFound := false
	backplaneFound := false
	for _, nic := range result.Hardware.NetworkAdapters {
		if nic.SerialNumber == "1CH0150001" && nic.Firmware == "20.28.41" {
			broadcomFound = true
		}
	}
	for _, dev := range result.Hardware.Devices {
		if dev.DeviceClass == "storage_backplane" && dev.Firmware == "1.24" {
			backplaneFound = true
		}
	}
	if !broadcomFound {
		t.Fatalf("expected broadcom adapter firmware to be enriched")
	}
	if !backplaneFound {
		t.Fatalf("expected backplane canonical device")
	}

	if len(result.Hardware.Devices) < 6 {
		t.Fatalf("expected canonical devices, got %d", len(result.Hardware.Devices))
	}
	if len(result.Events) == 0 {
		t.Fatalf("expected parsed events")
	}
}

func TestParseExampleAHS(t *testing.T) {
	path := filepath.Join("..", "..", "..", "..", "example", "HPE_CZ2D1X0GS3_20260330.ahs")
	content, err := os.ReadFile(path)
	if err != nil {
		t.Skipf("example fixture unavailable: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse([]parser.ExtractedFile{{
		Path:    filepath.Base(path),
		Content: content,
	}})
	if err != nil {
		t.Fatalf("parse example failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	board := result.Hardware.BoardInfo
	if board.ProductName != "ProLiant DL380 Gen11" {
		t.Fatalf("unexpected board product: %q", board.ProductName)
	}
	if board.SerialNumber != "CZ2D1X0GS3" {
		t.Fatalf("unexpected board serial: %q", board.SerialNumber)
	}

	if len(result.Hardware.Storage) < 2 {
		t.Fatalf("expected at least two drives, got %d", len(result.Hardware.Storage))
	}
	if len(result.Hardware.PowerSupply) != 2 {
		t.Fatalf("expected exactly two PSUs, got %d: %+v", len(result.Hardware.PowerSupply), result.Hardware.PowerSupply)
	}

	foundController := false
	foundBackplaneFW := false
	foundNICFW := false
	for _, device := range result.Hardware.Devices {
		if device.Model == "HPE MR408i-o Gen11" && device.SerialNumber == "PXSFQ0BBIJY3B3" {
			foundController = true
		}
		if device.DeviceClass == "storage_backplane" && device.Firmware == "1.24" {
			foundBackplaneFW = true
		}
	}
	if !foundController {
		t.Fatalf("expected MR408i-o controller in canonical devices")
	}
	for _, fw := range result.Hardware.Firmware {
		if fw.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && fw.Version == "20.28.41" {
			foundNICFW = true
		}
	}
	if !foundBackplaneFW {
		t.Fatalf("expected backplane device in canonical devices")
	}
	if !foundNICFW {
		t.Fatalf("expected broadcom firmware from bcert/pkg lockdown")
	}
}

type ahsTestEntry struct {
	Name    string
	Payload []byte
	Flag    uint32
}

func makeAHSArchive(t *testing.T, entries []ahsTestEntry) []byte {
	t.Helper()

	var buf bytes.Buffer
	for _, entry := range entries {
		header := make([]byte, ahsHeaderSize)
		copy(header[:4], []byte("ABJR"))
		binary.LittleEndian.PutUint16(header[4:6], 0x0300)
		binary.LittleEndian.PutUint16(header[6:8], 0x0002)
		binary.LittleEndian.PutUint32(header[8:12], uint32(len(entry.Payload)))
		flag := entry.Flag
		if flag == 0 {
			flag = 0x80000002
			if len(entry.Payload) >= 2 && entry.Payload[0] == 0x1f && entry.Payload[1] == 0x8b {
				flag = 0x80000001
			}
		}
		binary.LittleEndian.PutUint32(header[16:20], flag)
		copy(header[20:52], []byte(entry.Name))
		buf.Write(header)
		buf.Write(entry.Payload)
	}
	return buf.Bytes()
}

func gzipBytes(t *testing.T, payload []byte) []byte {
	t.Helper()
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(payload); err != nil {
		t.Fatalf("gzip write: %v", err)
	}
	if err := w.Close(); err != nil {
		t.Fatalf("gzip close: %v", err)
	}
	return buf.Bytes()
}
|
||||
|
||||
var buf bytes.Buffer
|
||||
zw := gzip.NewWriter(&buf)
|
||||
if _, err := zw.Write(payload); err != nil {
|
||||
t.Fatalf("gzip payload: %v", err)
|
||||
}
|
||||
if err := zw.Close(); err != nil {
|
||||
t.Fatalf("close gzip writer: %v", err)
|
||||
}
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
func sampleInventoryBlob() string {
|
||||
return stringsJoin(
|
||||
"iLO 6 v1.63p20 built on Sep 13 2024",
|
||||
"HPE",
|
||||
"ProLiant DL380 Gen11",
|
||||
"CZ2D1X0GS3",
|
||||
"P52560-421",
|
||||
"Proc 1",
|
||||
"Intel(R) Corporation",
|
||||
"Intel(R) Xeon(R) Gold 6444Y",
|
||||
"PROC 1 DIMM 3",
|
||||
"Hynix",
|
||||
"HMCG88AEBRA115N",
|
||||
"2B5F92C6",
|
||||
"Power Supply 1",
|
||||
"5XUWB0C4DJG4BV",
|
||||
"P03178-B21",
|
||||
"PciRoot(0x1)/Pci(0x5,0x0)/Pci(0x0,0x0)",
|
||||
"NIC.Slot.1.1",
|
||||
"Network Controller",
|
||||
"Slot 1",
|
||||
"MCX512A-ACAT",
|
||||
"MT2230478382",
|
||||
"PciRoot(0x3)/Pci(0x1,0x0)/Pci(0x0,0x0)",
|
||||
"OCP.Slot.15.1",
|
||||
"Broadcom NetXtreme Gigabit Ethernet - NIC",
|
||||
"OCP Slot 15",
|
||||
"P51183-001",
|
||||
"1CH0150001",
|
||||
"20.28.41",
|
||||
"System ROM",
|
||||
"v2.22 (06/19/2024)",
|
||||
"03/30/2026 09:47:33",
|
||||
"iLO network link down.",
|
||||
`{"@odata.id":"/redfish/v1/Systems/1/Storage/DE00A000/Controllers/0","@odata.type":"#StorageController.v1_7_0.StorageController","Id":"0","Name":"HPE MR408i-o Gen11","FirmwareVersion":"52.26.3-5379","Manufacturer":"HPE","Model":"HPE MR408i-o Gen11","PartNumber":"P58543-001","SKU":"P58335-B21","SerialNumber":"PXSFQ0BBIJY3B3","Status":{"State":"Enabled","Health":"OK"},"Location":{"PartLocation":{"ServiceLabel":"Slot=14","LocationType":"Slot","LocationOrdinalValue":14}},"PCIeInterface":{"PCIeType":"Gen4","LanesInUse":8}}`,
|
||||
`{"@odata.id":"/redfish/v1/Fabrics/DE00A000","@odata.type":"#Fabric.v1_3_0.Fabric","Id":"DE00A000","Name":"8 SFF 24G x1NVMe/SAS UBM3 BC BP","FabricType":"MultiProtocol"}`,
|
||||
`{"@odata.id":"/redfish/v1/Fabrics/DE00A000/Switches/1","@odata.type":"#Switch.v1_9_1.Switch","Id":"1","Name":"Direct Attached","Model":"UBM3","FirmwareVersion":"1.24","SupportedProtocols":["SAS","SATA","NVMe"],"SwitchType":"MultiProtocol","Status":{"State":"Enabled","Health":"OK"}}`,
|
||||
`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/0","@odata.type":"#Drive.v1_17_0.Drive","Id":"0","Name":"480GB 6G SATA SSD","Status":{"State":"StandbyOffline","Health":"OK"},"PhysicalLocation":{"PartLocation":{"ServiceLabel":"Slot=14:Port=1:Box=3:Bay=1","LocationType":"Bay","LocationOrdinalValue":1}},"CapacityBytes":480103981056,"MediaType":"SSD","Model":"SAMSUNGMZ7L3480HCHQ-00A07","Protocol":"SATA","Revision":"JXTC604Q","SerialNumber":"S664NC0Y502720","PredictedMediaLifeLeftPercent":100}`,
|
||||
`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/64515","@odata.type":"#Drive.v1_17_0.Drive","Id":"64515","Name":"Empty Bay","Status":{"State":"Absent","Health":"OK"}}`,
|
||||
)
|
||||
}
|
||||
|
||||
func sampleBCertBlob() string {
|
||||
return `<BC><MfgRecord><PowerSupplySlot id="0"><Present>Yes</Present><SerialNumber>5XUWB0C4DJG4BV</SerialNumber><FirmwareVersion>2.00</FirmwareVersion><SparePartNumber>P44412-001</SparePartNumber></PowerSupplySlot><FirmwareLockdown><SystemProgrammableLogicDevice>0x12</SystemProgrammableLogicDevice><ServerPlatformServicesSPSFirmware>6.1.4.47</ServerPlatformServicesSPSFirmware><STMicroGen11TPM>1.512</STMicroGen11TPM><HPEMR408i-oGen11>52.26.3-5379</HPEMR408i-oGen11><UBM3>UBM3/1.24</UBM3><BCM57191Gb4pBASE-TOCP3>20.28.41</BCM57191Gb4pBASE-TOCP3></FirmwareLockdown></MfgRecord></BC>`
|
||||
}
|
||||
|
||||
func stringsJoin(parts ...string) string {
|
||||
return string(bytes.Join(func() [][]byte {
|
||||
out := make([][]byte, 0, len(parts))
|
||||
for _, part := range parts {
|
||||
out = append(out, []byte(part))
|
||||
}
|
||||
return out
|
||||
}(), []byte{0}))
|
||||
}
|
||||
internal/parser/vendors/lenovo_xcc/parser.go — 1021 lines, vendored, new file.
File diff suppressed because it is too large.

internal/parser/vendors/lenovo_xcc/parser_test.go — 506 lines, vendored, new file.
@@ -0,0 +1,506 @@
package lenovo_xcc

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

const exampleArchive = "/Users/mchusavitin/Documents/git/logpile/example/7D76CTO1WW_JF0002KT_xcc_mini-log_20260413-122150.zip"

func TestDetect_LenovoXCCMiniLog(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	score := p.Detect(files)
	if score < 80 {
		t.Errorf("expected Detect score >= 80 for XCC mini-log archive, got %d", score)
	}
}

func TestParse_LenovoXCCMiniLog_BasicSysInfo(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse returned error: %v", err)
	}
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil result or hardware")
	}

	hw := result.Hardware
	if hw.BoardInfo.SerialNumber == "" {
		t.Error("BoardInfo.SerialNumber is empty")
	}
	if hw.BoardInfo.ProductName == "" {
		t.Error("BoardInfo.ProductName is empty")
	}
	t.Logf("BoardInfo: serial=%s model=%s uuid=%s", hw.BoardInfo.SerialNumber, hw.BoardInfo.ProductName, hw.BoardInfo.UUID)
}

func TestParse_LenovoXCCMiniLog_CPUs(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.CPUs) == 0 {
		t.Error("expected at least one CPU, got none")
	}
	for i, cpu := range result.Hardware.CPUs {
		t.Logf("CPU[%d]: socket=%d model=%q cores=%d threads=%d freq=%dMHz", i, cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz)
	}
}

func TestParse_LenovoXCCMiniLog_Memory(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.Memory) == 0 {
		t.Error("expected memory DIMMs, got none")
	}
	t.Logf("Memory: %d DIMMs", len(result.Hardware.Memory))
	for i, m := range result.Hardware.Memory {
		t.Logf("DIMM[%d]: slot=%s present=%v size=%dMB sn=%s", i, m.Slot, m.Present, m.SizeMB, m.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_Storage(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("Storage: %d disks", len(result.Hardware.Storage))
	for i, s := range result.Hardware.Storage {
		t.Logf("Disk[%d]: slot=%s model=%q size=%dGB sn=%s", i, s.Slot, s.Model, s.SizeGB, s.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_PCIeCards(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("PCIe cards: %d", len(result.Hardware.PCIeDevices))
	for i, c := range result.Hardware.PCIeDevices {
		t.Logf("Card[%d]: slot=%s desc=%q bdf=%s", i, c.Slot, c.Description, c.BDF)
	}
}

func TestParse_LenovoXCCMiniLog_PSUs(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.PowerSupply) == 0 {
		t.Error("expected PSUs, got none")
	}
	for i, p := range result.Hardware.PowerSupply {
		t.Logf("PSU[%d]: slot=%s wattage=%dW status=%s sn=%s", i, p.Slot, p.WattageW, p.Status, p.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_Sensors(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Sensors) == 0 {
		t.Error("expected sensors, got none")
	}
	t.Logf("Sensors: %d", len(result.Sensors))
}

func TestParse_LenovoXCCMiniLog_Events(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Events) == 0 {
		t.Error("expected events, got none")
	}
	t.Logf("Events: %d", len(result.Events))
	for i, e := range result.Events {
		if i >= 5 {
			break
		}
		t.Logf("Event[%d]: severity=%s ts=%s desc=%q", i, e.Severity, e.Timestamp.Format("2006-01-02T15:04:05"), e.Description)
	}
}

func TestParse_LenovoXCCMiniLog_FRU(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("FRU: %d entries", len(result.FRU))
	for i, f := range result.FRU {
		t.Logf("FRU[%d]: desc=%q product=%q serial=%q", i, f.Description, f.ProductName, f.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_Firmware(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Error("expected firmware entries, got none")
	}
	for i, f := range result.Hardware.Firmware {
		t.Logf("FW[%d]: name=%q version=%q buildtime=%q", i, f.DeviceName, f.Version, f.BuildTime)
	}
}

func TestParse_LenovoXCCMiniLog_VROCVolumes(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.Volumes) == 0 {
		t.Error("expected at least one VROC volume, got none")
	}
	for i, v := range result.Hardware.Volumes {
		t.Logf("Volume[%d]: id=%s controller=%q raid=%s size=%dGB status=%s drives=%v",
			i, v.ID, v.Controller, v.RAIDLevel, v.SizeGB, v.Status, v.Drives)
		if v.RAIDLevel == "" {
			t.Errorf("Volume[%d]: RAIDLevel is empty", i)
		}
		if v.Status == "" {
			t.Errorf("Volume[%d]: Status is empty", i)
		}
	}
}

func TestParseVolumes_IntelVROC(t *testing.T) {
	content := []byte(`{
		"identifier": "storage.id",
		"items": [{
			"volumes": [{
				"id": 1,
				"name": "",
				"drives": "M.2 Drive 0, M.2 Drive 1",
				"rdlvlstr": "RAID 1",
				"capacityStr": "893.750 GiB",
				"status": 3,
				"statusStr": "Optimal"
			}],
			"totalCapacityStr": "893.750 GiB"
		}]
	}`)

	vols := parseVolumes(content)
	if len(vols) != 1 {
		t.Fatalf("expected 1 volume, got %d", len(vols))
	}
	v := vols[0]
	if v.ID != "1" {
		t.Errorf("expected ID=1, got %q", v.ID)
	}
	if v.RAIDLevel != "RAID 1" {
		t.Errorf("expected RAIDLevel=RAID 1, got %q", v.RAIDLevel)
	}
	if v.Status != "Optimal" {
		t.Errorf("expected Status=Optimal, got %q", v.Status)
	}
	if v.Controller != "Intel VROC" {
		t.Errorf("expected Controller=Intel VROC, got %q", v.Controller)
	}
	if len(v.Drives) != 2 {
		t.Errorf("expected 2 drives, got %d: %v", len(v.Drives), v.Drives)
	}
	if v.SizeGB < 900 || v.SizeGB > 1000 {
		t.Errorf("expected SizeGB ~960, got %d", v.SizeGB)
	}
}

func TestParseDIMMs_UnqualifiedDIMMAddsWarningEvent(t *testing.T) {
	content := []byte(`{
		"items": [{
			"memory": [{
				"memory_name": "DIMM A1",
				"memory_status": "Unqualified DIMM",
				"memory_type": "DDR5",
				"memory_capacity": 32
			}]
		}]
	}`)

	memory, events := parseDIMMs(content)
	if len(memory) != 1 {
		t.Fatalf("expected 1 DIMM, got %d", len(memory))
	}
	if len(events) != 1 {
		t.Fatalf("expected 1 warning event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
	if events[0].SensorName != "DIMM A1" {
		t.Fatalf("unexpected sensor name: %q", events[0].SensorName)
	}
}

func TestSeverity_UnqualifiedDIMMMessageBecomesWarning(t *testing.T) {
	if got := xccSeverity("I", "System found Unqualified DIMM in slot DIMM A1"); got != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", got)
	}
}

func TestApplyDIMMWarningsFromEvents_UpdatesDIMMStatusForExport(t *testing.T) {
	result := &models.AnalysisResult{
		Events: []models.Event{
			{
				Timestamp:   time.Date(2026, 4, 13, 11, 37, 38, 0, time.UTC),
				Severity:    models.SeverityWarning,
				Description: "Unqualified DIMM 3 has been detected, the DIMM serial number is 80CE042328460C5D88-V20.",
			},
		},
		Hardware: &models.HardwareConfig{
			Memory: []models.MemoryDIMM{
				{
					Slot:         "DIMM 3",
					Present:      true,
					SerialNumber: "80CE042328460C5D88",
					Status:       "Normal",
				},
			},
		},
	}

	applyDIMMWarningsFromEvents(result)

	dimm := result.Hardware.Memory[0]
	if dimm.Status != "Warning" {
		t.Fatalf("expected DIMM status Warning, got %q", dimm.Status)
	}
	if dimm.ErrorDescription == "" || dimm.ErrorDescription != result.Events[0].Description {
		t.Fatalf("expected DIMM error description to be populated, got %q", dimm.ErrorDescription)
	}
	if dimm.StatusChangedAt == nil || !dimm.StatusChangedAt.Equal(result.Events[0].Timestamp) {
		t.Fatalf("expected status_changed_at from event timestamp, got %#v", dimm.StatusChangedAt)
	}
	if len(dimm.StatusHistory) != 1 || dimm.StatusHistory[0].Status != "Warning" {
		t.Fatalf("expected warning status history entry, got %#v", dimm.StatusHistory)
	}
}

func TestParseBasicSysInfo_CleansPlaceholderValuesAndSetsTargetHost(t *testing.T) {
	result := &models.AnalysisResult{Hardware: &models.HardwareConfig{}}
	content := []byte(`{
		"items": [{
			"machine_name": " sr650v3-node01 ",
			"machine_typemodel": " 7D76CTO1WW ",
			"serial_number": " Not Specified ",
			"uuid": "N/A"
		}]
	}`)

	parseBasicSysInfo(content, result)

	if result.TargetHost != "sr650v3-node01" {
		t.Fatalf("unexpected target host: %q", result.TargetHost)
	}
	if result.Hardware.BoardInfo.ProductName != "7D76CTO1WW" {
		t.Fatalf("unexpected product name: %q", result.Hardware.BoardInfo.ProductName)
	}
	if result.Hardware.BoardInfo.SerialNumber != "" {
		t.Fatalf("expected serial number to be cleaned, got %q", result.Hardware.BoardInfo.SerialNumber)
	}
	if result.Hardware.BoardInfo.UUID != "" {
		t.Fatalf("expected UUID to be cleaned, got %q", result.Hardware.BoardInfo.UUID)
	}
}

func TestEnrichBoardFromFRU_SystemBoardManufacturerOnly(t *testing.T) {
	result := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{},
		FRU: []models.FRUInfo{
			{Description: "Power Supply 1", Manufacturer: "Ignore Me"},
			{Description: "System Board", Manufacturer: " Lenovo "},
		},
	}

	enrichBoardFromFRU(result)

	if result.Hardware.BoardInfo.Manufacturer != "Lenovo" {
		t.Fatalf("unexpected manufacturer: %q", result.Hardware.BoardInfo.Manufacturer)
	}
}

func TestEnrichPSUsFromSensors_AssignsTelemetryBySlot(t *testing.T) {
	psus := []models.PSU{
		{Slot: "1"},
		{Slot: "2"},
	}
	sensors := []models.SensorReading{
		{Name: "PSU1 Input Power", Value: 430},
		{Name: "Power Supply 1 Output Power", Value: 390},
		{Name: "PWS1 AC Voltage", Value: 230.5},
		{Name: "PSU2 Input Power", Value: 0},
		{Name: "PSU3 Input Power", Value: 999},
		{Name: "Fan 1", Value: 12000},
	}

	got := enrichPSUsFromSensors(psus, sensors)

	if got[0].InputPowerW != 430 {
		t.Fatalf("unexpected PSU1 input power: %d", got[0].InputPowerW)
	}
	if got[0].OutputPowerW != 390 {
		t.Fatalf("unexpected PSU1 output power: %d", got[0].OutputPowerW)
	}
	if got[0].InputVoltage != 230.5 {
		t.Fatalf("unexpected PSU1 input voltage: %v", got[0].InputVoltage)
	}
	if got[1].InputPowerW != 0 || got[1].OutputPowerW != 0 || got[1].InputVoltage != 0 {
		t.Fatalf("unexpected telemetry assigned to PSU2: %+v", got[1])
	}
}

func TestMapDiskHealthStatus(t *testing.T) {
	tests := []struct {
		name     string
		code     int
		stateStr string
		want     string
	}{
		{name: "normal", code: 2, stateStr: "Online", want: "OK"},
		{name: "warning", code: 1, stateStr: "Online", want: "Warning"},
		{name: "predictive failure", code: 4, stateStr: "Online", want: "Warning"},
		{name: "critical", code: 3, stateStr: "Failed", want: "Critical"},
		{name: "fallback state", code: 0, stateStr: "Rebuilding", want: "Rebuilding"},
		{name: "unknown", code: 0, stateStr: "", want: "Unknown"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := mapDiskHealthStatus(tt.code, tt.stateStr); got != tt.want {
				t.Fatalf("got %q, want %q", got, tt.want)
			}
		})
	}
}

func TestClassifySensorType(t *testing.T) {
	tests := []struct {
		name string
		in   string
		unit string
		want string
	}{
		{name: "unit rpm", in: "Fan 1", unit: "RPM", want: "fan"},
		{name: "unit celsius", in: "CPU Temp", unit: "C", want: "temperature"},
		{name: "unit watts", in: "PSU1 Input Power", unit: "W", want: "power"},
		{name: "unit volts", in: "PWS1 AC Voltage", unit: "V", want: "voltage"},
		{name: "unit amps", in: "PSU1 Current", unit: "A", want: "current"},
		{name: "name fallback", in: "GPU Temp", unit: "", want: "temperature"},
		{name: "other", in: "Presence", unit: "", want: "other"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := classifySensorType(tt.in, tt.unit); got != tt.want {
				t.Fatalf("got %q, want %q", got, tt.want)
			}
		})
	}
}

func TestCleanXCCValue(t *testing.T) {
	tests := []struct {
		in   string
		want string
	}{
		{in: " Lenovo ", want: "Lenovo"},
		{in: "N/A", want: ""},
		{in: " not specified ", want: ""},
		{in: "-", want: ""},
	}

	for _, tt := range tests {
		if got := cleanXCCValue(tt.in); got != tt.want {
			t.Fatalf("cleanXCCValue(%q) = %q, want %q", tt.in, got, tt.want)
		}
	}
}
internal/parser/vendors/vendors.go — 3 lines changed, vendored.
@@ -5,13 +5,16 @@ package vendors
import (
	// Import vendor modules to trigger their init() registration
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/easy_bee"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/h3c"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe_ilo_ahs"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xfusion"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo_xcc"

	// Generic fallback parser (must be last for lowest priority)
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"
internal/parser/vendors/xfusion/hardware.go — 450 lines changed, vendored.
@@ -10,6 +10,33 @@ import (
	"git.mchus.pro/mchus/logpile/internal/parser"
)

type xfusionNICCard struct {
	Slot         string
	Model        string
	ProductName  string
	Vendor       string
	VendorID     int
	DeviceID     int
	BDF          string
	SerialNumber string
	PartNumber   string
}

type xfusionNetcardPort struct {
	BDF       string
	MAC       string
	ActualMAC string
}

type xfusionNetcardSnapshot struct {
	Timestamp    time.Time
	Slot         string
	ProductName  string
	Manufacturer string
	Firmware     string
	Ports        []xfusionNetcardPort
}

// ── FRU ──────────────────────────────────────────────────────────────────────

// parseFRUInfo parses fruinfo.txt and populates result.FRU and result.Hardware.BoardInfo.

@@ -232,15 +259,15 @@ func parseCPUInfo(content []byte) []models.CPU {
		}

		cpus = append(cpus, models.CPU{
-			Socket:    socketNum,
-			Model:     model,
-			Cores:     cores,
-			Threads:   threads,
-			L1CacheKB: l1,
-			L2CacheKB: l2,
-			L3CacheKB: l3,
-			Status:    "ok",
+			Socket:       socketNum,
+			Model:        model,
+			Cores:        cores,
+			Threads:      threads,
+			L1CacheKB:    l1,
+			L2CacheKB:    l2,
+			L3CacheKB:    l3,
+			SerialNumber: sn,
+			Status:       "ok",
		})
	}
	return cpus

@@ -338,9 +365,9 @@ func parseMemInfo(content []byte) []models.MemoryDIMM {

// ── Card Info (GPU + NIC) ─────────────────────────────────────────────────────

-// parseCardInfo parses card_info file, extracting GPU and NIC entries.
+// parseCardInfo parses card_info file, extracting GPU and OCP NIC card inventory.
// The file has named sections ("GPU Card Info", "OCP Card Info", etc.) each with a pipe-table.
-func parseCardInfo(content []byte) (gpus []models.GPU, nics []models.NIC) {
+func parseCardInfo(content []byte) (gpus []models.GPU, nicCards []xfusionNICCard) {
	sections := splitPipeSections(content)

	// Build BDF and VendorID/DeviceID map from PCIe Card Info: slot → info

@@ -396,17 +423,22 @@ func parseCardInfo(content []byte) (gpus []models.GPU, nics []models.NIC) {
	}

	// OCP Card Info: NIC cards
-	for i, row := range sections["ocp card info"] {
-		desc := strings.TrimSpace(row["card desc"])
-		sn := strings.TrimSpace(row["serialnumber"])
-		nics = append(nics, models.NIC{
-			Name:         fmt.Sprintf("OCP%d", i+1),
-			Model:        desc,
-			SerialNumber: sn,
+	for _, row := range sections["ocp card info"] {
+		slot := strings.TrimSpace(row["slot"])
+		pcie := slotPCIe[slot]
+		nicCards = append(nicCards, xfusionNICCard{
+			Slot:         slot,
+			Model:        strings.TrimSpace(row["card desc"]),
+			ProductName:  strings.TrimSpace(row["card desc"]),
+			VendorID:     parseHexInt(row["vender id"]),
+			DeviceID:     parseHexInt(row["device id"]),
+			BDF:          pcie.bdf,
+			SerialNumber: strings.TrimSpace(row["serialnumber"]),
+			PartNumber:   strings.TrimSpace(row["partnum"]),
		})
	}

-	return gpus, nics
+	return gpus, nicCards
}

// splitPipeSections parses a multi-section file where each section starts with a

@@ -462,6 +494,301 @@ func parseHexInt(s string) int {
	return int(n)
}

func parseNetcardInfo(content []byte) []xfusionNetcardSnapshot {
	if len(content) == 0 {
		return nil
	}

	var snapshots []xfusionNetcardSnapshot
	var current *xfusionNetcardSnapshot
	var currentPort *xfusionNetcardPort

	flushPort := func() {
		if current == nil || currentPort == nil {
			return
		}
		current.Ports = append(current.Ports, *currentPort)
		currentPort = nil
	}
	flushSnapshot := func() {
		if current == nil || !current.hasData() {
			return
		}
		flushPort()
		snapshots = append(snapshots, *current)
		current = nil
	}

	for _, rawLine := range strings.Split(string(content), "\n") {
		line := strings.TrimSpace(rawLine)
		if line == "" {
			flushPort()
			continue
		}
		if ts, ok := parseXFusionUTCTimestamp(line); ok {
			if current == nil {
				current = &xfusionNetcardSnapshot{Timestamp: ts}
				continue
			}
			if current.hasData() {
				flushSnapshot()
				current = &xfusionNetcardSnapshot{Timestamp: ts}
				continue
			}
			current.Timestamp = ts
			continue
		}
		if current == nil {
			current = &xfusionNetcardSnapshot{}
		}
		if port := parseNetcardPortHeader(line); port != nil {
			flushPort()
			currentPort = port
			continue
		}
		if currentPort != nil {
			if value, ok := parseSimpleKV(line, "MacAddr"); ok {
				currentPort.MAC = value
				continue
			}
			if value, ok := parseSimpleKV(line, "ActualMac"); ok {
				currentPort.ActualMAC = value
				continue
			}
		}
		if value, ok := parseSimpleKV(line, "ProductName"); ok {
			current.ProductName = value
			continue
		}
		if value, ok := parseSimpleKV(line, "Manufacture"); ok {
			current.Manufacturer = value
			continue
		}
		if value, ok := parseSimpleKV(line, "FirmwareVersion"); ok {
			current.Firmware = value
			continue
		}
		if value, ok := parseSimpleKV(line, "SlotId"); ok {
			current.Slot = value
		}
	}
	flushSnapshot()

	bestIndexBySlot := make(map[string]int)
	for i, snapshot := range snapshots {
		slot := strings.TrimSpace(snapshot.Slot)
		if slot == "" {
			continue
		}
		prevIdx, exists := bestIndexBySlot[slot]
		if !exists || snapshot.isBetterThan(snapshots[prevIdx]) {
			bestIndexBySlot[slot] = i
		}
	}

	ordered := make([]xfusionNetcardSnapshot, 0, len(bestIndexBySlot))
	for i, snapshot := range snapshots {
		slot := strings.TrimSpace(snapshot.Slot)
		bestIdx, ok := bestIndexBySlot[slot]
		if !ok || bestIdx != i {
			continue
		}
		ordered = append(ordered, snapshot)
		delete(bestIndexBySlot, slot)
	}
	return ordered
}

func mergeNetworkAdapters(cards []xfusionNICCard, snapshots []xfusionNetcardSnapshot) ([]models.NetworkAdapter, []models.NIC) {
	bySlotCard := make(map[string]xfusionNICCard, len(cards))
	bySlotSnapshot := make(map[string]xfusionNetcardSnapshot, len(snapshots))
	orderedSlots := make([]string, 0, len(cards)+len(snapshots))
	seenSlots := make(map[string]struct{}, len(cards)+len(snapshots))

	for _, card := range cards {
		slot := strings.TrimSpace(card.Slot)
		if slot == "" {
			continue
		}
		bySlotCard[slot] = card
		if _, seen := seenSlots[slot]; !seen {
			orderedSlots = append(orderedSlots, slot)
			seenSlots[slot] = struct{}{}
		}
	}
	for _, snapshot := range snapshots {
		slot := strings.TrimSpace(snapshot.Slot)
		if slot == "" {
			continue
		}
		bySlotSnapshot[slot] = snapshot
		if _, seen := seenSlots[slot]; !seen {
			orderedSlots = append(orderedSlots, slot)
			seenSlots[slot] = struct{}{}
		}
	}

	adapters := make([]models.NetworkAdapter, 0, len(orderedSlots))
	legacyNICs := make([]models.NIC, 0, len(orderedSlots))
	for _, slot := range orderedSlots {
		card := bySlotCard[slot]
		snapshot := bySlotSnapshot[slot]

		model := firstNonEmpty(card.Model, snapshot.ProductName)
		description := ""
		if !strings.EqualFold(strings.TrimSpace(model), strings.TrimSpace(snapshot.ProductName)) {
			description = strings.TrimSpace(snapshot.ProductName)
		}
		macs := snapshot.macAddresses()
		bdf := firstNonEmpty(snapshot.primaryBDF(), card.BDF)
		firmware := normalizeXFusionValue(snapshot.Firmware)
		manufacturer := firstNonEmpty(snapshot.Manufacturer, card.Vendor)
		portCount := len(snapshot.Ports)
		if portCount == 0 && len(macs) > 0 {
			portCount = len(macs)
		}
		if portCount == 0 {
			portCount = 1
		}

		adapters = append(adapters, models.NetworkAdapter{
			Slot:         slot,
			Location:     "OCP",
			Present:      true,
			BDF:          bdf,
			Model:        model,
			Description:  description,
			Vendor:       manufacturer,
			VendorID:     card.VendorID,
			DeviceID:     card.DeviceID,
			SerialNumber: card.SerialNumber,
			PartNumber:   card.PartNumber,
			Firmware:     firmware,
			PortCount:    portCount,
			PortType:     "ethernet",
			MACAddresses: macs,
			Status:       "ok",
		})
		legacyNICs = append(legacyNICs, models.NIC{
			Name:         fmt.Sprintf("OCP%s", slot),
			Model:        model,
			Description:  description,
			MACAddress:   firstNonEmpty(macs...),
			SerialNumber: card.SerialNumber,
		})
	}

	return adapters, legacyNICs
}

func parseXFusionUTCTimestamp(line string) (time.Time, bool) {
|
||||
ts, err := time.Parse("2006-01-02 15:04:05 MST", strings.TrimSpace(line))
|
||||
if err != nil {
|
||||
return time.Time{}, false
|
||||
}
|
||||
return ts, true
|
||||
}
|
||||
|
||||
func parseNetcardPortHeader(line string) *xfusionNetcardPort {
|
||||
fields := strings.Fields(strings.TrimSpace(line))
|
||||
if len(fields) < 2 || !strings.HasPrefix(strings.ToLower(fields[0]), "port") {
|
||||
return nil
|
||||
}
|
||||
joined := strings.Join(fields[1:], " ")
|
||||
if !strings.HasPrefix(strings.ToLower(joined), "bdf:") {
|
||||
return nil
|
||||
}
|
||||
return &xfusionNetcardPort{BDF: strings.TrimSpace(joined[len("BDF:"):])}
|
||||
}
|
||||
|
||||
func parseSimpleKV(line, key string) (string, bool) {
|
||||
idx := strings.Index(line, ":")
|
||||
if idx < 0 {
|
||||
return "", false
|
||||
}
|
||||
gotKey := strings.TrimSpace(line[:idx])
|
||||
if !strings.EqualFold(gotKey, key) {
|
||||
return "", false
|
||||
}
|
||||
return strings.TrimSpace(line[idx+1:]), true
|
||||
}
|
||||
|
||||
func normalizeXFusionValue(value string) string {
|
||||
value = strings.TrimSpace(value)
|
||||
switch strings.ToUpper(value) {
|
||||
case "", "N/A", "NA", "UNKNOWN":
|
||||
return ""
|
||||
default:
|
||||
return value
|
||||
}
|
||||
}
|
||||
|
||||
func (s xfusionNetcardSnapshot) hasData() bool {
|
||||
return strings.TrimSpace(s.Slot) != "" ||
|
||||
strings.TrimSpace(s.ProductName) != "" ||
|
||||
strings.TrimSpace(s.Manufacturer) != "" ||
|
||||
strings.TrimSpace(s.Firmware) != "" ||
|
||||
len(s.Ports) > 0
|
||||
}
|
||||
|
||||
func (s xfusionNetcardSnapshot) score() int {
|
||||
score := len(s.Ports)
|
||||
if normalizeXFusionValue(s.Firmware) != "" {
|
||||
score += 10
|
||||
}
|
||||
score += len(s.macAddresses()) * 2
|
||||
return score
|
||||
}
|
||||
|
||||
func (s xfusionNetcardSnapshot) isBetterThan(other xfusionNetcardSnapshot) bool {
|
||||
if s.score() != other.score() {
|
||||
return s.score() > other.score()
|
||||
}
|
||||
if !s.Timestamp.Equal(other.Timestamp) {
|
||||
return s.Timestamp.After(other.Timestamp)
|
||||
}
|
||||
return len(s.Ports) > len(other.Ports)
|
||||
}
|
||||
|
||||
func (s xfusionNetcardSnapshot) primaryBDF() string {
|
||||
for _, port := range s.Ports {
|
||||
if bdf := strings.TrimSpace(port.BDF); bdf != "" {
|
||||
return bdf
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (s xfusionNetcardSnapshot) macAddresses() []string {
|
||||
out := make([]string, 0, len(s.Ports))
|
||||
seen := make(map[string]struct{}, len(s.Ports))
|
||||
for _, port := range s.Ports {
|
||||
for _, candidate := range []string{port.ActualMAC, port.MAC} {
|
||||
mac := normalizeMAC(candidate)
|
||||
if mac == "" {
|
||||
continue
|
||||
}
|
||||
if _, exists := seen[mac]; exists {
|
||||
continue
|
||||
}
|
||||
seen[mac] = struct{}{}
|
||||
out = append(out, mac)
|
||||
break
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func normalizeMAC(value string) string {
|
||||
value = strings.ToUpper(strings.TrimSpace(value))
|
||||
switch value {
|
||||
case "", "N/A", "NA", "UNKNOWN", "00:00:00:00:00:00":
|
||||
return ""
|
||||
default:
|
||||
return value
|
||||
}
|
||||
}
|
||||
|
||||
// ── PSU ───────────────────────────────────────────────────────────────────────
|
||||
|
||||
// parsePSUInfo parses the pipe-delimited psu_info.txt.
|
||||
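The slot dedup above keeps the highest-scoring snapshot per slot and emits winners in the order their winning occurrence appears in the input. A minimal standalone sketch of that pattern, with a simplified stand-in type and score field rather than the parser's real `xfusionNetcardSnapshot`:

```go
package main

import "fmt"

// snap is a simplified stand-in for a per-slot snapshot.
type snap struct {
	Slot  string
	Score int
}

// bestPerSlot keeps the highest-scoring snapshot for each slot, then
// emits one winner per slot in input order of the winning occurrence.
func bestPerSlot(snaps []snap) []snap {
	best := make(map[string]int)
	for i, s := range snaps {
		if prev, ok := best[s.Slot]; !ok || s.Score > snaps[prev].Score {
			best[s.Slot] = i
		}
	}
	out := make([]snap, 0, len(best))
	for i, s := range snaps {
		if idx, ok := best[s.Slot]; ok && idx == i {
			out = append(out, s)
			delete(best, s.Slot)
		}
	}
	return out
}

func main() {
	got := bestPerSlot([]snap{{"1", 3}, {"2", 1}, {"1", 7}})
	fmt.Println(got) // [{2 1} {1 7}]
}
```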
@@ -525,6 +852,11 @@ func parsePSUInfo(content []byte) []models.PSU {
 func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
 	// File may contain multiple controller blocks; parse key:value pairs from each.
 	// We only look at the first occurrence of each key (first controller).
+	seen := make(map[string]struct{}, len(result.Hardware.Firmware))
+	for _, fw := range result.Hardware.Firmware {
+		key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
+		seen[key] = struct{}{}
+	}
 	text := string(content)
 	blocks := strings.Split(text, "RAID Controller #")
 	for _, block := range blocks[1:] { // skip pre-block preamble
@@ -532,7 +864,7 @@ func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
 		name := firstNonEmpty(fields["Component Name"], fields["Controller Name"], fields["Controller Type"])
 		firmware := fields["Firmware Version"]
 		if name != "" && firmware != "" {
-			result.Hardware.Firmware = append(result.Hardware.Firmware, models.FirmwareInfo{
+			appendXFusionFirmware(result, seen, models.FirmwareInfo{
 				DeviceName:  name,
 				Description: fields["Controller Name"],
 				Version:     firmware,
@@ -541,6 +873,86 @@ func parseStorageControllerInfo(content []byte, result *models.AnalysisResult) {
 	}
 }
 
+func parseAppRevision(content []byte, result *models.AnalysisResult) {
+	type firmwareLine struct {
+		deviceName  string
+		description string
+		buildKey    string
+	}
+
+	known := map[string]firmwareLine{
+		"Active iBMC Version":              {deviceName: "iBMC", description: "active iBMC", buildKey: "Active iBMC Built"},
+		"Active BIOS Version":              {deviceName: "BIOS", description: "active BIOS", buildKey: "Active BIOS Built"},
+		"CPLD Version":                     {deviceName: "CPLD", description: "mainboard CPLD"},
+		"SDK Version":                      {deviceName: "SDK", description: "iBMC SDK", buildKey: "SDK Built"},
+		"Active Uboot Version":             {deviceName: "U-Boot", description: "active U-Boot"},
+		"Active Secure Bootloader Version": {deviceName: "Secure Bootloader", description: "active secure bootloader"},
+		"Active Secure Firmware Version":   {deviceName: "Secure Firmware", description: "active secure firmware"},
+	}
+
+	values := parseAlignedKeyValues(content)
+	if result.Hardware.BoardInfo.ProductName == "" {
+		if productName := values["Product Name"]; productName != "" {
+			result.Hardware.BoardInfo.ProductName = productName
+		}
+	}
+
+	seen := make(map[string]struct{}, len(result.Hardware.Firmware))
+	for _, fw := range result.Hardware.Firmware {
+		key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
+		seen[key] = struct{}{}
+	}
+
+	for key, meta := range known {
+		version := normalizeXFusionValue(values[key])
+		if version == "" {
+			continue
+		}
+		appendXFusionFirmware(result, seen, models.FirmwareInfo{
+			DeviceName:  meta.deviceName,
+			Description: meta.description,
+			Version:     version,
+			BuildTime:   normalizeXFusionValue(values[meta.buildKey]),
+		})
+	}
+}
+
+func parseAlignedKeyValues(content []byte) map[string]string {
+	values := make(map[string]string)
+	for _, rawLine := range strings.Split(string(content), "\n") {
+		line := strings.TrimRight(rawLine, "\r")
+		if !strings.Contains(line, ":") {
+			continue
+		}
+		idx := strings.Index(line, ":")
+		if idx < 0 {
+			continue
+		}
+		key := strings.TrimRight(line[:idx], " \t")
+		value := strings.TrimSpace(line[idx+1:])
+		if key == "" || value == "" || values[key] != "" {
+			continue
+		}
+		values[key] = value
+	}
+	return values
+}
+
+func appendXFusionFirmware(result *models.AnalysisResult, seen map[string]struct{}, fw models.FirmwareInfo) {
+	if result == nil || result.Hardware == nil {
+		return
+	}
+	key := strings.ToLower(strings.TrimSpace(fw.DeviceName + "\x00" + fw.Version + "\x00" + fw.Description))
+	if key == "" {
+		return
+	}
+	if _, exists := seen[key]; exists {
+		return
+	}
+	seen[key] = struct{}{}
+	result.Hardware.Firmware = append(result.Hardware.Firmware, fw)
+}
+
 // parseDiskInfo parses a single PhysicalDrivesInfo/DiskN/disk_info file.
 func parseDiskInfo(content []byte) *models.Storage {
 	fields := parseKeyValueBlock(content)
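`parseAlignedKeyValues` above reads column-aligned `Key : Value` lines and keeps only the first non-empty value per key, so a repeated key later in the dump never overwrites the first one. A self-contained sketch of the same idea (the function name here is illustrative, not the parser's):

```go
package main

import (
	"fmt"
	"strings"
)

// firstValuePerKey parses "Key: Value" lines, trimming the padding that
// column alignment adds before the colon, and keeps only the first
// non-empty value seen for each key.
func firstValuePerKey(text string) map[string]string {
	values := make(map[string]string)
	for _, raw := range strings.Split(text, "\n") {
		line := strings.TrimRight(raw, "\r")
		idx := strings.Index(line, ":")
		if idx < 0 {
			continue
		}
		key := strings.TrimRight(line[:idx], " \t")
		value := strings.TrimSpace(line[idx+1:])
		if key == "" || value == "" || values[key] != "" {
			continue // skip blanks and repeated keys
		}
		values[key] = value
	}
	return values
}

func main() {
	sample := "Active iBMC Version: 3.08\nActive iBMC Version: ignored\nProduct Name   : G5500 V7"
	fmt.Println(firstValuePerKey(sample)["Product Name"]) // G5500 V7
}
```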
57  internal/parser/vendors/xfusion/parser.go  (vendored)
@@ -13,7 +13,7 @@ import (
 	"git.mchus.pro/mchus/logpile/internal/parser"
 )
 
-const parserVersion = "1.0"
+const parserVersion = "1.1"
 
 func init() {
 	parser.Register(&Parser{})
@@ -34,11 +34,15 @@ func (p *Parser) Detect(files []parser.ExtractedFile) int {
 		path := strings.ToLower(f.Path)
 		switch {
 		case strings.Contains(path, "appdump/frudata/fruinfo.txt"):
-			confidence += 60
+			confidence += 50
+		case strings.Contains(path, "rtosdump/versioninfo/app_revision.txt"):
+			confidence += 30
 		case strings.Contains(path, "appdump/sensor_alarm/sensor_info.txt"):
-			confidence += 20
+			confidence += 10
 		case strings.Contains(path, "appdump/card_manage/card_info"):
 			confidence += 20
+		case strings.Contains(path, "logdump/netcard/netcard_info.txt"):
+			confidence += 20
 		}
 		if confidence >= 100 {
 			return 100
@@ -54,17 +58,21 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
 		FRU:     make([]models.FRUInfo, 0),
 		Sensors: make([]models.SensorReading, 0),
 		Hardware: &models.HardwareConfig{
-			CPUs:         make([]models.CPU, 0),
-			Memory:       make([]models.MemoryDIMM, 0),
-			Storage:      make([]models.Storage, 0),
-			GPUs:         make([]models.GPU, 0),
-			NetworkCards: make([]models.NIC, 0),
-			PowerSupply:  make([]models.PSU, 0),
-			Firmware:     make([]models.FirmwareInfo, 0),
+			Firmware:        make([]models.FirmwareInfo, 0),
+			Devices:         make([]models.HardwareDevice, 0),
+			CPUs:            make([]models.CPU, 0),
+			Memory:          make([]models.MemoryDIMM, 0),
+			Storage:         make([]models.Storage, 0),
+			Volumes:         make([]models.StorageVolume, 0),
+			PCIeDevices:     make([]models.PCIeDevice, 0),
+			GPUs:            make([]models.GPU, 0),
+			NetworkCards:    make([]models.NIC, 0),
+			NetworkAdapters: make([]models.NetworkAdapter, 0),
+			PowerSupply:     make([]models.PSU, 0),
 		},
 	}
 
-	if f := findByPath(files, "appdump/frudata/fruinfo.txt"); f != nil {
+	if f := findByAnyPath(files, "appdump/frudata/fruinfo.txt", "rtosdump/versioninfo/fruinfo.txt"); f != nil {
 		parseFRUInfo(f.Content, result)
 	}
 	if f := findByPath(files, "appdump/sensor_alarm/sensor_info.txt"); f != nil {
@@ -76,10 +84,20 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
 	if f := findByPath(files, "appdump/cpumem/mem_info"); f != nil {
 		result.Hardware.Memory = parseMemInfo(f.Content)
 	}
+	var nicCards []xfusionNICCard
 	if f := findByPath(files, "appdump/card_manage/card_info"); f != nil {
-		gpus, nics := parseCardInfo(f.Content)
+		gpus, cards := parseCardInfo(f.Content)
 		result.Hardware.GPUs = gpus
-		result.Hardware.NetworkCards = nics
+		nicCards = cards
 	}
+	if f := findByPath(files, "logdump/netcard/netcard_info.txt"); f != nil || len(nicCards) > 0 {
+		var content []byte
+		if f != nil {
+			content = f.Content
+		}
+		adapters, legacyNICs := mergeNetworkAdapters(nicCards, parseNetcardInfo(content))
+		result.Hardware.NetworkAdapters = adapters
+		result.Hardware.NetworkCards = legacyNICs
+	}
 	if f := findByPath(files, "appdump/bmc/psu_info.txt"); f != nil {
 		result.Hardware.PowerSupply = parsePSUInfo(f.Content)
@@ -87,6 +105,9 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
 	if f := findByPath(files, "appdump/storagemgnt/raid_controller_info.txt"); f != nil {
 		parseStorageControllerInfo(f.Content, result)
 	}
+	if f := findByPath(files, "rtosdump/versioninfo/app_revision.txt"); f != nil {
+		parseAppRevision(f.Content, result)
+	}
 	for _, f := range findDiskInfoFiles(files) {
 		disk := parseDiskInfo(f.Content)
 		if disk != nil {
@@ -99,6 +120,7 @@ func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, er
 
 	result.Protocol = "ipmi"
 	result.SourceType = models.SourceTypeArchive
+	parser.ApplyManufacturedYearWeekFromFRU(result.FRU, result.Hardware)
 
 	return result, nil
 }
@@ -113,6 +135,15 @@ func findByPath(files []parser.ExtractedFile, substring string) *parser.Extracte
 	return nil
 }
 
+func findByAnyPath(files []parser.ExtractedFile, substrings ...string) *parser.ExtractedFile {
+	for _, substring := range substrings {
+		if f := findByPath(files, substring); f != nil {
+			return f
+		}
+	}
+	return nil
+}
+
 // findDiskInfoFiles returns all PhysicalDrivesInfo disk_info files.
 func findDiskInfoFiles(files []parser.ExtractedFile) []parser.ExtractedFile {
 	var out []parser.ExtractedFile
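The reworked `Detect` above sums a per-marker-path weight and caps the total at 100. A standalone sketch of that scoring pattern; the weights and marker paths mirror the diff, but the function itself is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// detectScore sums weights for known xFusion marker paths, matching
// case-insensitively, and caps the total confidence at 100.
func detectScore(paths []string) int {
	weights := map[string]int{
		"appdump/frudata/fruinfo.txt":           50,
		"rtosdump/versioninfo/app_revision.txt": 30,
		"appdump/sensor_alarm/sensor_info.txt":  10,
		"appdump/card_manage/card_info":         20,
		"logdump/netcard/netcard_info.txt":      20,
	}
	confidence := 0
	for _, p := range paths {
		lower := strings.ToLower(p)
		for marker, w := range weights {
			if strings.Contains(lower, marker) {
				confidence += w
				break // each file contributes at most one marker
			}
		}
		if confidence >= 100 {
			return 100
		}
	}
	return confidence
}

func main() {
	fmt.Println(detectScore([]string{
		"dump_info/RTOSDump/versioninfo/app_revision.txt",
		"dump_info/LogDump/netcard/netcard_info.txt",
		"dump_info/AppDump/card_manage/card_info",
	})) // 70
}
```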
113  internal/parser/vendors/xfusion/parser_test.go  (vendored)
@@ -1,8 +1,10 @@
 package xfusion
 
 import (
+	"strings"
 	"testing"
 
+	"git.mchus.pro/mchus/logpile/internal/models"
 	"git.mchus.pro/mchus/logpile/internal/parser"
 )
@@ -26,6 +28,29 @@ func TestDetect_G5500V7(t *testing.T) {
 	}
 }
 
+func TestDetect_ServerFileExportMarkers(t *testing.T) {
+	p := &Parser{}
+	score := p.Detect([]parser.ExtractedFile{
+		{Path: "dump_info/RTOSDump/versioninfo/app_revision.txt", Content: []byte("Product Name: G5500 V7")},
+		{Path: "dump_info/LogDump/netcard/netcard_info.txt", Content: []byte("2026-02-04 03:54:06 UTC")},
+		{Path: "dump_info/AppDump/card_manage/card_info", Content: []byte("OCP Card Info")},
+	})
+	if score < 70 {
+		t.Fatalf("expected Detect score >= 70 for xFusion file export markers, got %d", score)
+	}
+}
+
+func TestDetect_Negative(t *testing.T) {
+	p := &Parser{}
+	score := p.Detect([]parser.ExtractedFile{
+		{Path: "logs/messages.txt", Content: []byte("plain text")},
+		{Path: "inventory.json", Content: []byte(`{"vendor":"other"}`)},
+	})
+	if score != 0 {
+		t.Fatalf("expected Detect score 0 for non-xFusion input, got %d", score)
+	}
+}
+
 func TestParse_G5500V7_BoardInfo(t *testing.T) {
 	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
 	p := &Parser{}
@@ -126,6 +151,94 @@ func TestParse_G5500V7_NICs(t *testing.T) {
 	}
 }
 
+func TestParse_ServerFileExport_NetworkAdaptersAndFirmware(t *testing.T) {
+	p := &Parser{}
+	files := []parser.ExtractedFile{
+		{
+			Path: "dump_info/AppDump/card_manage/card_info",
+			Content: []byte(strings.TrimSpace(`
+Pcie Card Info
+Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
+1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |
+
+OCP Card Info
+Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
+1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |
+`)),
+		},
+		{
+			Path: "dump_info/LogDump/netcard/netcard_info.txt",
+			Content: []byte(strings.TrimSpace(`
+2026-02-04 03:54:06 UTC
+ProductName :XC385
+Manufacture :XFUSION
+FirmwareVersion :26.39.2048
+SlotId :1
+Port0 BDF:0000:27:00.0
+MacAddr:44:1A:4C:16:E8:03
+ActualMac:44:1A:4C:16:E8:03
+Port1 BDF:0000:27:00.1
+MacAddr:00:00:00:00:00:00
+ActualMac:44:1A:4C:16:E8:04
+`)),
+		},
+		{
+			Path: "dump_info/RTOSDump/versioninfo/app_revision.txt",
+			Content: []byte(strings.TrimSpace(`
+------------------- iBMC INFO -------------------
+Active iBMC Version: (U68)3.08.05.85
+Active iBMC Built: 16:46:26 Jan 4 2026
+SDK Version: 13.16.30.16
+SDK Built: 07:55:18 Dec 12 2025
+Active BIOS Version: (U6216)01.02.08.17
+Active BIOS Built: 00:00:00 Jan 05 2026
+Product Name: G5500 V7
+`)),
+		},
+	}
+
+	result, err := p.Parse(files)
+	if err != nil {
+		t.Fatalf("Parse: %v", err)
+	}
+	if result.Protocol != "ipmi" || result.SourceType != models.SourceTypeArchive {
+		t.Fatalf("unexpected source metadata: protocol=%q source_type=%q", result.Protocol, result.SourceType)
+	}
+	if result.Hardware == nil {
+		t.Fatal("Hardware is nil")
+	}
+	if len(result.Hardware.NetworkAdapters) != 1 {
+		t.Fatalf("expected 1 network adapter, got %d", len(result.Hardware.NetworkAdapters))
+	}
+	adapter := result.Hardware.NetworkAdapters[0]
+	if adapter.BDF != "0000:27:00.0" {
+		t.Fatalf("expected network adapter BDF 0000:27:00.0, got %q", adapter.BDF)
+	}
+	if adapter.Firmware != "26.39.2048" {
+		t.Fatalf("expected network adapter firmware 26.39.2048, got %q", adapter.Firmware)
+	}
+	if adapter.SerialNumber != "02Y238X6RC000058" {
+		t.Fatalf("expected network adapter serial from card_info, got %q", adapter.SerialNumber)
+	}
+	if len(adapter.MACAddresses) != 2 || adapter.MACAddresses[0] != "44:1A:4C:16:E8:03" || adapter.MACAddresses[1] != "44:1A:4C:16:E8:04" {
+		t.Fatalf("unexpected MAC addresses: %#v", adapter.MACAddresses)
+	}
+
+	fwByDevice := make(map[string]models.FirmwareInfo)
+	for _, fw := range result.Hardware.Firmware {
+		fwByDevice[fw.DeviceName] = fw
+	}
+	if fwByDevice["iBMC"].Version != "(U68)3.08.05.85" {
+		t.Fatalf("expected iBMC firmware from app_revision.txt, got %#v", fwByDevice["iBMC"])
+	}
+	if fwByDevice["BIOS"].Version != "(U6216)01.02.08.17" {
+		t.Fatalf("expected BIOS firmware from app_revision.txt, got %#v", fwByDevice["BIOS"])
+	}
+	if result.Hardware.BoardInfo.ProductName != "G5500 V7" {
+		t.Fatalf("expected board product fallback from app_revision.txt, got %q", result.Hardware.BoardInfo.ProductName)
+	}
+}
+
 func TestParse_G5500V7_PSUs(t *testing.T) {
 	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
 	p := &Parser{}
@@ -44,6 +44,9 @@ func TestParserParseExample(t *testing.T) {
 	examplePath := filepath.Join("..", "..", "..", "..", "example", "xigmanas.txt")
 	raw, err := os.ReadFile(examplePath)
 	if err != nil {
+		if os.IsNotExist(err) {
+			t.Skipf("example file %s not present", examplePath)
+		}
 		t.Fatalf("read example file: %v", err)
 	}
@@ -3,6 +3,8 @@ package server
 import (
 	"bytes"
 	"encoding/json"
+	"fmt"
+	"net"
 	"net/http"
 	"net/http/httptest"
 	"strings"
@@ -22,6 +24,7 @@ func newCollectTestServer() (*Server, *httptest.Server) {
 	mux.HandleFunc("POST /api/collect", s.handleCollectStart)
 	mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
 	mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
+	mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
 	return s, httptest.NewServer(mux)
 }
 
@@ -29,7 +32,17 @@ func TestCollectProbe(t *testing.T) {
 	_, ts := newCollectTestServer()
 	defer ts.Close()
 
-	body := `{"host":"bmc-off.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
+	ln, err := net.Listen("tcp", "127.0.0.1:0")
+	if err != nil {
+		t.Fatalf("listen probe target: %v", err)
+	}
+	defer ln.Close()
+	addr, ok := ln.Addr().(*net.TCPAddr)
+	if !ok {
+		t.Fatalf("unexpected listener address type: %T", ln.Addr())
+	}
+
+	body := fmt.Sprintf(`{"host":"127.0.0.1","protocol":"redfish","port":%d,"username":"admin-off","auth_type":"password","password":"secret","tls_mode":"strict"}`, addr.Port)
 	resp, err := http.Post(ts.URL+"/api/collect/probe", "application/json", bytes.NewBufferString(body))
 	if err != nil {
 		t.Fatalf("post collect probe failed: %v", err)
@@ -53,9 +66,6 @@ func TestCollectProbe(t *testing.T) {
 	if payload.HostPowerState != "Off" {
 		t.Fatalf("expected host power state Off, got %q", payload.HostPowerState)
 	}
-	if !payload.PowerControlAvailable {
-		t.Fatalf("expected power control to be available")
-	}
 }
 
 func TestCollectLifecycleToTerminal(t *testing.T) {
@@ -21,13 +21,16 @@ func (c *mockConnector) Probe(ctx context.Context, req collector.Request) (*coll
 	if strings.Contains(strings.ToLower(req.Host), "fail") {
 		return nil, context.DeadlineExceeded
 	}
+	hostPoweredOn := true
+	if strings.Contains(strings.ToLower(req.Host), "off") || strings.Contains(strings.ToLower(req.Username), "off") {
+		hostPoweredOn = false
+	}
 	return &collector.ProbeResult{
-		Reachable:             true,
-		Protocol:              c.protocol,
-		HostPowerState:        map[bool]string{true: "On", false: "Off"}[!strings.Contains(strings.ToLower(req.Host), "off")],
-		HostPoweredOn:         !strings.Contains(strings.ToLower(req.Host), "off"),
-		PowerControlAvailable: true,
-		SystemPath:            "/redfish/v1/Systems/1",
+		Reachable:      true,
+		Protocol:       c.protocol,
+		HostPowerState: map[bool]string{true: "On", false: "Off"}[hostPoweredOn],
+		HostPoweredOn:  hostPoweredOn,
+		SystemPath:     "/redfish/v1/Systems/1",
 	}, nil
 }
@@ -19,18 +19,15 @@ type CollectRequest struct {
 	Password             string `json:"password,omitempty"`
 	Token                string `json:"token,omitempty"`
 	TLSMode              string `json:"tls_mode"`
 	PowerOnIfHostOff     bool   `json:"power_on_if_host_off,omitempty"`
 	StopHostAfterCollect bool   `json:"stop_host_after_collect,omitempty"`
 	DebugPayloads        bool   `json:"debug_payloads,omitempty"`
 }
 
 type CollectProbeResponse struct {
 	Reachable      bool   `json:"reachable"`
 	Protocol       string `json:"protocol,omitempty"`
-	HostPowerState        string `json:"host_power_state,omitempty"`
-	HostPoweredOn         bool   `json:"host_powered_on"`
-	PowerControlAvailable bool   `json:"power_control_available"`
-	Message               string `json:"message,omitempty"`
+	HostPowerState string `json:"host_power_state,omitempty"`
+	HostPoweredOn  bool   `json:"host_powered_on"`
+	Message        string `json:"message,omitempty"`
 }
 
 type CollectJobResponse struct {
@@ -78,7 +75,8 @@ type Job struct {
 	CreatedAt   time.Time
 	UpdatedAt   time.Time
 	RequestMeta CollectRequestMeta
 	cancel      func()
+	skipFn      func()
 }
 
 type CollectModuleStatus struct {
@@ -81,7 +81,7 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
 	}
 
 	for _, mem := range hw.Memory {
-		if !mem.Present || mem.SizeMB == 0 {
+		if !mem.IsInstalledInventory() {
 			continue
 		}
 		present := mem.Present
@@ -243,6 +243,8 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
 			Source:       "network_adapters",
 			Slot:         nic.Slot,
 			Location:     nic.Location,
+			BDF:          nic.BDF,
+			DeviceClass:  "NetworkController",
 			VendorID:     nic.VendorID,
 			DeviceID:     nic.DeviceID,
 			Model:        nic.Model,
@@ -253,6 +255,11 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
 			PortCount:    nic.PortCount,
 			PortType:     nic.PortType,
 			MACAddresses: nic.MACAddresses,
+			LinkWidth:    nic.LinkWidth,
+			LinkSpeed:    nic.LinkSpeed,
+			MaxLinkWidth: nic.MaxLinkWidth,
+			MaxLinkSpeed: nic.MaxLinkSpeed,
+			NUMANode:     nic.NUMANode,
 			Present:         &present,
 			Status:          nic.Status,
 			StatusCheckedAt: nic.StatusCheckedAt,
@@ -90,6 +90,98 @@ func TestBuildHardwareDevices_MemorySameSerialDifferentSlots_NotDeduped(t *testi
 	}
 }
 
+func TestBuildHardwareDevices_ZeroSizeMemoryWithInventoryIsIncluded(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Memory: []models.MemoryDIMM{
+			{
+				Slot:         "PROC 1 DIMM 3",
+				Location:     "PROC 1 DIMM 3",
+				Present:      true,
+				SizeMB:       0,
+				Manufacturer: "Hynix",
+				SerialNumber: "2B5F92C6",
+				PartNumber:   "HMCG88AEBRA115N",
+				Status:       "ok",
+			},
+		},
+	}
+
+	devices := BuildHardwareDevices(hw)
+	memoryCount := 0
+	for _, d := range devices {
+		if d.Kind != models.DeviceKindMemory {
+			continue
+		}
+		memoryCount++
+		if d.Slot != "PROC 1 DIMM 3" || d.PartNumber != "HMCG88AEBRA115N" || d.SerialNumber != "2B5F92C6" {
+			t.Fatalf("unexpected memory device: %+v", d)
+		}
+	}
+	if memoryCount != 1 {
+		t.Fatalf("expected 1 installed zero-size memory record, got %d", memoryCount)
+	}
+}
+
+func TestBuildHardwareDevices_NetworkAdapterPreservesPCIeMetadata(t *testing.T) {
+	hw := &models.HardwareConfig{
+		NetworkAdapters: []models.NetworkAdapter{
+			{
+				Slot:         "1",
+				Location:     "OCP",
+				Present:      true,
+				BDF:          "0000:27:00.0",
+				Model:        "ConnectX-6 Lx",
+				VendorID:     0x15b3,
+				DeviceID:     0x101f,
+				SerialNumber: "NIC-001",
+				Firmware:     "26.39.2048",
+				MACAddresses: []string{"44:1A:4C:16:E8:03", "44:1A:4C:16:E8:04"},
+				LinkWidth:    16,
+				LinkSpeed:    "32 GT/s",
+				NUMANode:     1,
+				Status:       "ok",
+			},
+		},
+	}
+
+	devices := BuildHardwareDevices(hw)
+	for _, d := range devices {
+		if d.Kind != models.DeviceKindNetwork {
+			continue
+		}
+		if d.BDF != "0000:27:00.0" || d.LinkWidth != 16 || d.LinkSpeed != "32 GT/s" || d.NUMANode != 1 {
+			t.Fatalf("expected network PCIe metadata to be preserved, got %+v", d)
+		}
+		return
+	}
+	t.Fatal("expected network device in canonical inventory")
+}
+
+func TestBuildSpecification_ZeroSizeMemoryWithInventoryIsShown(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Memory: []models.MemoryDIMM{
+			{
+				Slot:         "PROC 1 DIMM 3",
+				Present:      true,
+				SizeMB:       0,
+				Manufacturer: "Hynix",
+				PartNumber:   "HMCG88AEBRA115N",
+				SerialNumber: "2B5F92C6",
+				Status:       "ok",
+			},
+		},
+	}
+
+	spec := buildSpecification(hw)
+	for _, line := range spec {
+		if line.Category == "Память" && line.Name == "Hynix HMCG88AEBRA115N (size unknown)" && line.Quantity == 1 {
+			return
+		}
+	}
+
+	t.Fatalf("expected memory spec line for zero-size identified DIMM, got %+v", spec)
+}
+
 func TestBuildHardwareDevices_DuplicateSerials_AreAnnotated(t *testing.T) {
 	hw := &models.HardwareConfig{
 		Memory: []models.MemoryDIMM{
@@ -166,6 +258,31 @@ func TestBuildHardwareDevices_SkipsFirmwareOnlyNumericSlots(t *testing.T) {
 	}
 }
 
+func TestBuildHardwareDevices_NetworkDevicesUseUnifiedControllerClass(t *testing.T) {
+	hw := &models.HardwareConfig{
+		NetworkAdapters: []models.NetworkAdapter{
+			{
+				Slot:    "NIC1",
+				Model:   "Ethernet Adapter",
+				Vendor:  "Intel",
+				Present: true,
+			},
+		},
+	}
+
+	devices := BuildHardwareDevices(hw)
+	for _, d := range devices {
+		if d.Kind != models.DeviceKindNetwork {
+			continue
+		}
+		if d.DeviceClass != "NetworkController" {
+			t.Fatalf("expected unified network controller class, got %+v", d)
+		}
+		return
+	}
+	t.Fatalf("expected one canonical network device")
+}
+
 func TestHandleGetConfig_ReturnsCanonicalHardware(t *testing.T) {
 	srv := &Server{}
 	srv.SetResult(&models.AnalysisResult{
@@ -13,12 +13,13 @@ import (
 	"net"
 	"net/http"
 	"os"
-	"sync/atomic"
 	"path/filepath"
 	"regexp"
 	"sort"
 	"strconv"
 	"strings"
+	"sync"
+	"sync/atomic"
 	"time"

 	"git.mchus.pro/mchus/logpile/internal/collector"
@@ -49,11 +50,20 @@ func (s *Server) handleIndex(w http.ResponseWriter, r *http.Request) {

 	w.Header().Set("Content-Type", "text/html; charset=utf-8")
 	tmpl.Execute(w, map[string]string{
-		"AppVersion": s.config.AppVersion,
-		"AppCommit":  s.config.AppCommit,
+		"AppVersion":   normalizeDisplayVersion(s.config.AppVersion),
+		"AppCommit":    s.config.AppCommit,
+		"ChartVersion": normalizeDisplayVersion(s.config.ChartVersion),
 	})
 }

+func normalizeDisplayVersion(v string) string {
+	v = strings.TrimSpace(v)
+	if v == "" {
+		return ""
+	}
+	return strings.TrimPrefix(v, "v")
+}
+
 func (s *Server) handleChartCurrent(w http.ResponseWriter, r *http.Request) {
 	result := s.GetResult()
 	title := chartTitle(result)
@@ -530,11 +540,21 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
 			continue
 		}
 		present := mem.Present != nil && *mem.Present
-		// Skip empty slots (not present or 0 size)
-		if !present || mem.SizeMB == 0 {
+		if !present {
 			continue
 		}
-		// Include frequency if available
+
+		if mem.SizeMB == 0 {
+			name := strings.TrimSpace(strings.Join(nonEmptyStrings(mem.Manufacturer, mem.PartNumber, mem.Type), " "))
+			if name == "" {
+				name = "Installed DIMM (size unknown)"
+			} else {
+				name += " (size unknown)"
+			}
+			memGroups[name]++
+			continue
+		}
+
 		key := ""
 		currentSpeed := intFromDetails(mem.Details, "current_speed_mhz")
 		if currentSpeed > 0 {
@@ -626,6 +646,18 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
 	return spec
 }

+func nonEmptyStrings(values ...string) []string {
+	out := make([]string, 0, len(values))
+	for _, value := range values {
+		value = strings.TrimSpace(value)
+		if value == "" {
+			continue
+		}
+		out = append(out, value)
+	}
+	return out
+}
+
 func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
 	result := s.GetResult()
 	if result == nil {
@@ -717,6 +749,19 @@ func hasUsableSerial(serial string) bool {
 	}
 }

+func hasUsableFirmwareVersion(version string) bool {
+	v := strings.TrimSpace(version)
+	if v == "" {
+		return false
+	}
+	switch strings.ToUpper(v) {
+	case "N/A", "NA", "NONE", "NULL", "UNKNOWN", "-":
+		return false
+	default:
+		return true
+	}
+}
+
 func (s *Server) handleGetFirmware(w http.ResponseWriter, r *http.Request) {
 	result := s.GetResult()
 	if result == nil || result.Hardware == nil {
@@ -944,7 +989,7 @@ func buildFirmwareEntries(hw *models.HardwareConfig) []firmwareEntry {
 		component = strings.TrimSpace(component)
 		model = strings.TrimSpace(model)
 		version = strings.TrimSpace(version)
-		if component == "" || version == "" {
+		if component == "" || !hasUsableFirmwareVersion(version) {
 			return
 		}
 		if model == "" {
@@ -1639,34 +1684,28 @@ func (s *Server) handleCollectProbe(w http.ResponseWriter, r *http.Request) {

 	message := "Связь с BMC установлена"
 	if result != nil {
-		switch {
-		case !result.HostPoweredOn && result.PowerControlAvailable:
-			message = "Связь с BMC установлена, host выключен. Можно включить перед сбором."
-		case !result.HostPoweredOn:
-			message = "Связь с BMC установлена, host выключен."
-		default:
-			message = "Связь с BMC установлена, host включен."
+		if result.HostPoweredOn {
+			message = "Связь с BMC установлена, host включён."
+		} else {
+			message = "Связь с BMC установлена, host выключен. Данные инвентаря могут быть неполными."
 		}
 	}

 	hostPowerState := ""
 	hostPoweredOn := false
-	powerControlAvailable := false
 	reachable := false
 	if result != nil {
 		reachable = result.Reachable
 		hostPowerState = strings.TrimSpace(result.HostPowerState)
 		hostPoweredOn = result.HostPoweredOn
-		powerControlAvailable = result.PowerControlAvailable
 	}

 	jsonResponse(w, CollectProbeResponse{
-		Reachable:             reachable,
-		Protocol:              req.Protocol,
-		HostPowerState:        hostPowerState,
-		HostPoweredOn:         hostPoweredOn,
-		PowerControlAvailable: powerControlAvailable,
-		Message:               message,
+		Reachable:      reachable,
+		Protocol:       req.Protocol,
+		HostPowerState: hostPowerState,
+		HostPoweredOn:  hostPoweredOn,
+		Message:        message,
 	})
 }
@@ -1702,6 +1741,22 @@ func (s *Server) handleCollectCancel(w http.ResponseWriter, r *http.Request) {
 	jsonResponse(w, job.toStatusResponse())
 }

+func (s *Server) handleCollectSkip(w http.ResponseWriter, r *http.Request) {
+	jobID := strings.TrimSpace(r.PathValue("id"))
+	if !isValidCollectJobID(jobID) {
+		jsonError(w, "Invalid collect job id", http.StatusBadRequest)
+		return
+	}
+
+	job, ok := s.jobManager.SkipJob(jobID)
+	if !ok {
+		jsonError(w, "Collect job not found", http.StatusNotFound)
+		return
+	}
+
+	jsonResponse(w, job.toStatusResponse())
+}
+
 func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
 	ctx, cancel := context.WithCancel(context.Background())
 	if attached := s.jobManager.AttachJobCancel(jobID, cancel); !attached {
@@ -1709,6 +1764,11 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
 		return
 	}

+	skipCh := make(chan struct{})
+	var skipOnce sync.Once
+	skipFn := func() { skipOnce.Do(func() { close(skipCh) }) }
+	s.jobManager.AttachJobSkip(jobID, skipFn)
+
 	go func() {
 		connector, ok := s.getCollector(req.Protocol)
 		if !ok {
@@ -1776,7 +1836,9 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
 			}
 		}

-		result, err := connector.Collect(ctx, toCollectorRequest(req), emitProgress)
+		collectorReq := toCollectorRequest(req)
+		collectorReq.SkipHungCh = skipCh
+		result, err := connector.Collect(ctx, collectorReq, emitProgress)
 		if err != nil {
 			if ctx.Err() != nil {
 				return
@@ -1992,17 +2054,15 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectReques

 func toCollectorRequest(req CollectRequest) collector.Request {
 	return collector.Request{
-		Host:                 req.Host,
-		Protocol:             req.Protocol,
-		Port:                 req.Port,
-		Username:             req.Username,
-		AuthType:             req.AuthType,
-		Password:             req.Password,
-		Token:                req.Token,
-		TLSMode:              req.TLSMode,
-		PowerOnIfHostOff:     req.PowerOnIfHostOff,
-		StopHostAfterCollect: req.StopHostAfterCollect,
-		DebugPayloads:        req.DebugPayloads,
+		Host:          req.Host,
+		Protocol:      req.Protocol,
+		Port:          req.Port,
+		Username:      req.Username,
+		AuthType:      req.AuthType,
+		Password:      req.Password,
+		Token:         req.Token,
+		TLSMode:       req.TLSMode,
+		DebugPayloads: req.DebugPayloads,
 	}
 }
@@ -62,3 +62,22 @@ func TestBuildFirmwareEntries_IncludesGPUFirmwareFallback(t *testing.T) {
 		t.Fatalf("expected GPU firmware entry from hardware.gpus fallback")
 	}
 }
+
+func TestBuildFirmwareEntries_SkipsPlaceholderVersions(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Firmware: []models.FirmwareInfo{
+			{DeviceName: "BMC", Version: "3.13.42P13"},
+			{DeviceName: "Front_BP_1", Version: "NA"},
+			{DeviceName: "Rear_BP_0", Version: "N/A"},
+			{DeviceName: "HDD_BP", Version: "-"},
+		},
+	}
+
+	entries := buildFirmwareEntries(hw)
+	if len(entries) != 1 {
+		t.Fatalf("expected only usable firmware entries, got %#v", entries)
+	}
+	if entries[0].Component != "BMC" || entries[0].Version != "3.13.42P13" {
+		t.Fatalf("unexpected remaining firmware entry: %#v", entries[0])
+	}
+}
@@ -175,6 +175,43 @@ func (m *JobManager) UpdateJobDebugInfo(id string, info *CollectDebugInfo) (*Job
 	return cloned, true
 }

+func (m *JobManager) AttachJobSkip(id string, skipFn func()) bool {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	job, ok := m.jobs[id]
+	if !ok || job == nil || isTerminalCollectStatus(job.Status) {
+		return false
+	}
+	job.skipFn = skipFn
+	return true
+}
+
+func (m *JobManager) SkipJob(id string) (*Job, bool) {
+	m.mu.Lock()
+	job, ok := m.jobs[id]
+	if !ok || job == nil {
+		m.mu.Unlock()
+		return nil, false
+	}
+	if isTerminalCollectStatus(job.Status) {
+		cloned := cloneJob(job)
+		m.mu.Unlock()
+		return cloned, true
+	}
+	skipFn := job.skipFn
+	job.skipFn = nil
+	job.UpdatedAt = time.Now().UTC()
+	job.Logs = append(job.Logs, formatCollectLogLine(job.UpdatedAt, "Пропуск зависших запросов по команде пользователя"))
+	cloned := cloneJob(job)
+	m.mu.Unlock()
+
+	if skipFn != nil {
+		skipFn()
+	}
+	return cloned, true
+}
+
 func (m *JobManager) AttachJobCancel(id string, cancelFn context.CancelFunc) bool {
 	m.mu.Lock()
 	defer m.mu.Unlock()
@@ -229,5 +266,6 @@ func cloneJob(job *Job) *Job {
 	cloned.CurrentPhase = job.CurrentPhase
 	cloned.ETASeconds = job.ETASeconds
 	cloned.cancel = nil
+	cloned.skipFn = nil
 	return &cloned
 }
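`SkipJob` calls the attached `skipFn` outside the lock, and the server wraps the channel close in `sync.Once`, so repeated `POST /api/collect/{id}/skip` requests stay safe. The sketch below isolates that close-at-most-once pattern (the helper name `newSkipSignal` is illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"sync"
)

// newSkipSignal mirrors the pattern used in startCollectionJob: it returns a
// channel that is closed at most once, no matter how many times the returned
// trigger function fires.
func newSkipSignal() (<-chan struct{}, func()) {
	ch := make(chan struct{})
	var once sync.Once
	return ch, func() { once.Do(func() { close(ch) }) }
}

func main() {
	skipCh, skipFn := newSkipSignal()

	skipFn()
	skipFn() // second call is a no-op; a bare close(ch) here would panic

	select {
	case <-skipCh:
		fmt.Println("skip signalled")
	default:
		fmt.Println("still running")
	}
}
```

Without the `sync.Once` guard, a double skip request would close the channel twice and crash the collection goroutine.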
@@ -19,10 +19,11 @@ import (
 var WebFS embed.FS

 type Config struct {
-	Port        int
-	PreloadFile string
-	AppVersion  string
-	AppCommit   string
+	Port         int
+	PreloadFile  string
+	AppVersion   string
+	AppCommit    string
+	ChartVersion string
 }

 type Server struct {
@@ -99,6 +100,7 @@ func (s *Server) setupRoutes() {
 	s.mux.HandleFunc("POST /api/collect/probe", s.handleCollectProbe)
 	s.mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
 	s.mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
+	s.mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
 }

 func (s *Server) Run() error {

@@ -24,6 +24,7 @@ func newFlowTestServer() (*Server, *httptest.Server) {
 	mux.HandleFunc("POST /api/collect", s.handleCollectStart)
 	mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
 	mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
+	mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
 	return s, httptest.NewServer(mux)
 }
@@ -128,6 +128,7 @@ echo ""
 # Show next steps
 echo -e "${YELLOW}Next steps:${NC}"
 echo "  1. Create git tag:"
+echo "     # LOGPile release tags use vN.M, for example: v1.12"
 echo "     git tag -a ${VERSION} -m \"Release ${VERSION}\""
 echo ""
 echo "  2. Push tag to remote:"
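The release script tags versions as `vN.M`, while the footer template shows the bare number via the `normalizeDisplayVersion` helper introduced above. A standalone restatement of that trim logic, for quick verification (`displayVersion` is an illustrative name for the same behavior):

```go
package main

import (
	"fmt"
	"strings"
)

// displayVersion restates normalizeDisplayVersion: trim whitespace, then drop
// a leading "v" so the tag "v1.12" renders as "1.12" in the footer.
func displayVersion(v string) string {
	v = strings.TrimSpace(v)
	if v == "" {
		return ""
	}
	return strings.TrimPrefix(v, "v")
}

func main() {
	fmt.Println(displayVersion("v1.12"))  // 1.12
	fmt.Println(displayVersion(" 1.13 ")) // 1.13
}
```

Note that only a prefix `v` is stripped, so an inner "v" (as in a commit hash) is untouched.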
web/static/js/app.js (1219 changed lines) — diff suppressed because it is too large
@@ -1,5 +1,5 @@
 <!DOCTYPE html>
-<html lang="ru">
+<html lang="en">
 <head>
     <meta charset="UTF-8">
     <meta name="viewport" content="width=device-width, initial-scale=1.0">
@@ -7,57 +7,63 @@
     <link rel="stylesheet" href="/static/css/style.css">
 </head>
 <body>
-    <header>
-        <div class="app-header-row">
-            <div class="app-header-brand">
-                <h1>LOGPile <span class="header-domain">mchus.pro</span></h1>
-                <p>Анализатор диагностических данных BMC/IPMI</p>
-            </div>
-            <div id="header-log-meta" class="header-log-meta hidden">
-                <div class="header-actions">
-                    <button id="clear-btn" class="hidden" onclick="clearData()">Очистить данные</button>
-                    <button id="header-raw-btn" class="hidden" onclick="exportData('json')">Export Raw Data</button>
-                    <button id="header-reanimator-btn" class="hidden" onclick="exportData('reanimator')">Экспорт Reanimator</button>
-                    <button id="restart-btn" onclick="restartApp()">Перезапуск</button>
-                    <button id="exit-btn" onclick="exitApp()">Выход</button>
-                </div>
+    <header class="page-header">
+        <div class="page-header-brand">
+            <p class="page-eyebrow">Diagnostic Workbench</p>
+            <h1>LOGPile</h1>
+            <p class="page-subtitle">BMC diagnostic data analyzer</p>
+        </div>
+        <div id="header-log-meta" class="header-log-meta hidden">
+            <div class="header-actions">
+                <button id="clear-btn" class="header-action hidden" onclick="clearData()">Clear Data</button>
+                <button id="header-raw-btn" class="header-action hidden" onclick="exportData('json')">Raw Data</button>
+                <button id="header-reanimator-btn" class="header-action hidden" onclick="exportData('reanimator')">Reanimator</button>
+                <button id="restart-btn" class="header-action" onclick="restartApp()">Restart</button>
+                <button id="exit-btn" class="header-action" onclick="exitApp()">Exit</button>
+            </div>
         </div>
     </header>

-    <main>
-        <section id="upload-section">
-            <div class="source-switch" role="tablist" aria-label="Источник данных">
-                <button type="button" class="source-switch-btn active" data-source-type="archive">Архив</button>
+    <main class="page-main">
+        <section id="upload-section" class="control-deck">
+            <div class="source-switch" role="tablist" aria-label="Data source">
+                <button type="button" class="source-switch-btn active" data-source-type="archive">Archive</button>
                 <button type="button" class="source-switch-btn" data-source-type="api">API</button>
                 <button type="button" class="source-switch-btn" data-source-type="convert">Convert</button>
             </div>

-            <div id="archive-source-content">
-                <div class="upload-area" id="drop-zone">
-                    <p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
-                    <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
-                    <button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
-                    <p class="hint">Поддерживаемые форматы: tar.gz, tar, tgz, sds, zip, json, txt, log</p>
+            <div id="archive-source-content" class="surface-panel upload-panel">
+                <h2>Open Archive</h2>
+                <p>Upload a support archive, plain log, or raw JSON snapshot to open the hardware report.</p>
+                <div class="upload-area upload-dropzone" id="drop-zone">
+                    <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
+                    <span class="upload-kicker">Archive Import</span>
+                    <strong>Drop a file here</strong>
+                    <span class="upload-copy">LOGPile will parse it and open the report immediately.</span>
+                    <div class="upload-actions">
+                        <button type="button" onclick="document.getElementById('file-input').click()">Select File</button>
+                    </div>
+                    <p class="hint">Supported formats: `.ahs`, `.tar.gz`, `.tar`, `.tgz`, `.sds`, `.zip`, `.json`, `.txt`, `.log`</p>
                 </div>
                 <div id="upload-status"></div>
                 <div id="parsers-info" class="parsers-info"></div>
            </div>

-            <div id="api-source-content" class="api-placeholder hidden">
+            <div id="api-source-content" class="surface-panel upload-panel hidden">
+                <h2>BMC API</h2>
+                <p>Validate access and start live collection through the production Redfish pipeline.</p>
                 <form id="api-connect-form" novalidate>
-                    <h3>Подключение к BMC API</h3>
                     <div id="api-form-errors" class="form-errors hidden"></div>

                     <div class="api-form-grid">
                         <label class="api-form-field" for="api-host">
                             <span>Host</span>
-                            <input id="api-host" name="host" type="text" placeholder="10.0.0.10 или bmc.example.local">
+                            <input id="api-host" name="host" type="text" placeholder="10.0.0.10 or bmc.example.local">
                             <span class="field-error" data-error-for="host"></span>
                         </label>

                         <label class="api-form-field" for="api-port">
-                            <span>Порт</span>
+                            <span>Port</span>
                             <input id="api-port" name="port" type="number" min="1" max="65535" value="443" placeholder="443">
                             <span class="field-error" data-error-for="port"></span>
                         </label>
@@ -69,55 +75,52 @@
                         </label>

                         <label class="api-form-field" id="api-password-field" for="api-password">
-                            <span>Пароль</span>
+                            <span>Password</span>
                             <input id="api-password" name="password" type="password" autocomplete="current-password">
                             <span class="field-error" data-error-for="password"></span>
                         </label>
                     </div>

                     <div class="api-form-actions">
-                        <button id="api-connect-btn" type="button">Подключиться</button>
+                        <button id="api-connect-btn" type="button">Connect</button>
                     </div>
                     <div id="api-connect-status" class="api-connect-status"></div>
                     <div id="api-probe-options" class="api-probe-options hidden">
-                        <label class="api-form-checkbox" for="api-power-on">
-                            <input id="api-power-on" name="power_on_if_host_off" type="checkbox">
-                            <span>Включить перед сбором</span>
-                        </label>
-                        <label class="api-form-checkbox" for="api-power-off">
-                            <input id="api-power-off" name="stop_host_after_collect" type="checkbox">
-                            <span>Выключить после сбора</span>
-                        </label>
                         <div class="api-probe-options-separator"></div>
+                        <div id="api-host-off-warning" class="api-host-off-warning hidden">
+                            ⚠ Host is powered off. Inventory data may be incomplete.
+                        </div>
                         <label class="api-form-checkbox" for="api-debug-payloads">
                             <input id="api-debug-payloads" name="debug_payloads" type="checkbox">
-                            <span>Сбор расширенных метрик для отладки</span>
+                            <span>Collect extended diagnostics</span>
                         </label>
                         <div class="api-form-actions">
-                            <button id="api-collect-btn" type="submit">Собрать</button>
+                            <button id="api-collect-btn" type="submit">Collect</button>
                         </div>
                     </div>
                 </form>

                 <section id="api-job-status" class="job-status hidden" aria-live="polite">
                     <div class="job-status-header">
-                        <h4>Статус задачи сбора</h4>
-                        <button id="cancel-job-btn" type="button">Отменить</button>
+                        <h4>Collection Job Status</h4>
+                        <div class="job-status-actions">
+                            <button id="skip-hung-btn" type="button" class="hidden" title="Abort hung requests and continue with analysis of collected data">Skip Hung Requests</button>
+                            <button id="cancel-job-btn" type="button">Cancel</button>
+                        </div>
                     </div>
                     <div class="job-status-meta">
                         <div><span class="meta-label">jobId:</span> <code id="job-id-value">-</code></div>
                         <div>
-                            <span class="meta-label">Статус:</span>
+                            <span class="meta-label">Status:</span>
                             <span id="job-status-value" class="job-status-badge">Queued</span>
                         </div>
-                        <div><span class="meta-label">Этап:</span> <span id="job-progress-value">Сбор данных...</span></div>
+                        <div><span class="meta-label">Stage:</span> <span id="job-progress-value">Collecting data...</span></div>
                         <div><span class="meta-label">ETA:</span> <span id="job-eta-value">-</span></div>
                     </div>
-                    <div class="job-progress" aria-label="Прогресс задачи">
+                    <div class="job-progress" aria-label="Job progress">
                         <div id="job-progress-bar" class="job-progress-bar" style="width: 0%">0%</div>
                     </div>
                     <div id="job-active-modules" class="job-active-modules hidden">
-                        <p class="meta-label">Активные модули:</p>
+                        <p class="meta-label">Active modules:</p>
                         <div id="job-active-modules-list" class="job-module-chips"></div>
                     </div>
                     <div id="job-debug-info" class="job-debug-info hidden">
@@ -126,23 +129,23 @@
                         <div id="job-phase-telemetry" class="job-phase-telemetry"></div>
                     </div>
                     <div class="job-status-logs">
-                        <p class="meta-label">Журнал шагов:</p>
+                        <p class="meta-label">Step log:</p>
                         <ul id="job-logs-list"></ul>
                     </div>
                 </section>
             </div>

-            <div id="convert-source-content" class="api-placeholder hidden">
-                <h3>Пакетная выгрузка Reanimator</h3>
-                <p>Выберите папку с файлами поддерживаемого типа. Для каждого файла будет создан отдельный экспорт Reanimator.</p>
+            <div id="convert-source-content" class="surface-panel upload-panel hidden">
+                <h2>Batch Convert</h2>
+                <p>Select a folder with supported files. A separate Reanimator export will be produced for each file.</p>
                 <div class="api-form-actions">
                     <input type="file" id="convert-folder-input" webkitdirectory directory multiple hidden>
-                    <button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Выбрать папку</button>
-                    <button id="convert-run-btn" type="button">Конвертировать в Reanimator</button>
+                    <button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Choose Folder</button>
+                    <button id="convert-run-btn" type="button">Convert to Reanimator</button>
                 </div>
                 <div id="convert-progress" class="convert-progress hidden" aria-live="polite">
                     <div class="convert-progress-meta">
-                        <span id="convert-progress-label">Подготовка...</span>
+                        <span id="convert-progress-label">Preparing...</span>
                         <span id="convert-progress-value">0%</span>
                     </div>
                     <div class="convert-progress-track">
@@ -155,12 +158,12 @@
         </section>

         <section id="data-section" class="hidden">
-            <section class="result-panel">
+            <section class="viewer-panel">
                 <div class="audit-viewer-shell">
                     <iframe
                         id="audit-viewer-frame"
                         class="audit-viewer-frame"
-                        title="Reanimator chart viewer"
+                        title="Hardware report"
                         loading="eager"
                         scrolling="no"
                         referrerpolicy="same-origin">
@@ -170,11 +173,9 @@
             </section>
     </main>

-    <footer>
-        <div class="footer-buttons">
-        </div>
+    <footer class="page-footer">
         <div class="footer-info">
-            <p>Автор: <a href="https://mchus.pro" target="_blank">mchus.pro</a> | <a href="https://git.mchus.pro/mchus/logpile" target="_blank">Git Repository</a>{{if .AppVersion}} | v{{.AppVersion}}{{end}}</p>
+            <p>{{if .AppVersion}}LOGPile {{.AppVersion}}{{end}}{{if and .AppVersion .ChartVersion}} · {{end}}{{if .ChartVersion}}Chart {{.ChartVersion}}{{end}}</p>
         </div>
     </footer>