Compare commits

22 Commits

| SHA1 |
|---|
| `7e9af89c46` |
| `db74df9994` |
| `bb82387d48` |
| `475f6ac472` |
| `93ce676f04` |
| `c47c34fd11` |
| `d8c3256e41` |
| `1b2d978d29` |
| `0f310d57c4` |
| `3547ef9083` |
| `99f0d6217c` |
| `8acbba3cc9` |
| `8942991f0c` |
| `9b71c4a95f` |
| `125f77ef69` |
| `063b08d5fb` |
| `e3ff1745fc` |
| `96e65d8f65` |
| `30409eef67` |
| `65e65968cf` |
| `380c199705` |
| `d650a6ba1c` |

Submodule bible updated: 0c829182a1...52444350c1

@@ -27,11 +27,14 @@ All modes converge on the same normalized hardware model and exporter pipeline.

## Current vendor coverage

- Dell TSR
- Reanimator Easy Bee support bundles
- H3C SDS G5/G6
- Inspur / Kaytus
- HPE iLO AHS
- NVIDIA HGX Field Diagnostics
- NVIDIA Bug Report
- Unraid
- xFusion iBMC dump / file export
- XigmaNAS
- Generic fallback parser

@@ -20,6 +20,7 @@ LOGPile remains responsible for upload, collection, parsing, normalization, and

```text
cmd/logpile/main.go   entrypoint and CLI flags
internal/server/      HTTP handlers, jobs, upload/export flows
internal/ingest/      source-family orchestration for upload and raw replay
internal/collector/   live collection and Redfish replay
internal/analyzer/    shared analysis helpers
internal/parser/      archive extraction and parser dispatch
```

@@ -50,18 +51,21 @@ Failed or canceled jobs do not overwrite the previous dataset.

### Upload

1. `POST /api/upload` receives multipart field `archive`
2. `internal/ingest.Service` resolves the source family
3. JSON inputs are checked for raw-export package or `AnalysisResult` snapshot
4. Non-JSON archives go through the archive parser family
5. Archive metadata is normalized onto `AnalysisResult`
6. Result becomes the current in-memory dataset

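The dispatch order in the upload steps can be sketched as a small resolver. `ingest.Service`, raw-export packages, and `AnalysisResult` snapshots are named in the document; the function signature and the `raw_payloads` probe below are illustrative assumptions, not the real API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// resolveSourceFamily mirrors the upload flow above: JSON inputs are
// inspected for a raw-export package or an AnalysisResult snapshot,
// and everything else is handed to the archive parser family.
func resolveSourceFamily(name string, payload []byte) string {
	if strings.HasSuffix(name, ".json") && json.Valid(payload) {
		var probe map[string]json.RawMessage
		_ = json.Unmarshal(payload, &probe)
		if _, ok := probe["raw_payloads"]; ok {
			return "raw-export replay"
		}
		return "AnalysisResult snapshot"
	}
	return "archive parser family"
}

func main() {
	fmt.Println(resolveSourceFamily("export.json", []byte(`{"raw_payloads":{}}`))) // raw-export replay
	fmt.Println(resolveSourceFamily("tsr.zip", []byte("PK")))                      // archive parser family
}
```

The point of the sketch is the precedence: JSON shape checks run first, and the archive parser family is the catch-all.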
### Live collect

1. `POST /api/collect` validates request fields
2. Server creates an async job and returns `202 Accepted`
3. Selected collector gathers raw data
4. For Redfish, the collector runs minimal discovery, matches Redfish profiles, and builds an acquisition plan
5. The collector applies profile tuning hints (for example crawl breadth, prefetch, bounded plan-B passes)
6. The collector saves `raw_payloads.redfish_tree` plus acquisition diagnostics
7. Result is normalized, source metadata applied, and state replaced on success

### Batch convert

@@ -76,6 +80,10 @@ Failed or canceled jobs do not overwrite the previous dataset.

Live Redfish collection and offline Redfish re-analysis must use the same replay path.
The collector first captures `raw_payloads.redfish_tree`, then the replay logic builds the normalized result.

Redfish is being split into two coordinated phases:

- acquisition: profile-driven snapshot collection strategy
- analysis: replay over the saved snapshot with the same profile framework

## PCI IDs lookup

Lookup order:

@@ -6,6 +6,12 @@ Core files:

- `registry.go` for protocol registration
- `redfish.go` for live collection
- `redfish_replay.go` for replay from raw payloads
- `redfish_replay_gpu.go` for profile-driven GPU replay collectors and GPU fallback helpers
- `redfish_replay_storage.go` for profile-driven storage replay collectors and storage recovery helpers
- `redfish_replay_inventory.go` for replay inventory collectors (PCIe, NIC, BMC MAC, NIC enrichment)
- `redfish_replay_fru.go` for board fallback helpers and Assembly/FRU replay extraction
- `redfish_replay_profiles.go` for profile-driven replay helpers and vendor-aware recovery helpers
- `redfishprofile/` for Redfish profile matching and acquisition/analysis hooks
- `ipmi_mock.go` for the placeholder IPMI implementation
- `types.go` for request/progress contracts

@@ -50,11 +56,79 @@ It discovers and follows Redfish resources dynamically from root collections such as:

- `Chassis`
- `Managers`

After minimal discovery the collector builds `MatchSignals` and selects a Redfish profile mode:

- `matched` when one or more profiles score with high confidence
- `fallback` when vendor/platform confidence is low; in this mode the collector aggregates safe additive profile probes to maximize snapshot completeness

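The score-based mode choice above can be sketched as follows. `MatchSignals` and the real scoring live in `redfishprofile/`; the threshold and map-based scores here are assumptions for the example:

```go
package main

import "fmt"

// selectProfileMode returns "matched" when any profile scores with
// high confidence, otherwise "fallback", where the collector would
// aggregate safe additive probes from all profiles.
func selectProfileMode(scores map[string]int, highConfidence int) string {
	for _, s := range scores {
		if s >= highConfidence {
			return "matched"
		}
	}
	return "fallback"
}

func main() {
	fmt.Println(selectProfileMode(map[string]int{"msi": 92, "generic": 10}, 80)) // matched
	fmt.Println(selectProfileMode(map[string]int{"generic": 15}, 80))            // fallback
}
```

The important property is that fallback is not an error path: it is a deliberate completeness-over-speed mode.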
Profile modules may contribute:

- primary acquisition seeds
- bounded `PlanBPaths` for secondary recovery
- critical paths
- acquisition notes/diagnostics
- tuning hints such as snapshot document cap, prefetch behavior, and expensive post-probe toggles
- post-probe policy for numeric collection recovery, direct NVMe `Disk.Bay` recovery, and sensor post-probe enablement
- recovery policy for critical collection member retry, slow numeric plan-B probing, and profile-specific plan-B activation
- scoped path policy for discovered `Systems/*`, `Chassis/*`, and `Managers/*` branches when a profile needs extra seeds/critical targets beyond the vendor-neutral core set
- prefetch policy for which critical paths are eligible for adaptive prefetch and which path shapes are explicitly excluded

Model- or topology-specific `CriticalPaths` and profile `PlanBPaths` must live in the profile module that owns the behavior. The collector core may execute those paths, but it should not hardcode vendor-specific recovery targets. The same rule applies to expensive post-probe decisions: the collector core may execute bounded post-probe loops, but profiles own whether those loops are enabled for a given platform shape. It also applies to critical recovery passes: the collector core may run bounded plan-B loops, but profiles own whether member retry, slow numeric recovery, and profile-specific plan-B passes are enabled.

When a profile needs extra discovered-path branches such as storage controller subtrees, it must provide them as scoped suffix policy rather than by hardcoding platform-shaped suffixes into the collector core baseline seed list. The same applies to prefetch shaping: the collector core may execute adaptive prefetch, but profiles own the include/exclude rules for which critical paths should participate. The same applies to critical inventory shaping: the collector core should keep only a minimal vendor-neutral critical baseline, while profiles own additional system/chassis/manager critical suffixes and top-level critical targets.

Resolved live acquisition plans should be built inside `redfishprofile/`, not by hand in `redfish.go`. The collector core should receive discovered resources plus the selected profile plan and then execute the resolved seed/critical paths. When profile behavior depends on what discovery actually returned, use a post-discovery refinement hook in `redfishprofile/` instead of hardcoding guessed absolute paths in the static plan. MSI GPU chassis refinement is the reference example.

Live Redfish collection must expose profile-match diagnostics:

- collector logs must include the selected modules and the score for every known module
- job status responses must carry structured `active_modules` and `module_scores`
- the collect page should render active modules as chips from structured status data, not by parsing log lines

Profile matching may use stable platform grammar signals in addition to vendor strings:

- discovered member/resource naming from lightweight discovery collections
- firmware inventory member IDs
- OEM action names and linked target paths embedded in discovery documents
- replay-only snapshot hints such as OEM assembly/type markers when they are present in `raw_payloads.redfish_tree`

On replay, profile-derived analysis directives may enable vendor-specific inventory linking helpers such as processor-GPU fallback, chassis-ID alias resolution, and bounded storage recovery. Replay should now resolve a structured analysis plan inside `redfishprofile/`, analogous to the live acquisition plan. The replay core may execute collectors against the resolved directives, but snapshot-aware vendor decisions should live in profile analysis hooks, not in `redfish_replay.go`. GPU and storage replay executors should consume the resolved analysis plan directly, not a raw `AnalysisDirectives` struct, so the boundary between planning and execution stays explicit.

Profile matching and acquisition tuning must be regression-tested against repo-owned compact fixtures under `internal/collector/redfishprofile/testdata/`, derived from representative raw-export snapshots, for at least MSI and Supermicro shapes. When multiple raw-export snapshots exist for the same platform, profile selection must remain stable across those sibling fixtures unless the topology actually changes. Analysis-plan metadata should be stored in replay raw payloads so vendor hook activation is debuggable offline.

### Stored raw data

Important raw payloads:

- `raw_payloads.redfish_tree`
- `raw_payloads.redfish_fetch_errors`
- `raw_payloads.redfish_profiles`
- `raw_payloads.source_timezone` when available

### Snapshot crawler rules

@@ -68,7 +142,7 @@ Important raw payloads:

When changing collection logic:

1. Prefer profile modules over ad-hoc vendor branches in the collector core
2. Keep expensive probing bounded
3. Deduplicate by serial, then BDF, then location/model fallbacks
4. Preserve replay determinism from saved raw payloads

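The deduplication precedence in step 3 can be sketched as a key function. The serial/BDF/location-model order comes from the document; the `Device` struct and key format are illustrative stand-ins for the real model:

```go
package main

import "fmt"

// Device is a trimmed illustrative stand-in for a normalized inventory
// entry; the real model struct in the codebase is richer.
type Device struct {
	Serial, BDF, Location, Model string
}

// dedupKey applies the documented precedence: serial first, then PCI
// BDF, then location/model as a last resort.
func dedupKey(d Device) string {
	switch {
	case d.Serial != "":
		return "serial:" + d.Serial
	case d.BDF != "":
		return "bdf:" + d.BDF
	default:
		return "loc:" + d.Location + "/" + d.Model
	}
}

func main() {
	a := Device{Serial: "S123", BDF: "0000:3b:00.0"}
	b := Device{Serial: "S123"} // same serial wins over differing detail
	c := Device{BDF: "0000:5e:00.0"}
	fmt.Println(dedupKey(a) == dedupKey(b)) // true
	fmt.Println(dedupKey(a) == dedupKey(c)) // false
}
```

Serial is preferred because it survives re-enumeration; BDF can change when slots move, and location/model is only a last-resort tiebreaker.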
@@ -50,12 +50,15 @@ When `vendor_id` and `device_id` are known but the model name is missing or generic

| Vendor ID | Input family | Notes |
|-----------|--------------|-------|
| `dell` | TSR ZIP archives | Broad hardware, firmware, sensors, lifecycle events |
| `easy_bee` | `bee-support-*.tar.gz` | Imports embedded `export/bee-audit.json` snapshot from reanimator-easy-bee bundles |
| `h3c_g5` | H3C SDS G5 bundles | INI/XML/CSV-driven hardware and event parsing |
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |
| `xfusion` | xFusion iBMC `tar.gz` dump / file export | AppDump + RTOSDump + LogDump merge for hardware and firmware |
| `xigmanas` | XigmaNAS plain logs | FreeBSD/NAS-oriented inventory |
| `generic` | fallback | Low-confidence text fallback when nothing else matches |

@@ -120,6 +123,55 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

---

### HPE iLO AHS (`hpe_ilo_ahs`)

**Status:** Ready (v1.0.0). Tested on an HPE ProLiant Gen11 `.ahs` export from iLO 6.

**Archive format:** `.ahs` single-file Active Health System export.

**Detection:** Single-file input with an `ABJR` container header and HPE AHS member names such as `CUST_INFO.DAT`, `*.zbb`, `ilo_boot_support.zbb`.

**Extracted data (current):**

- System board identity (manufacturer, model, serial, part number)
- iLO / System ROM / SPS top-level firmware
- CPU inventory (model-level)
- Memory DIMM inventory for populated slots
- PSU inventory
- PCIe / OCP NIC inventory from SMBIOS-style slot records
- Storage controller and physical drives from embedded Redfish JSON inside `zbb` members
- Basic iLO event log entries with timestamps when present

**Implementation note:** The format is proprietary. Parser support is intentionally hybrid: container parsing (`ABJR` + gzip) plus structured extraction from embedded Redfish objects and printable SMBIOS/FRU payloads. This is sufficient for inventory-grade parsing without decoding the entire internal `zbb` schema.

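The detection rule above can be sketched as a simple signature check. The `ABJR` marker and the member names come from the text; the byte-scan approach, offsets, and function shape here are illustrative assumptions, not the parser's real detection logic:

```go
package main

import (
	"bytes"
	"fmt"
)

// looksLikeAHS is an illustrative sniffer for the detection rule above:
// an `ABJR` container marker plus known HPE AHS member names.
func looksLikeAHS(data []byte) bool {
	if !bytes.Contains(data, []byte("ABJR")) {
		return false
	}
	for _, marker := range [][]byte{[]byte("CUST_INFO.DAT"), []byte(".zbb")} {
		if bytes.Contains(data, marker) {
			return true
		}
	}
	return false
}

func main() {
	sample := []byte("ABJR....CUST_INFO.DAT....")
	fmt.Println(looksLikeAHS(sample))        // true
	fmt.Println(looksLikeAHS([]byte("PK"))) // false: looks like a ZIP instead
}
```

Requiring both the container marker and a member name keeps single-byte-pattern false positives out of the source-family dispatch.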
---

### xFusion iBMC Dump / File Export (`xfusion`)

**Status:** Ready (v1.1.0). Tested on xFusion G5500 V7 `tar.gz` exports.

**Archive format:** `tar.gz` dump exported from the iBMC UI, including `AppDump/`, `RTOSDump/`, and `LogDump/` trees.

**Detection:** `AppDump/FruData/fruinfo.txt`, `AppDump/card_manage/card_info`, `RTOSDump/versioninfo/app_revision.txt`, and `LogDump/netcard/netcard_info.txt`.

**Extracted data (current):**

- Board / FRU inventory from `fruinfo.txt`
- CPU inventory from `CpuMem/cpu_info`
- Memory DIMM inventory from `CpuMem/mem_info`
- GPU inventory from `card_info`
- OCP NIC inventory by merging `card_info` with `LogDump/netcard/netcard_info.txt`
- PSU inventory from `BMC/psu_info.txt`
- Physical storage from `StorageMgnt/PhysicalDrivesInfo/*/disk_info`
- System firmware entries from `RTOSDump/versioninfo/app_revision.txt`
- Maintenance events from `LogDump/maintenance_log`

---

### Generic text fallback (`generic`)

**Status:** Ready (v1.0.0).

@@ -139,10 +191,13 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

| Vendor | ID | Status | Tested on |
|--------|----|--------|-----------|
| Dell TSR | `dell` | Ready | TSR nested zip archives |
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |
| xFusion iBMC dump | `xfusion` | Ready | G5500 V7 file-export `tar.gz` bundles |
| XigmaNAS | `xigmanas` | Ready | FreeBSD NAS logs |
| H3C SDS G5 | `h3c_g5` | Ready | H3C UniServer R4900 G5 SDS archives |
| H3C SDS G6 | `h3c_g6` | Ready | H3C UniServer R4700 G6 SDS archives |

@@ -258,6 +258,9 @@ at parse time before storing in any model struct. Use the regex

**Date:** 2026-03-12
**Context:** `shouldAdaptiveNVMeProbe` was introduced in `2fa4a12` to recover NVMe drives on Supermicro BMCs that expose empty `Drives` collections but serve disks at direct `Disk.Bay.N` paths. The function returns `true` for any chassis with an empty `Members` array. On Supermicro HGX systems (SYS-A21GE-NBRT and similar) ~35 sub-chassis (GPU, NVSwitch, PCIeRetimer, ERoT, IRoT, BMC, FPGA) all carry `ChassisType=Module/Component/Zone` and

@@ -274,6 +277,188 @@ for `Enclosure`, `RackMount`, and any unrecognised type (fail-safe).

both the excluded types and the storage-capable types (see `TestChassisTypeCanHaveNVMe` and `TestNVMePostProbeSkipsNonStorageChassis`).

## ADL-019 — Redfish post-probe recovery is profile-owned acquisition policy

**Date:** 2026-03-18
**Context:** Numeric collection post-probe and direct NVMe `Disk.Bay` recovery were still controlled by collector-core heuristics, which kept platform-specific acquisition behavior in `redfish.go` and made vendor/topology refactoring incomplete.
**Decision:** Move expensive Redfish post-probe enablement into profile-owned acquisition policy. The collector core may execute bounded post-probe loops, but profiles must explicitly enable:

- numeric collection post-probe
- direct NVMe `Disk.Bay` recovery
- sensor collection post-probe

**Consequences:**

- The generic collector flow no longer implicitly turns on storage/NVMe recovery for every platform.
- Supermicro-specific direct NVMe recovery and generic numeric collection recovery are now regression-tested through profile fixtures.
- Future platform storage/post-probe behavior must be added through profile tuning, not new vendor-shaped `if` branches in the collector core.

## ADL-020 — Redfish critical plan-B activation is profile-owned recovery policy

**Date:** 2026-03-18
**Context:** `critical plan-B` and `profile plan-B` were still effectively always-on collector behavior once paths were present, including critical collection member retry and slow numeric child probing. That kept acquisition recovery semantics in `redfish.go` instead of the profile layer.
**Decision:** Move plan-B activation into profile-owned recovery policy. Profiles must explicitly enable:

- critical collection member retry
- slow numeric probing during critical plan-B
- the profile-specific plan-B pass

**Consequences:**

- Recovery behavior is now observable in raw Redfish diagnostics alongside other tuning.
- Generic/fallback recovery remains available through profile policy instead of implicit collector defaults.
- Future platform-specific plan-B behavior must be introduced through profile tuning and tests, not through new unconditional collector branches.

## ADL-021 — Extra discovered-path storage seeds must be profile-scoped, not core-baseline

**Date:** 2026-03-18
**Context:** The collector core baseline seed list still contained storage-specific discovered-path suffixes such as `SimpleStorage` and `Storage/IntelVROC/*`. These are useful on some platforms, but they are acquisition extensions layered on top of discovered `Systems/*` resources, not part of the minimal vendor-neutral Redfish baseline.
**Decision:** Move such discovered-path expansions into profile-owned scoped path policy. The collector core keeps the vendor-neutral baseline; profiles may add extra system/chassis/manager suffixes that are expanded over discovered members during acquisition planning.
**Consequences:**

- Platform-shaped storage discovery no longer lives in `redfish.go` baseline seed construction.
- Extra discovered-path branches are visible in plan diagnostics and fixture regression tests.
- Future model/vendor storage path expansions must be added through scoped profile policy instead of editing the shared baseline seed list.

## ADL-022 — Adaptive prefetch eligibility is profile-owned policy

**Date:** 2026-03-18
**Context:** The adaptive prefetch executor was still driven by hardcoded include/exclude path rules in `redfish.go`. That made GPU/storage/network prefetch shaping part of collector-core knowledge rather than profile-owned acquisition policy.
**Decision:** Move prefetch eligibility rules into profile tuning. The collector core still runs adaptive prefetch, but profiles provide:

- `IncludeSuffixes` for critical paths eligible for prefetch
- `ExcludeContains` for path shapes that must never be prefetched

**Consequences:**

- Prefetch behavior is now visible in raw Redfish diagnostics and test fixtures.
- Platform- or topology-specific prefetch shaping no longer requires editing collector-core string lists.
- Future prefetch tuning must be introduced through profiles and regression tests.

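The `IncludeSuffixes`/`ExcludeContains` policy named in ADL-022 can be sketched as a small eligibility check. The field names come from the decision; the sample rules and the exclude-before-include ordering are illustrative assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// prefetchEligible: a critical path participates in adaptive prefetch
// only when an include suffix matches and no exclude fragment does.
// Excludes are checked first so they always win.
func prefetchEligible(path string, includeSuffixes, excludeContains []string) bool {
	for _, frag := range excludeContains {
		if strings.Contains(path, frag) {
			return false
		}
	}
	for _, suf := range includeSuffixes {
		if strings.HasSuffix(path, suf) {
			return true
		}
	}
	return false
}

func main() {
	include := []string{"/Thermal", "/Sensors"}
	exclude := []string{"/LogServices/"}
	fmt.Println(prefetchEligible("/redfish/v1/Chassis/1/Thermal", include, exclude))                  // true
	fmt.Println(prefetchEligible("/redfish/v1/Managers/1/LogServices/Log/Thermal", include, exclude)) // false
}
```

Keeping both lists in profile tuning means a new platform shape only edits its own profile, never the executor.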
## ADL-023 — Core critical baseline is roots-only; critical shaping is profile-owned

**Date:** 2026-03-18
**Context:** `redfishCriticalEndpoints(...)` still encoded a broad set of system/chassis/manager critical branches directly in the collector core. This mixed minimal crawl invariants with profile-specific acquisition shaping.
**Decision:** Reduce the collector-core critical baseline to vendor-neutral roots only:

- `/redfish/v1`
- discovered `Systems/*`
- discovered `Chassis/*`
- discovered `Managers/*`

Profiles now own additional critical shaping through:

- scoped critical suffix policy for discovered resources
- explicit top-level `CriticalPaths`

**Consequences:**

- Critical inventory breadth is now explained by the acquisition plan, not hidden in collector helper defaults.
- The generic profile still provides the previous broad critical coverage, so behavior stays stable.
- Future critical-path tuning must be implemented in profiles and regression-tested there.

## ADL-024 — Live Redfish execution plans are resolved inside redfishprofile

**Date:** 2026-03-18
**Context:** Even after moving seeds, scoped paths, critical shaping, recovery, and prefetch policy into profiles, `redfish.go` still manually merged discovered resources with those policy fragments. That left acquisition-plan resolution logic in the collector core.
**Decision:** Introduce `redfishprofile.ResolveAcquisitionPlan(...)` as the boundary between profile planning and collector execution. `redfishprofile` now resolves:

- baseline seeds
- baseline critical roots
- scoped path expansions
- explicit profile seed/critical/plan-B paths

The collector core consumes the resolved plan and executes it.
**Consequences:**

- Acquisition planning logic is now testable in `redfishprofile` without going through the live collector.
- `redfish.go` no longer owns path-resolution helpers for seeds/critical planning.
- This creates a clean next step toward true per-profile acquisition hooks beyond static policy fragments.

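The plan-resolution boundary in ADL-024 can be sketched with simplified types. `ResolveAcquisitionPlan` is named in the decision; the struct fields and merge order below are illustrative assumptions, not the real `redfishprofile` API:

```go
package main

import "fmt"

// ProfilePolicy and Plan are trimmed illustrative types; the real
// structs in redfishprofile/ carry more policy fields.
type ProfilePolicy struct {
	SystemSuffixes []string // scoped expansions over discovered Systems/*
	CriticalPaths  []string // explicit top-level critical targets
}

type Plan struct {
	Seeds    []string
	Critical []string
}

// resolveAcquisitionPlan merges vendor-neutral baseline roots with
// profile-scoped expansions, mirroring the resolution steps listed
// above. The collector core would only execute the returned plan.
func resolveAcquisitionPlan(systems []string, p ProfilePolicy) Plan {
	plan := Plan{
		Seeds:    append([]string{"/redfish/v1"}, systems...),
		Critical: append([]string{"/redfish/v1"}, systems...),
	}
	for _, sys := range systems {
		for _, suf := range p.SystemSuffixes {
			plan.Seeds = append(plan.Seeds, sys+"/"+suf)
		}
	}
	plan.Critical = append(plan.Critical, p.CriticalPaths...)
	return plan
}

func main() {
	p := ProfilePolicy{
		SystemSuffixes: []string{"Storage"},
		CriticalPaths:  []string{"/redfish/v1/UpdateService/FirmwareInventory"},
	}
	plan := resolveAcquisitionPlan([]string{"/redfish/v1/Systems/Self"}, p)
	fmt.Println(len(plan.Seeds), len(plan.Critical)) // 3 3
}
```

Because resolution is a pure function of discovered resources plus policy, it can be unit-tested against fixtures without a live BMC.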
## ADL-025 — Post-discovery acquisition refinement belongs to profile hooks

**Date:** 2026-03-18
**Context:** Some acquisition behavior depends not only on vendor/model hints, but on what the lightweight Redfish discovery actually returned. Static absolute path lists in profile plans are too rigid for such cases and reintroduce guessed platform knowledge.
**Decision:** Add a post-discovery acquisition refinement hook to Redfish profiles. Profiles may mutate the resolved execution plan after discovered `Systems/*`, `Chassis/*`, and `Managers/*` are known.

Concrete uses:

- MSI derives GPU chassis seeds and `.../Sensors` critical/plan-B paths from discovered `Chassis/GPU*` resources instead of hardcoded `GPU1..GPU4` absolute paths in the static plan.
- Supermicro derives `UpdateService/Oem/Supermicro/FirmwareInventory` critical/plan-B paths from resource hints instead of carrying that absolute path in the static plan.
- Dell derives `Managers/iDRAC.Embedded.*` acquisition paths from discovered manager resources instead of carrying `Managers/iDRAC.Embedded.1` as a static absolute path.

**Consequences:**

- Profile modules can react to actual discovery results without pushing conditional logic back into `redfish.go`.
- Diagnostics still show the final refined plan because the collector stores the refined plan, not only the pre-refinement template.
- Future vendor-specific discovery-dependent acquisition behavior should be implemented through this hook rather than new collector-core branches.

## ADL-026 — Replay analysis uses a resolved profile plan, not ad-hoc directives only

**Date:** 2026-03-18
**Context:** Replay still relied on a flat `AnalysisDirectives` struct assembled centrally, while vendor-specific conditions often depended on the actual snapshot shape. That made analysis behavior harder to explain and kept too much vendor logic in generic replay collectors.
**Decision:** Introduce `redfishprofile.ResolveAnalysisPlan(...)` for replay. The resolved analysis plan contains:

- the active match result
- resolved analysis directives
- analysis notes explaining snapshot-aware hook activation

Profiles may refine this plan using the snapshot and discovered resources before replay collectors run.

First concrete uses:

- MSI enables processor-GPU fallback and MSI chassis lookup only when the snapshot actually contains GPU processors and `Chassis/GPU*`
- HGX enables processor-GPU alias fallback from actual HGX/GPU_SXM topology signals in the snapshot
- Supermicro enables NVMe backplane and known-controller recovery from actual snapshot paths

**Consequences:**

- Replay behavior is now closer to the acquisition architecture: a resolved profile plan feeds the executor.
- `redfish_analysis_plan` is stored in raw payload metadata for offline debugging.
- Future analysis-side vendor logic should move into profile refinement hooks instead of growing the central directive builder.

## ADL-027 — Replay GPU/storage executors consume resolved analysis plans

**Date:** 2026-03-18
**Context:** Even after introducing `ResolveAnalysisPlan(...)`, replay GPU/storage collectors still accepted a raw `AnalysisDirectives` struct. That preserved an implicit shortcut from the old design and weakened the plan/executor boundary.
**Decision:** Replay GPU/storage executors now accept `redfishprofile.ResolvedAnalysisPlan` directly. The executor reads resolved directives from the plan instead of being passed a standalone directive bundle.
**Consequences:**

- GPU and storage replay execution now follows the same architectural pattern as acquisition: resolve the plan first, execute second.
- Future profile-owned execution helpers can use plan notes or additional resolved fields without changing the executor API again.
- Remaining replay areas should migrate the same way instead of continuing to accept raw directive structs.

## ADL-019 — isDeviceBoundFirmwareName must cover vendor-specific naming patterns per vendor

**Date:** 2026-03-12

@@ -604,3 +789,334 @@ presentation drift and duplicated UI logic.

- The host UI becomes a service shell around the viewer instead of maintaining its own field-by-field tabs.
- `internal/chart` must be updated explicitly as a git submodule when the viewer changes.

---

## ADL-031 — Redfish uses profile-driven acquisition and unified ingest entrypoints

**Date:** 2026-03-17
**Context:**
Redfish collection had accumulated platform-specific probing in the shared collector path, while upload and raw-export replay still entered analysis through direct handler branches. This made vendor/model tuning harder to contain and increased regression risk when one topology needed a special acquisition strategy.

**Decision:**

- Introduce `internal/ingest.Service` as the internal source-family entrypoint for archive parsing and Redfish raw replay.
- Introduce `internal/collector/redfishprofile/` for Redfish profile matching and modular hooks.
- Split Redfish behavior into coordinated phases:
  - acquisition planning during live collection
  - analysis hooks during snapshot replay
- Use score-based profile matching. If confidence is low, enter fallback acquisition mode and aggregate only safe additive profile probes.
- Allow profile modules to provide bounded acquisition tuning hints such as crawl cap, prefetch behavior, and expensive post-probe toggles.
- Allow profile modules to own model-specific `CriticalPaths` and bounded `PlanBPaths` so vendor recovery targets stop leaking into the collector core.
- Expose Redfish profile matching as structured diagnostics during live collection: logs must contain all module scores, and collect job status must expose active modules for the UI.

**Consequences:**

- Server handlers stop owning parser-vs-replay branching details directly.
- Vendor/model-specific Redfish logic gets an explicit module boundary.
- Unknown-vendor Redfish collection becomes slower but more complete by design.
- Tactical Redfish fixes should move into profile modules instead of widening generic replay logic.
- Repo-owned compact fixtures under `internal/collector/redfishprofile/testdata/`, derived from representative raw-export snapshots, are used to lock profile matching and acquisition tuning for known MSI and Supermicro-family shapes.

---

## ADL-032 — MSI ghost GPU filter: exclude GPUs with temperature=0 on a powered-on host

**Date:** 2026-03-18
**Context:**
The MSI/AMI BMC caches GPU inventory from the host via Host Interface (in-band). When GPUs are removed without a reboot, the old entries remain in `Chassis/GPU*` and `Systems/Self/Processors/GPU*` with `Status.Health: OK, State: Enabled`. The BMC has no out-of-band mechanism to detect physical absence. A physically present GPU always reports an ambient temperature (>0°C) even when idle; a stale cached entry returns `Reading: 0`.

**Decision:**

- Add an `EnableMSIGhostGPUFilter` directive (enabled by the MSI profile's `refineAnalysis` alongside `EnableProcessorGPUFallback`).
- In `collectGPUsFromProcessors`: for each processor GPU, resolve its chassis path and read `Chassis/GPU{n}/Sensors/GPU{n}_Temperature`. If `PowerState=On` and `Reading=0`, skip the entry.
- The filter only applies when the host is powered on; when the host is off all temperatures are 0 and the signal is ambiguous.

**Consequences:**

- Ghost GPUs from previous hardware configurations no longer appear in the inventory.
- The filter is MSI-profile-owned and does not affect HGX, Supermicro, or generic paths.
- Any new MSI GPU chassis that uses a different temperature sensor path will bypass the filter (safe default: include rather than wrongly exclude).

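The decision rule in ADL-032 reduces to a two-input predicate. The power-state and zero-reading conditions come from the decision; the function shape and simplified inputs are illustrative:

```go
package main

import "fmt"

// ghostGPU sketches the MSI filter described above: on a powered-on
// host, a cached GPU entry whose temperature sensor reads 0 is treated
// as stale and filtered out of the inventory.
func ghostGPU(hostPowerOn bool, tempReading float64) bool {
	if !hostPowerOn {
		// Host off: every temperature is 0, so the signal is ambiguous
		// and nothing is filtered.
		return false
	}
	return tempReading == 0
}

func main() {
	fmt.Println(ghostGPU(true, 0))  // true: stale cached entry, filtered
	fmt.Println(ghostGPU(true, 34)) // false: physically present GPU, kept
	fmt.Println(ghostGPU(false, 0)) // false: host off, signal ambiguous, kept
}
```

Note the asymmetry: the filter can only exclude, and any missing or unexpected sensor path falls through to "keep", matching the safe default in the consequences.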
---

## ADL-033 — Reanimator export collected_at uses inventory LastModifiedTime with a 30-day fallback

**Date:** 2026-03-18
**Context:**
For Redfish sources the BMC Manager `DateTime` reflects when the BMC clock read the time, not when the hardware inventory was last known-good. `InventoryData/Status.LastModifiedTime` (an AMI/MSI OEM endpoint) records the actual timestamp of the last successful host-pushed inventory cycle and is a better proxy for "when was this hardware configuration last confirmed".

**Decision:**

- `inferInventoryLastModifiedTime` reads `LastModifiedTime` from the snapshot and sets `AnalysisResult.InventoryLastModifiedAt`.
- `reanimatorCollectedAt()` in the exporter selects `InventoryLastModifiedAt` when it is set and no older than 30 days; otherwise it falls back to `CollectedAt`.
- Fallback rationale: inventory older than 30 days is likely from a long-running server with no recent reboot; using the actual collection date is more useful for the downstream consumer.
- The inventory timestamp is also logged during replay and live collection for diagnostics.

**Consequences:**

- Reanimator export `collected_at` reflects the last confirmed inventory cycle on AMI/MSI BMCs.
- On non-AMI BMCs, or when `InventoryData/Status` is absent, behavior is unchanged.
- If inventory is stale (>30 days), the collection date is used as before.

|
||||
---

## ADL-034 — Redfish inventory invalidated before host power-on

**Date:** 2026-03-18

**Context:**
When a host is powered on by the collector (`power_on_if_host_off=true`), the BMC still holds
inventory from the previous boot. If hardware changed between shutdowns, the new boot will push
fresh inventory — but only if the BMC accepts it (a CRC mismatch triggers re-population). Without
explicit invalidation, unchanged CRCs can cause the BMC to skip re-processing even after a
hardware change.

**Decision:**
- Before any power-on attempt, `invalidateRedfishInventory` POSTs to
  `{systemPath}/Oem/Ami/Inventory/Crc` with all groups zeroed (`CPU`, `DIMM`, `PCIE`,
  `CERTIFICATES`, `SECUREBOOT`).
- Best-effort: a 404/405 response (non-AMI BMC) is logged and silently ignored.
- The invalidation is logged at `INFO` level and surfaced as a collect progress message.

**Consequences:**
- On AMI/MSI BMCs: the next boot pushes a full fresh inventory regardless of whether
  CRCs appear unchanged, eliminating ghost components from prior hardware configurations.
- On non-AMI BMCs: the POST fails immediately (the endpoint does not exist) and nothing changes.
- Invalidation runs only when `power_on_if_host_off=true` and the host is confirmed off.

---

## ADL-035 — Redfish hardware event log collection from Systems LogServices

**Date:** 2026-03-18

**Context:** Redfish BMCs expose event logs via `LogServices/{svc}/Entries`. On MSI/AMI this includes the IPMI SEL with hardware events (temperature, power, drive failures, etc.). Live collection previously captured only inventory/sensor snapshots; event history was unavailable in Reanimator.

**Decision:**
- After the tree-walk, fetch hardware log entries separately via `collectRedfishLogEntries()` (not part of the tree-walk, to avoid bloat).
- Only `Systems/{sys}/LogServices` is queried — Managers LogServices (BMC audit/journal) are excluded.
- Log services with an Id/Name containing "audit", "journal", "bmc", "security", "manager", or "debug" are skipped.
- Entries older than 7 days (client-side filter) are discarded. Pages are followed until an out-of-window entry is found (assumes newest-first ordering, typical for BMCs).
- Entries with `EntryType: "Oem"` or a `MessageId` containing user/auth/login keywords are filtered out as non-hardware.
- Raw entries are stored in `rawPayloads["redfish_log_entries"]` as `[]map[string]interface{}`.
- Entries are parsed to `models.Event` in `parseRedfishLogEntries()` during replay — the same path for live and offline.
- At most 200 entries per log service and 500 total, to limit BMC load.

**Consequences:**
- Hardware event history (last 7 days) is visible in the Reanimator `EventLogs` section.
- No impact on the existing inventory pipeline or offline archive replay (archives without the `redfish_log_entries` key silently skip parsing).
- Adds extra HTTP requests during live collection (sequential, after the tree-walk completes).

---

## ADL-036 — Redfish profile matching may use platform grammar hints beyond vendor strings

**Date:** 2026-03-25

**Context:**
Some BMCs expose unusable `Manufacturer` / `Model` values (`NULL`, placeholders, or generic SoC
names) while still exposing a stable platform-specific Redfish grammar: repeated member names,
firmware inventory IDs, OEM action names, and target-path quirks. Matching only on vendor
strings forced such systems into fallback mode even when the platform shape was consistent.

**Decision:**
- Extend `redfishprofile.MatchSignals` with doc-derived hint tokens collected from discovery docs
  and replay snapshots.
- Allow profile matchers to score on stable platform grammar such as:
  - collection member naming (`outboardPCIeCard*`, drive slot grammars)
  - firmware inventory member IDs
  - OEM action/type markers and linked target paths
- During live collection, gather only the lightweight extra hint collections needed for matching
  (`NetworkInterfaces`, `NetworkAdapters`, `Drives`, `UpdateService/FirmwareInventory`), not slow
  deep inventory branches.
- Keep such profiles out of fallback aggregation unless they are proven safe as broad additive
  hints.

**Consequences:**
- Platform-family profiles can activate even when vendor strings are absent or set to `NULL`.
- Matching logic becomes more robust for OEM BMC implementations that differ mainly by Redfish
  grammar rather than by explicit vendor strings.
- Live collection gains a small amount of extra discovery I/O to harvest stable member IDs, but
  avoids slow deep probes such as `Assembly` just for profile selection.

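Grammar-based scoring can be sketched as weighted prefix matching over hint tokens. A toy version under stated assumptions — the patterns, weights, and function name are illustrative, not the real `MatchSignals` API:

```go
package main

import (
	"fmt"
	"strings"
)

// scoreHints adds weight for each stable member-name pattern found in the
// discovery docs, independent of Manufacturer/Model strings.
func scoreHints(hintTokens []string, patterns map[string]int) int {
	score := 0
	for _, tok := range hintTokens {
		for prefix, w := range patterns {
			if strings.HasPrefix(tok, prefix) {
				score += w
			}
		}
	}
	return score
}

func main() {
	patterns := map[string]int{"outboardPCIeCard": 3, "NetworkAdapter_": 1}
	tokens := []string{"outboardPCIeCard1", "outboardPCIeCard2", "Slot7"}
	fmt.Println(scoreHints(tokens, patterns)) // two pattern hits at weight 3
}
```
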
---

## ADL-037 — easy-bee archives are parsed from the embedded bee-audit snapshot

**Date:** 2026-03-25

**Context:**
`reanimator-easy-bee` support bundles already contain a normalized hardware snapshot in
`export/bee-audit.json` plus supporting logs and techdump files. Rebuilding the same inventory
from raw `techdump/` files inside LOGPile would duplicate parser logic and create drift between
the producer utility and archive importer.

**Decision:**
- Add a dedicated `easy_bee` vendor parser for `bee-support-*.tar.gz` bundles.
- Detect the bundle by `manifest.txt` (`bee_version=...`) plus `export/bee-audit.json`.
- Parse the archive from the embedded snapshot first; treat `techdump/` and runtime files as
  secondary context only.
- Normalize snapshot-only fields needed by LOGPile, notably:
  - flatten `hardware.sensors` groups into `[]SensorReading`
  - turn runtime issues/status into `[]Event`
  - synthesize a board FRU entry when the snapshot does not include FRU data

**Consequences:**
- LOGPile stays aligned with the schema emitted by `reanimator-easy-bee`.
- Adding support required only a thin archive adapter instead of a full hardware parser.
- If the upstream utility changes the embedded snapshot schema, the `easy_bee` adapter is the
  only place that must be updated.

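The sensor-flattening step can be sketched as a group-map traversal. A minimal sketch — the group map shape and `SensorReading` fields here are illustrative stand-ins, not the exact bee-audit schema or LOGPile model:

```go
package main

import "fmt"

// SensorReading is a simplified stand-in for the LOGPile model.
type SensorReading struct {
	Group string
	Name  string
	Value float64
}

// flattenSensorGroups turns a bee-audit-style `hardware.sensors` group map
// into a flat []SensorReading list, as ADL-037 describes.
func flattenSensorGroups(groups map[string]map[string]float64) []SensorReading {
	var out []SensorReading
	for group, sensors := range groups {
		for name, value := range sensors {
			out = append(out, SensorReading{Group: group, Name: name, Value: value})
		}
	}
	return out
}

func main() {
	groups := map[string]map[string]float64{
		"temperature": {"CPU0": 42, "Inlet": 23},
		"fan":         {"FAN1": 8600},
	}
	fmt.Println(len(flattenSensorGroups(groups))) // one reading per sensor
}
```
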
---

## ADL-038 — HPE AHS parser uses hybrid extraction instead of full `zbb` schema decoding

**Date:** 2026-03-30

**Context:** HPE iLO Active Health System exports (`.ahs`) are proprietary `ABJR` containers
with gzip-compressed `zbb` payloads. The sample inventory data contains two practical signal
families: printable SMBIOS/FRU-style strings and embedded Redfish JSON subtrees, especially for
storage controllers and drives. Full `zbb` binary schema decoding is not documented and would add
significant complexity before proving user value.

**Decision:** Support HPE AHS with a hybrid parser:
- decode the outer `ABJR` container
- gunzip embedded members when applicable
- extract inventory from printable SMBIOS/FRU payloads
- extract storage/controller/backplane details from embedded Redfish JSON objects
- enrich firmware and PSU inventory from auxiliary package payloads such as `bcert.pkg`
- do not attempt complete semantic decoding of the internal `zbb` record format

**Consequences:**
- Parser reaches inventory-grade usefulness quickly for HPE `.ahs` uploads.
- Storage inventory is stronger than text-only parsing because it reuses structured Redfish data when present.
- Auxiliary package payloads can supply missing firmware/PSU fields even when the main SMBIOS-like blob is incomplete.
- Future deeper `zbb` decoding can be added incrementally without replacing the current parser contract.

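The printable-string signal family can be mined with a classic string-run scan. A minimal sketch, not the actual LOGPile parser — a real implementation would anchor on known SMBIOS/FRU markers rather than taking every run:

```go
package main

import "fmt"

// printableRuns extracts ASCII runs of at least minLen bytes from a binary
// payload — the SMBIOS/FRU-style signal ADL-038 mines from gunzipped members.
func printableRuns(data []byte, minLen int) []string {
	var out []string
	start := -1
	for i, b := range data {
		printable := b >= 0x20 && b < 0x7f
		if printable && start < 0 {
			start = i
		}
		if (!printable || i == len(data)-1) && start >= 0 {
			end := i
			if printable {
				end = i + 1 // run extends through the final byte
			}
			if end-start >= minLen {
				out = append(out, string(data[start:end]))
			}
			start = -1
		}
	}
	return out
}

func main() {
	blob := []byte("\x00\x01ProLiant DL380 Gen11\x00\xffP53930-B21\x02")
	fmt.Println(printableRuns(blob, 6))
}
```
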
---

## ADL-039 — Canonical inventory keeps DIMMs with unknown capacity when identity is known

**Date:** 2026-03-30

**Context:** Some sources, notably HPE iLO AHS SMBIOS-like blobs, expose installed DIMM identity
(slot, serial, part number, manufacturer) but do not include capacity. The parser already extracts
those modules into `Hardware.Memory`, but canonical device building and export previously dropped
them because `size_mb == 0`.

**Decision:** Treat a DIMM as installed inventory when `present=true` and it has identifying
memory fields such as serial number or part number, even if `size_mb` is unknown.

**Consequences:**
- HPE AHS uploads now show real installed memory modules instead of hiding them.
- Empty slots still stay filtered because they lack inventory identity or are marked absent.
- Specification/export can include "size unknown" memory entries without inventing capacity data.

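The keep/drop rule can be stated as a one-line predicate. A minimal sketch with an illustrative struct; the real canonical-device code works on LOGPile's own memory model:

```go
package main

import "fmt"

// MemoryModule is a simplified stand-in for the canonical DIMM record.
type MemoryModule struct {
	Present      bool
	SizeMB       int
	SerialNumber string
	PartNumber   string
}

// isInstalledInventory implements the ADL-039 rule: keep present DIMMs with
// identity (serial or part number) even when capacity is unknown (SizeMB == 0).
func isInstalledInventory(m MemoryModule) bool {
	if !m.Present {
		return false
	}
	return m.SizeMB > 0 || m.SerialNumber != "" || m.PartNumber != ""
}

func main() {
	fmt.Println(isInstalledInventory(MemoryModule{Present: true, PartNumber: "P00924-B21"})) // identity, size unknown → keep
	fmt.Println(isInstalledInventory(MemoryModule{Present: true}))                           // no identity, no size → drop
}
```
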
---

## ADL-040 — HPE Redfish normalization prefers chassis `Devices/*` over generic PCIe topology labels

**Date:** 2026-03-30

**Context:** HPE ProLiant Gen11 Redfish snapshots expose parallel inventory trees. `Chassis/*/PCIeDevices/*`
is good for topology presence, but often reports only generic `DeviceType` values such as
`SingleFunction`. `Chassis/*/Devices/*` carries the concrete slot label, richer device type, and
product-vs-spare part identifiers for the same physical NIC/controller. Replay fallback over empty
storage volume collections can also discover `Volumes/Capabilities` children, which are not real
logical volumes.

**Decision:**
- Treat Redfish `SKU` as a valid fallback for `hardware.board.part_number` when `PartNumber` is empty.
- Ignore `Volumes/Capabilities` documents during logical-volume parsing.
- Enrich `Chassis/*/PCIeDevices/*` entries with matching `Chassis/*/Devices/*` documents by
  serial/name/part identity.
- Keep `pcie.device_class` semantic; do not replace it with model or part-number strings when
  Redfish exposes only generic topology labels.

**Consequences:**
- HPE Redfish imports now keep the server SKU in `hardware.board.part_number`.
- Empty volume collections no longer produce fake `Capabilities` volume records.
- HPE PCIe inventory gets better slot labels like `OCP 3.0 Slot 15` plus concrete classes such as
  `LOM/NIC` or `SAS/SATA Storage Controller`.
- `part_number` remains available separately for model identity, without polluting the class field.

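The enrichment step can be sketched as a join keyed on serial identity (the real matcher also falls back to name/part identity). Struct and field names are illustrative stand-ins for the two Redfish document shapes:

```go
package main

import "fmt"

// pcieEntry and chassisDevice are simplified stand-ins for the parallel
// Redfish trees ADL-040 correlates.
type pcieEntry struct {
	Serial, SlotLabel, DeviceClass string
}

type chassisDevice struct {
	Serial, Location, DeviceType string
}

// enrichFromDevices copies the richer slot label and device type from a
// matching Chassis/*/Devices/* document onto the generic PCIeDevices entry.
func enrichFromDevices(entries []pcieEntry, devices []chassisDevice) []pcieEntry {
	bySerial := make(map[string]chassisDevice, len(devices))
	for _, d := range devices {
		bySerial[d.Serial] = d
	}
	for i, e := range entries {
		if d, ok := bySerial[e.Serial]; ok {
			entries[i].SlotLabel = d.Location
			entries[i].DeviceClass = d.DeviceType
		}
	}
	return entries
}

func main() {
	entries := []pcieEntry{{Serial: "X1", DeviceClass: "SingleFunction"}}
	devices := []chassisDevice{{Serial: "X1", Location: "OCP 3.0 Slot 15", DeviceType: "LOM/NIC"}}
	fmt.Println(enrichFromDevices(entries, devices)[0].DeviceClass)
}
```
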
---

## ADL-041 — Redfish replay drops topology-only PCIe noise classes from canonical inventory

**Date:** 2026-04-01

**Context:** Some Redfish BMCs, especially MSI/AMI GPU systems, expose a very wide PCIe topology
tree under `Chassis/*/PCIeDevices/*`. Besides real endpoint devices, the replay sees bridge stages,
CPU-side helper functions, IMC/mesh signal-processing nodes, USB/SPI side controllers, and GPU
display-function duplicates reported as generic `Display Device`. Keeping all of them in
`hardware.pcie_devices` pollutes downstream exports such as Reanimator and hides the actual
endpoint inventory signal.

**Decision:**
- Filter topology-only PCIe records during Redfish replay, not in the UI layer.
- Drop PCIe entries with replay-resolved classes:
  - `Bridge`
  - `Processor`
  - `SignalProcessingController`
  - `SerialBusController`
- Drop `DisplayController` entries when the source Redfish PCIe document is the generic MSI-style
  `Description: "Display Device"` duplicate.
- Drop PCIe network endpoints when their PCIe functions already link to `NetworkDeviceFunctions`,
  because those devices are represented canonically in `hardware.network_adapters`.
- When `Systems/*/NetworkInterfaces/*` links back to a chassis `NetworkAdapter`, match against the
  fully enriched chassis NIC identity to avoid creating a second ghost NIC row with the raw
  `NetworkAdapter_*` slot/name.
- Treat generic Redfish object names such as `NetworkAdapter_*` and `PCIeDevice_*` as placeholder
  models and replace them from PCI IDs when a concrete vendor/device match exists.
- Drop MSI-style storage service PCIe endpoints whose resolved device names are only
  `Volume Management Device NVMe RAID Controller` or `PCIe Switch management endpoint`; storage
  inventory already comes from the Redfish storage tree.
- Normalize Ethernet-class NICs into the single exported class `NetworkController`; do not split
  `EthernetController` into a separate top-level inventory section.
- Keep endpoint classes such as `NetworkController` and `MassStorageController`, with dedicated
  GPU inventory coming from `hardware.gpus`.

**Consequences:**
- `hardware.pcie_devices` becomes closer to real endpoint inventory instead of raw PCIe topology.
- Reanimator exports stop showing MSI bridge/processor/display duplicate noise.
- Reanimator exports no longer duplicate the same MSI NIC as both `PCIeDevice_*` and
  `NetworkAdapter_*`.
- Replay no longer creates extra NIC rows from `Systems/NetworkInterfaces` when the same adapter
  was already normalized from `Chassis/NetworkAdapters`.
- MSI VMD / PCIe switch storage service endpoints no longer pollute PCIe inventory.
- UI/Reanimator group all Ethernet NICs under the same `NETWORKCONTROLLER` section.
- Canonical NIC inventory prefers resolved PCI product names over generic Redfish placeholder names.
- The raw Redfish snapshot remains available in `raw_payloads.redfish_tree` for low-level
  troubleshooting if topology details are ever needed.

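The class-based part of the filter reduces to a lookup plus one special case. A minimal sketch covering only the first two decision bullets (function and parameter names are illustrative; the real filter also consults `NetworkDeviceFunctions` links and resolved device names):

```go
package main

import "fmt"

// noiseClasses are the replay-resolved PCIe classes ADL-041 drops as
// topology-only records.
var noiseClasses = map[string]bool{
	"Bridge":                     true,
	"Processor":                  true,
	"SignalProcessingController": true,
	"SerialBusController":        true,
}

// dropPCIeRecord sketches the replay-side filter: topology-only classes go,
// and DisplayController entries go only for the generic MSI-style
// "Display Device" duplicates.
func dropPCIeRecord(class, description string) bool {
	if noiseClasses[class] {
		return true
	}
	return class == "DisplayController" && description == "Display Device"
}

func main() {
	fmt.Println(dropPCIeRecord("Bridge", ""))                          // topology noise → dropped
	fmt.Println(dropPCIeRecord("DisplayController", "Display Device")) // MSI duplicate → dropped
	fmt.Println(dropPCIeRecord("NetworkController", ""))               // real endpoint → kept
}
```
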
---

## ADL-042 — xFusion file-export archives merge AppDump inventory with RTOS/Log snapshots

**Date:** 2026-04-04

**Context:** xFusion iBMC `tar.gz` exports expose the base inventory in `AppDump/`, but the most
useful NIC and firmware details live elsewhere: NIC firmware/MAC snapshots in
`LogDump/netcard/netcard_info.txt` and system firmware versions in
`RTOSDump/versioninfo/app_revision.txt`. Parsing only `AppDump/` left xFusion uploads detectable but
incomplete for UI and Reanimator consumers.

**Decision:**
- Treat xFusion file-export `tar.gz` bundles as a first-class archive parser input.
- Merge OCP NIC identity from `AppDump/card_manage/card_info` with the latest per-slot snapshot
  from `LogDump/netcard/netcard_info.txt` to produce `hardware.network_adapters`.
- Import system-level firmware from `RTOSDump/versioninfo/app_revision.txt` into
  `hardware.firmware`.
- Allow FRU fallback from `RTOSDump/versioninfo/fruinfo.txt` when `AppDump/FruData/fruinfo.txt`
  is absent.

**Consequences:**
- xFusion uploads now preserve NIC BDF, MAC, firmware, and serial identity in normalized output.
- System firmware such as BIOS and iBMC versions survives xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI
  cases.

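The NIC merge is essentially a per-slot join of identity rows with the latest runtime snapshot. A minimal sketch with illustrative struct names; the real parser reads these from `card_info` and `netcard_info.txt` respectively:

```go
package main

import "fmt"

// nicIdentity and nicSnapshot are simplified stand-ins for the two xFusion
// sources ADL-042 merges (AppDump card_info vs. LogDump netcard_info.txt).
type nicIdentity struct {
	Slot, Model, Serial string
}

type nicSnapshot struct {
	MAC, Firmware, BDF string
}

type networkAdapter struct {
	Slot, Model, Serial, MAC, Firmware, BDF string
}

// mergeNICs joins identity rows with the latest per-slot runtime snapshot.
func mergeNICs(ids []nicIdentity, snaps map[string]nicSnapshot) []networkAdapter {
	var out []networkAdapter
	for _, id := range ids {
		na := networkAdapter{Slot: id.Slot, Model: id.Model, Serial: id.Serial}
		if s, ok := snaps[id.Slot]; ok {
			na.MAC, na.Firmware, na.BDF = s.MAC, s.Firmware, s.BDF
		}
		out = append(out, na)
	}
	return out
}

func main() {
	ids := []nicIdentity{{Slot: "OCP1", Model: "MZ532", Serial: "S1"}}
	snaps := map[string]nicSnapshot{"OCP1": {MAC: "aa:bb", Firmware: "2.35", BDF: "0000:31:00.0"}}
	fmt.Println(mergeNICs(ids, snaps)[0].Firmware)
}
```
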
343
bible-local/docs/msi-redfish-api.md
Normal file
@@ -0,0 +1,343 @@

# MSI BMC Redfish API Reference

Source: MSI Enterprise Platform Solutions — Redfish BMC User Guide v1.0 (AMI/MegaRAC stack).
Spec compliance: DSP0266 1.15.1, DSP8010 2019.2.

> This document is trimmed to sections relevant to LOGPile collection and inventory analysis.
> Auth, LDAP/AD, SMTP, VirtualMedia, Certificates, RADIUS, Composability, and BMC config
> sections are omitted.

---

## Supported HTTP methods

`GET`, `POST`, `PATCH`, `DELETE`. Unsupported methods return `405`.

PATCH requires an `If-Match` / `ETag` precondition header; missing header → `428`, mismatch → `412`.

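The precondition flow (GET the resource, capture its `ETag`, send it back as `If-Match`) can be sketched with Go's standard HTTP client. The request-builder below is illustrative, not a LOGPile API:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// buildPatchRequest attaches the mandatory If-Match precondition taken from
// a prior GET's ETag header (missing header → 428, mismatch → 412).
func buildPatchRequest(url, etag string, body []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPatch, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("If-Match", etag)
	return req, nil
}

func main() {
	req, _ := buildPatchRequest("https://bmc.example/redfish/v1/Systems/Self", `"abc123"`, []byte(`{}`))
	fmt.Println(req.Header.Get("If-Match"))
}
```
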
---

## 1. Core Redfish API endpoints

| Resource | URI | Schema |
|---|---|---|
| Service Root | `/redfish/v1/` | ServiceRoot.v1_7_0 |
| ComputerSystem Collection | `/redfish/v1/Systems` | ComputerSystemCollection |
| ComputerSystem | `/redfish/v1/Systems/{sys}` | ComputerSystem.v1_16_2 |
| Memory Collection | `/redfish/v1/Systems/{sys}/Memory` | MemoryCollection |
| Memory | `/redfish/v1/Systems/{sys}/Memory/{mem}` | Memory.v1_19_0 |
| MemoryMetrics | `/redfish/v1/Systems/{sys}/Memory/{mem}/MemoryMetrics` | MemoryMetrics.v1_7_0 |
| MemoryDomain Collection | `/redfish/v1/Systems/{sys}/MemoryDomain` | MemoryDomainCollection |
| MemoryDomain | `/redfish/v1/Systems/{sys}/MemoryDomain/{dom}` | MemoryDomain.v1_2_3 |
| MemoryChunks Collection | `/redfish/v1/Systems/{sys}/MemoryDomain/{dom}/MemoryChunks` | MemoryChunksCollection |
| MemoryChunks | `/redfish/v1/Systems/{sys}/MemoryDomain/{dom}/MemoryChunks/{chunk}` | MemoryChunks.v1_4_0 |
| Processor Collection | `/redfish/v1/Systems/{sys}/Processors` | ProcessorCollection |
| Processor | `/redfish/v1/Systems/{sys}/Processors/{proc}` | Processor.v1_15_0 |
| SubProcessors Collection | `/redfish/v1/Systems/{sys}/Processors/{proc}/SubProcessors` | ProcessorCollection |
| SubProcessor | `/redfish/v1/Systems/{sys}/Processors/{proc}/SubProcessors/{sub}` | Processor.v1_15_0 |
| ProcessorMetrics | `/redfish/v1/Systems/{sys}/Processors/{proc}/ProcessorMetrics` | ProcessorMetrics.v1_4_0 |
| Bios | `/redfish/v1/Systems/{sys}/Bios` | Bios.v1_2_0 |
| SimpleStorage Collection | `/redfish/v1/Systems/{sys}/SimpleStorage` | SimpleStorageCollection |
| SimpleStorage | `/redfish/v1/Systems/{sys}/SimpleStorage/{ss}` | SimpleStorage.v1_3_0 |
| Storage Collection | `/redfish/v1/Systems/{sys}/Storage` | StorageCollection |
| Storage | `/redfish/v1/Systems/{sys}/Storage/{stor}` | Storage.v1_9_0 |
| StorageController Collection | `/redfish/v1/Systems/{sys}/Storage/{stor}/Controllers` | StorageControllerCollection |
| StorageController | `/redfish/v1/Systems/{sys}/Storage/{stor}/Controllers/{ctrl}` | StorageController.v1_0_0 |
| Drive | `/redfish/v1/Systems/{sys}/Storage/{stor}/Drives/{drv}` | Drive.v1_13_0 |
| Volume Collection | `/redfish/v1/Systems/{sys}/Storage/{stor}/Volumes` | VolumeCollection |
| Volume | `/redfish/v1/Systems/{sys}/Storage/{stor}/Volumes/{vol}` | Volume.v1_5_0 |
| NetworkInterface Collection | `/redfish/v1/Systems/{sys}/NetworkInterfaces` | NetworkInterfaceCollection |
| NetworkInterface | `/redfish/v1/Systems/{sys}/NetworkInterfaces/{nic}` | NetworkInterface.v1_2_0 |
| EthernetInterface (System) | `/redfish/v1/Systems/{sys}/EthernetInterfaces/{eth}` | EthernetInterface.v1_6_2 |
| GraphicsController Collection | `/redfish/v1/Systems/{sys}/GraphicsControllers` | GraphicsControllerCollection |
| GraphicsController | `/redfish/v1/Systems/{sys}/GraphicsControllers/{gpu}` | GraphicsController.v1_0_0 |
| USBController Collection | `/redfish/v1/Systems/{sys}/USBControllers` | USBControllerCollection |
| USBController | `/redfish/v1/Systems/{sys}/USBControllers/{usb}` | USBController.v1_0_0 |
| SecureBoot | `/redfish/v1/Systems/{sys}/SecureBoot` | SecureBoot.v1_1_0 |
| LogService Collection (System) | `/redfish/v1/Systems/{sys}/LogServices` | LogServiceCollection |
| LogService (System) | `/redfish/v1/Systems/{sys}/LogServices/{log}` | LogService.v1_1_3 |
| LogEntry Collection | `/redfish/v1/Systems/{sys}/LogServices/{log}/Entries` | LogEntryCollection |
| LogEntry | `/redfish/v1/Systems/{sys}/LogServices/{log}/Entries/{entry}` | LogEntry.v1_12_0 |
| Chassis Collection | `/redfish/v1/Chassis` | ChassisCollection |
| Chassis | `/redfish/v1/Chassis/{ch}` | Chassis.v1_15_0 |
| Power | `/redfish/v1/Chassis/{ch}/Power` | Power.v1_5_4 |
| PowerSubSystem | `/redfish/v1/Chassis/{ch}/PowerSubSystem` | PowerSubsystem.v1_1_0 |
| PowerSupplies Collection | `/redfish/v1/Chassis/{ch}/PowerSubSystem/PowerSupplies` | PowerSupplyCollection |
| PowerSupply | `/redfish/v1/Chassis/{ch}/PowerSubSystem/PowerSupplies/{psu}` | PowerSupply.v1_3_0 |
| PowerSupplyMetrics | `/redfish/v1/Chassis/{ch}/PowerSubSystem/PowerSupplies/{psu}/Metrics` | PowerSupplyMetrics.v1_0_1 |
| Thermal | `/redfish/v1/Chassis/{ch}/Thermal` | Thermal.v1_5_3 |
| ThermalSubSystem | `/redfish/v1/Chassis/{ch}/ThermalSubSystem` | ThermalSubsystem.v1_0_0 |
| ThermalMetrics | `/redfish/v1/Chassis/{ch}/ThermalSubSystem/ThermalMetrics` | ThermalMetrics.v1_0_1 |
| Fans Collection | `/redfish/v1/Chassis/{ch}/ThermalSubSystem/Fans` | FanCollection |
| Fan | `/redfish/v1/Chassis/{ch}/ThermalSubSystem/Fans/{fan}` | Fan.v1_1_1 |
| Sensor Collection | `/redfish/v1/Chassis/{ch}/Sensors` | SensorCollection |
| Sensor | `/redfish/v1/Chassis/{ch}/Sensors/{sen}` | Sensor.v1_0_2 |
| PCIeDevice Collection | `/redfish/v1/Chassis/{ch}/PCIeDevices` | PCIeDeviceCollection |
| PCIeDevice | `/redfish/v1/Chassis/{ch}/PCIeDevices/{dev}` | PCIeDevice.v1_9_0 |
| PCIeFunction Collection | `/redfish/v1/Chassis/{ch}/PCIeDevices/{dev}/PCIeFunctions` | PCIeFunctionCollection |
| PCIeFunction | `/redfish/v1/Chassis/{ch}/PCIeDevices/{dev}/PCIeFunctions/{fn}` | PCIeFunction.v1_2_3 |
| PCIeSlots | `/redfish/v1/Chassis/{ch}/PCIeSlots` | PCIeSlots.v1_5_0 |
| NetworkAdapter Collection | `/redfish/v1/Chassis/{ch}/NetworkAdapters` | NetworkAdapterCollection |
| NetworkAdapter | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}` | NetworkAdapter.v1_8_0 |
| NetworkDeviceFunction Collection | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}/NetworkDeviceFunctions` | NetworkDeviceFunctionCollection |
| NetworkDeviceFunction | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}/NetworkDeviceFunctions/{fn}` | NetworkDeviceFunction.v1_5_0 |
| Assembly | `/redfish/v1/Chassis/{ch}/Assembly` | Assembly.v1_2_2 |
| Assembly (Drive) | `/redfish/v1/Systems/{sys}/Storage/{stor}/Drives/{drv}/Assembly` | Assembly.v1_2_2 |
| Assembly (Processor) | `/redfish/v1/Systems/{sys}/Processors/{proc}/Assembly` | Assembly.v1_2_2 |
| Assembly (Memory) | `/redfish/v1/Systems/{sys}/Memory/{mem}/Assembly` | Assembly.v1_2_2 |
| Assembly (NetworkAdapter) | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}/Assembly` | Assembly.v1_2_2 |
| Assembly (PCIeDevice) | `/redfish/v1/Chassis/{ch}/PCIeDevices/{dev}/Assembly` | Assembly.v1_2_2 |
| MediaController Collection | `/redfish/v1/Chassis/{ch}/MediaControllers` | MediaControllerCollection |
| MediaController | `/redfish/v1/Chassis/{ch}/MediaControllers/{mc}` | MediaController.v1_1_0 |
| LogService Collection (Chassis) | `/redfish/v1/Chassis/{ch}/LogServices` | LogServiceCollection |
| LogService (Chassis) | `/redfish/v1/Chassis/{ch}/LogServices/{log}` | LogService.v1_1_3 |
| Manager Collection | `/redfish/v1/Managers` | ManagerCollection |
| Manager | `/redfish/v1/Managers/{mgr}` | Manager.v1_13_0 |
| EthernetInterface (Manager) | `/redfish/v1/Managers/{mgr}/EthernetInterfaces/{eth}` | EthernetInterface.v1_6_2 |
| LogService Collection (Manager) | `/redfish/v1/Managers/{mgr}/LogServices` | LogServiceCollection |
| LogService (Manager) | `/redfish/v1/Managers/{mgr}/LogServices/{log}` | LogService.v1_1_3 |
| UpdateService | `/redfish/v1/UpdateService` | UpdateService.v1_6_0 |
| TaskService | `/redfish/v1/TaskService` | TaskService.v1_1_4 |
| Task Collection | `/redfish/v1/TaskService/Tasks` | TaskCollection |
| Task | `/redfish/v1/TaskService/Tasks/{task}` | Task.v1_4_2 |

---

## 2. Telemetry API endpoints

| Resource | URI | Schema |
|---|---|---|
| TelemetryService | `/redfish/v1/TelemetryService` | TelemetryService.v1_2_1 |
| MetricDefinition Collection | `/redfish/v1/TelemetryService/MetricDefinitions` | MetricDefinitionCollection |
| MetricDefinition | `/redfish/v1/TelemetryService/MetricDefinitions/{md}` | MetricDefinition.v1_0_3 |
| MetricReportDefinition Collection | `/redfish/v1/TelemetryService/MetricReportDefinitions` | MetricReportDefinitionCollection |
| MetricReportDefinition | `/redfish/v1/TelemetryService/MetricReportDefinitions/{mrd}` | MetricReportDefinition.v1_3_0 |
| MetricReport Collection | `/redfish/v1/TelemetryService/MetricReports` | MetricReportCollection |
| MetricReport | `/redfish/v1/TelemetryService/MetricReports/{mr}` | MetricReport.v1_2_0 |
| Telemetry LogService | `/redfish/v1/TelemetryService/LogService` | LogService.v1_1_3 |
| Telemetry LogEntry Collection | `/redfish/v1/TelemetryService/LogService/Entries` | LogEntryCollection |

---

## 3. Processor / NIC sub-resources (GPU-relevant)

| Resource | URI |
|---|---|
| Processor (NetworkAdapter) | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}/Processors/{proc}` |
| AccelerationFunction Collection | `/redfish/v1/Systems/{sys}/Processors/{proc}/AccelerationFunctions` |
| AccelerationFunction | `/redfish/v1/Systems/{sys}/Processors/{proc}/AccelerationFunctions/{fn}` |
| Port Collection (NetworkAdapter) | `/redfish/v1/Chassis/{ch}/NetworkAdapters/{na}/Ports` |
| Port (GraphicsController) | `/redfish/v1/Systems/{sys}/GraphicsControllers/{gpu}/Ports/{port}` |
| OperatingConfig Collection | `/redfish/v1/Systems/{sys}/Processors/{proc}/OperatingConfigs` |
| OperatingConfig | `/redfish/v1/Systems/{sys}/Processors/{proc}/OperatingConfigs/{cfg}` |

---

## 4. Error response format

On error, the service returns an HTTP status code and a JSON body with a single `error` property:

```json
{
  "error": {
    "code": "Base.1.12.0.ActionParameterMissing",
    "message": "...",
    "@Message.ExtendedInfo": [
      {
        "@odata.type": "#Message.v1_0_8.Message",
        "MessageId": "Base.1.12.0.ActionParameterMissing",
        "Message": "...",
        "MessageArgs": [],
        "Severity": "Warning",
        "Resolution": "..."
      }
    ]
  }
}
```

**Common status codes:**

| Code | Meaning |
|------|---------|
| 200 | OK with body |
| 201 | Created |
| 204 | Success, no body |
| 400 | Bad request / validation error |
| 401 | Unauthorized |
| 403 | Forbidden / firmware update in progress |
| 404 | Resource not found |
| 405 | Method not allowed |
| 412 | ETag precondition failed (PATCH) |
| 415 | Unsupported media type |
| 428 | Missing precondition header (PATCH) |
| 501 | Not implemented |

**Request validation sequence:**
1. Authorization check → 401
2. Entity privilege check → 403
3. URI existence → 404
4. Firmware update lock → 403
5. Method allowed → 405
6. Media type → 415
7. Body format → 400
8. PATCH: ETag header → 428/412
9. Property validation → 400

---

## 5. OEM: Inventory refresh (AMI/MSI-specific)

### 5.1 InventoryCrc — force component re-inventory

`GET/POST/DELETE /redfish/v1/Systems/{sys}/Oem/Ami/Inventory/Crc`

The `GroupCrcList` field lists current CRC checksums per component group. When a group's CRC
changes (the host sends new inventory) or is explicitly zeroed out via POST, the BMC discards its
cached inventory and re-reads that group from the host.

**CRC groups:**

| Group | Covers |
|-------|--------|
| `CPU` | Processors, ProcessorMetrics |
| `DIMM` | Memory, MemoryDomains, MemoryChunks, MemoryMetrics |
| `PCIE` | Storage, PCIeDevices, NetworkInterfaces, NetworkAdapters |
| `CERTIFICATES` | Boot Certificates |
| `SECUREBOOT` | SecureBoot data |

**POST — invalidate selected groups (force re-inventory):**

```
POST /redfish/v1/Systems/{sys}/Oem/Ami/Inventory/Crc
Content-Type: application/json

{
  "GroupCrcList": [
    { "CPU": 0 },
    { "DIMM": 0 },
    { "PCIE": 0 }
  ]
}
```

Setting a group's value to `0` signals the BMC to invalidate and repopulate that group on the next
host inventory push (typically at the next boot or host-interface inventory cycle).

**DELETE** — remove all CRC records entirely.

**Note:** Inventory data is populated by the host via the Redfish Host Interface (in-band),
not by the BMC itself. Zeroing a CRC group does not immediately re-read hardware — it marks
the group as stale so the next host-side inventory push will be accepted. A cold reboot is the
most reliable trigger.

### 5.2 InventoryData Status — monitor inventory processing

`GET /redfish/v1/Oem/Ami/InventoryData/Status`

Available only after the host has posted an inventory file. Shows the current processing state.

**Status enum:**

| Value | Meaning |
|-------|---------|
| `BootInProgress` | Host is booting |
| `Queued` | Processing task queued |
| `In-Progress` | Processing running in background |
| `Ready` / `Completed` | Processing finished successfully |
| `Failed` | Processing failed |

The response also includes:
- `InventoryData.DeletedModules` — array of groups updated in this population cycle
- `InventoryData.Messages` — warnings/errors encountered during processing
- `ProcessingTime` — milliseconds taken
- `LastModifiedTime` — ISO 8601 timestamp of the last successful update

### 5.3 Systems OEM properties — Inventory reference

`GET /redfish/v1/Systems/{sys}` → `Oem.Ami` contains:

| Property | Notes |
|----------|-------|
| `Inventory` | Reference to InventoryCrc URI + current GroupCrc data |
| `RedfishVersion` | BIOS Redfish version (populated via Host Interface) |
| `RtpVersion` | BIOS RTP version (populated via Host Interface) |
| `ManagerBootConfiguration.ManagerBootMode` | PATCH to trigger soft reset: `SoftReset` / `ResetTimeout` / `None` |

---
|
||||
|
||||
## 6. OEM: Component state actions
|
||||
|
||||
### 6.1 Memory enable/disable
|
||||
|
||||
```
|
||||
POST /redfish/v1/Systems/{sys}/Memory/{mem}/Actions/AmiBios.ChangeState
|
||||
Content-Type: application/json
|
||||
|
||||
{ "State": "Disabled" }
|
||||
```
|
||||
|
||||
Response: 204.
|
||||
|
||||
### 6.2 PCIeFunction enable/disable
|
||||
|
||||
```
|
||||
POST /redfish/v1/Chassis/{ch}/PCIeDevices/{dev}/PCIeFunctions/{fn}/Actions/AmiBios.ChangeState
|
||||
Content-Type: application/json
|
||||
|
||||
{ "State": "Disabled" }
|
||||
```
|
||||
|
||||
Response: 204.
|
||||
|
||||
---
|
||||
|
||||
## 7. OEM: Storage sensor readings
|
||||
|
||||
`GET /redfish/v1/Systems/{sys}/Storage/{stor}` → `Oem.Ami.StorageControllerSensors`
|
||||
|
||||
Array of sensor objects per storage controller instance. Each entry exposes:
|
||||
- `Reading` (Number) — current sensor value
|
||||
- `ReadingType` (String) — type of reading
|
||||
- `ReadingUnit` (String) — unit
|
||||
|
||||
---
|
||||
|
||||
## 8. OEM: Power and Thermal OwnerLUN
|
||||
|
||||
Both `GET /redfish/v1/Chassis/{ch}/Power` and `GET /redfish/v1/Chassis/{ch}/Thermal` expose
|
||||
`Oem.Ami.OwnerLUN` (Number, read-only) — the IPMI LUN associated with each
|
||||
temperature/fan/voltage sensor entry. Useful for correlating Redfish sensor readings with IPMI
|
||||
SDR records.

---

## 9. UpdateService

`GET /redfish/v1/UpdateService` → `Oem.Ami.BMC.DualImageConfiguration`:

| Property | Description |
|----------|-------------|
| `ActiveImage` | Currently active BMC image slot |
| `BootImage` | Image slot the BMC boots from |
| `FirmwareImage1Name` / `FirmwareImage1Version` | First image slot name + version |
| `FirmwareImage2Name` / `FirmwareImage2Version` | Second image slot name + version |

The standard `SimpleUpdate` action is available at `/redfish/v1/UpdateService/Actions/UpdateService.SimpleUpdate`.

---

## 10. Inventory refresh summary

| Approach | Trigger | Latency | Scope |
|----------|---------|---------|-------|
| Host reboot | Physical/soft reset | Minutes | All groups |
| `POST InventoryCrc` (groups = 0) | Explicit API call | Next host inventory push | Selected groups |
| Firmware update (`SimpleUpdate`) | Explicit API call | Minutes + reboot | Full platform |
| Sensor/telemetry reads | Always live on GET | Immediate | Sensors only |

**Key constraint:** `InventoryCrc POST` marks groups stale but does not re-read hardware
directly. Actual inventory data flows from the host to the BMC via the Redfish Host Interface
in-band channel, typically during POST/boot. For an immediate inventory refresh without a full
reboot, a soft reset via a `ManagerBootMode: SoftReset` PATCH may be sufficient on some
configurations.
Submodule internal/chart updated: a71f55a6f9...2fb01d30a6
File diff suppressed because it is too large
392 internal/collector/redfish_logentries.go (new file)
@@ -0,0 +1,392 @@
package collector

import (
	"context"
	"log"
	"net/http"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

const (
	redfishLogEntriesWindow    = 7 * 24 * time.Hour
	redfishLogEntriesMaxTotal  = 500
	redfishLogEntriesMaxPerSvc = 200
)

// collectRedfishLogEntries fetches hardware event log entries from Systems and Managers LogServices.
// Only hardware-relevant entries from the last 7 days are returned.
// For Systems: all log services except audit/journal/security/debug.
// For Managers: only the IPMI SEL service (Id="SEL") — audit and event logs are excluded.
func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client *http.Client, req Request, baseURL string, systemPaths, managerPaths []string) []map[string]interface{} {
	cutoff := time.Now().UTC().Add(-redfishLogEntriesWindow)
	seen := make(map[string]struct{})
	var out []map[string]interface{}

	collectFrom := func(logServicesPath string, filter func(map[string]interface{}) bool) {
		if len(out) >= redfishLogEntriesMaxTotal {
			return
		}
		services, err := c.getCollectionMembers(ctx, client, req, baseURL, logServicesPath)
		if err != nil || len(services) == 0 {
			return
		}
		for _, svc := range services {
			if len(out) >= redfishLogEntriesMaxTotal {
				break
			}
			if !filter(svc) {
				continue
			}
			entriesPath := redfishLogServiceEntriesPath(svc)
			if entriesPath == "" {
				continue
			}
			entries := c.fetchRedfishLogEntriesWithPaging(ctx, client, req, baseURL, entriesPath, cutoff, seen, redfishLogEntriesMaxPerSvc)
			out = append(out, entries...)
		}
	}

	for _, systemPath := range systemPaths {
		collectFrom(joinPath(systemPath, "/LogServices"), isHardwareLogService)
	}
	// Managers hold the IPMI SEL on AMI/MSI BMCs — include only the "SEL" service.
	for _, managerPath := range managerPaths {
		collectFrom(joinPath(managerPath, "/LogServices"), isManagerSELService)
	}

	if len(out) > 0 {
		log.Printf("redfish: collected %d hardware log entries (Systems+Managers SEL, window=7d)", len(out))
	}
	return out
}

// fetchRedfishLogEntriesWithPaging fetches entries from a LogEntry collection,
// following nextLink pages. Stops early when entries older than cutoff are encountered
// (assumes BMC returns entries newest-first, which is typical).
func (c *RedfishConnector) fetchRedfishLogEntriesWithPaging(ctx context.Context, client *http.Client, req Request, baseURL, entriesPath string, cutoff time.Time, seen map[string]struct{}, limit int) []map[string]interface{} {
	var out []map[string]interface{}
	nextPath := entriesPath

	for nextPath != "" && len(out) < limit {
		collection, err := c.getJSON(ctx, client, req, baseURL, nextPath)
		if err != nil {
			break
		}

		// Handle both linked members (@odata.id only) and inline members (full objects).
		rawMembers, _ := collection["Members"].([]interface{})
		hitOldEntry := false

		for _, rawMember := range rawMembers {
			if len(out) >= limit {
				break
			}
			memberMap, ok := rawMember.(map[string]interface{})
			if !ok {
				continue
			}

			var entry map[string]interface{}
			if _, hasCreated := memberMap["Created"]; hasCreated {
				// Inline entry — use directly.
				entry = memberMap
			} else {
				// Linked entry — fetch by path.
				memberPath := normalizeRedfishPath(asString(memberMap["@odata.id"]))
				if memberPath == "" {
					continue
				}
				entry, err = c.getJSON(ctx, client, req, baseURL, memberPath)
				if err != nil || len(entry) == 0 {
					continue
				}
			}

			// Dedup by entry Id or path.
			entryKey := asString(entry["Id"])
			if entryKey == "" {
				entryKey = asString(entry["@odata.id"])
			}
			if entryKey != "" {
				if _, dup := seen[entryKey]; dup {
					continue
				}
				seen[entryKey] = struct{}{}
			}

			// Time filter.
			created := parseRedfishEntryTime(asString(entry["Created"]))
			if !created.IsZero() && created.Before(cutoff) {
				hitOldEntry = true
				continue
			}

			// Hardware relevance filter.
			if !isHardwareLogEntry(entry) {
				continue
			}

			out = append(out, entry)
		}

		// Stop paging once we've seen entries older than the window.
		if hitOldEntry {
			break
		}
		nextPath = firstNonEmpty(
			normalizeRedfishPath(asString(collection["Members@odata.nextLink"])),
			normalizeRedfishPath(asString(collection["@odata.nextLink"])),
		)
	}
	return out
}

// isManagerSELService returns true only for the IPMI SEL exposed under Managers.
// On AMI/MSI BMCs the hardware SEL lives at Managers/{mgr}/LogServices/SEL.
// All other Manager log services (AuditLog, EventLog, Journal) are excluded.
func isManagerSELService(svc map[string]interface{}) bool {
	id := strings.ToLower(strings.TrimSpace(asString(svc["Id"])))
	return id == "sel"
}

// isHardwareLogService returns true if the log service looks like a hardware event log
// (SEL, System Event Log) rather than a BMC audit/journal log.
func isHardwareLogService(svc map[string]interface{}) bool {
	id := strings.ToLower(strings.TrimSpace(asString(svc["Id"])))
	name := strings.ToLower(strings.TrimSpace(asString(svc["Name"])))
	for _, skip := range []string{"audit", "journal", "bmc", "security", "manager", "debug"} {
		if strings.Contains(id, skip) || strings.Contains(name, skip) {
			return false
		}
	}
	return true
}

// redfishLogServiceEntriesPath returns the Entries collection path for a LogService document.
func redfishLogServiceEntriesPath(svc map[string]interface{}) string {
	if entriesLink, ok := svc["Entries"].(map[string]interface{}); ok {
		if p := normalizeRedfishPath(asString(entriesLink["@odata.id"])); p != "" {
			return p
		}
	}
	if id := normalizeRedfishPath(asString(svc["@odata.id"])); id != "" {
		return joinPath(id, "/Entries")
	}
	return ""
}

// isHardwareLogEntry returns true if the log entry is hardware-related.
// Audit, authentication, and session events are excluded.
func isHardwareLogEntry(entry map[string]interface{}) bool {
	entryType := strings.TrimSpace(asString(entry["EntryType"]))
	if strings.EqualFold(entryType, "Oem") {
		return false
	}

	msgID := strings.ToLower(strings.TrimSpace(asString(entry["MessageId"])))
	for _, skip := range []string{
		"user", "account", "password", "login", "logon", "session",
		"auth", "certificate", "security", "credential", "privilege",
	} {
		if strings.Contains(msgID, skip) {
			return false
		}
	}
	// Also check the human-readable message for obvious audit patterns.
	msg := strings.ToLower(strings.TrimSpace(asString(entry["Message"])))
	for _, skip := range []string{"logged in", "logged out", "log in", "log out", "sign in", "signed in"} {
		if strings.Contains(msg, skip) {
			return false
		}
	}
	return true
}

// parseRedfishEntryTime parses a Redfish LogEntry Created timestamp (ISO 8601 / RFC 3339).
func parseRedfishEntryTime(raw string) time.Time {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return time.Time{}
	}
	for _, layout := range []string{time.RFC3339, time.RFC3339Nano, "2006-01-02T15:04:05Z07:00"} {
		if t, err := time.Parse(layout, raw); err == nil {
			return t.UTC()
		}
	}
	return time.Time{}
}

// parseRedfishLogEntries converts raw log entries stored in RawPayloads into a models.Event slice.
// Called during Redfish replay for both live and offline (archive) collections.
func parseRedfishLogEntries(rawPayloads map[string]any, collectedAt time.Time) []models.Event {
	raw, ok := rawPayloads["redfish_log_entries"]
	if !ok {
		return nil
	}

	var entries []map[string]interface{}
	switch v := raw.(type) {
	case []map[string]interface{}:
		entries = v
	case []interface{}:
		for _, item := range v {
			if m, ok := item.(map[string]interface{}); ok {
				entries = append(entries, m)
			}
		}
	default:
		return nil
	}

	if len(entries) == 0 {
		return nil
	}

	out := make([]models.Event, 0, len(entries))
	for _, entry := range entries {
		ev := redfishLogEntryToEvent(entry, collectedAt)
		if ev == nil {
			continue
		}
		out = append(out, *ev)
	}
	return out
}

// redfishLogEntryToEvent converts a single Redfish LogEntry document to models.Event.
func redfishLogEntryToEvent(entry map[string]interface{}, collectedAt time.Time) *models.Event {
	// Prefer EventTimestamp (actual hardware event time) over Created (Redfish record creation time).
	ts := parseRedfishEntryTime(asString(entry["EventTimestamp"]))
	if ts.IsZero() {
		ts = parseRedfishEntryTime(asString(entry["Created"]))
	}
	if ts.IsZero() {
		ts = collectedAt
	}

	severity := redfishLogEntrySeverity(entry)
	sensorType := strings.TrimSpace(asString(entry["SensorType"]))
	messageID := strings.TrimSpace(asString(entry["MessageId"]))
	entryType := strings.TrimSpace(asString(entry["EntryType"]))
	entryCode := strings.TrimSpace(asString(entry["EntryCode"]))

	// SensorName: prefer "Name", fall back to "SensorNumber" + SensorType.
	sensorName := strings.TrimSpace(asString(entry["Name"]))
	if sensorName == "" {
		num := strings.TrimSpace(asString(entry["SensorNumber"]))
		if num != "" && sensorType != "" {
			sensorName = sensorType + " " + num
		}
	}

	rawMessage := strings.TrimSpace(asString(entry["Message"]))

	// AMI/MSI BMCs dump raw IPMI record fields into Message instead of human-readable text.
	// Detect this and build a readable description from structured fields instead.
	description, rawData := redfishDecodeMessage(rawMessage, sensorType, entryCode, entry)
	if description == "" {
		return nil
	}

	return &models.Event{
		ID:          messageID,
		Timestamp:   ts,
		Source:      "redfish",
		SensorType:  sensorType,
		SensorName:  sensorName,
		EventType:   entryType,
		Severity:    severity,
		Description: description,
		RawData:     rawData,
	}
}

// redfishDecodeMessage returns a human-readable description and optional raw data.
// AMI/MSI BMCs dump raw IPMI record fields into Message as "Key : Value, Key : Value, ..."
// instead of a plain human-readable string. We extract the useful decoded fields from it.
func redfishDecodeMessage(message, sensorType, entryCode string, entry map[string]interface{}) (description, rawData string) {
	if !isRawIPMIDump(message) {
		description = message
		return
	}

	rawData = message
	kv := parseIPMIDumpKV(message)

	// Sensor_Type inside the dump is more specific than the top-level SensorType field.
	if v := kv["Sensor_Type"]; v != "" {
		sensorType = v
	}
	eventType := kv["Event_Type"] // human-readable IPMI event type, e.g. "Legacy OFF State"

	var parts []string
	if sensorType != "" {
		parts = append(parts, sensorType)
	}
	if eventType != "" {
		parts = append(parts, eventType)
	} else if entryCode != "" {
		parts = append(parts, entryCode)
	}
	description = strings.Join(parts, ": ")
	return
}

// isRawIPMIDump returns true if the message is an AMI raw IPMI record dump.
func isRawIPMIDump(message string) bool {
	return strings.Contains(message, "Event_Data_1 :") && strings.Contains(message, "Record_Type :")
}

// parseIPMIDumpKV parses the AMI "Key : Value, Key : Value, " format into a map.
func parseIPMIDumpKV(message string) map[string]string {
	out := make(map[string]string)
	for _, part := range strings.Split(message, ",") {
		part = strings.TrimSpace(part)
		idx := strings.Index(part, " : ")
		if idx < 0 {
			continue
		}
		k := strings.TrimSpace(part[:idx])
		v := strings.TrimSpace(part[idx+3:])
		if k != "" && v != "" {
			out[k] = v
		}
	}
	return out
}

// redfishLogEntrySeverity maps a Redfish LogEntry to models.Severity.
// AMI/MSI BMCs often set Severity="OK" on all SEL records regardless of content,
// so we fall back to inferring severity from SensorType when the explicit field is unhelpful.
func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
	// Newer Redfish uses MessageSeverity; older uses Severity.
	raw := strings.ToLower(firstNonEmpty(
		strings.TrimSpace(asString(entry["MessageSeverity"])),
		strings.TrimSpace(asString(entry["Severity"])),
	))
	switch raw {
	case "critical":
		return models.SeverityCritical
	case "warning":
		return models.SeverityWarning
	case "ok", "informational", "":
		// BMC didn't set a meaningful severity — infer from SensorType.
		return redfishSeverityFromSensorType(strings.TrimSpace(asString(entry["SensorType"])))
	default:
		return models.SeverityInfo
	}
}

// redfishSeverityFromSensorType infers event severity from the IPMI/Redfish SensorType string.
func redfishSeverityFromSensorType(sensorType string) models.Severity {
	switch strings.ToLower(sensorType) {
	case "critical interrupt", "processor", "memory", "power unit",
		"power supply", "drive slot", "system firmware progress":
		return models.SeverityWarning
	default:
		return models.SeverityInfo
	}
}
File diff suppressed because it is too large
159 internal/collector/redfish_replay_fru.go (new file)
@@ -0,0 +1,159 @@
package collector

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func (r redfishSnapshotReader) collectBoardFallbackDocs(systemPaths, chassisPaths []string) []map[string]interface{} {
	out := make([]map[string]interface{}, 0)
	for _, chassisPath := range chassisPaths {
		for _, suffix := range []string{"/Boards", "/Backplanes"} {
			path := joinPath(chassisPath, suffix)
			if docs, err := r.getCollectionMembers(path); err == nil && len(docs) > 0 {
				out = append(out, docs...)
				continue
			}
			if doc, err := r.getJSON(path); err == nil && len(doc) > 0 {
				out = append(out, doc)
			}
		}
	}
	for _, path := range append(append([]string{}, systemPaths...), chassisPaths...) {
		for _, suffix := range []string{"/Oem/Public", "/Oem/Public/ThermalConfig", "/ThermalConfig"} {
			docPath := joinPath(path, suffix)
			if doc, err := r.getJSON(docPath); err == nil && len(doc) > 0 {
				out = append(out, doc)
			}
		}
	}
	return out
}

func applyBoardInfoFallbackFromDocs(board *models.BoardInfo, docs []map[string]interface{}) {
	if board == nil || len(docs) == 0 {
		return
	}
	for _, doc := range docs {
		candidate := parseBoardInfoFromFRUDoc(doc)
		if !isLikelyServerProductName(candidate.ProductName) {
			continue
		}
		if board.Manufacturer == "" {
			board.Manufacturer = candidate.Manufacturer
		}
		if board.ProductName == "" {
			board.ProductName = candidate.ProductName
		}
		if board.SerialNumber == "" {
			board.SerialNumber = candidate.SerialNumber
		}
		if board.PartNumber == "" {
			board.PartNumber = candidate.PartNumber
		}
		if board.Manufacturer != "" && board.ProductName != "" && board.SerialNumber != "" && board.PartNumber != "" {
			return
		}
	}
}

func isLikelyServerProductName(v string) bool {
	v = strings.TrimSpace(v)
	if v == "" {
		return false
	}
	n := strings.ToUpper(v)
	if strings.Contains(n, "NULL") {
		return false
	}
	componentTokens := []string{
		"DIMM", "DDR", "NVME", "SSD", "HDD", "GPU", "NIC", "RAID",
		"PSU", "FAN", "BACKPLANE", "FRU",
	}
	for _, token := range componentTokens {
		if strings.Contains(n, strings.ToUpper(token)) {
			return false
		}
	}
	return true
}

// collectAssemblyFRU reads Chassis/*/Assembly documents and returns FRU entries
// for subcomponents (backplanes, PSUs, DIMMs, etc.) that carry meaningful
// serial or part numbers. Entries already present in dedicated collections
// (PSUs, DIMMs) are included here as well so that all FRU data is available
// in one place; deduplication by serial is performed.
func (r redfishSnapshotReader) collectAssemblyFRU(chassisPaths []string) []models.FRUInfo {
	seen := make(map[string]struct{})
	var out []models.FRUInfo

	add := func(fru models.FRUInfo) {
		key := strings.ToUpper(strings.TrimSpace(fru.SerialNumber))
		if key == "" {
			key = strings.ToUpper(strings.TrimSpace(fru.Description + "|" + fru.PartNumber))
		}
		if key == "" || key == "|" {
			return
		}
		if _, ok := seen[key]; ok {
			return
		}
		seen[key] = struct{}{}
		out = append(out, fru)
	}

	for _, chassisPath := range chassisPaths {
		doc, err := r.getJSON(joinPath(chassisPath, "/Assembly"))
		if err != nil || len(doc) == 0 {
			continue
		}
		assemblies, _ := doc["Assemblies"].([]interface{})
		for _, aAny := range assemblies {
			a, ok := aAny.(map[string]interface{})
			if !ok {
				continue
			}
			name := strings.TrimSpace(firstNonEmpty(asString(a["Name"]), asString(a["Description"])))
			model := strings.TrimSpace(asString(a["Model"]))
			partNumber := strings.TrimSpace(asString(a["PartNumber"]))
			serial := extractAssemblySerial(a)

			if serial == "" && partNumber == "" {
				continue
			}
			add(models.FRUInfo{
				Description:  name,
				ProductName:  model,
				SerialNumber: serial,
				PartNumber:   partNumber,
			})
		}
	}
	return out
}

// extractAssemblySerial tries to find a serial number in an Assembly entry.
// Standard Redfish Assembly has no top-level SerialNumber; vendors put it in Oem.
func extractAssemblySerial(a map[string]interface{}) string {
	if s := strings.TrimSpace(asString(a["SerialNumber"])); s != "" {
		return s
	}
	oem, _ := a["Oem"].(map[string]interface{})
	for _, v := range oem {
		subtree, ok := v.(map[string]interface{})
		if !ok {
			continue
		}
		for _, v2 := range subtree {
			node, ok := v2.(map[string]interface{})
			if !ok {
				continue
			}
			if s := strings.TrimSpace(asString(node["SerialNumber"])); s != "" {
				return s
			}
		}
	}
	return ""
}
198 internal/collector/redfish_replay_gpu.go (new file)
@@ -0,0 +1,198 @@
package collector

import (
	"fmt"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/collector/redfishprofile"
	"git.mchus.pro/mchus/logpile/internal/models"
)

func (r redfishSnapshotReader) collectGPUs(systemPaths, chassisPaths []string, plan redfishprofile.ResolvedAnalysisPlan) []models.GPU {
	collections := make([]string, 0, len(systemPaths)*3+len(chassisPaths)*2)
	for _, systemPath := range systemPaths {
		collections = append(collections, joinPath(systemPath, "/PCIeDevices"))
		collections = append(collections, joinPath(systemPath, "/Accelerators"))
		collections = append(collections, joinPath(systemPath, "/GraphicsControllers"))
	}
	for _, chassisPath := range chassisPaths {
		collections = append(collections, joinPath(chassisPath, "/PCIeDevices"))
		collections = append(collections, joinPath(chassisPath, "/Accelerators"))
	}
	var out []models.GPU
	seen := make(map[string]struct{})
	idx := 1
	for _, collectionPath := range collections {
		memberDocs, err := r.getCollectionMembers(collectionPath)
		if err != nil || len(memberDocs) == 0 {
			continue
		}
		for _, doc := range memberDocs {
			functionDocs := r.getLinkedPCIeFunctions(doc)
			if !looksLikeGPU(doc, functionDocs) {
				continue
			}
			supplementalDocs := r.getLinkedSupplementalDocs(doc, "EnvironmentMetrics", "Metrics")
			for _, fn := range functionDocs {
				supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
			}
			gpu := parseGPUWithSupplementalDocs(doc, functionDocs, supplementalDocs, idx)
			idx++
			if plan.Directives.EnableGenericGraphicsControllerDedup && shouldSkipGenericGPUDuplicate(out, gpu) {
				continue
			}
			key := gpuDocDedupKey(doc, gpu)
			if key == "" {
				continue
			}
			if _, ok := seen[key]; ok {
				continue
			}
			seen[key] = struct{}{}
			out = append(out, gpu)
		}
	}
	if plan.Directives.EnableGenericGraphicsControllerDedup {
		return dropModelOnlyGPUPlaceholders(out)
	}
	return out
}

// msiGhostGPUFilter returns true when the GPU chassis for gpuID shows a temperature
// of 0 on a powered-on host, which is the reliable MSI/AMI signal that the GPU is
// no longer physically installed (stale BMC inventory cache).
// It only filters when the system PowerState is "On" — when the host is off, all
// temperature readings are 0 and we cannot distinguish absent from idle.
func (r redfishSnapshotReader) msiGhostGPUFilter(systemPaths []string, gpuID, chassisPath string) bool {
	// Require host powered on.
	for _, sp := range systemPaths {
		doc, err := r.getJSON(sp)
		if err != nil {
			continue
		}
		if !strings.EqualFold(strings.TrimSpace(asString(doc["PowerState"])), "on") {
			return false
		}
		break
	}
	// Read the temperature sensor for this GPU chassis.
	sensorPath := joinPath(chassisPath, "/Sensors/"+gpuID+"_Temperature")
	sensorDoc, err := r.getJSON(sensorPath)
	if err != nil || len(sensorDoc) == 0 {
		return false
	}
	reading, ok := sensorDoc["Reading"]
	if !ok {
		return false
	}
	switch v := reading.(type) {
	case float64:
		return v == 0
	case int:
		return v == 0
	case int64:
		return v == 0
	}
	return false
}

// collectGPUsFromProcessors finds GPUs that some BMCs (e.g. MSI) expose as
// Processor entries with ProcessorType=GPU rather than as PCIe devices.
// It supplements the existing gpus slice (already found via the PCIe path),
// skipping entries already present by UUID or SerialNumber.
// Serial numbers are looked up from Chassis members named after each GPU Id.
func (r redfishSnapshotReader) collectGPUsFromProcessors(systemPaths, chassisPaths []string, existing []models.GPU, plan redfishprofile.ResolvedAnalysisPlan) []models.GPU {
	if !plan.Directives.EnableProcessorGPUFallback {
		return append([]models.GPU{}, existing...)
	}
	chassisByID := make(map[string]map[string]interface{})
	chassisPathByID := make(map[string]string)
	for _, cp := range chassisPaths {
		doc, err := r.getJSON(cp)
		if err != nil || len(doc) == 0 {
			continue
		}
		id := strings.TrimSpace(asString(doc["Id"]))
		if id != "" {
			chassisByID[strings.ToUpper(id)] = doc
			chassisPathByID[strings.ToUpper(id)] = cp
		}
	}

	seenUUID := make(map[string]struct{})
	seenSerial := make(map[string]struct{})
	for _, g := range existing {
		if u := strings.ToUpper(strings.TrimSpace(g.UUID)); u != "" {
			seenUUID[u] = struct{}{}
		}
		if s := strings.ToUpper(strings.TrimSpace(g.SerialNumber)); s != "" {
			seenSerial[s] = struct{}{}
		}
	}

	out := append([]models.GPU{}, existing...)
	idx := len(existing) + 1
	for _, systemPath := range systemPaths {
		procDocs, err := r.getCollectionMembers(joinPath(systemPath, "/Processors"))
		if err != nil {
			continue
		}
		for _, doc := range procDocs {
			if !strings.EqualFold(strings.TrimSpace(asString(doc["ProcessorType"])), "GPU") {
				continue
			}

			gpuID := strings.TrimSpace(asString(doc["Id"]))
			serial := findFirstNormalizedStringByKeys(doc, "SerialNumber")
			if serial == "" {
				serial = resolveProcessorGPUChassisSerial(chassisByID, gpuID, plan)
			}

			if plan.Directives.EnableMSIGhostGPUFilter {
				chassisPath := resolveProcessorGPUChassisPath(chassisPathByID, gpuID, plan)
				if chassisPath != "" && r.msiGhostGPUFilter(systemPaths, gpuID, chassisPath) {
					continue
				}
			}

			uuid := strings.TrimSpace(asString(doc["UUID"]))
			uuidKey := strings.ToUpper(uuid)
			serialKey := strings.ToUpper(serial)

			if uuidKey != "" {
				if _, dup := seenUUID[uuidKey]; dup {
					continue
				}
				seenUUID[uuidKey] = struct{}{}
			}
			if serialKey != "" {
				if _, dup := seenSerial[serialKey]; dup {
					continue
				}
				seenSerial[serialKey] = struct{}{}
			}

			slotLabel := firstNonEmpty(
				redfishLocationLabel(doc["Location"]),
				redfishLocationLabel(doc["PhysicalLocation"]),
			)
			if slotLabel == "" && gpuID != "" {
				slotLabel = gpuID
			}
			if slotLabel == "" {
				slotLabel = fmt.Sprintf("GPU%d", idx)
			}
			out = append(out, models.GPU{
				Slot:         slotLabel,
				Model:        firstNonEmpty(asString(doc["Model"]), asString(doc["Name"])),
				Manufacturer: asString(doc["Manufacturer"]),
				PartNumber:   asString(doc["PartNumber"]),
				SerialNumber: serial,
				UUID:         uuid,
				Status:       mapStatus(doc["Status"]),
			})
			idx++
		}
	}
	return out
}
599 internal/collector/redfish_replay_inventory.go (new file)
@@ -0,0 +1,599 @@
package collector

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func (r redfishSnapshotReader) enrichNICsFromNetworkInterfaces(nics *[]models.NetworkAdapter, systemPaths []string) {
	if nics == nil {
		return
	}
	bySlot := make(map[string]int, len(*nics))
	for i, nic := range *nics {
		bySlot[strings.ToLower(strings.TrimSpace(nic.Slot))] = i
	}

	for _, systemPath := range systemPaths {
		ifaces, err := r.getCollectionMembers(joinPath(systemPath, "/NetworkInterfaces"))
		if err != nil || len(ifaces) == 0 {
			continue
		}
		for _, iface := range ifaces {
			slot := firstNonEmpty(asString(iface["Id"]), asString(iface["Name"]))
			if strings.TrimSpace(slot) == "" {
				continue
			}
			idx, ok := bySlot[strings.ToLower(strings.TrimSpace(slot))]
			if !ok {
				// The NetworkInterface Id (e.g. "2") may not match the display slot of
				// the real NIC that came from Chassis/NetworkAdapters (e.g. "RISER 5
				// slot 1 (7)"). Try to find the real NIC via the Links.NetworkAdapter
				// cross-reference before creating a ghost entry.
				if linkedIdx := r.findNICIndexByLinkedNetworkAdapter(iface, *nics, bySlot); linkedIdx >= 0 {
					idx = linkedIdx
					ok = true
				}
			}
			if !ok {
				*nics = append(*nics, models.NetworkAdapter{
					Slot:    slot,
					Present: true,
					Model:   firstNonEmpty(asString(iface["Model"]), asString(iface["Name"])),
					Status:  mapStatus(iface["Status"]),
				})
				idx = len(*nics) - 1
				bySlot[strings.ToLower(strings.TrimSpace(slot))] = idx
			}

			portsPath := redfishLinkedPath(iface, "NetworkPorts")
			if portsPath == "" {
				continue
			}
			portDocs, err := r.getCollectionMembers(portsPath)
			if err != nil || len(portDocs) == 0 {
				continue
			}
			macs := append([]string{}, (*nics)[idx].MACAddresses...)
			for _, p := range portDocs {
				macs = append(macs, collectNetworkPortMACs(p)...)
			}
			(*nics)[idx].MACAddresses = dedupeStrings(macs)
			if sanitizeNetworkPortCount((*nics)[idx].PortCount) == 0 {
				(*nics)[idx].PortCount = len(portDocs)
			}
		}
	}
}

func (r redfishSnapshotReader) collectNICs(chassisPaths []string) []models.NetworkAdapter {
	var nics []models.NetworkAdapter
	for _, chassisPath := range chassisPaths {
		adapterDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/NetworkAdapters"))
		if err != nil {
			continue
		}
		for _, doc := range adapterDocs {
			nics = append(nics, r.buildNICFromAdapterDoc(doc))
		}
	}
	return dedupeNetworkAdapters(nics)
}

func (r redfishSnapshotReader) buildNICFromAdapterDoc(adapterDoc map[string]interface{}) models.NetworkAdapter {
	nic := parseNIC(adapterDoc)
	adapterFunctionDocs := r.getNetworkAdapterFunctionDocs(adapterDoc)
	for _, pciePath := range networkAdapterPCIeDevicePaths(adapterDoc) {
		pcieDoc, err := r.getJSON(pciePath)
		if err != nil {
			continue
		}
		functionDocs := r.getLinkedPCIeFunctions(pcieDoc)
		for _, adapterFnDoc := range adapterFunctionDocs {
			functionDocs = append(functionDocs, r.getLinkedPCIeFunctions(adapterFnDoc)...)
		}
		functionDocs = dedupeJSONDocsByPath(functionDocs)
		supplementalDocs := r.getLinkedSupplementalDocs(pcieDoc, "EnvironmentMetrics", "Metrics")
		for _, fn := range functionDocs {
			supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
		}
		enrichNICFromPCIe(&nic, pcieDoc, functionDocs, supplementalDocs)
	}
	if len(nic.MACAddresses) == 0 {
		r.enrichNICMACsFromNetworkDeviceFunctions(&nic, adapterDoc)
	}
	return nic
}

func (r redfishSnapshotReader) getNetworkAdapterFunctionDocs(adapterDoc map[string]interface{}) []map[string]interface{} {
	ndfCol, ok := adapterDoc["NetworkDeviceFunctions"].(map[string]interface{})
	if !ok {
		return nil
	}
	colPath := asString(ndfCol["@odata.id"])
	if colPath == "" {
		return nil
	}
	funcDocs, err := r.getCollectionMembers(colPath)
	if err != nil {
		return nil
	}
	return funcDocs
}

func (r redfishSnapshotReader) collectPCIeDevices(systemPaths, chassisPaths []string) []models.PCIeDevice {
	collections := make([]string, 0, len(systemPaths)+len(chassisPaths))
	for _, systemPath := range systemPaths {
		collections = append(collections, joinPath(systemPath, "/PCIeDevices"))
	}
	for _, chassisPath := range chassisPaths {
		collections = append(collections, joinPath(chassisPath, "/PCIeDevices"))
	}
	var out []models.PCIeDevice
	for _, collectionPath := range collections {
		memberDocs, err := r.getCollectionMembers(collectionPath)
		if err != nil || len(memberDocs) == 0 {
			continue
		}
		for _, doc := range memberDocs {
			functionDocs := r.getLinkedPCIeFunctions(doc)
			if looksLikeGPU(doc, functionDocs) {
				continue
			}
			if replayPCIeDeviceBackedByCanonicalNIC(doc, functionDocs) {
				continue
			}
			supplementalDocs := r.getLinkedSupplementalDocs(doc, "EnvironmentMetrics", "Metrics")
			supplementalDocs = append(supplementalDocs, r.getChassisScopedPCIeSupplementalDocs(doc)...)
			for _, fn := range functionDocs {
				supplementalDocs = append(supplementalDocs, r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")...)
			}
			dev := parsePCIeDeviceWithSupplementalDocs(doc, functionDocs, supplementalDocs)
			if shouldSkipReplayPCIeDevice(doc, dev) {
				continue
			}
			out = append(out, dev)
		}
	}
	for _, systemPath := range systemPaths {
		functionDocs, err := r.getCollectionMembers(joinPath(systemPath, "/PCIeFunctions"))
		if err != nil || len(functionDocs) == 0 {
			continue
		}
		for idx, fn := range functionDocs {
			supplementalDocs := r.getLinkedSupplementalDocs(fn, "EnvironmentMetrics", "Metrics")
			dev := parsePCIeFunctionWithSupplementalDocs(fn, supplementalDocs, idx+1)
			if shouldSkipReplayPCIeDevice(fn, dev) {
				continue
			}
			out = append(out, dev)
		}
	}
	return dedupePCIeDevices(out)
}

func shouldSkipReplayPCIeDevice(doc map[string]interface{}, dev models.PCIeDevice) bool {
	if isUnidentifiablePCIeDevice(dev) {
		return true
	}
	if replayNetworkFunctionBackedByCanonicalNIC(doc, dev) {
		return true
	}
	if isReplayStorageServiceEndpoint(doc, dev) {
		return true
	}
	if isReplayNoisePCIeClass(dev.DeviceClass) {
		return true
	}
	if isReplayDisplayDeviceDuplicate(doc, dev) {
		return true
	}
	return false
}

func replayPCIeDeviceBackedByCanonicalNIC(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
	if !looksLikeReplayNetworkPCIeDevice(doc, functionDocs) {
		return false
	}
	for _, fn := range functionDocs {
		if hasRedfishLinkedMember(fn, "NetworkDeviceFunctions") {
			return true
		}
	}
	return false
}

func replayNetworkFunctionBackedByCanonicalNIC(doc map[string]interface{}, dev models.PCIeDevice) bool {
	if !looksLikeReplayNetworkClass(dev.DeviceClass) {
		return false
	}
	return hasRedfishLinkedMember(doc, "NetworkDeviceFunctions")
}

func looksLikeReplayNetworkPCIeDevice(doc map[string]interface{}, functionDocs []map[string]interface{}) bool {
	for _, fn := range functionDocs {
		if looksLikeReplayNetworkClass(asString(fn["DeviceClass"])) {
			return true
		}
	}
	joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
		asString(doc["DeviceType"]),
		asString(doc["Description"]),
		asString(doc["Name"]),
		asString(doc["Model"]),
	}, " ")))
	return strings.Contains(joined, "network")
}

func looksLikeReplayNetworkClass(class string) bool {
	class = strings.ToLower(strings.TrimSpace(class))
	return strings.Contains(class, "network") || strings.Contains(class, "ethernet")
}

func isReplayStorageServiceEndpoint(doc map[string]interface{}, dev models.PCIeDevice) bool {
	class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
	if class != "massstoragecontroller" && class != "mass storage controller" {
		return false
	}
	name := strings.ToLower(strings.TrimSpace(firstNonEmpty(
		dev.PartNumber,
		asString(doc["PartNumber"]),
		asString(doc["Description"]),
	)))
	if strings.Contains(name, "pcie switch management endpoint") {
		return true
	}
	if strings.Contains(name, "volume management device nvme raid controller") {
		return true
	}
	return false
}

func hasRedfishLinkedMember(doc map[string]interface{}, key string) bool {
	links, ok := doc["Links"].(map[string]interface{})
	if !ok {
		return false
	}
	if asInt(links[key+"@odata.count"]) > 0 {
		return true
	}
	linked, ok := links[key]
	if !ok {
		return false
	}
	switch v := linked.(type) {
	case []interface{}:
		return len(v) > 0
	case map[string]interface{}:
		if asString(v["@odata.id"]) != "" {
			return true
		}
		return len(v) > 0
	default:
		return false
	}
}

func isReplayNoisePCIeClass(class string) bool {
	switch strings.ToLower(strings.TrimSpace(class)) {
	case "bridge", "processor", "signalprocessingcontroller", "signal processing controller", "serialbuscontroller", "serial bus controller":
		return true
	default:
		return false
	}
}

func isReplayDisplayDeviceDuplicate(doc map[string]interface{}, dev models.PCIeDevice) bool {
	class := strings.ToLower(strings.TrimSpace(dev.DeviceClass))
	if class != "displaycontroller" && class != "display controller" {
		return false
	}
	return strings.EqualFold(strings.TrimSpace(asString(doc["Description"])), "Display Device")
}

func (r redfishSnapshotReader) getChassisScopedPCIeSupplementalDocs(doc map[string]interface{}) []map[string]interface{} {
	docPath := normalizeRedfishPath(asString(doc["@odata.id"]))
	chassisPath := chassisPathForPCIeDoc(docPath)
	if chassisPath == "" {
		return nil
	}

	out := make([]map[string]interface{}, 0, 6)
	if looksLikeNVSwitchPCIeDoc(doc) {
		for _, path := range []string{
			joinPath(chassisPath, "/EnvironmentMetrics"),
			joinPath(chassisPath, "/ThermalSubsystem/ThermalMetrics"),
		} {
			supplementalDoc, err := r.getJSON(path)
			if err != nil || len(supplementalDoc) == 0 {
				continue
			}
			out = append(out, supplementalDoc)
		}
	}
	deviceDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/Devices"))
	if err == nil {
		for _, deviceDoc := range deviceDocs {
			if !redfishPCIeMatchesChassisDeviceDoc(doc, deviceDoc) {
				continue
			}
			out = append(out, deviceDoc)
		}
	}
	return out
}

// collectBMCMAC returns the MAC address of the best BMC management interface
// found in Managers/*/EthernetInterfaces. Prefer an active link with an IP
// address over a passive sideband interface.
func (r redfishSnapshotReader) collectBMCMAC(managerPaths []string) string {
	summary := r.collectBMCManagementSummary(managerPaths)
	if len(summary) == 0 {
		return ""
	}
	return strings.ToUpper(strings.TrimSpace(asString(summary["mac_address"])))
}

func (r redfishSnapshotReader) collectBMCManagementSummary(managerPaths []string) map[string]any {
	bestScore := -1
	var best map[string]any
	for _, managerPath := range managerPaths {
		collectionPath := joinPath(managerPath, "/EthernetInterfaces")
		collectionDoc, _ := r.getJSON(collectionPath)
		ncsiEnabled, lldpMode, lldpByEth := redfishManagerEthernetCollectionHints(collectionDoc)
		members, err := r.getCollectionMembers(collectionPath)
		if err != nil || len(members) == 0 {
			continue
		}
		for _, doc := range members {
			mac := strings.TrimSpace(firstNonEmpty(
				asString(doc["PermanentMACAddress"]),
				asString(doc["MACAddress"]),
			))
			if mac == "" || strings.EqualFold(mac, "00:00:00:00:00:00") {
				continue
			}
			ifaceID := strings.TrimSpace(firstNonEmpty(asString(doc["Id"]), asString(doc["Name"])))
			summary := map[string]any{
				"manager_path":    managerPath,
				"interface_id":    ifaceID,
				"hostname":        strings.TrimSpace(asString(doc["HostName"])),
				"fqdn":            strings.TrimSpace(asString(doc["FQDN"])),
				"mac_address":     strings.ToUpper(mac),
				"link_status":     strings.TrimSpace(asString(doc["LinkStatus"])),
				"speed_mbps":      asInt(doc["SpeedMbps"]),
				"interface_name":  strings.TrimSpace(asString(doc["Name"])),
				"interface_desc":  strings.TrimSpace(asString(doc["Description"])),
				"ncsi_enabled":    ncsiEnabled,
				"lldp_mode":       lldpMode,
				"ipv4_address":    redfishManagerIPv4Field(doc, "Address"),
				"ipv4_gateway":    redfishManagerIPv4Field(doc, "Gateway"),
				"ipv4_subnet":     redfishManagerIPv4Field(doc, "SubnetMask"),
				"ipv6_address":    redfishManagerIPv6Field(doc, "Address"),
				"link_is_active":  strings.EqualFold(strings.TrimSpace(asString(doc["LinkStatus"])), "LinkActive"),
				"interface_score": 0,
			}
			if lldp, ok := lldpByEth[strings.ToLower(ifaceID)]; ok {
				summary["lldp_chassis_name"] = lldp["ChassisName"]
				summary["lldp_port_desc"] = lldp["PortDesc"]
				summary["lldp_port_id"] = lldp["PortId"]
				if vlan := asInt(lldp["VlanId"]); vlan > 0 {
					summary["lldp_vlan_id"] = vlan
				}
			}
			score := redfishManagerInterfaceScore(summary)
			summary["interface_score"] = score
			if score > bestScore {
				bestScore = score
				best = summary
			}
		}
	}
	return best
}

func redfishManagerEthernetCollectionHints(collectionDoc map[string]interface{}) (bool, string, map[string]map[string]interface{}) {
	lldpByEth := make(map[string]map[string]interface{})
	if len(collectionDoc) == 0 {
		return false, "", lldpByEth
	}
	oem, _ := collectionDoc["Oem"].(map[string]interface{})
	public, _ := oem["Public"].(map[string]interface{})
	ncsiEnabled := asBool(public["NcsiEnabled"])
	lldp, _ := public["LLDP"].(map[string]interface{})
	lldpMode := strings.TrimSpace(asString(lldp["LLDPMode"]))
	if members, ok := lldp["Members"].([]interface{}); ok {
		for _, item := range members {
			member, ok := item.(map[string]interface{})
			if !ok {
				continue
			}
			ethIndex := strings.ToLower(strings.TrimSpace(asString(member["EthIndex"])))
			if ethIndex == "" {
				continue
			}
			lldpByEth[ethIndex] = member
		}
	}
	return ncsiEnabled, lldpMode, lldpByEth
}

func redfishManagerIPv4Field(doc map[string]interface{}, key string) string {
	if len(doc) == 0 {
		return ""
	}
	for _, field := range []string{"IPv4Addresses", "IPv4StaticAddresses"} {
		list, ok := doc[field].([]interface{})
		if !ok {
			continue
		}
		for _, item := range list {
			entry, ok := item.(map[string]interface{})
			if !ok {
				continue
			}
			value := strings.TrimSpace(asString(entry[key]))
			if value != "" {
				return value
			}
		}
	}
	return ""
}

func redfishManagerIPv6Field(doc map[string]interface{}, key string) string {
	if len(doc) == 0 {
		return ""
	}
	list, ok := doc["IPv6Addresses"].([]interface{})
	if !ok {
		return ""
	}
	for _, item := range list {
		entry, ok := item.(map[string]interface{})
		if !ok {
			continue
		}
		value := strings.TrimSpace(asString(entry[key]))
		if value != "" {
			return value
		}
	}
	return ""
}

func redfishManagerInterfaceScore(summary map[string]any) int {
	score := 0
	if strings.EqualFold(strings.TrimSpace(asString(summary["link_status"])), "LinkActive") {
		score += 100
	}
	if strings.TrimSpace(asString(summary["ipv4_address"])) != "" {
		score += 40
	}
	if strings.TrimSpace(asString(summary["ipv6_address"])) != "" {
		score += 10
	}
	if strings.TrimSpace(asString(summary["mac_address"])) != "" {
		score += 10
	}
	if asInt(summary["speed_mbps"]) > 0 {
		score += 5
	}
	if ifaceID := strings.ToLower(strings.TrimSpace(asString(summary["interface_id"]))); ifaceID != "" && !strings.HasPrefix(ifaceID, "usb") {
		score += 3
	}
	if asBool(summary["ncsi_enabled"]) {
		score += 1
	}
	return score
}

// findNICIndexByLinkedNetworkAdapter resolves a NetworkInterface document to an
// existing NIC in bySlot by following Links.NetworkAdapter → the Chassis
// NetworkAdapter doc and reconstructing the canonical NIC identity. Returns -1
// if no match is found.
func (r redfishSnapshotReader) findNICIndexByLinkedNetworkAdapter(iface map[string]interface{}, existing []models.NetworkAdapter, bySlot map[string]int) int {
	links, ok := iface["Links"].(map[string]interface{})
	if !ok {
		return -1
	}
	adapterRef, ok := links["NetworkAdapter"].(map[string]interface{})
	if !ok {
		return -1
	}
	adapterPath := normalizeRedfishPath(asString(adapterRef["@odata.id"]))
	if adapterPath == "" {
		return -1
	}
	adapterDoc, err := r.getJSON(adapterPath)
	if err != nil || len(adapterDoc) == 0 {
		return -1
	}
	adapterNIC := r.buildNICFromAdapterDoc(adapterDoc)
	if serial := normalizeRedfishIdentityField(adapterNIC.SerialNumber); serial != "" {
		for idx, nic := range existing {
			if strings.EqualFold(normalizeRedfishIdentityField(nic.SerialNumber), serial) {
				return idx
			}
		}
	}
	if bdf := strings.TrimSpace(adapterNIC.BDF); bdf != "" {
		for idx, nic := range existing {
			if strings.EqualFold(strings.TrimSpace(nic.BDF), bdf) {
				return idx
			}
		}
	}
	if slot := strings.ToLower(strings.TrimSpace(adapterNIC.Slot)); slot != "" {
		if idx, ok := bySlot[slot]; ok {
			return idx
		}
	}
	for idx, nic := range existing {
		if networkAdaptersShareMACs(nic, adapterNIC) {
			return idx
		}
	}
	return -1
}

func networkAdaptersShareMACs(a, b models.NetworkAdapter) bool {
	if len(a.MACAddresses) == 0 || len(b.MACAddresses) == 0 {
		return false
	}
	seen := make(map[string]struct{}, len(a.MACAddresses))
	for _, mac := range a.MACAddresses {
		normalized := strings.ToUpper(strings.TrimSpace(mac))
		if normalized == "" {
			continue
		}
		seen[normalized] = struct{}{}
	}
	for _, mac := range b.MACAddresses {
		normalized := strings.ToUpper(strings.TrimSpace(mac))
		if normalized == "" {
			continue
		}
		if _, ok := seen[normalized]; ok {
			return true
		}
	}
	return false
}

// enrichNICMACsFromNetworkDeviceFunctions reads the NetworkDeviceFunctions
// collection linked from a NetworkAdapter document and populates the NIC's
// MACAddresses from each function's Ethernet.PermanentMACAddress / MACAddress.
// Called when PCIe-path enrichment does not produce any MACs.
func (r redfishSnapshotReader) enrichNICMACsFromNetworkDeviceFunctions(nic *models.NetworkAdapter, adapterDoc map[string]interface{}) {
	ndfCol, ok := adapterDoc["NetworkDeviceFunctions"].(map[string]interface{})
	if !ok {
		return
	}
	colPath := asString(ndfCol["@odata.id"])
	if colPath == "" {
		return
	}
	funcDocs, err := r.getCollectionMembers(colPath)
	if err != nil || len(funcDocs) == 0 {
		return
	}
	for _, fn := range funcDocs {
		eth, _ := fn["Ethernet"].(map[string]interface{})
		if eth == nil {
			continue
		}
		mac := strings.TrimSpace(firstNonEmpty(
			asString(eth["PermanentMACAddress"]),
			asString(eth["MACAddress"]),
		))
		if mac == "" {
			continue
		}
		nic.MACAddresses = dedupeStrings(append(nic.MACAddresses, strings.ToUpper(mac)))
	}
	if len(funcDocs) > 0 && nic.PortCount == 0 {
		nic.PortCount = sanitizeNetworkPortCount(len(funcDocs))
	}
}
100	internal/collector/redfish_replay_profiles.go	Normal file
@@ -0,0 +1,100 @@
package collector

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/collector/redfishprofile"
)

func (r redfishSnapshotReader) collectKnownStorageMembers(systemPath string, relativeCollections []string) []map[string]interface{} {
	var out []map[string]interface{}
	for _, rel := range relativeCollections {
		docs, err := r.getCollectionMembers(joinPath(systemPath, rel))
		if err != nil || len(docs) == 0 {
			continue
		}
		out = append(out, docs...)
	}
	return out
}

func (r redfishSnapshotReader) probeSupermicroNVMeDiskBays(backplanePath string) []map[string]interface{} {
	return r.probeDirectDiskBayChildren(joinPath(backplanePath, "/Drives"))
}

func (r redfishSnapshotReader) probeDirectDiskBayChildren(drivesCollectionPath string) []map[string]interface{} {
	var out []map[string]interface{}
	for _, path := range directDiskBayCandidates(drivesCollectionPath) {
		doc, err := r.getJSON(path)
		if err != nil || !looksLikeDrive(doc) {
			continue
		}
		out = append(out, doc)
	}
	return out
}

func resolveProcessorGPUChassisSerial(chassisByID map[string]map[string]interface{}, gpuID string, plan redfishprofile.ResolvedAnalysisPlan) string {
	for _, candidateID := range processorGPUChassisCandidateIDs(gpuID, plan) {
		if chassisDoc, ok := chassisByID[strings.ToUpper(candidateID)]; ok {
			if serial := strings.TrimSpace(asString(chassisDoc["SerialNumber"])); serial != "" {
				return serial
			}
		}
	}
	return ""
}

func resolveProcessorGPUChassisPath(chassisPathByID map[string]string, gpuID string, plan redfishprofile.ResolvedAnalysisPlan) string {
	for _, candidateID := range processorGPUChassisCandidateIDs(gpuID, plan) {
		if p, ok := chassisPathByID[strings.ToUpper(candidateID)]; ok {
			return p
		}
	}
	return ""
}

func processorGPUChassisCandidateIDs(gpuID string, plan redfishprofile.ResolvedAnalysisPlan) []string {
	gpuID = strings.TrimSpace(gpuID)
	if gpuID == "" {
		return nil
	}
	candidates := []string{gpuID}
	for _, mode := range plan.ProcessorGPUChassisLookupModes {
		switch strings.ToLower(strings.TrimSpace(mode)) {
		case "msi-index":
			candidates = append(candidates, msiProcessorGPUChassisCandidateIDs(gpuID)...)
		case "hgx-alias":
			if strings.HasPrefix(strings.ToUpper(gpuID), "GPU_") {
				candidates = append(candidates, "HGX_"+gpuID)
			}
		}
	}
	return dedupeStrings(candidates)
}

func msiProcessorGPUChassisCandidateIDs(gpuID string) []string {
	gpuID = strings.TrimSpace(strings.ToUpper(gpuID))
	if gpuID == "" {
		return nil
	}
	var out []string
	switch {
	case strings.HasPrefix(gpuID, "GPU_SXM_"):
		index := strings.TrimPrefix(gpuID, "GPU_SXM_")
		if index != "" {
			out = append(out, "GPU"+index, "GPU_"+index)
		}
	case strings.HasPrefix(gpuID, "GPU_"):
		index := strings.TrimPrefix(gpuID, "GPU_")
		if index != "" {
			out = append(out, "GPU"+index, "GPU_SXM_"+index)
		}
	case strings.HasPrefix(gpuID, "GPU"):
		index := strings.TrimPrefix(gpuID, "GPU")
		if index != "" {
			out = append(out, "GPU_"+index, "GPU_SXM_"+index)
		}
	}
	return out
}
167	internal/collector/redfish_replay_storage.go	Normal file
@@ -0,0 +1,167 @@
package collector

import (
	"git.mchus.pro/mchus/logpile/internal/collector/redfishprofile"
	"git.mchus.pro/mchus/logpile/internal/models"
)

func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishprofile.ResolvedAnalysisPlan) []models.Storage {
	var out []models.Storage
	storageMembers, _ := r.getCollectionMembers(joinPath(systemPath, "/Storage"))
	for _, member := range storageMembers {
		if driveCollection, ok := member["Drives"].(map[string]interface{}); ok {
			if driveCollectionPath := asString(driveCollection["@odata.id"]); driveCollectionPath != "" {
				driveDocs, err := r.getCollectionMembers(driveCollectionPath)
				if err == nil {
					for _, driveDoc := range driveDocs {
						if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
							supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
							out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
						}
					}
					if len(driveDocs) == 0 {
						for _, driveDoc := range r.probeDirectDiskBayChildren(driveCollectionPath) {
							if isAbsentDriveDoc(driveDoc) {
								continue
							}
							supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
							out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
						}
					}
				}
				continue
			}
		}
		if drives, ok := member["Drives"].([]interface{}); ok {
			for _, driveAny := range drives {
				driveRef, ok := driveAny.(map[string]interface{})
				if !ok {
					continue
				}
				odata := asString(driveRef["@odata.id"])
				if odata == "" {
					continue
				}
				driveDoc, err := r.getJSON(odata)
				if err != nil {
					continue
				}
				if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
					supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
					out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
				}
			}
			continue
		}
		if looksLikeDrive(member) {
			if isAbsentDriveDoc(member) || isVirtualStorageDrive(member) {
				continue
			}
			supplementalDocs := r.getLinkedSupplementalDocs(member, "DriveMetrics", "EnvironmentMetrics", "Metrics")
			out = append(out, parseDriveWithSupplementalDocs(member, supplementalDocs...))
		}

		if plan.Directives.EnableStorageEnclosureRecovery {
			for _, enclosurePath := range redfishLinkRefs(member, "Links", "Enclosures") {
				driveDocs, err := r.getCollectionMembers(joinPath(enclosurePath, "/Drives"))
				if err == nil {
					for _, driveDoc := range driveDocs {
						if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
							supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
							out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
						}
					}
					if len(driveDocs) == 0 {
						for _, driveDoc := range r.probeDirectDiskBayChildren(joinPath(enclosurePath, "/Drives")) {
							if isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
								continue
							}
							out = append(out, parseDrive(driveDoc))
						}
					}
				}
			}
		}
	}

	if len(plan.KnownStorageDriveCollections) > 0 {
		for _, driveDoc := range r.collectKnownStorageMembers(systemPath, plan.KnownStorageDriveCollections) {
			if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
				supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
				out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
			}
		}
	}

	simpleStorageMembers, _ := r.getCollectionMembers(joinPath(systemPath, "/SimpleStorage"))
	for _, member := range simpleStorageMembers {
		devices, ok := member["Devices"].([]interface{})
		if !ok {
			continue
		}
		for _, devAny := range devices {
			devDoc, ok := devAny.(map[string]interface{})
			if !ok || !looksLikeDrive(devDoc) || isAbsentDriveDoc(devDoc) || isVirtualStorageDrive(devDoc) {
				continue
			}
			out = append(out, parseDrive(devDoc))
		}
	}

	chassisPaths := r.discoverMemberPaths("/redfish/v1/Chassis", "/redfish/v1/Chassis/1")
	for _, chassisPath := range chassisPaths {
		driveDocs, err := r.getCollectionMembers(joinPath(chassisPath, "/Drives"))
		if err != nil {
			continue
		}
		for _, driveDoc := range driveDocs {
			if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
				continue
			}
			out = append(out, parseDrive(driveDoc))
		}
	}
	if plan.Directives.EnableSupermicroNVMeBackplane {
		for _, chassisPath := range chassisPaths {
			if !isSupermicroNVMeBackplanePath(chassisPath) {
				continue
			}
			for _, driveDoc := range r.probeSupermicroNVMeDiskBays(chassisPath) {
				if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
					continue
				}
				out = append(out, parseDrive(driveDoc))
			}
		}
	}
	return dedupeStorage(out)
}

func (r redfishSnapshotReader) collectStorageVolumes(systemPath string, plan redfishprofile.ResolvedAnalysisPlan) []models.StorageVolume {
	var out []models.StorageVolume
	storageMembers, _ := r.getCollectionMembers(joinPath(systemPath, "/Storage"))
	for _, member := range storageMembers {
		controller := firstNonEmpty(asString(member["Id"]), asString(member["Name"]))
		volumeCollectionPath := redfishLinkedPath(member, "Volumes")
		if volumeCollectionPath == "" {
			continue
		}
		volumeDocs, err := r.getCollectionMembers(volumeCollectionPath)
		if err != nil {
			continue
		}
		for _, volDoc := range volumeDocs {
			if looksLikeVolume(volDoc) {
				out = append(out, parseStorageVolume(volDoc, controller))
			}
		}
	}
	if len(plan.KnownStorageVolumeCollections) > 0 {
		for _, volDoc := range r.collectKnownStorageMembers(systemPath, plan.KnownStorageVolumeCollections) {
			if looksLikeVolume(volDoc) {
				out = append(out, parseStorageVolume(volDoc, storageControllerFromPath(asString(volDoc["@odata.id"]))))
			}
		}
	}
	return dedupeStorageVolumes(out)
}
File diff suppressed because it is too large
162	internal/collector/redfishprofile/acquisition.go	Normal file
@@ -0,0 +1,162 @@
|
||||
package redfishprofile

import "strings"

func ResolveAcquisitionPlan(match MatchResult, plan AcquisitionPlan, discovered DiscoveredResources, signals MatchSignals) ResolvedAcquisitionPlan {
	seedGroups := [][]string{
		baselineSeedPaths(discovered),
		expandScopedSuffixes(discovered.SystemPaths, plan.ScopedPaths.SystemSeedSuffixes),
		expandScopedSuffixes(discovered.ChassisPaths, plan.ScopedPaths.ChassisSeedSuffixes),
		expandScopedSuffixes(discovered.ManagerPaths, plan.ScopedPaths.ManagerSeedSuffixes),
		plan.SeedPaths,
	}
	if plan.Mode == ModeFallback {
		seedGroups = append(seedGroups, plan.PlanBPaths)
	}

	criticalGroups := [][]string{
		baselineCriticalPaths(discovered),
		expandScopedSuffixes(discovered.SystemPaths, plan.ScopedPaths.SystemCriticalSuffixes),
		expandScopedSuffixes(discovered.ChassisPaths, plan.ScopedPaths.ChassisCriticalSuffixes),
		expandScopedSuffixes(discovered.ManagerPaths, plan.ScopedPaths.ManagerCriticalSuffixes),
		plan.CriticalPaths,
	}

	resolved := ResolvedAcquisitionPlan{
		Plan:          plan,
		SeedPaths:     mergeResolvedPaths(seedGroups...),
		CriticalPaths: mergeResolvedPaths(criticalGroups...),
	}
	for _, profile := range match.Profiles {
		profile.RefineAcquisitionPlan(&resolved, discovered, signals)
	}
	resolved.SeedPaths = mergeResolvedPaths(resolved.SeedPaths)
	resolved.CriticalPaths = mergeResolvedPaths(resolved.CriticalPaths, resolved.Plan.CriticalPaths)
	resolved.Plan.SeedPaths = mergeResolvedPaths(resolved.Plan.SeedPaths)
	resolved.Plan.CriticalPaths = mergeResolvedPaths(resolved.Plan.CriticalPaths)
	resolved.Plan.PlanBPaths = mergeResolvedPaths(resolved.Plan.PlanBPaths)
	return resolved
}

func baselineSeedPaths(discovered DiscoveredResources) []string {
	var out []string
	add := func(p string) {
		if p = normalizePath(p); p != "" {
			out = append(out, p)
		}
	}

	add("/redfish/v1/UpdateService")
	add("/redfish/v1/UpdateService/FirmwareInventory")

	for _, p := range discovered.SystemPaths {
		add(p)
		add(joinPath(p, "/Bios"))
		add(joinPath(p, "/Oem/Public"))
		add(joinPath(p, "/Oem/Public/FRU"))
		add(joinPath(p, "/Processors"))
		add(joinPath(p, "/Memory"))
		add(joinPath(p, "/EthernetInterfaces"))
		add(joinPath(p, "/NetworkInterfaces"))
		add(joinPath(p, "/PCIeDevices"))
		add(joinPath(p, "/PCIeFunctions"))
		add(joinPath(p, "/Accelerators"))
		add(joinPath(p, "/GraphicsControllers"))
		add(joinPath(p, "/Storage"))
	}
	for _, p := range discovered.ChassisPaths {
		add(p)
		add(joinPath(p, "/Oem/Public"))
		add(joinPath(p, "/Oem/Public/FRU"))
		add(joinPath(p, "/PCIeDevices"))
		add(joinPath(p, "/PCIeSlots"))
		add(joinPath(p, "/NetworkAdapters"))
		add(joinPath(p, "/Drives"))
		add(joinPath(p, "/Power"))
	}
	for _, p := range discovered.ManagerPaths {
		add(p)
		add(joinPath(p, "/EthernetInterfaces"))
		add(joinPath(p, "/NetworkProtocol"))
	}
	return mergeResolvedPaths(out)
}

func baselineCriticalPaths(discovered DiscoveredResources) []string {
	var out []string
	for _, group := range [][]string{
		{"/redfish/v1"},
		discovered.SystemPaths,
		discovered.ChassisPaths,
		discovered.ManagerPaths,
	} {
		out = append(out, group...)
	}
	return mergeResolvedPaths(out)
}

func expandScopedSuffixes(basePaths, suffixes []string) []string {
	if len(basePaths) == 0 || len(suffixes) == 0 {
		return nil
	}
	out := make([]string, 0, len(basePaths)*len(suffixes))
	for _, basePath := range basePaths {
		basePath = normalizePath(basePath)
		if basePath == "" {
			continue
		}
		for _, suffix := range suffixes {
			suffix = strings.TrimSpace(suffix)
			if suffix == "" {
				continue
			}
			out = append(out, joinPath(basePath, suffix))
		}
	}
	return mergeResolvedPaths(out)
}

func mergeResolvedPaths(groups ...[]string) []string {
	seen := make(map[string]struct{})
	out := make([]string, 0)
	for _, group := range groups {
		for _, path := range group {
			path = normalizePath(path)
			if path == "" {
				continue
			}
			if _, ok := seen[path]; ok {
				continue
			}
			seen[path] = struct{}{}
			out = append(out, path)
		}
	}
	return out
}

func normalizePath(path string) string {
	path = strings.TrimSpace(path)
	if path == "" {
		return ""
	}
	if !strings.HasPrefix(path, "/") {
		path = "/" + path
	}
	return strings.TrimRight(path, "/")
}

func joinPath(base, rel string) string {
	base = normalizePath(base)
	rel = strings.TrimSpace(rel)
	if base == "" {
		return normalizePath(rel)
	}
	if rel == "" {
		return base
	}
	if !strings.HasPrefix(rel, "/") {
		rel = "/" + rel
	}
	return normalizePath(base + rel)
}
100	internal/collector/redfishprofile/analysis.go	Normal file

@@ -0,0 +1,100 @@
package redfishprofile

import "strings"

func ResolveAnalysisPlan(match MatchResult, snapshot map[string]interface{}, discovered DiscoveredResources, signals MatchSignals) ResolvedAnalysisPlan {
	plan := ResolvedAnalysisPlan{
		Match:      match,
		Directives: AnalysisDirectives{},
	}
	if match.Mode == ModeFallback {
		plan.Directives.EnableProcessorGPUFallback = true
		plan.Directives.EnableSupermicroNVMeBackplane = true
		plan.Directives.EnableProcessorGPUChassisAlias = true
		plan.Directives.EnableGenericGraphicsControllerDedup = true
		plan.Directives.EnableStorageEnclosureRecovery = true
		plan.Directives.EnableKnownStorageControllerRecovery = true
		addAnalysisLookupMode(&plan, "msi-index")
		addAnalysisLookupMode(&plan, "hgx-alias")
		addAnalysisStorageDriveCollections(&plan,
			"/Storage/IntelVROC/Drives",
			"/Storage/IntelVROC/Controllers/1/Drives",
		)
		addAnalysisStorageVolumeCollections(&plan,
			"/Storage/IntelVROC/Volumes",
			"/Storage/HA-RAID/Volumes",
			"/Storage/MRVL.HA-RAID/Volumes",
		)
		addAnalysisNote(&plan, "fallback analysis enables broad recovery directives")
	}
	for _, profile := range match.Profiles {
		profile.ApplyAnalysisDirectives(&plan.Directives, signals)
	}
	for _, profile := range match.Profiles {
		profile.RefineAnalysisPlan(&plan, snapshot, discovered, signals)
	}
	return plan
}

func snapshotHasPathPrefix(snapshot map[string]interface{}, prefix string) bool {
	prefix = normalizePath(prefix)
	if prefix == "" {
		return false
	}
	for path := range snapshot {
		if strings.HasPrefix(normalizePath(path), prefix) {
			return true
		}
	}
	return false
}

func snapshotHasPathContaining(snapshot map[string]interface{}, sub string) bool {
	sub = strings.ToLower(strings.TrimSpace(sub))
	if sub == "" {
		return false
	}
	for path := range snapshot {
		if strings.Contains(strings.ToLower(path), sub) {
			return true
		}
	}
	return false
}

func snapshotHasGPUProcessor(snapshot map[string]interface{}, systemPaths []string) bool {
	for _, systemPath := range systemPaths {
		prefix := normalizePath(joinPath(systemPath, "/Processors")) + "/"
		for path, docAny := range snapshot {
			if !strings.HasPrefix(normalizePath(path), prefix) {
				continue
			}
			doc, ok := docAny.(map[string]interface{})
			if !ok {
				continue
			}
			if strings.EqualFold(strings.TrimSpace(asString(doc["ProcessorType"])), "GPU") {
				return true
			}
		}
	}
	return false
}

func snapshotHasStorageControllerHint(snapshot map[string]interface{}, needles ...string) bool {
	for _, needle := range needles {
		if snapshotHasPathContaining(snapshot, needle) {
			return true
		}
	}
	return false
}

func asString(v interface{}) string {
	switch x := v.(type) {
	case string:
		return x
	default:
		return ""
	}
}
450	internal/collector/redfishprofile/fixture_test.go	Normal file

@@ -0,0 +1,450 @@
package redfishprofile

import (
	"encoding/json"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

func TestBuildAcquisitionPlan_Fixture_MSI_CG480(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "msi-cg480.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, discoveredResourcesFromSignals(signals), signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "msi")
	assertProfileSelected(t, match, "ami-family")
	assertProfileNotSelected(t, match, "hgx-topology")

	if plan.Tuning.PrefetchWorkers < 6 {
		t.Fatalf("expected msi prefetch worker tuning, got %d", plan.Tuning.PrefetchWorkers)
	}
	if !containsString(resolved.SeedPaths, "/redfish/v1/Chassis/GPU1") {
		t.Fatalf("expected MSI chassis GPU seed path")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/GPU1/Sensors") {
		t.Fatal("expected MSI GPU sensor critical path")
	}
	if !containsString(resolved.Plan.PlanBPaths, "/redfish/v1/Chassis/GPU1/Sensors") {
		t.Fatal("expected MSI GPU sensor plan-b path")
	}
	if plan.Tuning.ETABaseline.SnapshotSeconds <= 0 {
		t.Fatal("expected MSI snapshot eta baseline")
	}
	if !plan.Tuning.PostProbePolicy.EnableNumericCollectionProbe {
		t.Fatal("expected MSI fixture to inherit generic numeric post-probe policy")
	}
	if !containsString(plan.ScopedPaths.SystemSeedSuffixes, "/SimpleStorage") {
		t.Fatal("expected MSI fixture to inherit generic SimpleStorage scoped seed suffix")
	}
	if !containsString(plan.ScopedPaths.SystemCriticalSuffixes, "/Memory") {
		t.Fatal("expected MSI fixture to inherit generic system critical suffixes")
	}
	if !containsString(plan.Tuning.PrefetchPolicy.IncludeSuffixes, "/Storage") {
		t.Fatal("expected MSI fixture to inherit generic storage prefetch policy")
	}
	if !containsString(plan.CriticalPaths, "/redfish/v1/UpdateService") {
		t.Fatal("expected MSI fixture to inherit generic top-level critical path")
	}
	if !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
		t.Fatal("expected MSI fixture to enable profile plan-b")
	}
}

func TestBuildAcquisitionPlan_Fixture_MSI_CG480_CopyMatchesSameProfiles(t *testing.T) {
	originalSignals := loadProfileFixtureSignals(t, "msi-cg480.json")
	copySignals := loadProfileFixtureSignals(t, "msi-cg480-copy.json")
	originalMatch := MatchProfiles(originalSignals)
	copyMatch := MatchProfiles(copySignals)
	originalPlan := BuildAcquisitionPlan(originalSignals)
	copyPlan := BuildAcquisitionPlan(copySignals)
	originalResolved := ResolveAcquisitionPlan(originalMatch, originalPlan, discoveredResourcesFromSignals(originalSignals), originalSignals)
	copyResolved := ResolveAcquisitionPlan(copyMatch, copyPlan, discoveredResourcesFromSignals(copySignals), copySignals)

	assertSameProfileNames(t, originalMatch, copyMatch)
	if originalPlan.Tuning.PrefetchWorkers != copyPlan.Tuning.PrefetchWorkers {
		t.Fatalf("expected same MSI prefetch worker tuning, got %d vs %d", originalPlan.Tuning.PrefetchWorkers, copyPlan.Tuning.PrefetchWorkers)
	}
	if containsString(originalResolved.SeedPaths, "/redfish/v1/Chassis/GPU1") != containsString(copyResolved.SeedPaths, "/redfish/v1/Chassis/GPU1") {
		t.Fatal("expected same MSI GPU chassis seed presence in both fixtures")
	}
}

func TestBuildAcquisitionPlan_Fixture_MSI_CG290(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "msi-cg290.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, discoveredResourcesFromSignals(signals), signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "msi")
	assertProfileSelected(t, match, "ami-family")
	assertProfileNotSelected(t, match, "hgx-topology")

	if plan.Tuning.PrefetchWorkers < 6 {
		t.Fatalf("expected MSI prefetch worker tuning, got %d", plan.Tuning.PrefetchWorkers)
	}
	if !containsString(resolved.SeedPaths, "/redfish/v1/Chassis/GPU1") {
		t.Fatalf("expected MSI chassis GPU seed path")
	}
}

func TestBuildAcquisitionPlan_Fixture_Supermicro_HGX(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "supermicro-hgx.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	discovered := discoveredResourcesFromSignals(signals)
	discovered.SystemPaths = dedupeSorted(append(discovered.SystemPaths, "/redfish/v1/Systems/HGX_Baseboard_0"))
	resolved := ResolveAcquisitionPlan(match, plan, discovered, signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "supermicro")
	assertProfileSelected(t, match, "hgx-topology")
	assertProfileNotSelected(t, match, "msi")

	if plan.Tuning.SnapshotMaxDocuments < 180000 {
		t.Fatalf("expected widened HGX snapshot cap, got %d", plan.Tuning.SnapshotMaxDocuments)
	}
	if plan.Tuning.NVMePostProbeEnabled == nil || *plan.Tuning.NVMePostProbeEnabled {
		t.Fatal("expected HGX fixture to disable NVMe post-probe")
	}
	if !containsString(resolved.SeedPaths, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("expected HGX baseboard processors seed path")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("expected HGX baseboard processors critical path")
	}
	if !containsString(resolved.Plan.PlanBPaths, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("expected HGX baseboard processors plan-b path")
	}
	if plan.Tuning.ETABaseline.SnapshotSeconds < 300 {
		t.Fatalf("expected HGX snapshot eta baseline, got %d", plan.Tuning.ETABaseline.SnapshotSeconds)
	}
	if !plan.Tuning.PostProbePolicy.EnableDirectNVMEDiskBayProbe {
		t.Fatal("expected HGX fixture to retain Supermicro direct NVMe disk bay probe policy")
	}
	if !containsString(plan.ScopedPaths.SystemCriticalSuffixes, "/Storage/IntelVROC/Drives") {
		t.Fatal("expected HGX fixture to inherit generic IntelVROC scoped critical suffix")
	}
	if !containsString(plan.ScopedPaths.ChassisCriticalSuffixes, "/Assembly") {
		t.Fatal("expected HGX fixture to inherit generic chassis critical suffixes")
	}
	if !containsString(plan.Tuning.PrefetchPolicy.ExcludeContains, "/Assembly") {
		t.Fatal("expected HGX fixture to inherit generic assembly prefetch exclusion")
	}
	if !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
		t.Fatal("expected HGX fixture to enable profile plan-b")
	}
}

func TestBuildAcquisitionPlan_Fixture_Supermicro_OAM_NoHGX(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "supermicro-oam-amd.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, discoveredResourcesFromSignals(signals), signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "supermicro")
	assertProfileNotSelected(t, match, "hgx-topology")
	assertProfileNotSelected(t, match, "msi")

	if containsString(resolved.SeedPaths, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("did not expect HGX baseboard processors seed path for OAM fixture")
	}
	if containsString(resolved.CriticalPaths, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("did not expect HGX baseboard processors critical path for OAM fixture")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory") {
		t.Fatal("expected Supermicro firmware critical path")
	}
	if !containsString(resolved.Plan.PlanBPaths, "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory") {
		t.Fatal("expected Supermicro firmware plan-b path")
	}
	if plan.Tuning.SnapshotMaxDocuments != 150000 {
		t.Fatalf("expected generic supermicro snapshot cap, got %d", plan.Tuning.SnapshotMaxDocuments)
	}
	if plan.Tuning.NVMePostProbeEnabled != nil {
		t.Fatal("did not expect HGX NVMe tuning for OAM fixture")
	}
	if plan.Tuning.ETABaseline.SnapshotSeconds < 180 {
		t.Fatalf("expected Supermicro snapshot eta baseline, got %d", plan.Tuning.ETABaseline.SnapshotSeconds)
	}
	if !plan.Tuning.PostProbePolicy.EnableDirectNVMEDiskBayProbe {
		t.Fatal("expected Supermicro OAM fixture to use direct NVMe disk bay probe policy")
	}
	if !plan.Tuning.PostProbePolicy.EnableNumericCollectionProbe {
		t.Fatal("expected Supermicro OAM fixture to inherit generic numeric post-probe policy")
	}
	if !containsString(plan.ScopedPaths.SystemSeedSuffixes, "/Storage/IntelVROC") {
		t.Fatal("expected Supermicro OAM fixture to inherit generic IntelVROC scoped seed suffix")
	}
	if !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
		t.Fatal("expected Supermicro OAM fixture to enable profile plan-b")
	}
}

func TestBuildAcquisitionPlan_Fixture_Dell_R750(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "dell-r750.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths:  []string{"/redfish/v1/Systems/System.Embedded.1"},
		ChassisPaths: []string{"/redfish/v1/Chassis/System.Embedded.1"},
		ManagerPaths: []string{"/redfish/v1/Managers/1", "/redfish/v1/Managers/iDRAC.Embedded.1"},
	}, signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "dell")
	assertProfileNotSelected(t, match, "supermicro")
	assertProfileNotSelected(t, match, "hgx-topology")
	assertProfileNotSelected(t, match, "msi")

	if !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
		t.Fatal("expected dell fixture to enable profile plan-b")
	}
	if !containsString(resolved.SeedPaths, "/redfish/v1/Managers/iDRAC.Embedded.1") {
		t.Fatal("expected Dell refinement to add iDRAC manager seed path")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/Managers/iDRAC.Embedded.1") {
		t.Fatal("expected Dell refinement to add iDRAC manager critical path")
	}
	directives := ResolveAnalysisPlan(match, nil, DiscoveredResources{}, signals).Directives
	if !directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("expected dell fixture to enable graphics controller dedup")
	}
}

func TestBuildAcquisitionPlan_Fixture_AMI_Generic(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "ami-generic.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "ami-family")
	assertProfileNotSelected(t, match, "msi")
	assertProfileNotSelected(t, match, "supermicro")
	assertProfileNotSelected(t, match, "dell")
	assertProfileNotSelected(t, match, "hgx-topology")

	if plan.Tuning.PrefetchEnabled == nil || !*plan.Tuning.PrefetchEnabled {
		t.Fatal("expected ami-family fixture to force prefetch enabled")
	}
	if !containsString(plan.SeedPaths, "/redfish/v1/Oem/Ami") {
		t.Fatal("expected ami-family fixture seed path /redfish/v1/Oem/Ami")
	}
	if !containsString(plan.SeedPaths, "/redfish/v1/Oem/Ami/InventoryData/Status") {
		t.Fatal("expected ami-family fixture seed path /redfish/v1/Oem/Ami/InventoryData/Status")
	}
	if !containsString(plan.CriticalPaths, "/redfish/v1/UpdateService") {
		t.Fatal("expected ami-family fixture to inherit generic critical path")
	}

	directives := ResolveAnalysisPlan(match, nil, DiscoveredResources{}, signals).Directives
	if !directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("expected ami-family fixture to enable graphics controller dedup")
	}
}

func TestBuildAcquisitionPlan_Fixture_UnknownVendor(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "unknown-vendor.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths:  []string{"/redfish/v1/Systems/1"},
		ChassisPaths: []string{"/redfish/v1/Chassis/1"},
		ManagerPaths: []string{"/redfish/v1/Managers/1"},
	}, signals)

	if match.Mode != ModeFallback {
		t.Fatalf("expected fallback mode for unknown vendor, got %q", match.Mode)
	}
	if len(match.Profiles) == 0 {
		t.Fatal("expected fallback to aggregate profiles")
	}
	for _, profile := range match.Profiles {
		if !profile.SafeForFallback() {
			t.Fatalf("fallback mode included non-safe profile %q", profile.Name())
		}
	}

	if plan.Tuning.SnapshotMaxDocuments < 180000 {
		t.Fatalf("expected fallback to widen snapshot cap, got %d", plan.Tuning.SnapshotMaxDocuments)
	}
	if plan.Tuning.PrefetchEnabled == nil || !*plan.Tuning.PrefetchEnabled {
		t.Fatal("expected fallback fixture to force prefetch enabled")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/Systems/1") {
		t.Fatal("expected fallback resolved critical paths to include discovered system")
	}

	analysisPlan := ResolveAnalysisPlan(match, nil, DiscoveredResources{}, signals)
	if !analysisPlan.Directives.EnableProcessorGPUFallback {
		t.Fatal("expected fallback fixture to enable processor GPU fallback")
	}
	if !analysisPlan.Directives.EnableStorageEnclosureRecovery {
		t.Fatal("expected fallback fixture to enable storage enclosure recovery")
	}
	if !analysisPlan.Directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("expected fallback fixture to enable graphics controller dedup")
	}
}

func TestBuildAcquisitionPlan_Fixture_xFusion_G5500V7(t *testing.T) {
	signals := loadProfileFixtureSignals(t, "xfusion-g5500v7.json")
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths:  []string{"/redfish/v1/Systems/1"},
		ChassisPaths: []string{"/redfish/v1/Chassis/1"},
		ManagerPaths: []string{"/redfish/v1/Managers/1"},
	}, signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode for xFusion, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "xfusion")
	assertProfileNotSelected(t, match, "supermicro")
	assertProfileNotSelected(t, match, "hgx-topology")
	assertProfileNotSelected(t, match, "msi")
	assertProfileNotSelected(t, match, "dell")

	if plan.Tuning.SnapshotMaxDocuments > 150000 {
		t.Fatalf("expected xfusion snapshot cap <= 150000, got %d", plan.Tuning.SnapshotMaxDocuments)
	}
	if plan.Tuning.PrefetchEnabled == nil || !*plan.Tuning.PrefetchEnabled {
		t.Fatal("expected xfusion fixture to enable prefetch")
	}
	if plan.Tuning.ETABaseline.SnapshotSeconds <= 0 {
		t.Fatal("expected xfusion snapshot eta baseline")
	}
	if !containsString(resolved.CriticalPaths, "/redfish/v1/Systems/1") {
		t.Fatal("expected system path in critical paths")
	}

	analysisPlan := ResolveAnalysisPlan(match, map[string]interface{}{
		"/redfish/v1/Systems/1/Processors/Gpu1": map[string]interface{}{"ProcessorType": "GPU"},
	}, DiscoveredResources{
		SystemPaths: []string{"/redfish/v1/Systems/1"},
	}, signals)
	if !analysisPlan.Directives.EnableProcessorGPUFallback {
		t.Fatal("expected xfusion analysis to enable processor GPU fallback when GPU processors present")
	}
	if !analysisPlan.Directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("expected xfusion analysis to enable graphics controller dedup")
	}
}

func loadProfileFixtureSignals(t *testing.T, fixtureName string) MatchSignals {
	t.Helper()
	path := filepath.Join("testdata", fixtureName)
	data, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("read fixture %s: %v", path, err)
	}
	var signals MatchSignals
	if err := json.Unmarshal(data, &signals); err != nil {
		t.Fatalf("decode fixture %s: %v", path, err)
	}
	return normalizeSignals(signals)
}

func assertProfileSelected(t *testing.T, match MatchResult, want string) {
	t.Helper()
	for _, profile := range match.Profiles {
		if profile.Name() == want {
			return
		}
	}
	t.Fatalf("expected profile %q in %v", want, profileNames(match))
}

func assertProfileNotSelected(t *testing.T, match MatchResult, want string) {
	t.Helper()
	for _, profile := range match.Profiles {
		if profile.Name() == want {
			t.Fatalf("did not expect profile %q in %v", want, profileNames(match))
		}
	}
}

func profileNames(match MatchResult) []string {
	out := make([]string, 0, len(match.Profiles))
	for _, profile := range match.Profiles {
		out = append(out, profile.Name())
	}
	return out
}

func assertSameProfileNames(t *testing.T, left, right MatchResult) {
	t.Helper()
	leftNames := profileNames(left)
	rightNames := profileNames(right)
	if len(leftNames) != len(rightNames) {
		t.Fatalf("profile stack size differs: %v vs %v", leftNames, rightNames)
	}
	for i := range leftNames {
		if leftNames[i] != rightNames[i] {
			t.Fatalf("profile stack differs: %v vs %v", leftNames, rightNames)
		}
	}
}

func containsString(items []string, want string) bool {
	for _, item := range items {
		if item == want {
			return true
		}
	}
	return false
}

func discoveredResourcesFromSignals(signals MatchSignals) DiscoveredResources {
	var discovered DiscoveredResources
	for _, hint := range signals.ResourceHints {
		memberPath := discoveredMemberPath(hint)
		switch {
		case strings.HasPrefix(memberPath, "/redfish/v1/Systems/"):
			discovered.SystemPaths = append(discovered.SystemPaths, memberPath)
		case strings.HasPrefix(memberPath, "/redfish/v1/Chassis/"):
			discovered.ChassisPaths = append(discovered.ChassisPaths, memberPath)
		case strings.HasPrefix(memberPath, "/redfish/v1/Managers/"):
			discovered.ManagerPaths = append(discovered.ManagerPaths, memberPath)
		}
	}
	discovered.SystemPaths = dedupeSorted(discovered.SystemPaths)
	discovered.ChassisPaths = dedupeSorted(discovered.ChassisPaths)
	discovered.ManagerPaths = dedupeSorted(discovered.ManagerPaths)
	return discovered
}

func discoveredMemberPath(path string) string {
	path = strings.TrimSpace(path)
	if path == "" {
		return ""
	}
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) < 4 || parts[0] != "redfish" || parts[1] != "v1" {
		return ""
	}
	switch parts[2] {
	case "Systems", "Chassis", "Managers":
		return "/" + strings.Join(parts[:4], "/")
	default:
		return ""
	}
}
122	internal/collector/redfishprofile/matcher.go	Normal file

@@ -0,0 +1,122 @@
package redfishprofile

import (
	"sort"

	"git.mchus.pro/mchus/logpile/internal/models"
)

const (
	ModeMatched  = "matched"
	ModeFallback = "fallback"
)

func MatchProfiles(signals MatchSignals) MatchResult {
	type scored struct {
		profile Profile
		score   int
	}
	builtins := BuiltinProfiles()
	candidates := make([]scored, 0, len(builtins))
	allScores := make([]ProfileScore, 0, len(builtins))
	for _, profile := range builtins {
		score := profile.Match(signals)
		allScores = append(allScores, ProfileScore{
			Name:     profile.Name(),
			Score:    score,
			Priority: profile.Priority(),
		})
		if score <= 0 {
			continue
		}
		candidates = append(candidates, scored{profile: profile, score: score})
	}
	sort.Slice(allScores, func(i, j int) bool {
		if allScores[i].Score == allScores[j].Score {
			if allScores[i].Priority == allScores[j].Priority {
				return allScores[i].Name < allScores[j].Name
			}
			return allScores[i].Priority < allScores[j].Priority
		}
		return allScores[i].Score > allScores[j].Score
	})
	sort.Slice(candidates, func(i, j int) bool {
		if candidates[i].score == candidates[j].score {
			return candidates[i].profile.Priority() < candidates[j].profile.Priority()
		}
		return candidates[i].score > candidates[j].score
	})
	if len(candidates) == 0 || candidates[0].score < 60 {
		profiles := make([]Profile, 0, len(builtins))
		active := make(map[string]struct{}, len(builtins))
		for _, profile := range builtins {
			if profile.SafeForFallback() {
				profiles = append(profiles, profile)
				active[profile.Name()] = struct{}{}
			}
		}
		sortProfiles(profiles)
		for i := range allScores {
			_, ok := active[allScores[i].Name]
			allScores[i].Active = ok
		}
		return MatchResult{Mode: ModeFallback, Profiles: profiles, Scores: allScores}
	}
	profiles := make([]Profile, 0, len(candidates))
	seen := make(map[string]struct{}, len(candidates))
	for _, candidate := range candidates {
		name := candidate.profile.Name()
		if _, ok := seen[name]; ok {
			continue
		}
		seen[name] = struct{}{}
		profiles = append(profiles, candidate.profile)
	}
	sortProfiles(profiles)
	for i := range allScores {
		_, ok := seen[allScores[i].Name]
		allScores[i].Active = ok
	}
	return MatchResult{Mode: ModeMatched, Profiles: profiles, Scores: allScores}
}

func BuildAcquisitionPlan(signals MatchSignals) AcquisitionPlan {
	match := MatchProfiles(signals)
	plan := AcquisitionPlan{Mode: match.Mode}
	for _, profile := range match.Profiles {
		plan.Profiles = append(plan.Profiles, profile.Name())
		profile.ExtendAcquisitionPlan(&plan, signals)
	}
	plan.Profiles = dedupeSorted(plan.Profiles)
	plan.SeedPaths = dedupeSorted(plan.SeedPaths)
	plan.CriticalPaths = dedupeSorted(plan.CriticalPaths)
	plan.PlanBPaths = dedupeSorted(plan.PlanBPaths)
	plan.Notes = dedupeSorted(plan.Notes)
	if plan.Mode == ModeFallback {
		ensureSnapshotMaxDocuments(&plan, 180000)
		ensurePrefetchEnabled(&plan, true)
		addPlanNote(&plan, "fallback acquisition expands safe profile probes")
	}
	return plan
}

func ApplyAnalysisProfiles(result *models.AnalysisResult, snapshot map[string]interface{}, signals MatchSignals) MatchResult {
	match := MatchProfiles(signals)
	for _, profile := range match.Profiles {
		profile.PostAnalyze(result, snapshot, signals)
	}
	return match
}

func BuildAnalysisDirectives(match MatchResult) AnalysisDirectives {
	return ResolveAnalysisPlan(match, nil, DiscoveredResources{}, MatchSignals{}).Directives
}

func sortProfiles(profiles []Profile) {
	sort.Slice(profiles, func(i, j int) bool {
		if profiles[i].Priority() == profiles[j].Priority() {
			return profiles[i].Name() < profiles[j].Name()
		}
		return profiles[i].Priority() < profiles[j].Priority()
	})
}
390	internal/collector/redfishprofile/matcher_test.go	Normal file
@@ -0,0 +1,390 @@
package redfishprofile

import (
	"strings"
	"testing"
)

func TestMatchProfiles_UnknownVendorFallsBackToAggregateProfiles(t *testing.T) {
	match := MatchProfiles(MatchSignals{
		ServiceRootProduct: "Redfish Server",
	})
	if match.Mode != ModeFallback {
		t.Fatalf("expected fallback mode, got %q", match.Mode)
	}
	if len(match.Profiles) < 2 {
		t.Fatalf("expected aggregated fallback profiles, got %d", len(match.Profiles))
	}
}

func TestMatchProfiles_MSISelectsMatchedMode(t *testing.T) {
	match := MatchProfiles(MatchSignals{
		SystemManufacturer: "Micro-Star International Co., Ltd.",
		ResourceHints:      []string{"/redfish/v1/Chassis/GPU1"},
	})
	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	found := false
	for _, profile := range match.Profiles {
		if profile.Name() == "msi" {
			found = true
			break
		}
	}
	if !found {
		t.Fatal("expected msi profile to be selected")
	}
}

func TestBuildAcquisitionPlan_FallbackIncludesProfileNotes(t *testing.T) {
	plan := BuildAcquisitionPlan(MatchSignals{
		ServiceRootVendor: "AMI",
	})
	if len(plan.Profiles) == 0 {
		t.Fatal("expected acquisition plan profiles")
	}
	if len(plan.Notes) == 0 {
		t.Fatal("expected acquisition plan notes")
	}
}

func TestBuildAcquisitionPlan_FallbackAddsBroadCrawlTuning(t *testing.T) {
	plan := BuildAcquisitionPlan(MatchSignals{
		ServiceRootProduct: "Unknown Redfish",
	})
	if plan.Mode != ModeFallback {
		t.Fatalf("expected fallback mode, got %q", plan.Mode)
	}
	if plan.Tuning.SnapshotMaxDocuments < 180000 {
		t.Fatalf("expected widened snapshot cap, got %d", plan.Tuning.SnapshotMaxDocuments)
	}
	if plan.Tuning.PrefetchEnabled == nil || !*plan.Tuning.PrefetchEnabled {
		t.Fatal("expected fallback to force prefetch enabled")
	}
	if !plan.Tuning.RecoveryPolicy.EnableCriticalCollectionMemberRetry {
		t.Fatal("expected fallback to inherit critical member retry recovery")
	}
	if !plan.Tuning.RecoveryPolicy.EnableCriticalSlowProbe {
		t.Fatal("expected fallback to inherit critical slow probe recovery")
	}
}

func TestBuildAcquisitionPlan_HGXDisablesNVMePostProbe(t *testing.T) {
	plan := BuildAcquisitionPlan(MatchSignals{
		SystemModel:   "HGX B200",
		ResourceHints: []string{"/redfish/v1/Systems/HGX_Baseboard_0"},
	})
	if plan.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", plan.Mode)
	}
	if plan.Tuning.NVMePostProbeEnabled == nil || *plan.Tuning.NVMePostProbeEnabled {
		t.Fatal("expected hgx profile to disable NVMe post-probe")
	}
}

func TestResolveAcquisitionPlan_ExpandsScopedPaths(t *testing.T) {
	signals := MatchSignals{}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths: []string{"/redfish/v1/Systems/1", "/redfish/v1/Systems/2"},
	}, signals)
	joined := joinResolvedPaths(resolved.SeedPaths)
	for _, wanted := range []string{
		"/redfish/v1/Systems/1/SimpleStorage",
		"/redfish/v1/Systems/1/Storage/IntelVROC",
		"/redfish/v1/Systems/2/SimpleStorage",
		"/redfish/v1/Systems/2/Storage/IntelVROC",
	} {
		if !containsJoinedPath(joined, wanted) {
			t.Fatalf("expected resolved seed path %q", wanted)
		}
	}
}

func TestResolveAcquisitionPlan_CriticalBaselineIsShapedByProfiles(t *testing.T) {
	signals := MatchSignals{}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths:  []string{"/redfish/v1/Systems/1"},
		ChassisPaths: []string{"/redfish/v1/Chassis/1"},
		ManagerPaths: []string{"/redfish/v1/Managers/1"},
	}, signals)
	joined := joinResolvedPaths(resolved.CriticalPaths)
	for _, wanted := range []string{
		"/redfish/v1",
		"/redfish/v1/Systems/1",
		"/redfish/v1/Systems/1/Memory",
		"/redfish/v1/Chassis/1/Assembly",
		"/redfish/v1/Managers/1/NetworkProtocol",
		"/redfish/v1/UpdateService",
	} {
		if !containsJoinedPath(joined, wanted) {
			t.Fatalf("expected resolved critical path %q", wanted)
		}
	}
}

func TestResolveAcquisitionPlan_FallbackAppendsPlanBToSeeds(t *testing.T) {
	signals := MatchSignals{ServiceRootProduct: "Unknown Redfish"}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	if plan.Mode != ModeFallback {
		t.Fatalf("expected fallback mode, got %q", plan.Mode)
	}
	plan.PlanBPaths = append(plan.PlanBPaths, "/redfish/v1/Systems/1/Oem/TestPlanB")
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths: []string{"/redfish/v1/Systems/1"},
	}, signals)
	if !containsJoinedPath(joinResolvedPaths(resolved.SeedPaths), "/redfish/v1/Systems/1/Oem/TestPlanB") {
		t.Fatal("expected fallback resolved seeds to include plan-b path")
	}
}

func TestResolveAcquisitionPlan_MSIRefinesDiscoveredGPUChassis(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Micro-Star International Co., Ltd.",
		ResourceHints:      []string{"/redfish/v1/Chassis/GPU1", "/redfish/v1/Chassis/GPU4/Sensors"},
	}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		ChassisPaths: []string{"/redfish/v1/Chassis/1", "/redfish/v1/Chassis/GPU1", "/redfish/v1/Chassis/GPU4"},
	}, signals)
	joinedSeeds := joinResolvedPaths(resolved.SeedPaths)
	joinedCritical := joinResolvedPaths(resolved.CriticalPaths)
	if !containsJoinedPath(joinedSeeds, "/redfish/v1/Chassis/GPU1") || !containsJoinedPath(joinedSeeds, "/redfish/v1/Chassis/GPU4") {
		t.Fatal("expected MSI refinement to add discovered GPU chassis seed paths")
	}
	if containsJoinedPath(joinedSeeds, "/redfish/v1/Chassis/GPU2") {
		t.Fatal("did not expect undiscovered MSI GPU chassis in resolved seeds")
	}
	if !containsJoinedPath(joinedCritical, "/redfish/v1/Chassis/GPU1/Sensors") || !containsJoinedPath(joinedCritical, "/redfish/v1/Chassis/GPU4/Sensors") {
		t.Fatal("expected MSI refinement to add discovered GPU sensor critical paths")
	}
	if containsJoinedPath(joinedCritical, "/redfish/v1/Chassis/GPU3/Sensors") {
		t.Fatal("did not expect undiscovered MSI GPU sensor critical path")
	}
}

func TestResolveAcquisitionPlan_HGXRefinesDiscoveredBaseboardSystems(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Supermicro",
		SystemModel:        "SYS-821GE-TNHR",
		ChassisModel:       "HGX B200",
		ResourceHints: []string{
			"/redfish/v1/Systems/HGX_Baseboard_0",
			"/redfish/v1/Systems/HGX_Baseboard_0/Processors",
			"/redfish/v1/Systems/1",
		},
	}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		SystemPaths: []string{"/redfish/v1/Systems/1", "/redfish/v1/Systems/HGX_Baseboard_0"},
	}, signals)
	joinedSeeds := joinResolvedPaths(resolved.SeedPaths)
	joinedCritical := joinResolvedPaths(resolved.CriticalPaths)
	if !containsJoinedPath(joinedSeeds, "/redfish/v1/Systems/HGX_Baseboard_0") || !containsJoinedPath(joinedSeeds, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("expected HGX refinement to add discovered baseboard system paths")
	}
	if !containsJoinedPath(joinedCritical, "/redfish/v1/Systems/HGX_Baseboard_0") || !containsJoinedPath(joinedCritical, "/redfish/v1/Systems/HGX_Baseboard_0/Processors") {
		t.Fatal("expected HGX refinement to add discovered baseboard critical paths")
	}
	if containsJoinedPath(joinedSeeds, "/redfish/v1/Systems/HGX_Baseboard_1") {
		t.Fatal("did not expect undiscovered HGX baseboard system path")
	}
}

func TestResolveAcquisitionPlan_SupermicroRefinesFirmwareInventoryFromHint(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Supermicro",
		ResourceHints: []string{
			"/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory",
			"/redfish/v1/Managers/1/Oem/Supermicro/FanMode",
		},
	}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		ManagerPaths: []string{"/redfish/v1/Managers/1"},
	}, signals)
	joinedCritical := joinResolvedPaths(resolved.CriticalPaths)
	if !containsJoinedPath(joinedCritical, "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory") {
		t.Fatal("expected Supermicro refinement to add firmware inventory critical path")
	}
	if !containsJoinedPath(joinResolvedPaths(resolved.Plan.PlanBPaths), "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory") {
		t.Fatal("expected Supermicro refinement to add firmware inventory plan-b path")
	}
}

func TestResolveAcquisitionPlan_DellRefinesDiscoveredIDRACManager(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Dell Inc.",
		ServiceRootProduct: "iDRAC Redfish Service",
	}
	match := MatchProfiles(signals)
	plan := BuildAcquisitionPlan(signals)
	resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
		ManagerPaths: []string{"/redfish/v1/Managers/1", "/redfish/v1/Managers/iDRAC.Embedded.1"},
	}, signals)
	joinedSeeds := joinResolvedPaths(resolved.SeedPaths)
	joinedCritical := joinResolvedPaths(resolved.CriticalPaths)
	if !containsJoinedPath(joinedSeeds, "/redfish/v1/Managers/iDRAC.Embedded.1") {
		t.Fatal("expected Dell refinement to add discovered iDRAC manager seed path")
	}
	if !containsJoinedPath(joinedCritical, "/redfish/v1/Managers/iDRAC.Embedded.1") {
		t.Fatal("expected Dell refinement to add discovered iDRAC manager critical path")
	}
}

func TestBuildAnalysisDirectives_SupermicroEnablesVendorStorageFallbacks(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Supermicro",
		SystemModel:        "SYS-821GE",
	}
	match := MatchProfiles(signals)
	plan := ResolveAnalysisPlan(match, map[string]interface{}{
		"/redfish/v1/Chassis/NVMeSSD.1.StorageBackplane/Drives": map[string]interface{}{},
	}, DiscoveredResources{}, signals)
	directives := plan.Directives
	if !directives.EnableSupermicroNVMeBackplane {
		t.Fatal("expected supermicro nvme backplane fallback")
	}
}

func joinResolvedPaths(paths []string) string {
	return "\n" + strings.Join(paths, "\n") + "\n"
}

func containsJoinedPath(joined, want string) bool {
	return strings.Contains(joined, "\n"+want+"\n")
}

func TestBuildAnalysisDirectives_HGXEnablesGPUFallbacks(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Supermicro",
		SystemModel:        "SYS-821GE-TNHR",
		ChassisModel:       "HGX B200",
		ResourceHints:      []string{"/redfish/v1/Systems/HGX_Baseboard_0", "/redfish/v1/Chassis/HGX_Chassis_0/PCIeDevices/GPU_SXM_1"},
	}
	match := MatchProfiles(signals)
	plan := ResolveAnalysisPlan(match, map[string]interface{}{
		"/redfish/v1/Systems/HGX_Baseboard_0/Processors/GPU_SXM_1": map[string]interface{}{"ProcessorType": "GPU"},
		"/redfish/v1/Chassis/HGX_Chassis_0/PCIeDevices/GPU_SXM_1":  map[string]interface{}{},
	}, DiscoveredResources{
		SystemPaths: []string{"/redfish/v1/Systems/HGX_Baseboard_0"},
	}, signals)
	directives := plan.Directives
	if !directives.EnableProcessorGPUFallback {
		t.Fatal("expected processor GPU fallback for hgx profile")
	}
	if !directives.EnableProcessorGPUChassisAlias {
		t.Fatal("expected processor GPU chassis alias resolution for hgx profile")
	}
	if !directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("expected graphics-controller dedup for hgx profile")
	}
}

func TestBuildAnalysisDirectives_MSIEnablesMSIChassisLookup(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Micro-Star International Co., Ltd.",
	}
	match := MatchProfiles(signals)
	plan := ResolveAnalysisPlan(match, map[string]interface{}{
		"/redfish/v1/Systems/1/Processors/GPU1": map[string]interface{}{"ProcessorType": "GPU"},
		"/redfish/v1/Chassis/GPU1":              map[string]interface{}{},
	}, DiscoveredResources{
		SystemPaths:  []string{"/redfish/v1/Systems/1"},
		ChassisPaths: []string{"/redfish/v1/Chassis/GPU1"},
	}, signals)
	directives := plan.Directives
	if !directives.EnableMSIProcessorGPUChassisLookup {
		t.Fatal("expected MSI processor GPU chassis lookup")
	}
}

func TestBuildAnalysisDirectives_SupermicroEnablesStorageRecovery(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Supermicro",
	}
	match := MatchProfiles(signals)
	plan := ResolveAnalysisPlan(match, map[string]interface{}{
		"/redfish/v1/Chassis/1/Drives":                   map[string]interface{}{},
		"/redfish/v1/Systems/1/Storage/IntelVROC":        map[string]interface{}{},
		"/redfish/v1/Systems/1/Storage/IntelVROC/Drives": map[string]interface{}{},
	}, DiscoveredResources{}, signals)
	directives := plan.Directives
	if !directives.EnableStorageEnclosureRecovery {
		t.Fatal("expected storage enclosure recovery for supermicro")
	}
	if !directives.EnableKnownStorageControllerRecovery {
		t.Fatal("expected known storage controller recovery for supermicro")
	}
}

func TestMatchProfiles_OrderingIsDeterministic(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Micro-Star International Co., Ltd.",
		ResourceHints:      []string{"/redfish/v1/Chassis/GPU1"},
	}
	first := MatchProfiles(signals)
	second := MatchProfiles(signals)
	if len(first.Profiles) != len(second.Profiles) {
		t.Fatalf("profile stack size differs across calls: %d vs %d", len(first.Profiles), len(second.Profiles))
	}
	for i := range first.Profiles {
		if first.Profiles[i].Name() != second.Profiles[i].Name() {
			t.Fatalf("profile ordering differs at index %d: %q vs %q", i, first.Profiles[i].Name(), second.Profiles[i].Name())
		}
	}
}

func TestMatchProfiles_FallbackOrderingIsDeterministic(t *testing.T) {
	signals := MatchSignals{ServiceRootProduct: "Unknown Redfish"}
	first := MatchProfiles(signals)
	second := MatchProfiles(signals)
	if first.Mode != ModeFallback || second.Mode != ModeFallback {
		t.Fatalf("expected fallback mode in both calls")
	}
	if len(first.Profiles) != len(second.Profiles) {
		t.Fatalf("fallback profile stack size differs: %d vs %d", len(first.Profiles), len(second.Profiles))
	}
	for i := range first.Profiles {
		if first.Profiles[i].Name() != second.Profiles[i].Name() {
			t.Fatalf("fallback profile ordering differs at index %d: %q vs %q", i, first.Profiles[i].Name(), second.Profiles[i].Name())
		}
	}
}

func TestMatchProfiles_FallbackOnlySelectsSafeProfiles(t *testing.T) {
	match := MatchProfiles(MatchSignals{ServiceRootProduct: "Unknown Generic Redfish Server"})
	if match.Mode != ModeFallback {
		t.Fatalf("expected fallback mode, got %q", match.Mode)
	}
	for _, profile := range match.Profiles {
		if !profile.SafeForFallback() {
			t.Fatalf("fallback mode included non-safe profile %q", profile.Name())
		}
	}
}

func TestBuildAnalysisDirectives_GenericMatchedKeepsFallbacksDisabled(t *testing.T) {
	match := MatchResult{
		Mode:     ModeMatched,
		Profiles: []Profile{genericProfile()},
	}
	directives := ResolveAnalysisPlan(match, nil, DiscoveredResources{}, MatchSignals{}).Directives
	if directives.EnableProcessorGPUFallback {
		t.Fatal("did not expect processor GPU fallback for generic matched profile")
	}
	if directives.EnableSupermicroNVMeBackplane {
		t.Fatal("did not expect supermicro nvme fallback for generic matched profile")
	}
	if directives.EnableGenericGraphicsControllerDedup {
		t.Fatal("did not expect generic graphics-controller dedup for generic matched profile")
	}
}
33	internal/collector/redfishprofile/profile_ami.go	Normal file
@@ -0,0 +1,33 @@
package redfishprofile

func amiProfile() Profile {
	return staticProfile{
		name:            "ami-family",
		priority:        10,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.ServiceRootVendor, "ami") || containsFold(s.ServiceRootProduct, "ami") {
				score += 70
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "ami") {
					score += 30
					break
				}
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			addPlanPaths(&plan.SeedPaths,
				"/redfish/v1/Oem/Ami",
				"/redfish/v1/Oem/Ami/InventoryData/Status",
			)
			ensurePrefetchEnabled(plan, true)
			addPlanNote(plan, "ami-family acquisition extensions enabled")
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
		},
	}
}
45	internal/collector/redfishprofile/profile_dell.go	Normal file
@@ -0,0 +1,45 @@
package redfishprofile

func dellProfile() Profile {
	return staticProfile{
		name:            "dell",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "dell") || containsFold(s.ChassisManufacturer, "dell") {
				score += 80
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "dell") {
					score += 30
					break
				}
			}
			if containsFold(s.ServiceRootProduct, "idrac") {
				score += 30
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			addPlanNote(plan, "dell iDRAC acquisition extensions enabled")
		},
		refineAcquisition: func(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, _ MatchSignals) {
			for _, managerPath := range discovered.ManagerPaths {
				if !containsFold(managerPath, "idrac") {
					continue
				}
				addPlanPaths(&resolved.SeedPaths, managerPath)
				addPlanPaths(&resolved.Plan.SeedPaths, managerPath)
				addPlanPaths(&resolved.CriticalPaths, managerPath)
				addPlanPaths(&resolved.Plan.CriticalPaths, managerPath)
			}
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
		},
	}
}
116	internal/collector/redfishprofile/profile_generic.go	Normal file
@@ -0,0 +1,116 @@
package redfishprofile

func genericProfile() Profile {
	return staticProfile{
		name:            "generic",
		priority:        100,
		safeForFallback: true,
		matchFn:         func(MatchSignals) int { return 10 },
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensurePrefetchPolicy(plan, AcquisitionPrefetchPolicy{
				IncludeSuffixes: []string{
					"/Bios",
					"/Processors",
					"/Memory",
					"/Storage",
					"/SimpleStorage",
					"/PCIeDevices",
					"/PCIeFunctions",
					"/Accelerators",
					"/GraphicsControllers",
					"/EthernetInterfaces",
					"/NetworkInterfaces",
					"/NetworkAdapters",
					"/Drives",
					"/Power",
					"/PowerSubsystem/PowerSupplies",
					"/NetworkProtocol",
					"/UpdateService",
					"/UpdateService/FirmwareInventory",
				},
				ExcludeContains: []string{
					"/Fabrics",
					"/Backplanes",
					"/Boards",
					"/Assembly",
					"/Sensors",
					"/ThresholdSensors",
					"/DiscreteSensors",
					"/ThermalConfig",
					"/ThermalSubsystem",
					"/EnvironmentMetrics",
					"/Certificates",
					"/LogServices",
				},
			})
			ensureScopedPathPolicy(plan, AcquisitionScopedPathPolicy{
				SystemCriticalSuffixes: []string{
					"/Bios",
					"/Oem/Public",
					"/Oem/Public/FRU",
					"/Processors",
					"/Memory",
					"/Storage",
					"/PCIeDevices",
					"/PCIeFunctions",
					"/Accelerators",
					"/GraphicsControllers",
					"/EthernetInterfaces",
					"/NetworkInterfaces",
					"/SimpleStorage",
					"/Storage/IntelVROC",
					"/Storage/IntelVROC/Drives",
					"/Storage/IntelVROC/Volumes",
				},
				ChassisCriticalSuffixes: []string{
					"/Oem/Public",
					"/Oem/Public/FRU",
					"/Power",
					"/NetworkAdapters",
					"/PCIeDevices",
					"/Accelerators",
					"/Drives",
					"/Assembly",
				},
				ManagerCriticalSuffixes: []string{
					"/NetworkProtocol",
				},
				SystemSeedSuffixes: []string{
					"/SimpleStorage",
					"/Storage/IntelVROC",
					"/Storage/IntelVROC/Drives",
					"/Storage/IntelVROC/Volumes",
				},
			})
			addPlanPaths(&plan.CriticalPaths,
				"/redfish/v1/UpdateService",
				"/redfish/v1/UpdateService/FirmwareInventory",
			)
			ensureSnapshotMaxDocuments(plan, 100000)
			ensureSnapshotWorkers(plan, 6)
			ensurePrefetchWorkers(plan, 4)
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     8,
				SnapshotSeconds:      90,
				PrefetchSeconds:      20,
				CriticalPlanBSeconds: 20,
				ProfilePlanBSeconds:  15,
			})
			ensurePostProbePolicy(plan, AcquisitionPostProbePolicy{
				EnableNumericCollectionProbe: true,
			})
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableCriticalCollectionMemberRetry: true,
				EnableCriticalSlowProbe:             true,
				EnableEmptyCriticalCollectionRetry:  true,
			})
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      900,
				ThrottleP95LatencyMS:    1800,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
		},
	}
}
85	internal/collector/redfishprofile/profile_hgx.go	Normal file
@@ -0,0 +1,85 @@
package redfishprofile

func hgxProfile() Profile {
	return staticProfile{
		name:            "hgx-topology",
		priority:        30,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemModel, "hgx") || containsFold(s.ChassisModel, "hgx") {
				score += 70
			}
			for _, hint := range s.ResourceHints {
				if containsFold(hint, "hgx_") || containsFold(hint, "gpu_sxm") {
					score += 20
					break
				}
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensureSnapshotMaxDocuments(plan, 180000)
			ensureSnapshotWorkers(plan, 4)
			ensurePrefetchWorkers(plan, 4)
			ensureNVMePostProbeEnabled(plan, false)
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     20,
				SnapshotSeconds:      300,
				PrefetchSeconds:      50,
				CriticalPlanBSeconds: 90,
				ProfilePlanBSeconds:  40,
			})
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      1500,
				ThrottleP95LatencyMS:    3000,
				MinSnapshotWorkers:      1,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			addPlanNote(plan, "hgx topology acquisition extensions enabled")
		},
		refineAcquisition: func(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, _ MatchSignals) {
			for _, systemPath := range discovered.SystemPaths {
				if !containsFold(systemPath, "hgx_baseboard_") {
					continue
				}
				addPlanPaths(&resolved.SeedPaths, systemPath, joinPath(systemPath, "/Processors"))
				addPlanPaths(&resolved.Plan.SeedPaths, systemPath, joinPath(systemPath, "/Processors"))
				addPlanPaths(&resolved.CriticalPaths, systemPath, joinPath(systemPath, "/Processors"))
				addPlanPaths(&resolved.Plan.CriticalPaths, systemPath, joinPath(systemPath, "/Processors"))
				addPlanPaths(&resolved.Plan.PlanBPaths, systemPath, joinPath(systemPath, "/Processors"))
			}
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
			d.EnableStorageEnclosureRecovery = true
		},
		refineAnalysis: func(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, discovered DiscoveredResources, _ MatchSignals) {
			if snapshotHasGPUProcessor(snapshot, discovered.SystemPaths) && (snapshotHasPathContaining(snapshot, "gpu_sxm") || snapshotHasPathContaining(snapshot, "hgx_")) {
				plan.Directives.EnableProcessorGPUFallback = true
				plan.Directives.EnableProcessorGPUChassisAlias = true
				addAnalysisLookupMode(plan, "hgx-alias")
				addAnalysisNote(plan, "hgx analysis enables processor-gpu alias fallback from snapshot topology")
			}
			if snapshotHasStorageControllerHint(snapshot, "/storage/intelvroc", "/storage/ha-raid", "/storage/mrvl.ha-raid") {
				plan.Directives.EnableKnownStorageControllerRecovery = true
				addAnalysisStorageDriveCollections(plan,
					"/Storage/IntelVROC/Drives",
					"/Storage/IntelVROC/Controllers/1/Drives",
				)
				addAnalysisStorageVolumeCollections(plan,
					"/Storage/IntelVROC/Volumes",
					"/Storage/HA-RAID/Volumes",
					"/Storage/MRVL.HA-RAID/Volumes",
				)
			}
			if snapshotHasPathContaining(snapshot, "/chassis/nvmessd.") && snapshotHasPathContaining(snapshot, ".storagebackplane") {
				plan.Directives.EnableSupermicroNVMeBackplane = true
			}
		},
	}
}
67	internal/collector/redfishprofile/profile_hpe.go	Normal file
@@ -0,0 +1,67 @@
package redfishprofile

func hpeProfile() Profile {
	return staticProfile{
		name:            "hpe",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "hpe") ||
				containsFold(s.SystemManufacturer, "hewlett packard") ||
				containsFold(s.ChassisManufacturer, "hpe") ||
				containsFold(s.ChassisManufacturer, "hewlett packard") {
				score += 80
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "hpe") {
					score += 30
					break
				}
			}
			if containsFold(s.ServiceRootProduct, "ilo") {
				score += 30
			}
			if containsFold(s.ManagerManufacturer, "hpe") || containsFold(s.ManagerManufacturer, "ilo") {
				score += 20
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			// HPE ProLiant SmartStorage RAID controller inventory is not reachable
			// via standard Redfish Storage paths — it requires the HPE OEM SmartStorage tree.
			ensureScopedPathPolicy(plan, AcquisitionScopedPathPolicy{
				SystemCriticalSuffixes: []string{
					"/SmartStorage",
					"/SmartStorageConfig",
				},
				ManagerCriticalSuffixes: []string{
					"/LicenseService",
				},
			})
			// HPE iLO responds more slowly than average BMCs under load; give the
			// ETA estimator a realistic baseline so progress reports are accurate.
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     12,
				SnapshotSeconds:      180,
				PrefetchSeconds:      30,
				CriticalPlanBSeconds: 40,
				ProfilePlanBSeconds:  25,
			})
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			// HPE iLO starts throttling under high request rates. Setting a higher
			// latency tolerance prevents the adaptive throttler from treating normal
			// iLO slowness as a reason to stall the collection.
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      1200,
				ThrottleP95LatencyMS:    2500,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			addPlanNote(plan, "hpe ilo acquisition extensions enabled")
		},
	}
}
@@ -0,0 +1,149 @@
```go
package redfishprofile

import (
	"regexp"
	"strings"
)

var (
	outboardCardHintRe = regexp.MustCompile(`/outboardPCIeCard\d+(?:/|$)`)
	obDriveHintRe      = regexp.MustCompile(`/Drives/OB\d+$`)
	fpDriveHintRe      = regexp.MustCompile(`/Drives/FP00HDD\d+$`)
	vrFirmwareHintRe   = regexp.MustCompile(`^CPU\d+_PVCC.*_VR$`)
)

var inspurGroupOEMFirmwareHints = map[string]struct{}{
	"Front_HDD_CPLD0": {},
	"MainBoard0CPLD":  {},
	"MainBoardCPLD":   {},
	"PDBBoardCPLD":    {},
	"SCMCPLD":         {},
	"SWBoardCPLD":     {},
}

func inspurGroupOEMPlatformsProfile() Profile {
	return staticProfile{
		name:            "inspur-group-oem-platforms",
		priority:        25,
		safeForFallback: false,
		matchFn: func(s MatchSignals) int {
			topologyScore := 0
			boardScore := 0
			chassisOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Chassis/", outboardCardHintRe)
			systemOutboard := matchedPathTokens(s.ResourceHints, "/redfish/v1/Systems/", outboardCardHintRe)
			obDrives := matchedPathTokens(s.ResourceHints, "", obDriveHintRe)
			fpDrives := matchedPathTokens(s.ResourceHints, "", fpDriveHintRe)
			firmwareNames, vrFirmwareNames := inspurGroupOEMFirmwareMatches(s.ResourceHints)

			if len(chassisOutboard) > 0 {
				topologyScore += 20
			}
			if len(systemOutboard) > 0 {
				topologyScore += 10
			}
			if len(obDrives) > 0 && len(fpDrives) > 0 {
				topologyScore += 15
			}
			if len(firmwareNames) >= 2 {
				boardScore += 15
			}
			if len(vrFirmwareNames) >= 2 {
				boardScore += 10
			}
			if anySignalContains(s, "COMMONbAssembly") {
				boardScore += 12
			}
			if anySignalContains(s, "EnvironmentMetrcs") {
				boardScore += 8
			}
			if anySignalContains(s, "GetServerAllUSBStatus") {
				boardScore += 8
			}
			if topologyScore == 0 || boardScore == 0 {
				return 0
			}
			return min(topologyScore+boardScore, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			addPlanNote(plan, "Inspur Group OEM platform fingerprint matched")
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
		},
	}
}

func matchedPathTokens(paths []string, requiredPrefix string, re *regexp.Regexp) []string {
	seen := make(map[string]struct{})
	for _, rawPath := range paths {
		path := normalizePath(rawPath)
		if path == "" || (requiredPrefix != "" && !strings.HasPrefix(path, requiredPrefix)) {
			continue
		}
		token := re.FindString(path)
		if token == "" {
			continue
		}
		token = strings.Trim(token, "/")
		if token == "" {
			continue
		}
		seen[token] = struct{}{}
	}
	out := make([]string, 0, len(seen))
	for token := range seen {
		out = append(out, token)
	}
	return dedupeSorted(out)
}

func inspurGroupOEMFirmwareMatches(paths []string) ([]string, []string) {
	firmwareNames := make(map[string]struct{})
	vrNames := make(map[string]struct{})
	for _, rawPath := range paths {
		path := normalizePath(rawPath)
		if !strings.HasPrefix(path, "/redfish/v1/UpdateService/FirmwareInventory/") {
			continue
		}
		name := strings.TrimSpace(path[strings.LastIndex(path, "/")+1:])
		if name == "" {
			continue
		}
		if _, ok := inspurGroupOEMFirmwareHints[name]; ok {
			firmwareNames[name] = struct{}{}
		}
		if vrFirmwareHintRe.MatchString(name) {
			vrNames[name] = struct{}{}
		}
	}
	return mapKeysSorted(firmwareNames), mapKeysSorted(vrNames)
}

func anySignalContains(signals MatchSignals, needle string) bool {
	needle = strings.TrimSpace(needle)
	if needle == "" {
		return false
	}
	for _, signal := range signals.ResourceHints {
		if strings.Contains(signal, needle) {
			return true
		}
	}
	for _, signal := range signals.DocHints {
		if strings.Contains(signal, needle) {
			return true
		}
	}
	return false
}

func mapKeysSorted(items map[string]struct{}) []string {
	out := make([]string, 0, len(items))
	for item := range items {
		out = append(out, item)
	}
	return dedupeSorted(out)
}
```
@@ -0,0 +1,182 @@
```go
package redfishprofile

import (
	"archive/zip"
	"encoding/json"
	"os"
	"path/filepath"
	"testing"
)

func TestCollectSignalsFromTree_InspurGroupOEMPlatformsSelectsMatchedMode(t *testing.T) {
	tree := map[string]interface{}{
		"/redfish/v1": map[string]interface{}{
			"@odata.id": "/redfish/v1",
		},
		"/redfish/v1/Systems": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1"},
			},
		},
		"/redfish/v1/Systems/1": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1",
			"Oem": map[string]interface{}{
				"Public": map[string]interface{}{
					"USB": map[string]interface{}{
						"@odata.id": "/redfish/v1/Systems/1/Oem/Public/GetServerAllUSBStatus",
					},
				},
			},
			"NetworkInterfaces": map[string]interface{}{
				"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces",
			},
		},
		"/redfish/v1/Systems/1/NetworkInterfaces": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Systems/1/NetworkInterfaces/outboardPCIeCard1"},
			},
		},
		"/redfish/v1/Chassis": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1"},
			},
		},
		"/redfish/v1/Chassis/1": map[string]interface{}{
			"@odata.id": "/redfish/v1/Chassis/1",
			"Actions": map[string]interface{}{
				"Oem": map[string]interface{}{
					"Public": map[string]interface{}{
						"NvGpuPowerLimitWatts": map[string]interface{}{
							"target": "/redfish/v1/Chassis/1/GPU/EnvironmentMetrcs",
						},
					},
				},
			},
			"Drives": map[string]interface{}{
				"@odata.id": "/redfish/v1/Chassis/1/Drives",
			},
			"NetworkAdapters": map[string]interface{}{
				"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters",
			},
		},
		"/redfish/v1/Chassis/1/Drives": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/OB01"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/Drives/FP00HDD00"},
			},
		},
		"/redfish/v1/Chassis/1/NetworkAdapters": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/Chassis/1/NetworkAdapters/outboardPCIeCard1"},
			},
		},
		"/redfish/v1/Chassis/1/Assembly": map[string]interface{}{
			"Assemblies": []interface{}{
				map[string]interface{}{
					"Oem": map[string]interface{}{
						"COMMONb": map[string]interface{}{
							"COMMONbAssembly": map[string]interface{}{
								"@odata.type": "#COMMONbAssembly.v1_0_0.COMMONbAssembly",
							},
						},
					},
				},
			},
		},
		"/redfish/v1/Managers": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/Managers/1"},
			},
		},
		"/redfish/v1/Managers/1": map[string]interface{}{
			"Actions": map[string]interface{}{
				"Oem": map[string]interface{}{
					"#PublicManager.ExportConfFile": map[string]interface{}{
						"target": "/redfish/v1/Managers/1/Actions/Oem/Public/ExportConfFile",
					},
				},
			},
		},
		"/redfish/v1/UpdateService/FirmwareInventory": map[string]interface{}{
			"Members": []interface{}{
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/Front_HDD_CPLD0"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/SCMCPLD"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU0_PVCCD_HV_VR"},
				map[string]interface{}{"@odata.id": "/redfish/v1/UpdateService/FirmwareInventory/CPU1_PVCCIN_VR"},
			},
		},
	}

	signals := CollectSignalsFromTree(tree)
	match := MatchProfiles(signals)

	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	assertProfileSelected(t, match, "inspur-group-oem-platforms")
}

func TestCollectSignalsFromTree_InspurGroupOEMPlatformsDoesNotFalsePositiveOnExampleRawExports(t *testing.T) {
	examples := []string{
		"2026-03-18 (G5500 V7) - 210619KUGGXGS2000015.zip",
		"2026-03-11 (SYS-821GE-TNHR) - A514359X5C08846.zip",
		"2026-03-15 (CG480-S5063) - P5T0006091.zip",
		"2026-03-18 (CG290-S3063) - PAT0011258.zip",
		"2024-04-25 (AS -4124GQ-TNMI) - S490387X4418273.zip",
	}
	for _, name := range examples {
		t.Run(name, func(t *testing.T) {
			tree := loadRawExportTreeFromExampleZip(t, name)
			match := MatchProfiles(CollectSignalsFromTree(tree))
			assertProfileNotSelected(t, match, "inspur-group-oem-platforms")
		})
	}
}

func loadRawExportTreeFromExampleZip(t *testing.T, name string) map[string]interface{} {
	t.Helper()
	path := filepath.Join("..", "..", "..", "example", name)
	f, err := os.Open(path)
	if err != nil {
		t.Fatalf("open example zip %s: %v", path, err)
	}
	defer f.Close()

	info, err := f.Stat()
	if err != nil {
		t.Fatalf("stat example zip %s: %v", path, err)
	}

	zr, err := zip.NewReader(f, info.Size())
	if err != nil {
		t.Fatalf("read example zip %s: %v", path, err)
	}
	for _, file := range zr.File {
		if file.Name != "raw_export.json" {
			continue
		}
		rc, err := file.Open()
		if err != nil {
			t.Fatalf("open %s in %s: %v", file.Name, path, err)
		}
		defer rc.Close()
		var payload struct {
			Source struct {
				RawPayloads struct {
					RedfishTree map[string]interface{} `json:"redfish_tree"`
				} `json:"raw_payloads"`
			} `json:"source"`
		}
		if err := json.NewDecoder(rc).Decode(&payload); err != nil {
			t.Fatalf("decode raw_export.json from %s: %v", path, err)
		}
		if len(payload.Source.RawPayloads.RedfishTree) == 0 {
			t.Fatalf("example %s has empty redfish_tree", path)
		}
		return payload.Source.RawPayloads.RedfishTree
	}
	t.Fatalf("raw_export.json not found in %s", path)
	return nil
}
```
internal/collector/redfishprofile/profile_msi.go (new file, 74 lines)
@@ -0,0 +1,74 @@
```go
package redfishprofile

import "strings"

func msiProfile() Profile {
	return staticProfile{
		name:            "msi",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "micro-star") || containsFold(s.ChassisManufacturer, "micro-star") {
				score += 80
			}
			if containsFold(s.SystemManufacturer, "msi") || containsFold(s.ChassisManufacturer, "msi") {
				score += 40
			}
			for _, hint := range s.ResourceHints {
				if strings.HasPrefix(hint, "/redfish/v1/Chassis/GPU") {
					score += 10
					break
				}
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensureSnapshotWorkers(plan, 6)
			ensurePrefetchWorkers(plan, 8)
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     12,
				SnapshotSeconds:      120,
				PrefetchSeconds:      25,
				CriticalPlanBSeconds: 35,
				ProfilePlanBSeconds:  25,
			})
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      1000,
				ThrottleP95LatencyMS:    2200,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      2,
				DisablePrefetchOnErrors: true,
			})
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			addPlanNote(plan, "msi gpu chassis probes enabled")
		},
		refineAcquisition: func(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, _ MatchSignals) {
			for _, chassisPath := range discovered.ChassisPaths {
				if !strings.HasPrefix(chassisPath, "/redfish/v1/Chassis/GPU") {
					continue
				}
				addPlanPaths(&resolved.SeedPaths, chassisPath)
				addPlanPaths(&resolved.Plan.SeedPaths, chassisPath)
				addPlanPaths(&resolved.CriticalPaths, joinPath(chassisPath, "/Sensors"))
				addPlanPaths(&resolved.Plan.CriticalPaths, joinPath(chassisPath, "/Sensors"))
				addPlanPaths(&resolved.Plan.PlanBPaths, joinPath(chassisPath, "/Sensors"))
			}
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
		},
		refineAnalysis: func(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, discovered DiscoveredResources, _ MatchSignals) {
			if snapshotHasGPUProcessor(snapshot, discovered.SystemPaths) && snapshotHasPathPrefix(snapshot, "/redfish/v1/Chassis/GPU") {
				plan.Directives.EnableProcessorGPUFallback = true
				plan.Directives.EnableMSIProcessorGPUChassisLookup = true
				plan.Directives.EnableMSIGhostGPUFilter = true
				addAnalysisLookupMode(plan, "msi-index")
				addAnalysisNote(plan, "msi analysis enables processor-gpu fallback from discovered GPU chassis")
				addAnalysisNote(plan, "msi ghost-gpu filter enabled: GPUs with temperature=0 on powered-on host are excluded")
			}
		},
	}
}
```
internal/collector/redfishprofile/profile_supermicro.go (new file, 81 lines)
@@ -0,0 +1,81 @@
```go
package redfishprofile

func supermicroProfile() Profile {
	return staticProfile{
		name:            "supermicro",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "supermicro") || containsFold(s.ChassisManufacturer, "supermicro") {
				score += 80
			}
			for _, hint := range s.ResourceHints {
				if containsFold(hint, "hgx_baseboard") || containsFold(hint, "hgx_gpu_sxm") {
					score += 20
					break
				}
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensureSnapshotMaxDocuments(plan, 150000)
			ensureSnapshotWorkers(plan, 6)
			ensurePrefetchWorkers(plan, 4)
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     15,
				SnapshotSeconds:      180,
				PrefetchSeconds:      35,
				CriticalPlanBSeconds: 45,
				ProfilePlanBSeconds:  30,
			})
			ensurePostProbePolicy(plan, AcquisitionPostProbePolicy{
				EnableDirectNVMEDiskBayProbe: true,
			})
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      1200,
				ThrottleP95LatencyMS:    2400,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			addPlanNote(plan, "supermicro acquisition extensions enabled")
		},
		refineAcquisition: func(resolved *ResolvedAcquisitionPlan, _ DiscoveredResources, signals MatchSignals) {
			for _, hint := range signals.ResourceHints {
				if normalizePath(hint) != "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory" {
					continue
				}
				addPlanPaths(&resolved.CriticalPaths, hint)
				addPlanPaths(&resolved.Plan.CriticalPaths, hint)
				addPlanPaths(&resolved.Plan.PlanBPaths, hint)
				break
			}
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableStorageEnclosureRecovery = true
		},
		refineAnalysis: func(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, _ DiscoveredResources, _ MatchSignals) {
			if snapshotHasPathContaining(snapshot, "/chassis/nvmessd.") && snapshotHasPathContaining(snapshot, ".storagebackplane") {
				plan.Directives.EnableSupermicroNVMeBackplane = true
				addAnalysisNote(plan, "supermicro analysis enables NVMe backplane recovery from snapshot paths")
			}
			if snapshotHasStorageControllerHint(snapshot, "/storage/intelvroc", "/storage/ha-raid", "/storage/mrvl.ha-raid") {
				plan.Directives.EnableKnownStorageControllerRecovery = true
				addAnalysisStorageDriveCollections(plan,
					"/Storage/IntelVROC/Drives",
					"/Storage/IntelVROC/Controllers/1/Drives",
				)
				addAnalysisStorageVolumeCollections(plan,
					"/Storage/IntelVROC/Volumes",
					"/Storage/HA-RAID/Volumes",
					"/Storage/MRVL.HA-RAID/Volumes",
				)
				addAnalysisNote(plan, "supermicro analysis enables known storage-controller recovery from snapshot paths")
			}
		},
	}
}
```
internal/collector/redfishprofile/profile_xfusion.go (new file, 55 lines)
@@ -0,0 +1,55 @@
```go
package redfishprofile

func xfusionProfile() Profile {
	return staticProfile{
		name:            "xfusion",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.ServiceRootVendor, "xfusion") {
				score += 90
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "xfusion") {
					score += 20
					break
				}
			}
			if containsFold(s.SystemManufacturer, "xfusion") || containsFold(s.ChassisManufacturer, "xfusion") {
				score += 40
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			ensureSnapshotMaxDocuments(plan, 120000)
			ensureSnapshotWorkers(plan, 4)
			ensurePrefetchWorkers(plan, 4)
			ensurePrefetchEnabled(plan, true)
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     10,
				SnapshotSeconds:      90,
				PrefetchSeconds:      20,
				CriticalPlanBSeconds: 30,
				ProfilePlanBSeconds:  20,
			})
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      800,
				ThrottleP95LatencyMS:    1800,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			addPlanNote(plan, "xfusion ibmc acquisition extensions enabled")
		},
		applyAnalysisDirectives: func(d *AnalysisDirectives, _ MatchSignals) {
			d.EnableGenericGraphicsControllerDedup = true
		},
		refineAnalysis: func(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, discovered DiscoveredResources, _ MatchSignals) {
			if snapshotHasGPUProcessor(snapshot, discovered.SystemPaths) {
				plan.Directives.EnableProcessorGPUFallback = true
				addAnalysisNote(plan, "xfusion analysis enables processor-gpu fallback from snapshot topology")
			}
		},
	}
}
```
internal/collector/redfishprofile/profiles_common.go (new file, 234 lines)
@@ -0,0 +1,234 @@
```go
package redfishprofile

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
)

type staticProfile struct {
	name                    string
	priority                int
	safeForFallback         bool
	matchFn                 func(MatchSignals) int
	extendAcquisition       func(*AcquisitionPlan, MatchSignals)
	refineAcquisition       func(*ResolvedAcquisitionPlan, DiscoveredResources, MatchSignals)
	applyAnalysisDirectives func(*AnalysisDirectives, MatchSignals)
	refineAnalysis          func(*ResolvedAnalysisPlan, map[string]interface{}, DiscoveredResources, MatchSignals)
	postAnalyze             func(*models.AnalysisResult, map[string]interface{}, MatchSignals)
}

func (p staticProfile) Name() string                    { return p.name }
func (p staticProfile) Priority() int                   { return p.priority }
func (p staticProfile) Match(signals MatchSignals) int  { return p.matchFn(normalizeSignals(signals)) }
func (p staticProfile) SafeForFallback() bool           { return p.safeForFallback }

func (p staticProfile) ExtendAcquisitionPlan(plan *AcquisitionPlan, signals MatchSignals) {
	if p.extendAcquisition != nil {
		p.extendAcquisition(plan, normalizeSignals(signals))
	}
}

func (p staticProfile) RefineAcquisitionPlan(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, signals MatchSignals) {
	if p.refineAcquisition != nil {
		p.refineAcquisition(resolved, discovered, normalizeSignals(signals))
	}
}

func (p staticProfile) ApplyAnalysisDirectives(directives *AnalysisDirectives, signals MatchSignals) {
	if p.applyAnalysisDirectives != nil {
		p.applyAnalysisDirectives(directives, normalizeSignals(signals))
	}
}

func (p staticProfile) RefineAnalysisPlan(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, discovered DiscoveredResources, signals MatchSignals) {
	if p.refineAnalysis != nil {
		p.refineAnalysis(plan, snapshot, discovered, normalizeSignals(signals))
	}
}

func (p staticProfile) PostAnalyze(result *models.AnalysisResult, snapshot map[string]interface{}, signals MatchSignals) {
	if p.postAnalyze != nil {
		p.postAnalyze(result, snapshot, normalizeSignals(signals))
	}
}

func BuiltinProfiles() []Profile {
	return []Profile{
		genericProfile(),
		amiProfile(),
		msiProfile(),
		supermicroProfile(),
		dellProfile(),
		hpeProfile(),
		inspurGroupOEMPlatformsProfile(),
		hgxProfile(),
		xfusionProfile(),
	}
}

func containsFold(v, sub string) bool {
	return strings.Contains(strings.ToLower(strings.TrimSpace(v)), strings.ToLower(strings.TrimSpace(sub)))
}

func addPlanPaths(dst *[]string, paths ...string) {
	*dst = append(*dst, paths...)
	*dst = dedupeSorted(*dst)
}

func addPlanNote(plan *AcquisitionPlan, note string) {
	if strings.TrimSpace(note) == "" {
		return
	}
	plan.Notes = append(plan.Notes, note)
	plan.Notes = dedupeSorted(plan.Notes)
}

func addAnalysisNote(plan *ResolvedAnalysisPlan, note string) {
	if plan == nil || strings.TrimSpace(note) == "" {
		return
	}
	plan.Notes = append(plan.Notes, note)
	plan.Notes = dedupeSorted(plan.Notes)
}

func addAnalysisLookupMode(plan *ResolvedAnalysisPlan, mode string) {
	if plan == nil || strings.TrimSpace(mode) == "" {
		return
	}
	plan.ProcessorGPUChassisLookupModes = dedupeSorted(append(plan.ProcessorGPUChassisLookupModes, mode))
}

func addAnalysisStorageDriveCollections(plan *ResolvedAnalysisPlan, rels ...string) {
	if plan == nil {
		return
	}
	plan.KnownStorageDriveCollections = dedupeSorted(append(plan.KnownStorageDriveCollections, rels...))
}

func addAnalysisStorageVolumeCollections(plan *ResolvedAnalysisPlan, rels ...string) {
	if plan == nil {
		return
	}
	plan.KnownStorageVolumeCollections = dedupeSorted(append(plan.KnownStorageVolumeCollections, rels...))
}

func ensureSnapshotMaxDocuments(plan *AcquisitionPlan, n int) {
	if n <= 0 {
		return
	}
	if plan.Tuning.SnapshotMaxDocuments < n {
		plan.Tuning.SnapshotMaxDocuments = n
	}
}

func ensureSnapshotWorkers(plan *AcquisitionPlan, n int) {
	if n <= 0 {
		return
	}
	if plan.Tuning.SnapshotWorkers < n {
		plan.Tuning.SnapshotWorkers = n
	}
}

func ensurePrefetchEnabled(plan *AcquisitionPlan, enabled bool) {
	if plan.Tuning.PrefetchEnabled == nil {
		plan.Tuning.PrefetchEnabled = new(bool)
	}
	*plan.Tuning.PrefetchEnabled = enabled
}

func ensurePrefetchWorkers(plan *AcquisitionPlan, n int) {
	if n <= 0 {
		return
	}
	if plan.Tuning.PrefetchWorkers < n {
		plan.Tuning.PrefetchWorkers = n
	}
}

func ensureNVMePostProbeEnabled(plan *AcquisitionPlan, enabled bool) {
	if plan.Tuning.NVMePostProbeEnabled == nil {
		plan.Tuning.NVMePostProbeEnabled = new(bool)
	}
	*plan.Tuning.NVMePostProbeEnabled = enabled
}

func ensureRatePolicy(plan *AcquisitionPlan, policy AcquisitionRatePolicy) {
	if policy.TargetP95LatencyMS > plan.Tuning.RatePolicy.TargetP95LatencyMS {
		plan.Tuning.RatePolicy.TargetP95LatencyMS = policy.TargetP95LatencyMS
	}
	if policy.ThrottleP95LatencyMS > plan.Tuning.RatePolicy.ThrottleP95LatencyMS {
		plan.Tuning.RatePolicy.ThrottleP95LatencyMS = policy.ThrottleP95LatencyMS
	}
	if policy.MinSnapshotWorkers > plan.Tuning.RatePolicy.MinSnapshotWorkers {
		plan.Tuning.RatePolicy.MinSnapshotWorkers = policy.MinSnapshotWorkers
	}
	if policy.MinPrefetchWorkers > plan.Tuning.RatePolicy.MinPrefetchWorkers {
		plan.Tuning.RatePolicy.MinPrefetchWorkers = policy.MinPrefetchWorkers
	}
	if policy.DisablePrefetchOnErrors {
		plan.Tuning.RatePolicy.DisablePrefetchOnErrors = true
	}
}

func ensureETABaseline(plan *AcquisitionPlan, baseline AcquisitionETABaseline) {
	if baseline.DiscoverySeconds > plan.Tuning.ETABaseline.DiscoverySeconds {
		plan.Tuning.ETABaseline.DiscoverySeconds = baseline.DiscoverySeconds
	}
	if baseline.SnapshotSeconds > plan.Tuning.ETABaseline.SnapshotSeconds {
		plan.Tuning.ETABaseline.SnapshotSeconds = baseline.SnapshotSeconds
	}
	if baseline.PrefetchSeconds > plan.Tuning.ETABaseline.PrefetchSeconds {
		plan.Tuning.ETABaseline.PrefetchSeconds = baseline.PrefetchSeconds
	}
	if baseline.CriticalPlanBSeconds > plan.Tuning.ETABaseline.CriticalPlanBSeconds {
		plan.Tuning.ETABaseline.CriticalPlanBSeconds = baseline.CriticalPlanBSeconds
	}
	if baseline.ProfilePlanBSeconds > plan.Tuning.ETABaseline.ProfilePlanBSeconds {
		plan.Tuning.ETABaseline.ProfilePlanBSeconds = baseline.ProfilePlanBSeconds
	}
}

func ensurePostProbePolicy(plan *AcquisitionPlan, policy AcquisitionPostProbePolicy) {
	if policy.EnableDirectNVMEDiskBayProbe {
		plan.Tuning.PostProbePolicy.EnableDirectNVMEDiskBayProbe = true
	}
	if policy.EnableNumericCollectionProbe {
		plan.Tuning.PostProbePolicy.EnableNumericCollectionProbe = true
	}
	if policy.EnableSensorCollectionProbe {
		plan.Tuning.PostProbePolicy.EnableSensorCollectionProbe = true
	}
}

func ensureRecoveryPolicy(plan *AcquisitionPlan, policy AcquisitionRecoveryPolicy) {
	if policy.EnableCriticalCollectionMemberRetry {
		plan.Tuning.RecoveryPolicy.EnableCriticalCollectionMemberRetry = true
	}
	if policy.EnableCriticalSlowProbe {
		plan.Tuning.RecoveryPolicy.EnableCriticalSlowProbe = true
	}
	if policy.EnableProfilePlanB {
		plan.Tuning.RecoveryPolicy.EnableProfilePlanB = true
	}
	if policy.EnableEmptyCriticalCollectionRetry {
		plan.Tuning.RecoveryPolicy.EnableEmptyCriticalCollectionRetry = true
	}
}

func ensureScopedPathPolicy(plan *AcquisitionPlan, policy AcquisitionScopedPathPolicy) {
	addPlanPaths(&plan.ScopedPaths.SystemSeedSuffixes, policy.SystemSeedSuffixes...)
	addPlanPaths(&plan.ScopedPaths.SystemCriticalSuffixes, policy.SystemCriticalSuffixes...)
	addPlanPaths(&plan.ScopedPaths.ChassisSeedSuffixes, policy.ChassisSeedSuffixes...)
	addPlanPaths(&plan.ScopedPaths.ChassisCriticalSuffixes, policy.ChassisCriticalSuffixes...)
	addPlanPaths(&plan.ScopedPaths.ManagerSeedSuffixes, policy.ManagerSeedSuffixes...)
	addPlanPaths(&plan.ScopedPaths.ManagerCriticalSuffixes, policy.ManagerCriticalSuffixes...)
}

func ensurePrefetchPolicy(plan *AcquisitionPlan, policy AcquisitionPrefetchPolicy) {
	addPlanPaths(&plan.Tuning.PrefetchPolicy.IncludeSuffixes, policy.IncludeSuffixes...)
	addPlanPaths(&plan.Tuning.PrefetchPolicy.ExcludeContains, policy.ExcludeContains...)
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}
```
internal/collector/redfishprofile/signals.go (new file, 177 lines)
@@ -0,0 +1,177 @@
```go
package redfishprofile

import "strings"

func CollectSignals(serviceRootDoc, systemDoc, chassisDoc, managerDoc map[string]interface{}, resourceHints []string, hintDocs ...map[string]interface{}) MatchSignals {
	resourceHints = append([]string{}, resourceHints...)
	docHints := make([]string, 0)
	for _, doc := range append([]map[string]interface{}{serviceRootDoc, systemDoc, chassisDoc, managerDoc}, hintDocs...) {
		embeddedPaths, embeddedHints := collectDocSignalHints(doc)
		resourceHints = append(resourceHints, embeddedPaths...)
		docHints = append(docHints, embeddedHints...)
	}
	signals := MatchSignals{
		ServiceRootVendor:   lookupString(serviceRootDoc, "Vendor"),
		ServiceRootProduct:  lookupString(serviceRootDoc, "Product"),
		SystemManufacturer:  lookupString(systemDoc, "Manufacturer"),
		SystemModel:         lookupString(systemDoc, "Model"),
		SystemSKU:           lookupString(systemDoc, "SKU"),
		ChassisManufacturer: lookupString(chassisDoc, "Manufacturer"),
		ChassisModel:        lookupString(chassisDoc, "Model"),
		ManagerManufacturer: lookupString(managerDoc, "Manufacturer"),
		ResourceHints:       resourceHints,
		DocHints:            docHints,
	}
	signals.OEMNamespaces = dedupeSorted(append(
		oemNamespaces(serviceRootDoc),
		append(oemNamespaces(systemDoc), append(oemNamespaces(chassisDoc), oemNamespaces(managerDoc)...)...)...,
	))
	return normalizeSignals(signals)
}

func CollectSignalsFromTree(tree map[string]interface{}) MatchSignals {
	getDoc := func(path string) map[string]interface{} {
		if v, ok := tree[path]; ok {
			if doc, ok := v.(map[string]interface{}); ok {
				return doc
			}
		}
		return nil
	}

	memberPath := func(collectionPath, fallbackPath string) string {
		collection := getDoc(collectionPath)
		if len(collection) != 0 {
			if members, ok := collection["Members"].([]interface{}); ok && len(members) > 0 {
				if ref, ok := members[0].(map[string]interface{}); ok {
					if path := lookupString(ref, "@odata.id"); path != "" {
						return path
					}
				}
			}
		}
		return fallbackPath
	}

	systemPath := memberPath("/redfish/v1/Systems", "/redfish/v1/Systems/1")
	chassisPath := memberPath("/redfish/v1/Chassis", "/redfish/v1/Chassis/1")
	managerPath := memberPath("/redfish/v1/Managers", "/redfish/v1/Managers/1")

	resourceHints := make([]string, 0, len(tree))
	hintDocs := make([]map[string]interface{}, 0, len(tree))
	for path := range tree {
		path = strings.TrimSpace(path)
		if path == "" {
			continue
		}
		resourceHints = append(resourceHints, path)
	}
	for _, v := range tree {
		doc, ok := v.(map[string]interface{})
		if !ok {
			continue
		}
		hintDocs = append(hintDocs, doc)
	}

	return CollectSignals(
		getDoc("/redfish/v1"),
		getDoc(systemPath),
		getDoc(chassisPath),
		getDoc(managerPath),
		resourceHints,
		hintDocs...,
	)
}

func collectDocSignalHints(doc map[string]interface{}) ([]string, []string) {
	if len(doc) == 0 {
		return nil, nil
	}
	paths := make([]string, 0)
	hints := make([]string, 0)
	var walk func(any)
	walk = func(v any) {
		switch x := v.(type) {
		case map[string]interface{}:
			for rawKey, child := range x {
				key := strings.TrimSpace(rawKey)
				if key != "" {
					hints = append(hints, key)
				}
				if s, ok := child.(string); ok {
					s = strings.TrimSpace(s)
					if s != "" {
						switch key {
						case "@odata.id", "target":
							paths = append(paths, s)
						case "@odata.type":
							hints = append(hints, s)
						default:
							if isInterestingSignalString(s) {
```
|
||||
hints = append(hints, s)
|
||||
if strings.HasPrefix(s, "/") {
|
||||
paths = append(paths, s)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
walk(child)
|
||||
}
|
||||
case []interface{}:
|
||||
for _, child := range x {
|
||||
walk(child)
|
||||
}
|
||||
}
|
||||
}
|
||||
walk(doc)
|
||||
return paths, hints
|
||||
}
|
||||
|
||||
func isInterestingSignalString(s string) bool {
|
||||
switch {
|
||||
case strings.HasPrefix(s, "/"):
|
||||
return true
|
||||
case strings.HasPrefix(s, "#"):
|
||||
return true
|
||||
case strings.Contains(s, "COMMONb"):
|
||||
return true
|
||||
case strings.Contains(s, "EnvironmentMetrcs"):
|
||||
return true
|
||||
case strings.Contains(s, "GetServerAllUSBStatus"):
|
||||
return true
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func lookupString(doc map[string]interface{}, key string) string {
|
||||
if len(doc) == 0 {
|
||||
return ""
|
||||
}
|
||||
value, _ := doc[key]
|
||||
if s, ok := value.(string); ok {
|
||||
return strings.TrimSpace(s)
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func oemNamespaces(doc map[string]interface{}) []string {
|
||||
if len(doc) == 0 {
|
||||
return nil
|
||||
}
|
||||
oem, ok := doc["Oem"].(map[string]interface{})
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
out := make([]string, 0, len(oem))
|
||||
for key := range oem {
|
||||
key = strings.TrimSpace(key)
|
||||
if key == "" {
|
||||
continue
|
||||
}
|
||||
out = append(out, key)
|
||||
}
|
||||
return out
|
||||
}
|
||||
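The `memberPath` closure above resolves the first member of a Redfish collection to its `@odata.id`, falling back to a conventional path when the collection is absent or empty. A minimal standalone sketch of that resolution (the function name `firstMemberPath` is ours, not from the file):

```go
package main

import "fmt"

// firstMemberPath mirrors the memberPath closure: given a decoded Redfish
// collection document, return the @odata.id of its first member, or a
// fallback path when the collection is missing or has no members.
// Indexing a nil map is safe in Go, so a nil collection falls through.
func firstMemberPath(collection map[string]interface{}, fallback string) string {
	if members, ok := collection["Members"].([]interface{}); ok && len(members) > 0 {
		if ref, ok := members[0].(map[string]interface{}); ok {
			if s, ok := ref["@odata.id"].(string); ok && s != "" {
				return s
			}
		}
	}
	return fallback
}

func main() {
	systems := map[string]interface{}{
		"Members": []interface{}{
			map[string]interface{}{"@odata.id": "/redfish/v1/Systems/System.Embedded.1"},
		},
	}
	fmt.Println(firstMemberPath(systems, "/redfish/v1/Systems/1")) // member path wins
	fmt.Println(firstMemberPath(nil, "/redfish/v1/Systems/1"))     // fallback
}
```

This is why the Dell testdata below resolves `System.Embedded.1` paths while the generic fixture falls back to `/redfish/v1/Systems/1`.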
17  internal/collector/redfishprofile/testdata/ami-generic.json  vendored  Normal file
@@ -0,0 +1,17 @@
{
  "ServiceRootVendor": "AMI",
  "ServiceRootProduct": "AMI Redfish Server",
  "SystemManufacturer": "Gigabyte",
  "SystemModel": "G292-Z42",
  "SystemSKU": "",
  "ChassisManufacturer": "",
  "ChassisModel": "",
  "ManagerManufacturer": "",
  "OEMNamespaces": ["Ami"],
  "ResourceHints": [
    "/redfish/v1/Chassis/Self",
    "/redfish/v1/Managers/Self",
    "/redfish/v1/Oem/Ami",
    "/redfish/v1/Systems/Self"
  ]
}
18  internal/collector/redfishprofile/testdata/dell-r750.json  vendored  Normal file
@@ -0,0 +1,18 @@
{
  "ServiceRootVendor": "",
  "ServiceRootProduct": "iDRAC Redfish Service",
  "SystemManufacturer": "Dell Inc.",
  "SystemModel": "PowerEdge R750",
  "SystemSKU": "0A42H9",
  "ChassisManufacturer": "Dell Inc.",
  "ChassisModel": "PowerEdge R750",
  "ManagerManufacturer": "Dell Inc.",
  "OEMNamespaces": ["Dell"],
  "ResourceHints": [
    "/redfish/v1/Chassis/System.Embedded.1",
    "/redfish/v1/Managers/iDRAC.Embedded.1",
    "/redfish/v1/Managers/iDRAC.Embedded.1/Oem/Dell",
    "/redfish/v1/Systems/System.Embedded.1",
    "/redfish/v1/Systems/System.Embedded.1/Storage"
  ]
}
33  internal/collector/redfishprofile/testdata/msi-cg290.json  vendored  Normal file
@@ -0,0 +1,33 @@
{
  "ServiceRootVendor": "AMI",
  "ServiceRootProduct": "AMI Redfish Server",
  "SystemManufacturer": "Micro-Star International Co., Ltd.",
  "SystemModel": "CG290-S3063",
  "SystemSKU": "S3063G290RAU4",
  "ChassisManufacturer": "NVIDIA",
  "ChassisModel": "",
  "ManagerManufacturer": "",
  "OEMNamespaces": ["Ami"],
  "ResourceHints": [
    "/redfish/v1/Chassis/GPU1",
    "/redfish/v1/Chassis/GPU1/NetworkAdapters",
    "/redfish/v1/Chassis/GPU1/Sensors",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Power",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_TLimit",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Temperature",
    "/redfish/v1/Chassis/GPU2",
    "/redfish/v1/Chassis/GPU2/NetworkAdapters",
    "/redfish/v1/Chassis/GPU2/Sensors",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Power",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_TLimit",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Temperature",
    "/redfish/v1/Chassis/GPU3",
    "/redfish/v1/Chassis/GPU3/NetworkAdapters",
    "/redfish/v1/Chassis/GPU3/Sensors",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Power",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_TLimit",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Temperature",
    "/redfish/v1/Chassis/GPU4",
    "/redfish/v1/Chassis/GPU4/NetworkAdapters"
  ]
}
33  internal/collector/redfishprofile/testdata/msi-cg480-copy.json  vendored  Normal file
@@ -0,0 +1,33 @@
{
  "ServiceRootVendor": "AMI",
  "ServiceRootProduct": "AMI Redfish Server",
  "SystemManufacturer": "Micro-Star International Co., Ltd.",
  "SystemModel": "CG480-S5063",
  "SystemSKU": "5063G480RAE20",
  "ChassisManufacturer": "NVIDIA",
  "ChassisModel": "",
  "ManagerManufacturer": "",
  "OEMNamespaces": ["Ami"],
  "ResourceHints": [
    "/redfish/v1/Chassis/GPU1",
    "/redfish/v1/Chassis/GPU1/NetworkAdapters",
    "/redfish/v1/Chassis/GPU1/Sensors",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Power",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_TLimit",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Temperature",
    "/redfish/v1/Chassis/GPU2",
    "/redfish/v1/Chassis/GPU2/NetworkAdapters",
    "/redfish/v1/Chassis/GPU2/Sensors",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Power",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_TLimit",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Temperature",
    "/redfish/v1/Chassis/GPU3",
    "/redfish/v1/Chassis/GPU3/NetworkAdapters",
    "/redfish/v1/Chassis/GPU3/Sensors",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Power",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_TLimit",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Temperature",
    "/redfish/v1/Chassis/GPU4",
    "/redfish/v1/Chassis/GPU4/NetworkAdapters"
  ]
}
33  internal/collector/redfishprofile/testdata/msi-cg480.json  vendored  Normal file
@@ -0,0 +1,33 @@
{
  "ServiceRootVendor": "AMI",
  "ServiceRootProduct": "AMI Redfish Server",
  "SystemManufacturer": "Micro-Star International Co., Ltd.",
  "SystemModel": "CG480-S5063",
  "SystemSKU": "5063G480RAE20",
  "ChassisManufacturer": "NVIDIA",
  "ChassisModel": "",
  "ManagerManufacturer": "",
  "OEMNamespaces": ["Ami"],
  "ResourceHints": [
    "/redfish/v1/Chassis/GPU1",
    "/redfish/v1/Chassis/GPU1/NetworkAdapters",
    "/redfish/v1/Chassis/GPU1/Sensors",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Power",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_TLimit",
    "/redfish/v1/Chassis/GPU1/Sensors/GPU1_Temperature",
    "/redfish/v1/Chassis/GPU2",
    "/redfish/v1/Chassis/GPU2/NetworkAdapters",
    "/redfish/v1/Chassis/GPU2/Sensors",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Power",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_TLimit",
    "/redfish/v1/Chassis/GPU2/Sensors/GPU2_Temperature",
    "/redfish/v1/Chassis/GPU3",
    "/redfish/v1/Chassis/GPU3/NetworkAdapters",
    "/redfish/v1/Chassis/GPU3/Sensors",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Power",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_TLimit",
    "/redfish/v1/Chassis/GPU3/Sensors/GPU3_Temperature",
    "/redfish/v1/Chassis/GPU4",
    "/redfish/v1/Chassis/GPU4/NetworkAdapters"
  ]
}
33  internal/collector/redfishprofile/testdata/supermicro-hgx.json  vendored  Normal file
@@ -0,0 +1,33 @@
{
  "ServiceRootVendor": "Supermicro",
  "ServiceRootProduct": "",
  "SystemManufacturer": "Supermicro",
  "SystemModel": "SYS-821GE-TNHR",
  "SystemSKU": "0x1D1415D9",
  "ChassisManufacturer": "Supermicro",
  "ChassisModel": "X13DEG-OAD",
  "ManagerManufacturer": "",
  "OEMNamespaces": ["Supermicro"],
  "ResourceHints": [
    "/redfish/v1/Chassis/HGX_BMC_0",
    "/redfish/v1/Chassis/HGX_BMC_0/Assembly",
    "/redfish/v1/Chassis/HGX_BMC_0/Controls",
    "/redfish/v1/Chassis/HGX_BMC_0/Drives",
    "/redfish/v1/Chassis/HGX_BMC_0/EnvironmentMetrics",
    "/redfish/v1/Chassis/HGX_BMC_0/LogServices",
    "/redfish/v1/Chassis/HGX_BMC_0/PCIeDevices",
    "/redfish/v1/Chassis/HGX_BMC_0/PCIeSlots",
    "/redfish/v1/Chassis/HGX_BMC_0/PowerSubsystem",
    "/redfish/v1/Chassis/HGX_BMC_0/PowerSubsystem/PowerSupplies",
    "/redfish/v1/Chassis/HGX_BMC_0/Sensors",
    "/redfish/v1/Chassis/HGX_BMC_0/Sensors/HGX_BMC_0_Temp_0",
    "/redfish/v1/Chassis/HGX_BMC_0/ThermalSubsystem",
    "/redfish/v1/Chassis/HGX_BMC_0/ThermalSubsystem/ThermalMetrics",
    "/redfish/v1/Chassis/HGX_Chassis_0",
    "/redfish/v1/Chassis/HGX_Chassis_0/Assembly",
    "/redfish/v1/Chassis/HGX_Chassis_0/Controls",
    "/redfish/v1/Chassis/HGX_Chassis_0/Controls/TotalGPU_Power_0",
    "/redfish/v1/Chassis/HGX_Chassis_0/Drives",
    "/redfish/v1/Chassis/HGX_Chassis_0/EnvironmentMetrics"
  ]
}
51  internal/collector/redfishprofile/testdata/supermicro-oam-amd.json  vendored  Normal file
@@ -0,0 +1,51 @@
{
  "ServiceRootVendor": "",
  "ServiceRootProduct": "H12DGQ-NT6",
  "SystemManufacturer": "Supermicro",
  "SystemModel": "AS -4124GQ-TNMI",
  "SystemSKU": "091715D9",
  "ChassisManufacturer": "Supermicro",
  "ChassisModel": "H12DGQ-NT6",
  "ManagerManufacturer": "",
  "OEMNamespaces": [
    "Supermicro"
  ],
  "ResourceHints": [
    "/redfish/v1/Chassis/1/PCIeDevices",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU1/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU1/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU2",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU2/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU2/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU3",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU3/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU3/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU4",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU4/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU4/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU5",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU5/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU5/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU6",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU6/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU6/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU7",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU7/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU7/PCIeFunctions/1",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU8",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU8/PCIeFunctions",
    "/redfish/v1/Chassis/1/PCIeDevices/GPU8/PCIeFunctions/1",
    "/redfish/v1/Managers/1/Oem/Supermicro/FanMode",
    "/redfish/v1/Oem/Supermicro/DumpService",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU1",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU2",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU3",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU4",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU5",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU6",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU7",
    "/redfish/v1/UpdateService/FirmwareInventory/GPU8",
    "/redfish/v1/UpdateService/Oem/Supermicro/FirmwareInventory"
  ]
}
16  internal/collector/redfishprofile/testdata/unknown-vendor.json  vendored  Normal file
@@ -0,0 +1,16 @@
{
  "ServiceRootVendor": "",
  "ServiceRootProduct": "Redfish Service",
  "SystemManufacturer": "",
  "SystemModel": "",
  "SystemSKU": "",
  "ChassisManufacturer": "",
  "ChassisModel": "",
  "ManagerManufacturer": "",
  "OEMNamespaces": [],
  "ResourceHints": [
    "/redfish/v1/Chassis/1",
    "/redfish/v1/Managers/1",
    "/redfish/v1/Systems/1"
  ]
}
24  internal/collector/redfishprofile/testdata/xfusion-g5500v7.json  vendored  Normal file
@@ -0,0 +1,24 @@
{
  "ServiceRootVendor": "xFusion",
  "ServiceRootProduct": "G5500 V7",
  "SystemManufacturer": "OEM",
  "SystemModel": "G5500 V7",
  "SystemSKU": "",
  "ChassisManufacturer": "OEM",
  "ChassisModel": "G5500 V7",
  "ManagerManufacturer": "XFUSION",
  "OEMNamespaces": ["xFusion"],
  "ResourceHints": [
    "/redfish/v1/Chassis/1",
    "/redfish/v1/Chassis/1/Drives",
    "/redfish/v1/Chassis/1/PCIeDevices",
    "/redfish/v1/Chassis/1/Sensors",
    "/redfish/v1/Managers/1",
    "/redfish/v1/Systems/1",
    "/redfish/v1/Systems/1/GraphicsControllers",
    "/redfish/v1/Systems/1/Processors",
    "/redfish/v1/Systems/1/Processors/Gpu1",
    "/redfish/v1/Systems/1/Storages",
    "/redfish/v1/UpdateService/FirmwareInventory"
  ]
}
171  internal/collector/redfishprofile/types.go  Normal file
@@ -0,0 +1,171 @@
package redfishprofile

import (
	"sort"

	"git.mchus.pro/mchus/logpile/internal/models"
)

type MatchSignals struct {
	ServiceRootVendor   string
	ServiceRootProduct  string
	SystemManufacturer  string
	SystemModel         string
	SystemSKU           string
	ChassisManufacturer string
	ChassisModel        string
	ManagerManufacturer string
	OEMNamespaces       []string
	ResourceHints       []string
	DocHints            []string
}

type AcquisitionPlan struct {
	Mode          string
	Profiles      []string
	SeedPaths     []string
	CriticalPaths []string
	PlanBPaths    []string
	Notes         []string
	ScopedPaths   AcquisitionScopedPathPolicy
	Tuning        AcquisitionTuning
}

type DiscoveredResources struct {
	SystemPaths  []string
	ChassisPaths []string
	ManagerPaths []string
}

type ResolvedAcquisitionPlan struct {
	Plan          AcquisitionPlan
	SeedPaths     []string
	CriticalPaths []string
}

type AcquisitionScopedPathPolicy struct {
	SystemSeedSuffixes      []string
	SystemCriticalSuffixes  []string
	ChassisSeedSuffixes     []string
	ChassisCriticalSuffixes []string
	ManagerSeedSuffixes     []string
	ManagerCriticalSuffixes []string
}

type AcquisitionTuning struct {
	SnapshotMaxDocuments int
	SnapshotWorkers      int
	PrefetchEnabled      *bool
	PrefetchWorkers      int
	NVMePostProbeEnabled *bool
	RatePolicy           AcquisitionRatePolicy
	ETABaseline          AcquisitionETABaseline
	PostProbePolicy      AcquisitionPostProbePolicy
	RecoveryPolicy       AcquisitionRecoveryPolicy
	PrefetchPolicy       AcquisitionPrefetchPolicy
}

type AcquisitionRatePolicy struct {
	TargetP95LatencyMS      int
	ThrottleP95LatencyMS    int
	MinSnapshotWorkers      int
	MinPrefetchWorkers      int
	DisablePrefetchOnErrors bool
}

type AcquisitionETABaseline struct {
	DiscoverySeconds     int
	SnapshotSeconds      int
	PrefetchSeconds      int
	CriticalPlanBSeconds int
	ProfilePlanBSeconds  int
}

type AcquisitionPostProbePolicy struct {
	EnableDirectNVMEDiskBayProbe bool
	EnableNumericCollectionProbe bool
	EnableSensorCollectionProbe  bool
}

type AcquisitionRecoveryPolicy struct {
	EnableCriticalCollectionMemberRetry bool
	EnableCriticalSlowProbe             bool
	EnableProfilePlanB                  bool
	EnableEmptyCriticalCollectionRetry  bool
}

type AcquisitionPrefetchPolicy struct {
	IncludeSuffixes []string
	ExcludeContains []string
}

type AnalysisDirectives struct {
	EnableProcessorGPUFallback           bool
	EnableSupermicroNVMeBackplane        bool
	EnableProcessorGPUChassisAlias       bool
	EnableGenericGraphicsControllerDedup bool
	EnableMSIProcessorGPUChassisLookup   bool
	EnableMSIGhostGPUFilter              bool
	EnableStorageEnclosureRecovery       bool
	EnableKnownStorageControllerRecovery bool
}

type ResolvedAnalysisPlan struct {
	Match                          MatchResult
	Directives                     AnalysisDirectives
	Notes                          []string
	ProcessorGPUChassisLookupModes []string
	KnownStorageDriveCollections   []string
	KnownStorageVolumeCollections  []string
}

type Profile interface {
	Name() string
	Priority() int
	Match(signals MatchSignals) int
	SafeForFallback() bool
	ExtendAcquisitionPlan(plan *AcquisitionPlan, signals MatchSignals)
	RefineAcquisitionPlan(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, signals MatchSignals)
	ApplyAnalysisDirectives(directives *AnalysisDirectives, signals MatchSignals)
	RefineAnalysisPlan(plan *ResolvedAnalysisPlan, snapshot map[string]interface{}, discovered DiscoveredResources, signals MatchSignals)
	PostAnalyze(result *models.AnalysisResult, snapshot map[string]interface{}, signals MatchSignals)
}

type MatchResult struct {
	Mode     string
	Profiles []Profile
	Scores   []ProfileScore
}

type ProfileScore struct {
	Name     string
	Score    int
	Active   bool
	Priority int
}

func normalizeSignals(signals MatchSignals) MatchSignals {
	signals.OEMNamespaces = dedupeSorted(signals.OEMNamespaces)
	signals.ResourceHints = dedupeSorted(signals.ResourceHints)
	signals.DocHints = dedupeSorted(signals.DocHints)
	return signals
}

func dedupeSorted(items []string) []string {
	if len(items) == 0 {
		return nil
	}
	set := make(map[string]struct{}, len(items))
	for _, item := range items {
		if item == "" {
			continue
		}
		set[item] = struct{}{}
	}
	out := make([]string, 0, len(set))
	for item := range set {
		out = append(out, item)
	}
	sort.Strings(out)
	return out
}
@@ -15,17 +15,53 @@ type Request struct {
	Password string
	Token    string
	TLSMode  string
-	PowerOnIfHostOff bool
+	PowerOnIfHostOff     bool
+	StopHostAfterCollect bool
+	DebugPayloads        bool
}

type Progress struct {
-	Status   string
-	Progress int
-	Message  string
+	Status        string
+	Progress      int
+	Message       string
+	CurrentPhase  string
+	ETASeconds    int
+	ActiveModules []ModuleActivation
+	ModuleScores  []ModuleScore
+	DebugInfo     *CollectDebugInfo
}

type ProgressFn func(Progress)

+type ModuleActivation struct {
+	Name  string
+	Score int
+}
+
+type ModuleScore struct {
+	Name     string
+	Score    int
+	Active   bool
+	Priority int
+}
+
+type CollectDebugInfo struct {
+	AdaptiveThrottled bool
+	SnapshotWorkers   int
+	PrefetchWorkers   int
+	PrefetchEnabled   *bool
+	PhaseTelemetry    []PhaseTelemetry
+}
+
+type PhaseTelemetry struct {
+	Phase     string
+	Requests  int
+	Errors    int
+	ErrorRate float64
+	AvgMS     int64
+	P95MS     int64
+}
+
type ProbeResult struct {
	Reachable bool
	Protocol  string
@@ -33,7 +33,7 @@ func ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, erro
	// Determine target host (optional field)
	targetHost := inferTargetHost(result.TargetHost, result.Filename)

-	collectedAt := formatRFC3339(result.CollectedAt)
+	collectedAt := formatRFC3339(reanimatorCollectedAt(result))
	devices := canonicalDevicesForExport(result.Hardware)

	export := &ReanimatorExport{
@@ -58,6 +58,17 @@ func ConvertToReanimator(result *models.AnalysisResult) (*ReanimatorExport, erro
	return export, nil
}

+// reanimatorCollectedAt returns the best timestamp for Reanimator export collected_at.
+// Prefers InventoryLastModifiedAt when it is set and no older than 30 days; falls back
+// to CollectedAt (and ultimately to now via formatRFC3339).
+func reanimatorCollectedAt(result *models.AnalysisResult) time.Time {
+	inv := result.InventoryLastModifiedAt
+	if !inv.IsZero() && time.Since(inv) <= 30*24*time.Hour {
+		return inv
+	}
+	return result.CollectedAt
+}
+
// formatRFC3339 formats time in RFC3339 format, returns current time if zero
func formatRFC3339(t time.Time) string {
	if t.IsZero() {
@@ -347,10 +358,12 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
	prev.score = canonicalScore(prev.item)
	byKey[key] = prev
	}
-	// Secondary pass: for items without serial/BDF (noKey), try to merge into an
-	// existing keyed entry with the same model+manufacturer. This handles the case
-	// where a device appears both in PCIeDevices (with BDF) and NetworkAdapters
-	// (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard with the same model.
+	// Secondary pass: for PCIe-class items without serial/BDF (noKey), try to merge
+	// into an existing keyed entry with the same model+manufacturer. This handles
+	// the case where a device appears both in PCIeDevices (with BDF) and
+	// NetworkAdapters (without BDF) — e.g. Inspur outboardPCIeCard vs PCIeCard
+	// with the same model. Do not apply this to storage: repeated NVMe slots often
+	// share the same model string and would collapse incorrectly.
	// deviceIdentity returns the best available model name for secondary matching,
	// preferring Model over DeviceClass (which may hold a resolved device name).
	deviceIdentity := func(d models.HardwareDevice) string {
@@ -366,6 +379,10 @@ func dedupeCanonicalDevices(items []models.HardwareDevice) []models.HardwareDevi
	var unmatched []models.HardwareDevice
	for _, item := range noKey {
+		mergeKind := canonicalMergeKind(item.Kind)
+		if mergeKind != "pcie-class" {
+			unmatched = append(unmatched, item)
+			continue
+		}
		identity := deviceIdentity(item)
		mfr := strings.ToLower(strings.TrimSpace(item.Manufacturer))
		if identity == "" {
@@ -658,7 +675,17 @@ func convertMemoryFromDevices(devices []models.HardwareDevice, collectedAt strin
	}
	present := boolFromPresentPtr(d.Present, true)
	status := normalizeStatus(d.Status, true)
-	if !present || d.SizeMB == 0 || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
+	mem := models.MemoryDIMM{
+		Present:      present,
+		SizeMB:       d.SizeMB,
+		Type:         d.Type,
+		Description:  stringFromDetailMap(d.Details, "description"),
+		Manufacturer: d.Manufacturer,
+		SerialNumber: d.SerialNumber,
+		PartNumber:   d.PartNumber,
+		Status:       d.Status,
+	}
+	if !mem.IsInstalledInventory() || status == "Empty" || strings.TrimSpace(d.SerialNumber) == "" {
		continue
	}
	meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
@@ -700,18 +727,16 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
	if isVirtualExportStorageDevice(d) {
		continue
	}
-	if strings.TrimSpace(d.SerialNumber) == "" {
-		continue
-	}
-	present := d.Present == nil || *d.Present
-	if !present {
+	if !shouldExportStorageDevice(d) {
		continue
	}
+	present := boolFromPresentPtr(d.Present, true)
	status := inferStorageStatus(models.Storage{Present: present})
	if strings.TrimSpace(d.Status) != "" {
-		status = normalizeStatus(d.Status, false)
+		status = normalizeStatus(d.Status, !present)
	}
	meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
+	presentValue := present
	result = append(result, ReanimatorStorage{
		Slot: d.Slot,
		Type: d.Type,
@@ -721,6 +746,7 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
		Manufacturer: d.Manufacturer,
		Firmware:     d.Firmware,
		Interface:    d.Interface,
+		Present:      &presentValue,
		TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
		PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
		PowerCycles:  int64FromDetailMap(d.Details, "power_cycles"),
@@ -1323,7 +1349,7 @@ func convertMemory(memory []models.MemoryDIMM, collectedAt string) []ReanimatorM

	result := make([]ReanimatorMemory, 0, len(memory))
	for _, mem := range memory {
-		if !mem.Present || mem.SizeMB == 0 || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
+		if !mem.IsInstalledInventory() || normalizeStatus(mem.Status, true) == "Empty" || strings.TrimSpace(mem.SerialNumber) == "" {
			continue
		}
		status := normalizeStatus(mem.Status, true)
@@ -1365,14 +1391,16 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt

	result := make([]ReanimatorStorage, 0, len(storage))
	for _, stor := range storage {
-		// Skip storage without serial number
-		if stor.SerialNumber == "" {
+		if isVirtualLegacyStorageDevice(stor) {
			continue
		}
+		if !shouldExportLegacyStorage(stor) {
+			continue
+		}

		status := inferStorageStatus(stor)
		if strings.TrimSpace(stor.Status) != "" {
-			status = normalizeStatus(stor.Status, false)
+			status = normalizeStatus(stor.Status, !stor.Present)
		}
		meta := buildStatusMeta(
			status,
@@ -1382,6 +1410,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
			stor.ErrorDescription,
			collectedAt,
		)
+		present := stor.Present

		result = append(result, ReanimatorStorage{
			Slot: stor.Slot,
@@ -1392,6 +1421,7 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
			Manufacturer: stor.Manufacturer,
			Firmware:     stor.Firmware,
			Interface:    stor.Interface,
+			Present:      &present,
			RemainingEndurancePct: stor.RemainingEndurancePct,
			Status:                status,
			StatusCheckedAt:       meta.StatusCheckedAt,
@@ -1403,6 +1433,53 @@ func convertStorage(storage []models.Storage, collectedAt string) []ReanimatorSt
	return result
}

+func shouldExportStorageDevice(d models.HardwareDevice) bool {
+	if normalizedSerial(d.SerialNumber) != "" {
+		return true
+	}
+	if strings.TrimSpace(d.Slot) != "" {
+		return true
+	}
+	if hasMeaningfulExporterText(d.Model) {
+		return true
+	}
+	if hasMeaningfulExporterText(d.Type) || hasMeaningfulExporterText(d.Interface) {
+		return true
+	}
+	if d.SizeGB > 0 {
+		return true
+	}
+	return d.Present != nil
+}
+
+func shouldExportLegacyStorage(stor models.Storage) bool {
+	if normalizedSerial(stor.SerialNumber) != "" {
+		return true
+	}
+	if strings.TrimSpace(stor.Slot) != "" {
+		return true
+	}
+	if hasMeaningfulExporterText(stor.Model) {
+		return true
+	}
+	if hasMeaningfulExporterText(stor.Type) || hasMeaningfulExporterText(stor.Interface) {
+		return true
+	}
+	if stor.SizeGB > 0 {
+		return true
+	}
+	return stor.Present
+}
+
+func isVirtualLegacyStorageDevice(stor models.Storage) bool {
+	return isVirtualExportStorageDevice(models.HardwareDevice{
+		Kind:         models.DeviceKindStorage,
+		Slot:         stor.Slot,
+		Model:        stor.Model,
+		Manufacturer: stor.Manufacturer,
+	})
+}
+
// convertPCIeDevices converts PCIe devices, GPUs, and network adapters to Reanimator format
func convertPCIeDevices(hw *models.HardwareConfig, collectedAt string) []ReanimatorPCIe {
	result := make([]ReanimatorPCIe, 0)
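The `shouldExportStorageDevice` / `shouldExportLegacyStorage` pair replaces the old "serial number or nothing" filter with an any-identifying-signal rule. A simplified standalone sketch of that rule (the `meaningful` stand-in only trims whitespace; the real `normalizedSerial` / `hasMeaningfulExporterText` helpers presumably also reject placeholder values):

```go
package main

import (
	"fmt"
	"strings"
)

// meaningful is a simplified stand-in for the exporter's text helpers:
// this sketch treats any non-blank string as meaningful.
func meaningful(s string) bool { return strings.TrimSpace(s) != "" }

// shouldExport mirrors the predicate's shape: keep a storage entry if any
// identifying signal survives — serial, slot, model, type/interface,
// a positive size, or an explicit presence flag.
func shouldExport(serial, slot, model, typ, iface string, sizeGB int, present *bool) bool {
	switch {
	case meaningful(serial), meaningful(slot), meaningful(model),
		meaningful(typ), meaningful(iface):
		return true
	case sizeGB > 0:
		return true
	}
	return present != nil
}

func main() {
	fmt.Println(shouldExport("", "OB02", "", "", "", 0, nil)) // slot alone is enough → true
	fmt.Println(shouldExport("", "", "", "", "", 0, nil))     // nothing identifying → false
}
```

This matches the new test below, where an NVMe drive with an empty serial but a real slot and model is now exported instead of skipped.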
@@ -2169,10 +2246,8 @@ func normalizePCIeDeviceClass(d models.HardwareDevice) string {

 func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
 	switch strings.ToLower(strings.TrimSpace(deviceClass)) {
-	case "", "network", "network controller", "networkcontroller":
+	case "", "network", "network controller", "networkcontroller", "ethernet", "ethernet controller", "ethernetcontroller":
 		return "NetworkController"
-	case "ethernet", "ethernet controller", "ethernetcontroller":
-		return "EthernetController"
 	case "fibre channel", "fibre channel controller", "fibrechannelcontroller", "fc":
 		return "FibreChannelController"
 	case "display", "displaycontroller", "display controller", "vga":
@@ -2193,8 +2268,6 @@ func normalizeLegacyPCIeDeviceClass(deviceClass string) string {
 func normalizeNetworkDeviceClass(portType, model, description string) string {
 	joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{portType, model, description}, " ")))
 	switch {
-	case strings.Contains(joined, "ethernet"):
-		return "EthernetController"
 	case strings.Contains(joined, "fibre channel") || strings.Contains(joined, " fibrechannel") || strings.Contains(joined, "fc "):
 		return "FibreChannelController"
 	default:
@@ -259,6 +259,29 @@ func TestConvertMemory(t *testing.T) {
	}
}

func TestConvertMemory_KeepsInstalledDIMMWithUnknownSize(t *testing.T) {
	memory := []models.MemoryDIMM{
		{
			Slot:         "PROC 1 DIMM 3",
			Present:      true,
			SizeMB:       0,
			Manufacturer: "Hynix",
			PartNumber:   "HMCG88AEBRA115N",
			SerialNumber: "2B5F92C6",
			Status:       "OK",
		},
	}

	result := convertMemory(memory, "2026-03-30T10:00:00Z")

	if len(result) != 1 {
		t.Fatalf("expected 1 inventory-only DIMM, got %d", len(result))
	}
	if result[0].PartNumber != "HMCG88AEBRA115N" || result[0].SerialNumber != "2B5F92C6" || result[0].SizeMB != 0 {
		t.Fatalf("unexpected converted memory: %+v", result[0])
	}
}

func TestConvertToReanimator_CPUSerialIsNotSynthesizedAndSocketIsDeduped(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "cpu-dedupe.json",
@@ -424,20 +447,26 @@ func TestConvertStorage(t *testing.T) {
 			Slot:         "OB02",
 			Type:         "NVMe",
 			Model:        "INTEL SSDPF2KX076T1",
-			SerialNumber: "", // No serial - should be skipped
+			SerialNumber: "",
 			Present:      true,
 		},
 	}

 	result := convertStorage(storage, "2026-02-10T15:30:00Z")

-	if len(result) != 1 {
-		t.Fatalf("expected 1 storage device (skipped one without serial), got %d", len(result))
+	if len(result) != 2 {
+		t.Fatalf("expected both inventory slots to be exported, got %d", len(result))
 	}

 	if result[0].Status != "Unknown" {
 		t.Errorf("expected Unknown status, got %q", result[0].Status)
 	}
+	if result[1].SerialNumber != "" {
+		t.Errorf("expected empty serial for second storage slot, got %q", result[1].SerialNumber)
+	}
+	if result[1].Present == nil || !*result[1].Present {
+		t.Fatalf("expected present=true to be preserved for populated slot without serial")
+	}
 }

 func TestConvertToReanimator_SkipsAMIVirtualStorageDevices(t *testing.T) {
@@ -971,6 +1000,52 @@ func TestConvertToReanimator_StatusFallbackUsesCollectedAt(t *testing.T) {
	}
}

func TestConvertToReanimator_ExportsStorageInventoryWithoutSerial(t *testing.T) {
	collectedAt := time.Date(2026, 4, 1, 9, 0, 0, 0, time.UTC)
	input := &models.AnalysisResult{
		Filename:    "nvme-inventory.json",
		CollectedAt: collectedAt,
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "BOARD-001"},
			Storage: []models.Storage{
				{
					Slot:         "OB01",
					Type:         "NVMe",
					Model:        "PM9A3",
					SerialNumber: "SSD-001",
					Present:      true,
				},
				{
					Slot:    "OB02",
					Type:    "NVMe",
					Model:   "PM9A3",
					Present: true,
				},
				{
					Slot:    "OB03",
					Type:    "NVMe",
					Model:   "PM9A3",
					Present: false,
				},
			},
		},
	}

	out, err := ConvertToReanimator(input)
	if err != nil {
		t.Fatalf("ConvertToReanimator() failed: %v", err)
	}
	if len(out.Hardware.Storage) != 3 {
		t.Fatalf("expected 3 storage entries including inventory slots without serial, got %d", len(out.Hardware.Storage))
	}
	if out.Hardware.Storage[1].Slot != "OB02" || out.Hardware.Storage[1].SerialNumber != "" {
		t.Fatalf("expected OB02 storage slot without serial to survive export, got %#v", out.Hardware.Storage[1])
	}
	if out.Hardware.Storage[2].Present == nil || *out.Hardware.Storage[2].Present {
		t.Fatalf("expected OB03 to preserve present=false, got %#v", out.Hardware.Storage[2])
	}
}

func TestConvertToReanimator_FirmwareExcludesDeviceBoundEntries(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "fw-filter-test.json",
@@ -1658,6 +1733,43 @@ func TestConvertToReanimator_ExportsContractV24Telemetry(t *testing.T) {
	}
}

func TestConvertToReanimator_UnifiesEthernetAndNetworkControllers(t *testing.T) {
	input := &models.AnalysisResult{
		Hardware: &models.HardwareConfig{
			BoardInfo: models.BoardInfo{SerialNumber: "BOARD-123"},
			Devices: []models.HardwareDevice{
				{
					Kind:         models.DeviceKindPCIe,
					Slot:         "PCIe1",
					DeviceClass:  "EthernetController",
					Present:      boolPtr(true),
					SerialNumber: "ETH-001",
				},
				{
					Kind:         models.DeviceKindNetwork,
					Slot:         "NIC1",
					Model:        "Ethernet Adapter",
					Present:      boolPtr(true),
					SerialNumber: "NIC-001",
				},
			},
		},
	}

	out, err := ConvertToReanimator(input)
	if err != nil {
		t.Fatalf("ConvertToReanimator() failed: %v", err)
	}
	if len(out.Hardware.PCIeDevices) != 2 {
		t.Fatalf("expected two pcie-class exports, got %d", len(out.Hardware.PCIeDevices))
	}
	for _, dev := range out.Hardware.PCIeDevices {
		if dev.DeviceClass != "NetworkController" {
			t.Fatalf("expected unified NetworkController class, got %+v", dev)
		}
	}
}

func TestConvertToReanimator_PreservesLegacyStorageAndPSUDetails(t *testing.T) {
	input := &models.AnalysisResult{
		Filename: "legacy-details.json",
63
internal/ingest/service.go
Normal file
@@ -0,0 +1,63 @@
package ingest

import (
	"bytes"
	"fmt"
	"strings"

	"git.mchus.pro/mchus/logpile/internal/collector"
	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

type Service struct{}

type RedfishSourceMetadata struct {
	TargetHost     string
	SourceTimezone string
	Filename       string
}

func NewService() *Service {
	return &Service{}
}

func (s *Service) AnalyzeArchivePayload(filename string, payload []byte) (*models.AnalysisResult, string, error) {
	p := parser.NewBMCParser()
	if err := p.ParseFromReader(bytes.NewReader(payload), filename); err != nil {
		return nil, "", err
	}
	return p.Result(), p.DetectedVendor(), nil
}

func (s *Service) AnalyzeRedfishRawPayloads(rawPayloads map[string]any, meta RedfishSourceMetadata) (*models.AnalysisResult, string, error) {
	result, err := collector.ReplayRedfishFromRawPayloads(rawPayloads, nil)
	if err != nil {
		return nil, "", err
	}
	if result == nil {
		return nil, "", fmt.Errorf("redfish replay returned nil result")
	}
	if strings.TrimSpace(result.Protocol) == "" {
		result.Protocol = "redfish"
	}
	if strings.TrimSpace(result.SourceType) == "" {
		result.SourceType = models.SourceTypeAPI
	}
	if strings.TrimSpace(result.TargetHost) == "" {
		result.TargetHost = strings.TrimSpace(meta.TargetHost)
	}
	if strings.TrimSpace(result.SourceTimezone) == "" {
		result.SourceTimezone = strings.TrimSpace(meta.SourceTimezone)
	}
	if strings.TrimSpace(result.Filename) == "" {
		if strings.TrimSpace(meta.Filename) != "" {
			result.Filename = strings.TrimSpace(meta.Filename)
		} else if target := strings.TrimSpace(result.TargetHost); target != "" {
			result.Filename = "redfish://" + target
		} else {
			result.Filename = "redfish://snapshot"
		}
	}
	return result, "redfish", nil
}
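The Redfish replay path above never overwrites fields the replayed result already carries; it only backfills metadata that came back blank. A minimal standalone sketch of that fill-if-empty pattern (`fillIfEmpty` and the values below are illustrative, not part of LOGPile):

```go
package main

import (
	"fmt"
	"strings"
)

// fillIfEmpty returns the fallback only when the current value is blank,
// so data from the source always wins over request metadata.
func fillIfEmpty(current, fallback string) string {
	if strings.TrimSpace(current) == "" {
		return strings.TrimSpace(fallback)
	}
	return current
}

func main() {
	// Blank field: backfilled from the request metadata.
	fmt.Println(fillIfEmpty("", "10.0.0.5"))
	// Populated field: the replayed value is preserved.
	fmt.Println(fillIfEmpty("bmc.example", "10.0.0.5"))
}
```

The same precedence rule appears once per field in `AnalyzeRedfishRawPayloads`, which keeps replayed snapshots authoritative over caller-supplied hints.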
29
internal/models/memory.go
Normal file
@@ -0,0 +1,29 @@
package models

import "strings"

// HasInventoryIdentity reports whether the DIMM has enough identifying
// inventory data to treat it as a populated module even when size is unknown.
func (m MemoryDIMM) HasInventoryIdentity() bool {
	return strings.TrimSpace(m.SerialNumber) != "" ||
		strings.TrimSpace(m.PartNumber) != "" ||
		strings.TrimSpace(m.Type) != "" ||
		strings.TrimSpace(m.Technology) != "" ||
		strings.TrimSpace(m.Description) != ""
}

// IsInstalledInventory reports whether the DIMM represents an installed module
// that should be kept in canonical inventory and exports.
func (m MemoryDIMM) IsInstalledInventory() bool {
	if !m.Present {
		return false
	}

	status := strings.ToLower(strings.TrimSpace(m.Status))
	switch status {
	case "empty", "absent", "not installed":
		return false
	}

	return m.SizeMB > 0 || m.HasInventoryIdentity()
}
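The decision encoded in memory.go above can be exercised in isolation: a DIMM with an unknown size still counts as installed when it carries identifying data, while empty or absent slots do not. A self-contained sketch using a hypothetical trimmed-down `dimm` struct in place of `models.MemoryDIMM`:

```go
package main

import (
	"fmt"
	"strings"
)

// dimm is a hypothetical stand-in for models.MemoryDIMM with just the
// fields the installed-inventory decision needs.
type dimm struct {
	Present      bool
	SizeMB       int
	Status       string
	PartNumber   string
	SerialNumber string
}

// installed mirrors the IsInstalledInventory logic: present, status not an
// explicit "empty" marker, and either a known size or identifying data.
func installed(d dimm) bool {
	if !d.Present {
		return false
	}
	switch strings.ToLower(strings.TrimSpace(d.Status)) {
	case "empty", "absent", "not installed":
		return false
	}
	return d.SizeMB > 0 ||
		strings.TrimSpace(d.PartNumber) != "" ||
		strings.TrimSpace(d.SerialNumber) != ""
}

func main() {
	unknownSize := dimm{Present: true, SizeMB: 0, PartNumber: "HMCG88AEBRA115N", Status: "OK"}
	emptySlot := dimm{Present: true, Status: "Empty"}
	fmt.Println(installed(unknownSize), installed(emptySlot))
}
```

This is what lets the `TestConvertMemory_KeepsInstalledDIMMWithUnknownSize` case above survive export despite `SizeMB: 0`.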
@@ -14,7 +14,8 @@ type AnalysisResult struct {
 	Protocol       string `json:"protocol,omitempty"`        // redfish | ipmi
 	TargetHost     string `json:"target_host,omitempty"`     // BMC host for live collect
 	SourceTimezone string `json:"source_timezone,omitempty"` // Source timezone/offset used during collection (e.g. +08:00)
-	CollectedAt time.Time `json:"collected_at,omitempty"` // Collection/upload timestamp
+	CollectedAt             time.Time `json:"collected_at,omitempty"`               // Collection/upload timestamp
+	InventoryLastModifiedAt time.Time `json:"inventory_last_modified_at,omitempty"` // Redfish inventory last modified (InventoryData/Status)
 	RawPayloads map[string]any `json:"raw_payloads,omitempty"` // Additional source payloads (e.g. Redfish tree)
 	Events      []Event        `json:"events"`
 	FRU         []FRUInfo      `json:"fru"`
@@ -19,6 +19,7 @@ const maxZipArchiveSize = 50 * 1024 * 1024
 const maxGzipDecompressedSize = 50 * 1024 * 1024

 var supportedArchiveExt = map[string]struct{}{
+	".ahs": {},
 	".gz":  {},
 	".tgz": {},
 	".tar": {},
@@ -45,6 +46,8 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
 	ext := strings.ToLower(filepath.Ext(archivePath))

 	switch ext {
+	case ".ahs":
+		return extractSingleFile(archivePath)
 	case ".gz", ".tgz":
 		return extractTarGz(archivePath)
 	case ".tar", ".sds":
@@ -66,6 +69,8 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
 	ext := strings.ToLower(filepath.Ext(filename))

 	switch ext {
+	case ".ahs":
+		return extractSingleFileFromReader(r, filename)
 	case ".gz", ".tgz":
 		return extractTarGzFromReader(r, filename)
 	case ".tar", ".sds":
@@ -76,6 +76,7 @@ func TestIsSupportedArchiveFilename(t *testing.T) {
 		name string
 		want bool
 	}{
+		{name: "HPE_CZ2D1X0GS3_20260330.ahs", want: true},
 		{name: "dump.tar.gz", want: true},
 		{name: "nvidia-bug-report-1651124000923.log.gz", want: true},
 		{name: "snapshot.zip", want: true},
@@ -124,3 +125,20 @@ func TestExtractArchiveFromReaderSDS(t *testing.T) {
 		t.Fatalf("expected bmc/pack.info, got %q", files[0].Path)
 	}
 }
+
+func TestExtractArchiveFromReaderAHS(t *testing.T) {
+	payload := []byte("ABJRtest")
+	files, err := ExtractArchiveFromReader(bytes.NewReader(payload), "sample.ahs")
+	if err != nil {
+		t.Fatalf("extract ahs from reader: %v", err)
+	}
+	if len(files) != 1 {
+		t.Fatalf("expected 1 extracted file, got %d", len(files))
+	}
+	if files[0].Path != "sample.ahs" {
+		t.Fatalf("expected sample.ahs, got %q", files[0].Path)
+	}
+	if string(files[0].Content) != string(payload) {
+		t.Fatalf("content mismatch")
+	}
+}
601
internal/parser/vendors/easy_bee/parser.go
vendored
Normal file
@@ -0,0 +1,601 @@
package easy_bee

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

const parserVersion = "1.0"

func init() {
	parser.Register(&Parser{})
}

// Parser imports support bundles produced by reanimator-easy-bee.
// These archives embed a ready-to-use hardware snapshot in export/bee-audit.json.
type Parser struct{}

func (p *Parser) Name() string {
	return "Reanimator Easy Bee Parser"
}

func (p *Parser) Vendor() string {
	return "easy_bee"
}

func (p *Parser) Version() string {
	return parserVersion
}

func (p *Parser) Detect(files []parser.ExtractedFile) int {
	confidence := 0
	hasManifest := false
	hasBeeAudit := false
	hasRuntimeHealth := false
	hasTechdump := false
	hasBundlePrefix := false

	for _, f := range files {
		path := strings.ToLower(strings.TrimSpace(f.Path))
		content := strings.ToLower(string(f.Content))

		if !hasBundlePrefix && strings.Contains(path, "bee-support-") {
			hasBundlePrefix = true
			confidence += 5
		}

		if (strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") &&
			strings.Contains(content, "bee_version=") {
			hasManifest = true
			confidence += 35
			if strings.Contains(content, "export_dir=") {
				confidence += 10
			}
		}

		if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
			hasBeeAudit = true
			confidence += 55
		}

		if hasBundlePrefix && (strings.HasSuffix(path, "/export/runtime-health.json") || path == "runtime-health.json") {
			hasRuntimeHealth = true
			confidence += 10
		}

		if hasBundlePrefix && !hasTechdump && strings.Contains(path, "/export/techdump/") {
			hasTechdump = true
			confidence += 10
		}
	}

	if hasManifest && hasBeeAudit {
		return 100
	}
	if hasBeeAudit && (hasRuntimeHealth || hasTechdump) {
		confidence += 10
	}
	if confidence > 100 {
		return 100
	}
	return confidence
}
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	snapshotFile := findSnapshotFile(files)
	if snapshotFile == nil {
		return nil, fmt.Errorf("easy-bee snapshot not found")
	}

	var snapshot beeSnapshot
	if err := json.Unmarshal(snapshotFile.Content, &snapshot); err != nil {
		return nil, fmt.Errorf("decode %s: %w", snapshotFile.Path, err)
	}

	manifest := parseManifest(files)

	result := &models.AnalysisResult{
		SourceType:              strings.TrimSpace(snapshot.SourceType),
		Protocol:                strings.TrimSpace(snapshot.Protocol),
		TargetHost:              firstNonEmpty(snapshot.TargetHost, manifest.Host),
		SourceTimezone:          strings.TrimSpace(snapshot.SourceTimezone),
		CollectedAt:             chooseCollectedAt(snapshot, manifest),
		InventoryLastModifiedAt: snapshot.InventoryLastModifiedAt,
		RawPayloads:             snapshot.RawPayloads,
		Events:                  make([]models.Event, 0),
		FRU:                     append([]models.FRUInfo(nil), snapshot.FRU...),
		Sensors:                 make([]models.SensorReading, 0),
		Hardware: &models.HardwareConfig{
			Firmware:        append([]models.FirmwareInfo(nil), snapshot.Hardware.Firmware...),
			BoardInfo:       snapshot.Hardware.Board,
			Devices:         append([]models.HardwareDevice(nil), snapshot.Hardware.Devices...),
			CPUs:            append([]models.CPU(nil), snapshot.Hardware.CPUs...),
			Memory:          append([]models.MemoryDIMM(nil), snapshot.Hardware.Memory...),
			Storage:         append([]models.Storage(nil), snapshot.Hardware.Storage...),
			Volumes:         append([]models.StorageVolume(nil), snapshot.Hardware.Volumes...),
			PCIeDevices:     normalizePCIeDevices(snapshot.Hardware.PCIeDevices),
			GPUs:            append([]models.GPU(nil), snapshot.Hardware.GPUs...),
			NetworkCards:    append([]models.NIC(nil), snapshot.Hardware.NetworkCards...),
			NetworkAdapters: normalizeNetworkAdapters(snapshot.Hardware.NetworkAdapters),
			PowerSupply:     append([]models.PSU(nil), snapshot.Hardware.PowerSupply...),
		},
	}

	result.Events = append(result.Events, snapshot.Events...)
	result.Events = append(result.Events, convertRuntimeToEvents(snapshot.Runtime, result.CollectedAt)...)
	result.Events = append(result.Events, convertEventLogs(snapshot.Hardware.EventLogs)...)

	result.Sensors = append(result.Sensors, snapshot.Sensors...)
	result.Sensors = append(result.Sensors, flattenSensorGroups(snapshot.Hardware.Sensors)...)

	if len(result.FRU) == 0 {
		if boardFRU, ok := buildBoardFRU(snapshot.Hardware.Board); ok {
			result.FRU = append(result.FRU, boardFRU)
		}
	}

	if result.Hardware == nil || (result.Hardware.BoardInfo.SerialNumber == "" &&
		len(result.Hardware.CPUs) == 0 &&
		len(result.Hardware.Memory) == 0 &&
		len(result.Hardware.Storage) == 0 &&
		len(result.Hardware.PCIeDevices) == 0 &&
		len(result.Hardware.Devices) == 0) {
		return nil, fmt.Errorf("unsupported easy-bee snapshot format")
	}

	return result, nil
}
type beeSnapshot struct {
	SourceType              string                 `json:"source_type,omitempty"`
	Protocol                string                 `json:"protocol,omitempty"`
	TargetHost              string                 `json:"target_host,omitempty"`
	SourceTimezone          string                 `json:"source_timezone,omitempty"`
	CollectedAt             time.Time              `json:"collected_at,omitempty"`
	InventoryLastModifiedAt time.Time              `json:"inventory_last_modified_at,omitempty"`
	RawPayloads             map[string]any         `json:"raw_payloads,omitempty"`
	Events                  []models.Event         `json:"events,omitempty"`
	FRU                     []models.FRUInfo       `json:"fru,omitempty"`
	Sensors                 []models.SensorReading `json:"sensors,omitempty"`
	Hardware                beeHardware            `json:"hardware"`
	Runtime                 beeRuntime             `json:"runtime,omitempty"`
}

type beeHardware struct {
	Board           models.BoardInfo        `json:"board"`
	Firmware        []models.FirmwareInfo   `json:"firmware,omitempty"`
	Devices         []models.HardwareDevice `json:"devices,omitempty"`
	CPUs            []models.CPU            `json:"cpus,omitempty"`
	Memory          []models.MemoryDIMM     `json:"memory,omitempty"`
	Storage         []models.Storage        `json:"storage,omitempty"`
	Volumes         []models.StorageVolume  `json:"volumes,omitempty"`
	PCIeDevices     []models.PCIeDevice     `json:"pcie_devices,omitempty"`
	GPUs            []models.GPU            `json:"gpus,omitempty"`
	NetworkCards    []models.NIC            `json:"network_cards,omitempty"`
	NetworkAdapters []models.NetworkAdapter `json:"network_adapters,omitempty"`
	PowerSupply     []models.PSU            `json:"power_supplies,omitempty"`
	Sensors         beeSensorGroups         `json:"sensors,omitempty"`
	EventLogs       []beeEventLog           `json:"event_logs,omitempty"`
}

type beeSensorGroups struct {
	Fans         []beeFanSensor         `json:"fans,omitempty"`
	Power        []beePowerSensor       `json:"power,omitempty"`
	Temperatures []beeTemperatureSensor `json:"temperatures,omitempty"`
	Other        []beeOtherSensor       `json:"other,omitempty"`
}

type beeFanSensor struct {
	Name     string `json:"name"`
	Location string `json:"location,omitempty"`
	RPM      int    `json:"rpm,omitempty"`
	Status   string `json:"status,omitempty"`
}

type beePowerSensor struct {
	Name     string  `json:"name"`
	Location string  `json:"location,omitempty"`
	VoltageV float64 `json:"voltage_v,omitempty"`
	CurrentA float64 `json:"current_a,omitempty"`
	PowerW   float64 `json:"power_w,omitempty"`
	Status   string  `json:"status,omitempty"`
}

type beeTemperatureSensor struct {
	Name                     string  `json:"name"`
	Location                 string  `json:"location,omitempty"`
	Celsius                  float64 `json:"celsius,omitempty"`
	ThresholdWarningCelsius  float64 `json:"threshold_warning_celsius,omitempty"`
	ThresholdCriticalCelsius float64 `json:"threshold_critical_celsius,omitempty"`
	Status                   string  `json:"status,omitempty"`
}

type beeOtherSensor struct {
	Name     string  `json:"name"`
	Location string  `json:"location,omitempty"`
	Value    float64 `json:"value,omitempty"`
	Unit     string  `json:"unit,omitempty"`
	Status   string  `json:"status,omitempty"`
}

type beeRuntime struct {
	Status        string             `json:"status,omitempty"`
	CheckedAt     time.Time          `json:"checked_at,omitempty"`
	NetworkStatus string             `json:"network_status,omitempty"`
	Issues        []beeRuntimeIssue  `json:"issues,omitempty"`
	Services      []beeRuntimeStatus `json:"services,omitempty"`
	Interfaces    []beeInterface     `json:"interfaces,omitempty"`
}

type beeRuntimeIssue struct {
	Code        string `json:"code,omitempty"`
	Severity    string `json:"severity,omitempty"`
	Description string `json:"description,omitempty"`
}

type beeRuntimeStatus struct {
	Name   string `json:"name,omitempty"`
	Status string `json:"status,omitempty"`
}

type beeInterface struct {
	Name    string   `json:"name,omitempty"`
	State   string   `json:"state,omitempty"`
	IPv4    []string `json:"ipv4,omitempty"`
	Outcome string   `json:"outcome,omitempty"`
}

type beeEventLog struct {
	Source     string         `json:"source,omitempty"`
	EventTime  string         `json:"event_time,omitempty"`
	Severity   string         `json:"severity,omitempty"`
	MessageID  string         `json:"message_id,omitempty"`
	Message    string         `json:"message,omitempty"`
	RawPayload map[string]any `json:"raw_payload,omitempty"`
}

type manifestMetadata struct {
	Host           string
	GeneratedAtUTC time.Time
}
func findSnapshotFile(files []parser.ExtractedFile) *parser.ExtractedFile {
	for i := range files {
		path := strings.ToLower(strings.TrimSpace(files[i].Path))
		if strings.HasSuffix(path, "/export/bee-audit.json") || path == "bee-audit.json" {
			return &files[i]
		}
	}
	for i := range files {
		path := strings.ToLower(strings.TrimSpace(files[i].Path))
		if strings.HasSuffix(path, ".json") && strings.Contains(path, "reanimator") {
			return &files[i]
		}
	}
	return nil
}

func parseManifest(files []parser.ExtractedFile) manifestMetadata {
	var meta manifestMetadata

	for _, f := range files {
		path := strings.ToLower(strings.TrimSpace(f.Path))
		if !(strings.HasSuffix(path, "/manifest.txt") || path == "manifest.txt") {
			continue
		}

		lines := strings.Split(string(f.Content), "\n")
		for _, line := range lines {
			key, value, ok := strings.Cut(strings.TrimSpace(line), "=")
			if !ok {
				continue
			}
			switch strings.TrimSpace(key) {
			case "host":
				meta.Host = strings.TrimSpace(value)
			case "generated_at_utc":
				if ts, err := time.Parse(time.RFC3339, strings.TrimSpace(value)); err == nil {
					meta.GeneratedAtUTC = ts.UTC()
				}
			}
		}
		break
	}

	return meta
}

func chooseCollectedAt(snapshot beeSnapshot, manifest manifestMetadata) time.Time {
	switch {
	case !snapshot.CollectedAt.IsZero():
		return snapshot.CollectedAt.UTC()
	case !snapshot.Runtime.CheckedAt.IsZero():
		return snapshot.Runtime.CheckedAt.UTC()
	case !manifest.GeneratedAtUTC.IsZero():
		return manifest.GeneratedAtUTC.UTC()
	default:
		return time.Time{}
	}
}
func convertRuntimeToEvents(runtime beeRuntime, fallback time.Time) []models.Event {
	events := make([]models.Event, 0)
	ts := runtime.CheckedAt
	if ts.IsZero() {
		ts = fallback
	}

	if status := strings.TrimSpace(runtime.Status); status != "" {
		desc := "Bee runtime status: " + status
		if networkStatus := strings.TrimSpace(runtime.NetworkStatus); networkStatus != "" {
			desc += " (network: " + networkStatus + ")"
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "Bee Runtime",
			EventType:   "Runtime Status",
			Severity:    mapSeverity(status),
			Description: desc,
		})
	}

	for _, issue := range runtime.Issues {
		desc := strings.TrimSpace(issue.Description)
		if desc == "" {
			desc = "Bee runtime issue"
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "Bee Runtime",
			EventType:   "Runtime Issue",
			Severity:    mapSeverity(issue.Severity),
			Description: desc,
			RawData:     strings.TrimSpace(issue.Code),
		})
	}

	for _, svc := range runtime.Services {
		status := strings.TrimSpace(svc.Status)
		if status == "" || strings.EqualFold(status, "active") {
			continue
		}
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "systemd",
			EventType:   "Service Status",
			Severity:    mapSeverity(status),
			Description: fmt.Sprintf("%s is %s", strings.TrimSpace(svc.Name), status),
		})
	}

	for _, iface := range runtime.Interfaces {
		state := strings.TrimSpace(iface.State)
		outcome := strings.TrimSpace(iface.Outcome)
		if state == "" && outcome == "" {
			continue
		}
		if strings.EqualFold(state, "up") && strings.EqualFold(outcome, "lease_acquired") {
			continue
		}
		desc := fmt.Sprintf("interface %s state=%s outcome=%s", strings.TrimSpace(iface.Name), state, outcome)
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      "network",
			EventType:   "Interface Status",
			Severity:    models.SeverityWarning,
			Description: strings.TrimSpace(desc),
		})
	}

	return events
}

func convertEventLogs(items []beeEventLog) []models.Event {
	events := make([]models.Event, 0, len(items))
	for _, item := range items {
		message := strings.TrimSpace(item.Message)
		if message == "" {
			continue
		}
		ts := parseEventTime(item.EventTime)
		rawData := strings.TrimSpace(item.MessageID)
		events = append(events, models.Event{
			Timestamp:   ts,
			Source:      firstNonEmpty(strings.TrimSpace(item.Source), "Reanimator"),
			EventType:   "Event Log",
			Severity:    mapSeverity(item.Severity),
			Description: message,
			RawData:     rawData,
		})
	}
	return events
}
func parseEventTime(raw string) time.Time {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return time.Time{}
	}
	layouts := []string{time.RFC3339Nano, time.RFC3339}
	for _, layout := range layouts {
		if ts, err := time.Parse(layout, raw); err == nil {
			return ts.UTC()
		}
	}
	return time.Time{}
}

func flattenSensorGroups(groups beeSensorGroups) []models.SensorReading {
	result := make([]models.SensorReading, 0, len(groups.Fans)+len(groups.Power)+len(groups.Temperatures)+len(groups.Other))

	for _, fan := range groups.Fans {
		result = append(result, models.SensorReading{
			Name:   sensorName(fan.Name, fan.Location),
			Type:   "fan",
			Value:  float64(fan.RPM),
			Unit:   "RPM",
			Status: strings.TrimSpace(fan.Status),
		})
	}

	for _, power := range groups.Power {
		name := sensorName(power.Name, power.Location)
		status := strings.TrimSpace(power.Status)
		if power.PowerW != 0 {
			result = append(result, models.SensorReading{
				Name:   name,
				Type:   "power",
				Value:  power.PowerW,
				Unit:   "W",
				Status: status,
			})
		}
		if power.VoltageV != 0 {
			result = append(result, models.SensorReading{
				Name:   name + " Voltage",
				Type:   "voltage",
				Value:  power.VoltageV,
				Unit:   "V",
				Status: status,
			})
		}
		if power.CurrentA != 0 {
			result = append(result, models.SensorReading{
				Name:   name + " Current",
				Type:   "current",
				Value:  power.CurrentA,
				Unit:   "A",
				Status: status,
			})
		}
	}

	for _, temp := range groups.Temperatures {
		result = append(result, models.SensorReading{
			Name:   sensorName(temp.Name, temp.Location),
			Type:   "temperature",
			Value:  temp.Celsius,
			Unit:   "C",
			Status: strings.TrimSpace(temp.Status),
		})
	}

	for _, other := range groups.Other {
		result = append(result, models.SensorReading{
			Name:   sensorName(other.Name, other.Location),
			Type:   "other",
			Value:  other.Value,
			Unit:   strings.TrimSpace(other.Unit),
			Status: strings.TrimSpace(other.Status),
		})
	}

	return result
}
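The timestamp handling above tries RFC 3339 with nanoseconds first, then plain RFC 3339, and falls back to the zero time so callers can test with `IsZero()`. A self-contained sketch of that try-layouts-in-order pattern (the function name here is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// parseFlexibleTime attempts each layout in order and normalizes the first
// match to UTC; anything unparseable yields the zero time.
func parseFlexibleTime(raw string) time.Time {
	for _, layout := range []string{time.RFC3339Nano, time.RFC3339} {
		if ts, err := time.Parse(layout, raw); err == nil {
			return ts.UTC()
		}
	}
	return time.Time{}
}

func main() {
	// Fractional seconds are accepted by the RFC3339Nano layout.
	fmt.Println(parseFlexibleTime("2026-03-25T16:20:30.5Z").IsZero())
	// Garbage input degrades to the zero time instead of an error.
	fmt.Println(parseFlexibleTime("not-a-time").IsZero())
}
```

Returning the zero time instead of an error keeps event conversion total: a log entry with a bad timestamp is still imported, just without a usable time.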
func sensorName(name, location string) string {
	name = strings.TrimSpace(name)
	location = strings.TrimSpace(location)
	if name == "" {
		return location
	}
	if location == "" {
		return name
	}
	return name + " [" + location + "]"
}

func normalizePCIeDevices(items []models.PCIeDevice) []models.PCIeDevice {
	out := append([]models.PCIeDevice(nil), items...)
	for i := range out {
		slot := strings.TrimSpace(out[i].Slot)
		if out[i].BDF == "" && looksLikeBDF(slot) {
			out[i].BDF = slot
		}
		if out[i].Slot == "" && out[i].BDF != "" {
			out[i].Slot = out[i].BDF
		}
	}
	return out
}

func normalizeNetworkAdapters(items []models.NetworkAdapter) []models.NetworkAdapter {
	out := append([]models.NetworkAdapter(nil), items...)
	for i := range out {
		slot := strings.TrimSpace(out[i].Slot)
		if out[i].BDF == "" && looksLikeBDF(slot) {
			out[i].BDF = slot
		}
		if out[i].Slot == "" && out[i].BDF != "" {
			out[i].Slot = out[i].BDF
		}
	}
	return out
}
func looksLikeBDF(value string) bool {
	value = strings.TrimSpace(value)
	if len(value) != len("0000:00:00.0") {
		return false
	}
	for i, r := range value {
		switch i {
		case 4, 7:
			if r != ':' {
				return false
			}
		case 10:
			if r != '.' {
				return false
			}
		default:
			if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
				return false
			}
		}
	}
	return true
}
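The validator above accepts only the fixed-width extended PCI address form `dddd:bb:dd.f` (hex domain, bus, device, function), which is what lets the slot-to-BDF backfill in `normalizePCIeDevices` distinguish real addresses from free-form slot labels. A standalone sketch of the same positional check (`isBDF` is an illustrative name):

```go
package main

import "fmt"

// isBDF validates the 12-character extended BDF form "dddd:bb:dd.f":
// colons at offsets 4 and 7, a dot at offset 10, hex digits everywhere else.
func isBDF(s string) bool {
	if len(s) != len("0000:00:00.0") {
		return false
	}
	for i, r := range s {
		switch i {
		case 4, 7:
			if r != ':' {
				return false
			}
		case 10:
			if r != '.' {
				return false
			}
		default:
			if !((r >= '0' && r <= '9') || (r >= 'a' && r <= 'f') || (r >= 'A' && r <= 'F')) {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(isBDF("0000:3b:00.0"), isBDF("PCIe Slot 1"))
}
```

Because the length check runs first, the per-rune switch never indexes past the expected positions, and labels like "PCIe Slot 1" fail fast.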
func buildBoardFRU(board models.BoardInfo) (models.FRUInfo, bool) {
	if strings.TrimSpace(board.SerialNumber) == "" &&
		strings.TrimSpace(board.Manufacturer) == "" &&
		strings.TrimSpace(board.ProductName) == "" &&
		strings.TrimSpace(board.PartNumber) == "" {
		return models.FRUInfo{}, false
	}

	return models.FRUInfo{
		Description:  "System Board",
		Manufacturer: strings.TrimSpace(board.Manufacturer),
		ProductName:  strings.TrimSpace(board.ProductName),
		SerialNumber: strings.TrimSpace(board.SerialNumber),
		PartNumber:   strings.TrimSpace(board.PartNumber),
	}, true
}

func mapSeverity(raw string) models.Severity {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "critical", "crit", "error", "failed", "failure":
		return models.SeverityCritical
	case "warning", "warn", "partial", "degraded", "inactive", "activating", "deactivating":
		return models.SeverityWarning
	default:
		return models.SeverityInfo
	}
}

func firstNonEmpty(values ...string) string {
	for _, value := range values {
		value = strings.TrimSpace(value)
		if value != "" {
			return value
		}
	}
	return ""
}
internal/parser/vendors/easy_bee/parser_test.go (vendored, new file, 219 lines)
@@ -0,0 +1,219 @@
package easy_bee

import (
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectBeeSupportArchive(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "bee-support-debian-20260325-162030/manifest.txt",
			Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
		},
		{
			Path:    "bee-support-debian-20260325-162030/export/bee-audit.json",
			Content: []byte(`{"hardware":{"board":{"serial_number":"SN-BEE-001"}}}`),
		},
		{
			Path:    "bee-support-debian-20260325-162030/export/runtime-health.json",
			Content: []byte(`{"status":"PARTIAL"}`),
		},
	}

	if got := p.Detect(files); got < 90 {
		t.Fatalf("expected high confidence detect score, got %d", got)
	}
}

func TestDetectRejectsNonBeeArchive(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "random/manifest.txt",
			Content: []byte("host=test\n"),
		},
		{
			Path:    "random/export/runtime-health.json",
			Content: []byte(`{"status":"OK"}`),
		},
	}

	if got := p.Detect(files); got != 0 {
		t.Fatalf("expected detect score 0, got %d", got)
	}
}

func TestParseBeeAuditSnapshot(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path:    "bee-support-debian-20260325-162030/manifest.txt",
			Content: []byte("bee_version=1.0.0\nhost=debian\ngenerated_at_utc=2026-03-25T16:20:30Z\nexport_dir=/appdata/bee/export\n"),
		},
		{
			Path: "bee-support-debian-20260325-162030/export/bee-audit.json",
			Content: []byte(`{
  "source_type": "manual",
  "target_host": "debian",
  "collected_at": "2026-03-25T16:08:09Z",
  "runtime": {
    "status": "PARTIAL",
    "checked_at": "2026-03-25T16:07:56Z",
    "network_status": "OK",
    "issues": [
      {
        "code": "nvidia_kernel_module_missing",
        "severity": "warning",
        "description": "NVIDIA kernel module is not loaded."
      }
    ],
    "services": [
      {
        "name": "bee-web",
        "status": "inactive"
      }
    ]
  },
  "hardware": {
    "board": {
      "manufacturer": "Supermicro",
      "product_name": "AS-4124GQ-TNMI",
      "serial_number": "S490387X4418273",
      "part_number": "H12DGQ-NT6",
      "uuid": "d868ae00-a61f-11ee-8000-7cc255e10309"
    },
    "firmware": [
      {
        "device_name": "BIOS",
        "version": "2.8"
      }
    ],
    "cpus": [
      {
        "status": "OK",
        "status_checked_at": "2026-03-25T16:08:09Z",
        "socket": 1,
        "model": "AMD EPYC 7763 64-Core Processor",
        "cores": 64,
        "threads": 128,
        "frequency_mhz": 2450,
        "max_frequency_mhz": 3525
      }
    ],
    "memory": [
      {
        "status": "OK",
        "status_checked_at": "2026-03-25T16:08:09Z",
        "slot": "P1-DIMMA1",
        "location": "P0_Node0_Channel0_Dimm0",
        "present": true,
        "size_mb": 32768,
        "type": "DDR4",
        "max_speed_mhz": 3200,
        "current_speed_mhz": 2933,
        "manufacturer": "SK Hynix",
        "serial_number": "80AD01224887286666",
        "part_number": "HMA84GR7DJR4N-XN"
      }
    ],
    "storage": [
      {
        "status": "Unknown",
        "status_checked_at": "2026-03-25T16:08:09Z",
        "slot": "nvme0n1",
        "type": "NVMe",
        "model": "KCD6XLUL960G",
        "serial_number": "2470A00XT5M8",
        "interface": "NVMe",
        "present": true
      }
    ],
    "pcie_devices": [
      {
        "status": "OK",
        "status_checked_at": "2026-03-25T16:08:09Z",
        "slot": "0000:05:00.0",
        "vendor_id": 5555,
        "device_id": 4123,
        "device_class": "EthernetController",
        "manufacturer": "Mellanox Technologies",
        "model": "MT28908 Family [ConnectX-6]",
        "link_width": 16,
        "link_speed": "Gen4",
        "max_link_width": 16,
        "max_link_speed": "Gen4",
        "mac_addresses": ["94:6d:ae:9a:75:4a"],
        "present": true
      }
    ],
    "sensors": {
      "power": [
        {
          "name": "PPT",
          "location": "amdgpu-pci-1100",
          "power_w": 95
        }
      ],
      "temperatures": [
        {
          "name": "Composite",
          "location": "nvme-pci-0600",
          "celsius": 28.85,
          "threshold_warning_celsius": 72.85,
          "threshold_critical_celsius": 81.85,
          "status": "OK"
        }
      ]
    }
  }
}`),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}

	if result.Hardware == nil {
		t.Fatal("expected hardware to be populated")
	}
	if result.TargetHost != "debian" {
		t.Fatalf("expected target host debian, got %q", result.TargetHost)
	}
	wantCollectedAt := time.Date(2026, 3, 25, 16, 8, 9, 0, time.UTC)
	if !result.CollectedAt.Equal(wantCollectedAt) {
		t.Fatalf("expected collected_at %s, got %s", wantCollectedAt, result.CollectedAt)
	}
	if result.Hardware.BoardInfo.SerialNumber != "S490387X4418273" {
		t.Fatalf("unexpected board serial %q", result.Hardware.BoardInfo.SerialNumber)
	}
	if len(result.Hardware.CPUs) != 1 {
		t.Fatalf("expected 1 cpu, got %d", len(result.Hardware.CPUs))
	}
	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected 1 dimm, got %d", len(result.Hardware.Memory))
	}
	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected 1 storage device, got %d", len(result.Hardware.Storage))
	}
	if len(result.Hardware.PCIeDevices) != 1 {
		t.Fatalf("expected 1 pcie device, got %d", len(result.Hardware.PCIeDevices))
	}
	if result.Hardware.PCIeDevices[0].BDF != "0000:05:00.0" {
		t.Fatalf("expected BDF to be normalized from slot, got %q", result.Hardware.PCIeDevices[0].BDF)
	}
	if len(result.Sensors) != 2 {
		t.Fatalf("expected 2 flattened sensors, got %d", len(result.Sensors))
	}
	if len(result.Events) < 3 {
		t.Fatalf("expected runtime events to be created, got %d", len(result.Events))
	}
	if len(result.FRU) == 0 {
		t.Fatal("expected board FRU fallback to be populated")
	}
}
internal/parser/vendors/hpe_ilo_ahs/parser.go (vendored, new file, 1706 lines)
File diff suppressed because it is too large.
internal/parser/vendors/hpe_ilo_ahs/parser_test.go (vendored, new file, 316 lines)
@@ -0,0 +1,316 @@
package hpe_ilo_ahs

import (
	"bytes"
	"compress/gzip"
	"encoding/binary"
	"os"
	"path/filepath"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectAHS(t *testing.T) {
	p := &Parser{}
	score := p.Detect([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: makeAHSArchive(t, []ahsTestEntry{{Name: "CUST_INFO.DAT", Payload: []byte("x")}}),
	}})
	if score < 80 {
		t.Fatalf("expected high confidence detect, got %d", score)
	}
}

func TestParseAHSInventory(t *testing.T) {
	p := &Parser{}
	content := makeAHSArchive(t, []ahsTestEntry{
		{Name: "CUST_INFO.DAT", Payload: make([]byte, 16)},
		{Name: "0000088-2026-03-30.zbb", Payload: gzipBytes(t, []byte(sampleInventoryBlob()))},
		{Name: "bcert.pkg", Payload: []byte(sampleBCertBlob())},
	})

	result, err := p.Parse([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: content,
	}})
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	board := result.Hardware.BoardInfo
	if board.Manufacturer != "HPE" {
		t.Fatalf("unexpected board manufacturer: %q", board.Manufacturer)
	}
	if board.ProductName != "ProLiant DL380 Gen11" {
		t.Fatalf("unexpected board product: %q", board.ProductName)
	}
	if board.SerialNumber != "CZ2D1X0GS3" {
		t.Fatalf("unexpected board serial: %q", board.SerialNumber)
	}
	if board.PartNumber != "P52560-421" {
		t.Fatalf("unexpected board part number: %q", board.PartNumber)
	}

	if len(result.Hardware.CPUs) != 1 || result.Hardware.CPUs[0].Model != "Intel(R) Xeon(R) Gold 6444Y" {
		t.Fatalf("unexpected CPUs: %+v", result.Hardware.CPUs)
	}
	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected one DIMM, got %d", len(result.Hardware.Memory))
	}
	if result.Hardware.Memory[0].PartNumber != "HMCG88AEBRA115N" {
		t.Fatalf("unexpected DIMM part number: %q", result.Hardware.Memory[0].PartNumber)
	}

	if len(result.Hardware.NetworkAdapters) != 2 {
		t.Fatalf("expected two network adapters, got %d", len(result.Hardware.NetworkAdapters))
	}
	if len(result.Hardware.PowerSupply) != 1 {
		t.Fatalf("expected one PSU, got %d", len(result.Hardware.PowerSupply))
	}
	if result.Hardware.PowerSupply[0].SerialNumber != "5XUWB0C4DJG4BV" {
		t.Fatalf("unexpected PSU serial: %q", result.Hardware.PowerSupply[0].SerialNumber)
	}
	if result.Hardware.PowerSupply[0].Firmware != "2.00" {
		t.Fatalf("unexpected PSU firmware: %q", result.Hardware.PowerSupply[0].Firmware)
	}

	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected one physical drive, got %d", len(result.Hardware.Storage))
	}
	drive := result.Hardware.Storage[0]
	if drive.Model != "SAMSUNGMZ7L3480HCHQ-00A07" {
		t.Fatalf("unexpected drive model: %q", drive.Model)
	}
	if drive.SerialNumber != "S664NC0Y502720" {
		t.Fatalf("unexpected drive serial: %q", drive.SerialNumber)
	}
	if drive.SizeGB != 480 {
		t.Fatalf("unexpected drive size: %d", drive.SizeGB)
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Fatalf("expected firmware inventory")
	}
	foundILO := false
	foundControllerFW := false
	foundNICFW := false
	foundBackplaneFW := false
	for _, item := range result.Hardware.Firmware {
		if item.DeviceName == "iLO 6" && item.Version == "v1.63p20" {
			foundILO = true
		}
		if item.DeviceName == "HPE MR408i-o Gen11" && item.Version == "52.26.3-5379" {
			foundControllerFW = true
		}
		if item.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && item.Version == "20.28.41" {
			foundNICFW = true
		}
		if item.DeviceName == "8 SFF 24G x1NVMe/SAS UBM3 BC BP" && item.Version == "1.24" {
			foundBackplaneFW = true
		}
	}
	if !foundILO {
		t.Fatalf("expected iLO firmware entry")
	}
	if !foundControllerFW {
		t.Fatalf("expected controller firmware entry")
	}
	if !foundNICFW {
		t.Fatalf("expected broadcom firmware entry")
	}
	if !foundBackplaneFW {
		t.Fatalf("expected backplane firmware entry")
	}

	broadcomFound := false
	backplaneFound := false
	for _, nic := range result.Hardware.NetworkAdapters {
		if nic.SerialNumber == "1CH0150001" && nic.Firmware == "20.28.41" {
			broadcomFound = true
		}
	}
	for _, dev := range result.Hardware.Devices {
		if dev.DeviceClass == "storage_backplane" && dev.Firmware == "1.24" {
			backplaneFound = true
		}
	}
	if !broadcomFound {
		t.Fatalf("expected broadcom adapter firmware to be enriched")
	}
	if !backplaneFound {
		t.Fatalf("expected backplane canonical device")
	}

	if len(result.Hardware.Devices) < 6 {
		t.Fatalf("expected canonical devices, got %d", len(result.Hardware.Devices))
	}
	if len(result.Events) == 0 {
		t.Fatalf("expected parsed events")
	}
}

func TestParseExampleAHS(t *testing.T) {
	path := filepath.Join("..", "..", "..", "..", "example", "HPE_CZ2D1X0GS3_20260330.ahs")
	content, err := os.ReadFile(path)
	if err != nil {
		t.Skipf("example fixture unavailable: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse([]parser.ExtractedFile{{
		Path:    filepath.Base(path),
		Content: content,
	}})
	if err != nil {
		t.Fatalf("parse example failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	board := result.Hardware.BoardInfo
	if board.ProductName != "ProLiant DL380 Gen11" {
		t.Fatalf("unexpected board product: %q", board.ProductName)
	}
	if board.SerialNumber != "CZ2D1X0GS3" {
		t.Fatalf("unexpected board serial: %q", board.SerialNumber)
	}

	if len(result.Hardware.Storage) < 2 {
		t.Fatalf("expected at least two drives, got %d", len(result.Hardware.Storage))
	}
	if len(result.Hardware.PowerSupply) != 2 {
		t.Fatalf("expected exactly two PSUs, got %d: %+v", len(result.Hardware.PowerSupply), result.Hardware.PowerSupply)
	}

	foundController := false
	foundBackplaneFW := false
	foundNICFW := false
	for _, device := range result.Hardware.Devices {
		if device.Model == "HPE MR408i-o Gen11" && device.SerialNumber == "PXSFQ0BBIJY3B3" {
			foundController = true
		}
		if device.DeviceClass == "storage_backplane" && device.Firmware == "1.24" {
			foundBackplaneFW = true
		}
	}
	if !foundController {
		t.Fatalf("expected MR408i-o controller in canonical devices")
	}
	for _, fw := range result.Hardware.Firmware {
		if fw.DeviceName == "BCM 5719 1Gb 4p BASE-T OCP Adptr" && fw.Version == "20.28.41" {
			foundNICFW = true
		}
	}
	if !foundBackplaneFW {
		t.Fatalf("expected backplane device in canonical devices")
	}
	if !foundNICFW {
		t.Fatalf("expected broadcom firmware from bcert.pkg lockdown")
	}
}

type ahsTestEntry struct {
	Name    string
	Payload []byte
	Flag    uint32
}

func makeAHSArchive(t *testing.T, entries []ahsTestEntry) []byte {
	t.Helper()

	var buf bytes.Buffer
	for _, entry := range entries {
		header := make([]byte, ahsHeaderSize)
		copy(header[:4], []byte("ABJR"))
		binary.LittleEndian.PutUint16(header[4:6], 0x0300)
		binary.LittleEndian.PutUint16(header[6:8], 0x0002)
		binary.LittleEndian.PutUint32(header[8:12], uint32(len(entry.Payload)))
		flag := entry.Flag
		if flag == 0 {
			flag = 0x80000002
			if len(entry.Payload) >= 2 && entry.Payload[0] == 0x1f && entry.Payload[1] == 0x8b {
				flag = 0x80000001
			}
		}
		binary.LittleEndian.PutUint32(header[16:20], flag)
		copy(header[20:52], []byte(entry.Name))
		buf.Write(header)
		buf.Write(entry.Payload)
	}
	return buf.Bytes()
}

func gzipBytes(t *testing.T, payload []byte) []byte {
	t.Helper()

	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(payload); err != nil {
		t.Fatalf("gzip payload: %v", err)
	}
	if err := zw.Close(); err != nil {
		t.Fatalf("close gzip writer: %v", err)
	}
	return buf.Bytes()
}

func sampleInventoryBlob() string {
	return stringsJoin(
		"iLO 6 v1.63p20 built on Sep 13 2024",
		"HPE",
		"ProLiant DL380 Gen11",
		"CZ2D1X0GS3",
		"P52560-421",
		"Proc 1",
		"Intel(R) Corporation",
		"Intel(R) Xeon(R) Gold 6444Y",
		"PROC 1 DIMM 3",
		"Hynix",
		"HMCG88AEBRA115N",
		"2B5F92C6",
		"Power Supply 1",
		"5XUWB0C4DJG4BV",
		"P03178-B21",
		"PciRoot(0x1)/Pci(0x5,0x0)/Pci(0x0,0x0)",
		"NIC.Slot.1.1",
		"Network Controller",
		"Slot 1",
		"MCX512A-ACAT",
		"MT2230478382",
		"PciRoot(0x3)/Pci(0x1,0x0)/Pci(0x0,0x0)",
		"OCP.Slot.15.1",
		"Broadcom NetXtreme Gigabit Ethernet - NIC",
		"OCP Slot 15",
		"P51183-001",
		"1CH0150001",
		"20.28.41",
		"System ROM",
		"v2.22 (06/19/2024)",
		"03/30/2026 09:47:33",
		"iLO network link down.",
		`{"@odata.id":"/redfish/v1/Systems/1/Storage/DE00A000/Controllers/0","@odata.type":"#StorageController.v1_7_0.StorageController","Id":"0","Name":"HPE MR408i-o Gen11","FirmwareVersion":"52.26.3-5379","Manufacturer":"HPE","Model":"HPE MR408i-o Gen11","PartNumber":"P58543-001","SKU":"P58335-B21","SerialNumber":"PXSFQ0BBIJY3B3","Status":{"State":"Enabled","Health":"OK"},"Location":{"PartLocation":{"ServiceLabel":"Slot=14","LocationType":"Slot","LocationOrdinalValue":14}},"PCIeInterface":{"PCIeType":"Gen4","LanesInUse":8}}`,
		`{"@odata.id":"/redfish/v1/Fabrics/DE00A000","@odata.type":"#Fabric.v1_3_0.Fabric","Id":"DE00A000","Name":"8 SFF 24G x1NVMe/SAS UBM3 BC BP","FabricType":"MultiProtocol"}`,
		`{"@odata.id":"/redfish/v1/Fabrics/DE00A000/Switches/1","@odata.type":"#Switch.v1_9_1.Switch","Id":"1","Name":"Direct Attached","Model":"UBM3","FirmwareVersion":"1.24","SupportedProtocols":["SAS","SATA","NVMe"],"SwitchType":"MultiProtocol","Status":{"State":"Enabled","Health":"OK"}}`,
		`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/0","@odata.type":"#Drive.v1_17_0.Drive","Id":"0","Name":"480GB 6G SATA SSD","Status":{"State":"StandbyOffline","Health":"OK"},"PhysicalLocation":{"PartLocation":{"ServiceLabel":"Slot=14:Port=1:Box=3:Bay=1","LocationType":"Bay","LocationOrdinalValue":1}},"CapacityBytes":480103981056,"MediaType":"SSD","Model":"SAMSUNGMZ7L3480HCHQ-00A07","Protocol":"SATA","Revision":"JXTC604Q","SerialNumber":"S664NC0Y502720","PredictedMediaLifeLeftPercent":100}`,
		`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/64515","@odata.type":"#Drive.v1_17_0.Drive","Id":"64515","Name":"Empty Bay","Status":{"State":"Absent","Health":"OK"}}`,
	)
}

func sampleBCertBlob() string {
	return `<BC><MfgRecord><PowerSupplySlot id="0"><Present>Yes</Present><SerialNumber>5XUWB0C4DJG4BV</SerialNumber><FirmwareVersion>2.00</FirmwareVersion><SparePartNumber>P44412-001</SparePartNumber></PowerSupplySlot><FirmwareLockdown><SystemProgrammableLogicDevice>0x12</SystemProgrammableLogicDevice><ServerPlatformServicesSPSFirmware>6.1.4.47</ServerPlatformServicesSPSFirmware><STMicroGen11TPM>1.512</STMicroGen11TPM><HPEMR408i-oGen11>52.26.3-5379</HPEMR408i-oGen11><UBM3>UBM3/1.24</UBM3><BCM57191Gb4pBASE-TOCP3>20.28.41</BCM57191Gb4pBASE-TOCP3></FirmwareLockdown></MfgRecord></BC>`
}

func stringsJoin(parts ...string) string {
	return string(bytes.Join(func() [][]byte {
		out := make([][]byte, 0, len(parts))
		for _, part := range parts {
			out = append(out, []byte(part))
		}
		return out
	}(), []byte{0}))
}
internal/parser/vendors/vendors.go (vendored, +3 lines)
@@ -5,11 +5,14 @@ package vendors
import (
	// Import vendor modules to trigger their init() registration
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/easy_bee"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/h3c"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe_ilo_ahs"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xfusion"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"

	// Generic fallback parser (must be last for lowest priority)
internal/parser/vendors/xfusion/hardware.go (vendored, new file, 1081 lines)
File diff suppressed because it is too large.
internal/parser/vendors/xfusion/parser.go (vendored, new file, 157 lines)
@@ -0,0 +1,157 @@
// Package xfusion provides parser for xFusion iBMC diagnostic dump archives.
// Tested with: xFusion G5500 V7 iBMC dump (tar.gz format, exported via iBMC UI)
//
// Archive structure: dump_info/AppDump/... and dump_info/LogDump/...
//
// IMPORTANT: Increment parserVersion when modifying parser logic!
package xfusion

import (
	"strings"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

const parserVersion = "1.1"

func init() {
	parser.Register(&Parser{})
}

// Parser implements VendorParser for xFusion iBMC dump archives.
type Parser struct{}

func (p *Parser) Name() string    { return "xFusion iBMC Dump Parser" }
func (p *Parser) Vendor() string  { return "xfusion" }
func (p *Parser) Version() string { return parserVersion }

// Detect checks if files match the xFusion iBMC dump format.
// Returns confidence score 0-100.
func (p *Parser) Detect(files []parser.ExtractedFile) int {
	confidence := 0
	for _, f := range files {
		path := strings.ToLower(f.Path)
		switch {
		case strings.Contains(path, "appdump/frudata/fruinfo.txt"):
			confidence += 50
		case strings.Contains(path, "rtosdump/versioninfo/app_revision.txt"):
			confidence += 30
		case strings.Contains(path, "appdump/sensor_alarm/sensor_info.txt"):
			confidence += 10
		case strings.Contains(path, "appdump/card_manage/card_info"):
			confidence += 20
		case strings.Contains(path, "logdump/netcard/netcard_info.txt"):
			confidence += 20
		}
		if confidence >= 100 {
			return 100
		}
	}
	return confidence
}

// Parse parses xFusion iBMC dump and returns an analysis result.
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
	result := &models.AnalysisResult{
		Events:  make([]models.Event, 0),
		FRU:     make([]models.FRUInfo, 0),
		Sensors: make([]models.SensorReading, 0),
		Hardware: &models.HardwareConfig{
			Firmware:        make([]models.FirmwareInfo, 0),
			Devices:         make([]models.HardwareDevice, 0),
			CPUs:            make([]models.CPU, 0),
			Memory:          make([]models.MemoryDIMM, 0),
			Storage:         make([]models.Storage, 0),
			Volumes:         make([]models.StorageVolume, 0),
			PCIeDevices:     make([]models.PCIeDevice, 0),
			GPUs:            make([]models.GPU, 0),
			NetworkCards:    make([]models.NIC, 0),
			NetworkAdapters: make([]models.NetworkAdapter, 0),
			PowerSupply:     make([]models.PSU, 0),
		},
	}

	if f := findByAnyPath(files, "appdump/frudata/fruinfo.txt", "rtosdump/versioninfo/fruinfo.txt"); f != nil {
		parseFRUInfo(f.Content, result)
	}
	if f := findByPath(files, "appdump/sensor_alarm/sensor_info.txt"); f != nil {
		result.Sensors = parseSensorInfo(f.Content)
	}
	if f := findByPath(files, "appdump/cpumem/cpu_info"); f != nil {
		result.Hardware.CPUs = parseCPUInfo(f.Content)
	}
	if f := findByPath(files, "appdump/cpumem/mem_info"); f != nil {
		result.Hardware.Memory = parseMemInfo(f.Content)
	}
	var nicCards []xfusionNICCard
	if f := findByPath(files, "appdump/card_manage/card_info"); f != nil {
		gpus, cards := parseCardInfo(f.Content)
		result.Hardware.GPUs = gpus
		nicCards = cards
	}
	if f := findByPath(files, "logdump/netcard/netcard_info.txt"); f != nil || len(nicCards) > 0 {
		var content []byte
		if f != nil {
			content = f.Content
		}
		adapters, legacyNICs := mergeNetworkAdapters(nicCards, parseNetcardInfo(content))
		result.Hardware.NetworkAdapters = adapters
		result.Hardware.NetworkCards = legacyNICs
	}
	if f := findByPath(files, "appdump/bmc/psu_info.txt"); f != nil {
		result.Hardware.PowerSupply = parsePSUInfo(f.Content)
	}
	if f := findByPath(files, "appdump/storagemgnt/raid_controller_info.txt"); f != nil {
		parseStorageControllerInfo(f.Content, result)
	}
	if f := findByPath(files, "rtosdump/versioninfo/app_revision.txt"); f != nil {
		parseAppRevision(f.Content, result)
	}
	for _, f := range findDiskInfoFiles(files) {
		disk := parseDiskInfo(f.Content)
		if disk != nil {
			result.Hardware.Storage = append(result.Hardware.Storage, *disk)
		}
	}
	if f := findByPath(files, "logdump/maintenance_log"); f != nil {
		result.Events = parseMaintenanceLog(f.Content)
	}

	result.Protocol = "ipmi"
	result.SourceType = models.SourceTypeArchive
	parser.ApplyManufacturedYearWeekFromFRU(result.FRU, result.Hardware)

	return result, nil
}

// findByPath returns the first file whose lowercased path contains the given substring.
func findByPath(files []parser.ExtractedFile, substring string) *parser.ExtractedFile {
	for i := range files {
		if strings.Contains(strings.ToLower(files[i].Path), substring) {
			return &files[i]
		}
	}
	return nil
}

func findByAnyPath(files []parser.ExtractedFile, substrings ...string) *parser.ExtractedFile {
	for _, substring := range substrings {
		if f := findByPath(files, substring); f != nil {
			return f
		}
	}
	return nil
}

// findDiskInfoFiles returns all PhysicalDrivesInfo disk_info files.
func findDiskInfoFiles(files []parser.ExtractedFile) []parser.ExtractedFile {
	var out []parser.ExtractedFile
	for _, f := range files {
		path := strings.ToLower(f.Path)
		if strings.Contains(path, "physicaldrivesinfo/") && strings.HasSuffix(path, "/disk_info") {
			out = append(out, f)
		}
	}
	return out
}
internal/parser/vendors/xfusion/parser_test.go (vendored, new file, 332 lines)
@@ -0,0 +1,332 @@
package xfusion
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"git.mchus.pro/mchus/logpile/internal/models"
|
||||
"git.mchus.pro/mchus/logpile/internal/parser"
|
||||
)
|
||||
|
||||
// loadTestArchive extracts the given archive path for use in tests.
|
||||
// Skips the test if the file is not found (CI environments without testdata).
|
||||
func loadTestArchive(t *testing.T, path string) []parser.ExtractedFile {
|
||||
t.Helper()
|
||||
files, err := parser.ExtractArchive(path)
|
||||
if err != nil {
|
||||
t.Skipf("cannot load test archive %s: %v", path, err)
|
||||
}
|
||||
return files
|
||||
}
|
||||
|
||||
func TestDetect_G5500V7(t *testing.T) {
|
||||
files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
|
||||
p := &Parser{}
|
||||
score := p.Detect(files)
|
||||
if score < 80 {
|
||||
t.Fatalf("expected Detect score >= 80, got %d", score)
|
||||
}
|
||||
}
|
||||
|
||||
func TestDetect_ServerFileExportMarkers(t *testing.T) {
|
||||
p := &Parser{}
|
||||
score := p.Detect([]parser.ExtractedFile{
|
||||
{Path: "dump_info/RTOSDump/versioninfo/app_revision.txt", Content: []byte("Product Name: G5500 V7")},
|
||||
{Path: "dump_info/LogDump/netcard/netcard_info.txt", Content: []byte("2026-02-04 03:54:06 UTC")},
|
||||
{Path: "dump_info/AppDump/card_manage/card_info", Content: []byte("OCP Card Info")},
|
||||
})
|
||||
if score < 70 {
|
||||
t.Fatalf("expected Detect score >= 70 for xFusion file export markers, got %d", score)
|
||||
}
|
||||
}
|
||||
|
||||
func TestDetect_Negative(t *testing.T) {
|
||||
p := &Parser{}
|
||||
score := p.Detect([]parser.ExtractedFile{
|
||||
{Path: "logs/messages.txt", Content: []byte("plain text")},
|
||||
{Path: "inventory.json", Content: []byte(`{"vendor":"other"}`)},
|
||||
})
|
||||
if score != 0 {
|
||||
t.Fatalf("expected Detect score 0 for non-xFusion input, got %d", score)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParse_G5500V7_BoardInfo(t *testing.T) {
|
||||
files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
|
||||
p := &Parser{}
|
||||
result, err := p.Parse(files)
|
||||
if err != nil {
|
||||
t.Fatalf("Parse: %v", err)
|
||||
}
|
||||
if result.Hardware == nil {
|
||||
t.Fatal("Hardware is nil")
|
||||
}
|
||||
board := result.Hardware.BoardInfo
|
||||
if board.SerialNumber != "210619KUGGXGS2000015" {
|
||||
t.Errorf("BoardInfo.SerialNumber = %q, want 210619KUGGXGS2000015", board.SerialNumber)
|
||||
}
|
||||
if board.ProductName != "G5500 V7" {
|
||||
t.Errorf("BoardInfo.ProductName = %q, want G5500 V7", board.ProductName)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParse_G5500V7_CPUs(t *testing.T) {
|
||||
files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
|
||||
p := &Parser{}
|
||||
result, err := p.Parse(files)
|
||||
if err != nil {
|
||||
t.Fatalf("Parse: %v", err)
|
||||
}
|
||||
if len(result.Hardware.CPUs) != 2 {
|
||||
t.Fatalf("expected 2 CPUs, got %d", len(result.Hardware.CPUs))
|
||||
}
|
||||
cpu1 := result.Hardware.CPUs[0]
|
||||
if cpu1.Cores != 32 {
|
||||
t.Errorf("CPU1 cores = %d, want 32", cpu1.Cores)
|
||||
}
|
||||
if cpu1.Threads != 64 {
|
||||
t.Errorf("CPU1 threads = %d, want 64", cpu1.Threads)
|
||||
}
|
||||
if cpu1.SerialNumber == "" {
|
||||
t.Error("CPU1 SerialNumber is empty")
|
||||
}
|
||||
}
|
||||
|
||||
func TestParse_G5500V7_Memory(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	// Only 2 DIMMs are populated (rest are "NO DIMM")
	if len(result.Hardware.Memory) != 2 {
		t.Fatalf("expected 2 populated DIMMs, got %d", len(result.Hardware.Memory))
	}
	dimm := result.Hardware.Memory[0]
	if dimm.SizeMB != 65536 {
		t.Errorf("DIMM0 SizeMB = %d, want 65536", dimm.SizeMB)
	}
	if dimm.Type != "DDR5" {
		t.Errorf("DIMM0 Type = %q, want DDR5", dimm.Type)
	}
}

func TestParse_G5500V7_GPUs(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Hardware.GPUs) != 8 {
		t.Fatalf("expected 8 GPUs, got %d", len(result.Hardware.GPUs))
	}
	for _, gpu := range result.Hardware.GPUs {
		if gpu.SerialNumber == "" {
			t.Errorf("GPU slot %s has empty SerialNumber", gpu.Slot)
		}
		if gpu.Model == "" {
			t.Errorf("GPU slot %s has empty Model", gpu.Slot)
		}
		if gpu.Firmware == "" {
			t.Errorf("GPU slot %s has empty Firmware", gpu.Slot)
		}
	}
}

func TestParse_G5500V7_NICs(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Hardware.NetworkCards) < 1 {
		t.Fatal("expected at least 1 NIC (OCP CX6), got 0")
	}
	nic := result.Hardware.NetworkCards[0]
	if nic.SerialNumber == "" {
		t.Errorf("NIC SerialNumber is empty")
	}
}

func TestParse_ServerFileExport_NetworkAdaptersAndFirmware(t *testing.T) {
	p := &Parser{}
	files := []parser.ExtractedFile{
		{
			Path: "dump_info/AppDump/card_manage/card_info",
			Content: []byte(strings.TrimSpace(`
Pcie Card Info
Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |

OCP Card Info
Slot | Vender Id | Device Id | Sub Vender Id | Sub Device Id | Segment Number | Bus Number | Device Number | Function Number | Card Desc | Board Id | PCB Version | CPLD Version | Sub Card Bom Id | PartNum | SerialNumber | OriginalPartNum
1 | 0x15b3 | 0x101f | 0x1f24 | 0x2011 | 0x00 | 0x27 | 0x00 | 0x00 | MT2894 Family [ConnectX-6 Lx] | N/A | N/A | N/A | N/A | 0302Y238 | 02Y238X6RC000058 |
`)),
		},
		{
			Path: "dump_info/LogDump/netcard/netcard_info.txt",
			Content: []byte(strings.TrimSpace(`
2026-02-04 03:54:06 UTC
ProductName :XC385
Manufacture :XFUSION
FirmwareVersion :26.39.2048
SlotId :1
Port0 BDF:0000:27:00.0
MacAddr:44:1A:4C:16:E8:03
ActualMac:44:1A:4C:16:E8:03
Port1 BDF:0000:27:00.1
MacAddr:00:00:00:00:00:00
ActualMac:44:1A:4C:16:E8:04
`)),
		},
		{
			Path: "dump_info/RTOSDump/versioninfo/app_revision.txt",
			Content: []byte(strings.TrimSpace(`
------------------- iBMC INFO -------------------
Active iBMC Version: (U68)3.08.05.85
Active iBMC Built: 16:46:26 Jan 4 2026
SDK Version: 13.16.30.16
SDK Built: 07:55:18 Dec 12 2025
Active BIOS Version: (U6216)01.02.08.17
Active BIOS Built: 00:00:00 Jan 05 2026
Product Name: G5500 V7
`)),
		},
	}

	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if result.Protocol != "ipmi" || result.SourceType != models.SourceTypeArchive {
		t.Fatalf("unexpected source metadata: protocol=%q source_type=%q", result.Protocol, result.SourceType)
	}
	if result.Hardware == nil {
		t.Fatal("Hardware is nil")
	}
	if len(result.Hardware.NetworkAdapters) != 1 {
		t.Fatalf("expected 1 network adapter, got %d", len(result.Hardware.NetworkAdapters))
	}
	adapter := result.Hardware.NetworkAdapters[0]
	if adapter.BDF != "0000:27:00.0" {
		t.Fatalf("expected network adapter BDF 0000:27:00.0, got %q", adapter.BDF)
	}
	if adapter.Firmware != "26.39.2048" {
		t.Fatalf("expected network adapter firmware 26.39.2048, got %q", adapter.Firmware)
	}
	if adapter.SerialNumber != "02Y238X6RC000058" {
		t.Fatalf("expected network adapter serial from card_info, got %q", adapter.SerialNumber)
	}
	if len(adapter.MACAddresses) != 2 || adapter.MACAddresses[0] != "44:1A:4C:16:E8:03" || adapter.MACAddresses[1] != "44:1A:4C:16:E8:04" {
		t.Fatalf("unexpected MAC addresses: %#v", adapter.MACAddresses)
	}

	fwByDevice := make(map[string]models.FirmwareInfo)
	for _, fw := range result.Hardware.Firmware {
		fwByDevice[fw.DeviceName] = fw
	}
	if fwByDevice["iBMC"].Version != "(U68)3.08.05.85" {
		t.Fatalf("expected iBMC firmware from app_revision.txt, got %#v", fwByDevice["iBMC"])
	}
	if fwByDevice["BIOS"].Version != "(U6216)01.02.08.17" {
		t.Fatalf("expected BIOS firmware from app_revision.txt, got %#v", fwByDevice["BIOS"])
	}
	if result.Hardware.BoardInfo.ProductName != "G5500 V7" {
		t.Fatalf("expected board product fallback from app_revision.txt, got %q", result.Hardware.BoardInfo.ProductName)
	}
}

func TestParse_G5500V7_PSUs(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Hardware.PowerSupply) != 4 {
		t.Fatalf("expected 4 PSUs, got %d", len(result.Hardware.PowerSupply))
	}
	for _, psu := range result.Hardware.PowerSupply {
		if psu.WattageW != 3000 {
			t.Errorf("PSU slot %s wattage = %d, want 3000", psu.Slot, psu.WattageW)
		}
		if psu.SerialNumber == "" {
			t.Errorf("PSU slot %s has empty SerialNumber", psu.Slot)
		}
	}
}

func TestParse_G5500V7_Storage(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Hardware.Storage) != 2 {
		t.Fatalf("expected 2 storage devices, got %d", len(result.Hardware.Storage))
	}
	for _, disk := range result.Hardware.Storage {
		if disk.SerialNumber == "" {
			t.Errorf("disk slot %s has empty SerialNumber", disk.Slot)
		}
		if disk.Model == "" {
			t.Errorf("disk slot %s has empty Model", disk.Slot)
		}
	}
}

func TestParse_G5500V7_Sensors(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Sensors) < 20 {
		t.Fatalf("expected at least 20 sensors, got %d", len(result.Sensors))
	}
}

func TestParse_G5500V7_Events(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.Events) < 5 {
		t.Fatalf("expected at least 5 events, got %d", len(result.Events))
	}
	// All events should have real timestamps (not epoch 0)
	for _, ev := range result.Events {
		if ev.Timestamp.Year() <= 1970 {
			t.Errorf("event has epoch timestamp: %v %s", ev.Timestamp, ev.Description)
		}
	}
}

func TestParse_G5500V7_FRU(t *testing.T) {
	files := loadTestArchive(t, "../../../../example/G5500V7_210619KUGGXGS2000015_20260318-1128.tar.gz")
	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	if len(result.FRU) < 3 {
		t.Fatalf("expected at least 3 FRU entries, got %d", len(result.FRU))
	}
	// Check mainboard FRU serial
	found := false
	for _, f := range result.FRU {
		if f.SerialNumber == "210619KUGGXGS2000015" {
			found = true
		}
	}
	if !found {
		t.Error("mainboard serial 210619KUGGXGS2000015 not found in FRU")
	}
}

@@ -44,6 +44,9 @@ func TestParserParseExample(t *testing.T) {
	examplePath := filepath.Join("..", "..", "..", "..", "example", "xigmanas.txt")
	raw, err := os.ReadFile(examplePath)
	if err != nil {
		if os.IsNotExist(err) {
			t.Skipf("example file %s not present", examplePath)
		}
		t.Fatalf("read example file: %v", err)
	}

@@ -3,6 +3,8 @@ package server

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"strings"
@@ -29,7 +31,17 @@ func TestCollectProbe(t *testing.T) {
	_, ts := newCollectTestServer()
	defer ts.Close()

	body := `{"host":"bmc-off.local","protocol":"redfish","port":443,"username":"admin","auth_type":"password","password":"secret","tls_mode":"strict"}`
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("listen probe target: %v", err)
	}
	defer ln.Close()
	addr, ok := ln.Addr().(*net.TCPAddr)
	if !ok {
		t.Fatalf("unexpected listener address type: %T", ln.Addr())
	}

	body := fmt.Sprintf(`{"host":"127.0.0.1","protocol":"redfish","port":%d,"username":"admin-off","auth_type":"password","password":"secret","tls_mode":"strict"}`, addr.Port)
	resp, err := http.Post(ts.URL+"/api/collect/probe", "application/json", bytes.NewBufferString(body))
	if err != nil {
		t.Fatalf("post collect probe failed: %v", err)
@@ -91,6 +103,21 @@ func TestCollectLifecycleToTerminal(t *testing.T) {
	if len(status.Logs) < 4 {
		t.Fatalf("expected detailed logs, got %v", status.Logs)
	}
	if len(status.ActiveModules) == 0 {
		t.Fatal("expected active modules in collect status")
	}
	if status.ActiveModules[0].Name == "" {
		t.Fatal("expected active module name")
	}
	if len(status.ModuleScores) == 0 {
		t.Fatal("expected module scores in collect status")
	}
	if status.DebugInfo == nil {
		t.Fatal("expected debug info in collect status")
	}
	if len(status.DebugInfo.PhaseTelemetry) == 0 {
		t.Fatal("expected phase telemetry in collect debug info")
	}
}

func TestCollectCancel(t *testing.T) {

@@ -21,11 +21,15 @@ func (c *mockConnector) Probe(ctx context.Context, req collector.Request) (*coll
	if strings.Contains(strings.ToLower(req.Host), "fail") {
		return nil, context.DeadlineExceeded
	}
	hostPoweredOn := true
	if strings.Contains(strings.ToLower(req.Host), "off") || strings.Contains(strings.ToLower(req.Username), "off") {
		hostPoweredOn = false
	}
	return &collector.ProbeResult{
		Reachable:             true,
		Protocol:              c.protocol,
		HostPowerState:        map[bool]string{true: "On", false: "Off"}[!strings.Contains(strings.ToLower(req.Host), "off")],
		HostPoweredOn:         !strings.Contains(strings.ToLower(req.Host), "off"),
		HostPowerState:        map[bool]string{true: "On", false: "Off"}[hostPoweredOn],
		HostPoweredOn:         hostPoweredOn,
		PowerControlAvailable: true,
		SystemPath:            "/redfish/v1/Systems/1",
	}, nil
@@ -33,6 +37,28 @@ func (c *mockConnector) Probe(ctx context.Context, req collector.Request) (*coll

func (c *mockConnector) Collect(ctx context.Context, req collector.Request, emit collector.ProgressFn) (*models.AnalysisResult, error) {
	steps := []collector.Progress{
		{
			Status:   CollectStatusRunning,
			Progress: 10,
			Message:  "Подбор модулей Redfish...",
			ActiveModules: []collector.ModuleActivation{
				{Name: "supermicro", Score: 80},
				{Name: "generic", Score: 10},
			},
			ModuleScores: []collector.ModuleScore{
				{Name: "supermicro", Score: 80, Active: true, Priority: 20},
				{Name: "generic", Score: 10, Active: true, Priority: 100},
				{Name: "hgx-topology", Score: 0, Active: false, Priority: 30},
			},
			DebugInfo: &collector.CollectDebugInfo{
				AdaptiveThrottled: false,
				SnapshotWorkers:   6,
				PrefetchWorkers:   4,
				PhaseTelemetry: []collector.PhaseTelemetry{
					{Phase: "discovery", Requests: 6, Errors: 0, ErrorRate: 0, AvgMS: 120, P95MS: 180},
				},
			},
		},
		{Status: CollectStatusRunning, Progress: 20, Message: "Подключение..."},
		{Status: CollectStatusRunning, Progress: 50, Message: "Сбор инвентаря..."},
		{Status: CollectStatusRunning, Progress: 80, Message: "Нормализация..."},

@@ -19,7 +19,9 @@ type CollectRequest struct {
	Password string `json:"password,omitempty"`
	Token    string `json:"token,omitempty"`
	TLSMode  string `json:"tls_mode"`
	PowerOnIfHostOff bool `json:"power_on_if_host_off,omitempty"`
	PowerOnIfHostOff     bool `json:"power_on_if_host_off,omitempty"`
	StopHostAfterCollect bool `json:"stop_host_after_collect,omitempty"`
	DebugPayloads        bool `json:"debug_payloads,omitempty"`
}

type CollectProbeResponse struct {

@@ -39,13 +41,18 @@ type CollectJobResponse struct {
}

type CollectJobStatusResponse struct {
	JobID     string    `json:"job_id"`
	Status    string    `json:"status"`
	Progress  *int      `json:"progress,omitempty"`
	Logs      []string  `json:"logs,omitempty"`
	Error     string    `json:"error,omitempty"`
	CreatedAt time.Time `json:"created_at,omitempty"`
	UpdatedAt time.Time `json:"updated_at"`
	JobID         string                `json:"job_id"`
	Status        string                `json:"status"`
	Progress      *int                  `json:"progress,omitempty"`
	CurrentPhase  string                `json:"current_phase,omitempty"`
	ETASeconds    *int                  `json:"eta_seconds,omitempty"`
	Logs          []string              `json:"logs,omitempty"`
	Error         string                `json:"error,omitempty"`
	ActiveModules []CollectModuleStatus `json:"active_modules,omitempty"`
	ModuleScores  []CollectModuleStatus `json:"module_scores,omitempty"`
	DebugInfo     *CollectDebugInfo     `json:"debug_info,omitempty"`
	CreatedAt     time.Time             `json:"created_at,omitempty"`
	UpdatedAt     time.Time             `json:"updated_at"`
}

type CollectRequestMeta struct {

@@ -58,27 +65,64 @@ type CollectRequestMeta struct {
}

type Job struct {
	ID          string
	Status      string
	Progress    int
	Logs        []string
	Error       string
	CreatedAt   time.Time
	UpdatedAt   time.Time
	RequestMeta CollectRequestMeta
	cancel      func()
	ID            string
	Status        string
	Progress      int
	CurrentPhase  string
	ETASeconds    int
	Logs          []string
	Error         string
	ActiveModules []CollectModuleStatus
	ModuleScores  []CollectModuleStatus
	DebugInfo     *CollectDebugInfo
	CreatedAt     time.Time
	UpdatedAt     time.Time
	RequestMeta   CollectRequestMeta
	cancel        func()
}

type CollectModuleStatus struct {
	Name     string `json:"name"`
	Score    int    `json:"score"`
	Active   bool   `json:"active,omitempty"`
	Priority int    `json:"priority,omitempty"`
}

type CollectDebugInfo struct {
	AdaptiveThrottled bool                    `json:"adaptive_throttled"`
	SnapshotWorkers   int                     `json:"snapshot_workers,omitempty"`
	PrefetchWorkers   int                     `json:"prefetch_workers,omitempty"`
	PrefetchEnabled   *bool                   `json:"prefetch_enabled,omitempty"`
	PhaseTelemetry    []CollectPhaseTelemetry `json:"phase_telemetry,omitempty"`
}

type CollectPhaseTelemetry struct {
	Phase     string  `json:"phase"`
	Requests  int     `json:"requests,omitempty"`
	Errors    int     `json:"errors,omitempty"`
	ErrorRate float64 `json:"error_rate,omitempty"`
	AvgMS     int64   `json:"avg_ms,omitempty"`
	P95MS     int64   `json:"p95_ms,omitempty"`
}

func (j *Job) toStatusResponse() CollectJobStatusResponse {
	progress := j.Progress
	resp := CollectJobStatusResponse{
		JobID:     j.ID,
		Status:    j.Status,
		Progress:  &progress,
		Logs:      append([]string(nil), j.Logs...),
		Error:     j.Error,
		CreatedAt: j.CreatedAt,
		UpdatedAt: j.UpdatedAt,
		JobID:         j.ID,
		Status:        j.Status,
		Progress:      &progress,
		CurrentPhase:  j.CurrentPhase,
		Logs:          append([]string(nil), j.Logs...),
		Error:         j.Error,
		ActiveModules: append([]CollectModuleStatus(nil), j.ActiveModules...),
		ModuleScores:  append([]CollectModuleStatus(nil), j.ModuleScores...),
		DebugInfo:     cloneCollectDebugInfo(j.DebugInfo),
		CreatedAt:     j.CreatedAt,
		UpdatedAt:     j.UpdatedAt,
	}
	if j.ETASeconds > 0 {
		eta := j.ETASeconds
		resp.ETASeconds = &eta
	}
	return resp
}
@@ -91,3 +135,16 @@ func (j *Job) toJobResponse(message string) CollectJobResponse {
		CreatedAt: j.CreatedAt,
	}
}

func cloneCollectDebugInfo(in *CollectDebugInfo) *CollectDebugInfo {
	if in == nil {
		return nil
	}
	out := *in
	out.PhaseTelemetry = append([]CollectPhaseTelemetry(nil), in.PhaseTelemetry...)
	if in.PrefetchEnabled != nil {
		value := *in.PrefetchEnabled
		out.PrefetchEnabled = &value
	}
	return &out
}

@@ -81,7 +81,7 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
	}

	for _, mem := range hw.Memory {
		if !mem.Present || mem.SizeMB == 0 {
		if !mem.IsInstalledInventory() {
			continue
		}
		present := mem.Present
@@ -243,6 +243,8 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
			Source:      "network_adapters",
			Slot:        nic.Slot,
			Location:    nic.Location,
			BDF:         nic.BDF,
			DeviceClass: "NetworkController",
			VendorID:    nic.VendorID,
			DeviceID:    nic.DeviceID,
			Model:       nic.Model,
@@ -253,6 +255,11 @@ func BuildHardwareDevices(hw *models.HardwareConfig) []models.HardwareDevice {
			PortCount:    nic.PortCount,
			PortType:     nic.PortType,
			MACAddresses: nic.MACAddresses,
			LinkWidth:    nic.LinkWidth,
			LinkSpeed:    nic.LinkSpeed,
			MaxLinkWidth: nic.MaxLinkWidth,
			MaxLinkSpeed: nic.MaxLinkSpeed,
			NUMANode:     nic.NUMANode,
			Present:         &present,
			Status:          nic.Status,
			StatusCheckedAt: nic.StatusCheckedAt,

@@ -90,6 +90,98 @@ func TestBuildHardwareDevices_MemorySameSerialDifferentSlots_NotDeduped(t *testi
	}
}

func TestBuildHardwareDevices_ZeroSizeMemoryWithInventoryIsIncluded(t *testing.T) {
	hw := &models.HardwareConfig{
		Memory: []models.MemoryDIMM{
			{
				Slot:         "PROC 1 DIMM 3",
				Location:     "PROC 1 DIMM 3",
				Present:      true,
				SizeMB:       0,
				Manufacturer: "Hynix",
				SerialNumber: "2B5F92C6",
				PartNumber:   "HMCG88AEBRA115N",
				Status:       "ok",
			},
		},
	}

	devices := BuildHardwareDevices(hw)
	memoryCount := 0
	for _, d := range devices {
		if d.Kind != models.DeviceKindMemory {
			continue
		}
		memoryCount++
		if d.Slot != "PROC 1 DIMM 3" || d.PartNumber != "HMCG88AEBRA115N" || d.SerialNumber != "2B5F92C6" {
			t.Fatalf("unexpected memory device: %+v", d)
		}
	}
	if memoryCount != 1 {
		t.Fatalf("expected 1 installed zero-size memory record, got %d", memoryCount)
	}
}

func TestBuildHardwareDevices_NetworkAdapterPreservesPCIeMetadata(t *testing.T) {
	hw := &models.HardwareConfig{
		NetworkAdapters: []models.NetworkAdapter{
			{
				Slot:         "1",
				Location:     "OCP",
				Present:      true,
				BDF:          "0000:27:00.0",
				Model:        "ConnectX-6 Lx",
				VendorID:     0x15b3,
				DeviceID:     0x101f,
				SerialNumber: "NIC-001",
				Firmware:     "26.39.2048",
				MACAddresses: []string{"44:1A:4C:16:E8:03", "44:1A:4C:16:E8:04"},
				LinkWidth:    16,
				LinkSpeed:    "32 GT/s",
				NUMANode:     1,
				Status:       "ok",
			},
		},
	}

	devices := BuildHardwareDevices(hw)
	for _, d := range devices {
		if d.Kind != models.DeviceKindNetwork {
			continue
		}
		if d.BDF != "0000:27:00.0" || d.LinkWidth != 16 || d.LinkSpeed != "32 GT/s" || d.NUMANode != 1 {
			t.Fatalf("expected network PCIe metadata to be preserved, got %+v", d)
		}
		return
	}
	t.Fatal("expected network device in canonical inventory")
}

func TestBuildSpecification_ZeroSizeMemoryWithInventoryIsShown(t *testing.T) {
	hw := &models.HardwareConfig{
		Memory: []models.MemoryDIMM{
			{
				Slot:         "PROC 1 DIMM 3",
				Present:      true,
				SizeMB:       0,
				Manufacturer: "Hynix",
				PartNumber:   "HMCG88AEBRA115N",
				SerialNumber: "2B5F92C6",
				Status:       "ok",
			},
		},
	}

	spec := buildSpecification(hw)
	for _, line := range spec {
		if line.Category == "Память" && line.Name == "Hynix HMCG88AEBRA115N (size unknown)" && line.Quantity == 1 {
			return
		}
	}

	t.Fatalf("expected memory spec line for zero-size identified DIMM, got %+v", spec)
}

func TestBuildHardwareDevices_DuplicateSerials_AreAnnotated(t *testing.T) {
	hw := &models.HardwareConfig{
		Memory: []models.MemoryDIMM{
@@ -166,6 +258,31 @@ func TestBuildHardwareDevices_SkipsFirmwareOnlyNumericSlots(t *testing.T) {
	}
}

func TestBuildHardwareDevices_NetworkDevicesUseUnifiedControllerClass(t *testing.T) {
	hw := &models.HardwareConfig{
		NetworkAdapters: []models.NetworkAdapter{
			{
				Slot:    "NIC1",
				Model:   "Ethernet Adapter",
				Vendor:  "Intel",
				Present: true,
			},
		},
	}

	devices := BuildHardwareDevices(hw)
	for _, d := range devices {
		if d.Kind != models.DeviceKindNetwork {
			continue
		}
		if d.DeviceClass != "NetworkController" {
			t.Fatalf("expected unified network controller class, got %+v", d)
		}
		return
	}
	t.Fatalf("expected one canonical network device")
}

func TestHandleGetConfig_ReturnsCanonicalHardware(t *testing.T) {
	srv := &Server{}
	srv.SetResult(&models.AnalysisResult{

@@ -10,6 +10,7 @@ import (
	"fmt"
	"html/template"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
@@ -17,10 +18,12 @@ import (
	"sort"
	"strconv"
	"strings"
	"sync/atomic"
	"time"

	"git.mchus.pro/mchus/logpile/internal/collector"
	"git.mchus.pro/mchus/logpile/internal/exporter"
	"git.mchus.pro/mchus/logpile/internal/ingest"
	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
	chartviewer "reanimator/chart/viewer"
@@ -45,7 +48,10 @@ func (s *Server) handleIndex(w http.ResponseWriter, r *http.Request) {
	}

	w.Header().Set("Content-Type", "text/html; charset=utf-8")
	tmpl.Execute(w, nil)
	tmpl.Execute(w, map[string]string{
		"AppVersion": s.config.AppVersion,
		"AppCommit":  s.config.AppCommit,
	})
}

func (s *Server) handleChartCurrent(w http.ResponseWriter, r *http.Request) {
@@ -219,13 +225,12 @@ func (s *Server) analyzeUploadedFile(filename, mimeType string, payload []byte)
		return nil, "", nil, fmt.Errorf("unsupported archive format: %s", strings.ToLower(filepath.Ext(filename)))
	}

	p := parser.NewBMCParser()
	if err := p.ParseFromReader(bytes.NewReader(payload), filename); err != nil {
	result, vendor, err := s.ingestService().AnalyzeArchivePayload(filename, payload)
	if err != nil {
		return nil, "", nil, err
	}
	result := p.Result()
	applyArchiveSourceMetadata(result)
	return result, p.DetectedVendor(), newRawExportFromUploadedFile(filename, mimeType, payload, result), nil
	return result, vendor, newRawExportFromUploadedFile(filename, mimeType, payload, result), nil
}

func uploadMultipartMaxBytes() int64 {
@@ -297,33 +302,18 @@ func (s *Server) reanalyzeRawExportPackage(pkg *RawExportPackage) (*models.Analy
		if !strings.EqualFold(strings.TrimSpace(pkg.Source.Protocol), "redfish") {
			return nil, "", fmt.Errorf("unsupported live protocol: %s", pkg.Source.Protocol)
		}
		result, err := collector.ReplayRedfishFromRawPayloads(pkg.Source.RawPayloads, nil)
		result, vendor, err := s.ingestService().AnalyzeRedfishRawPayloads(pkg.Source.RawPayloads, ingest.RedfishSourceMetadata{
			TargetHost:     pkg.Source.TargetHost,
			SourceTimezone: pkg.Source.SourceTimezone,
			Filename:       pkg.Source.Filename,
		})
		if err != nil {
			return nil, "", err
		}
		if result != nil {
			if strings.TrimSpace(result.Protocol) == "" {
				result.Protocol = "redfish"
			}
			if strings.TrimSpace(result.SourceType) == "" {
				result.SourceType = models.SourceTypeAPI
			}
			if strings.TrimSpace(result.TargetHost) == "" {
				result.TargetHost = strings.TrimSpace(pkg.Source.TargetHost)
			}
			if strings.TrimSpace(result.SourceTimezone) == "" {
				result.SourceTimezone = strings.TrimSpace(pkg.Source.SourceTimezone)
			}
			result.CollectedAt = inferRawExportCollectedAt(result, pkg)
			if strings.TrimSpace(result.Filename) == "" {
				target := result.TargetHost
				if target == "" {
					target = "snapshot"
				}
				result.Filename = "redfish://" + target
			}
		}
		return result, "redfish", nil
		return result, vendor, nil
	default:
		return nil, "", fmt.Errorf("unsupported raw export source kind: %s", pkg.Source.Kind)
	}
@@ -342,13 +332,12 @@ func (s *Server) parseUploadedPayload(filename string, payload []byte) (*models.
		return snapshotResult, vendor, nil
	}

	p := parser.NewBMCParser()
	if err := p.ParseFromReader(bytes.NewReader(payload), filename); err != nil {
	result, vendor, err := s.ingestService().AnalyzeArchivePayload(filename, payload)
	if err != nil {
		return nil, "", err
	}
	result := p.Result()
	applyArchiveSourceMetadata(result)
	return result, p.DetectedVendor(), nil
	return result, vendor, nil
}

func (s *Server) handleGetParsers(w http.ResponseWriter, r *http.Request) {
@@ -541,11 +530,21 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
			continue
		}
		present := mem.Present != nil && *mem.Present
		// Skip empty slots (not present or 0 size)
		if !present || mem.SizeMB == 0 {
		if !present {
			continue
		}
		// Include frequency if available

		if mem.SizeMB == 0 {
			name := strings.TrimSpace(strings.Join(nonEmptyStrings(mem.Manufacturer, mem.PartNumber, mem.Type), " "))
			if name == "" {
				name = "Installed DIMM (size unknown)"
			} else {
				name += " (size unknown)"
			}
			memGroups[name]++
			continue
		}

		key := ""
		currentSpeed := intFromDetails(mem.Details, "current_speed_mhz")
		if currentSpeed > 0 {
@@ -637,6 +636,18 @@ func buildSpecification(hw *models.HardwareConfig) []SpecLine {
	return spec
}

func nonEmptyStrings(values ...string) []string {
	out := make([]string, 0, len(values))
	for _, value := range values {
		value = strings.TrimSpace(value)
		if value == "" {
			continue
		}
		out = append(out, value)
	}
	return out
}

func (s *Server) handleGetSerials(w http.ResponseWriter, r *http.Request) {
	result := s.GetResult()
	if result == nil {
@@ -728,6 +739,19 @@ func hasUsableSerial(serial string) bool {
	}
}

func hasUsableFirmwareVersion(version string) bool {
	v := strings.TrimSpace(version)
	if v == "" {
		return false
	}
	switch strings.ToUpper(v) {
	case "N/A", "NA", "NONE", "NULL", "UNKNOWN", "-":
		return false
	default:
		return true
	}
}

func (s *Server) handleGetFirmware(w http.ResponseWriter, r *http.Request) {
	result := s.GetResult()
	if result == nil || result.Hardware == nil {
@@ -955,7 +979,7 @@ func buildFirmwareEntries(hw *models.HardwareConfig) []firmwareEntry {
		component = strings.TrimSpace(component)
		model = strings.TrimSpace(model)
		version = strings.TrimSpace(version)
		if component == "" || version == "" {
		if component == "" || !hasUsableFirmwareVersion(version) {
			return
		}
		if model == "" {
@@ -1587,6 +1611,32 @@ func (s *Server) handleCollectStart(w http.ResponseWriter, r *http.Request) {
	_ = json.NewEncoder(w).Encode(job.toJobResponse("Collection job accepted"))
}

// pingHost dials host:port up to total times with 2s timeout each, returns true if
// at least need attempts succeeded.
func pingHost(host string, port int, total, need int) (bool, string) {
	addr := fmt.Sprintf("%s:%d", host, port)
	var successes atomic.Int32
	done := make(chan struct{}, total)
	for i := 0; i < total; i++ {
		go func() {
			defer func() { done <- struct{}{} }()
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				successes.Add(1)
			}
		}()
	}
	for i := 0; i < total; i++ {
		<-done
	}
	n := int(successes.Load())
	if n < need {
		return false, fmt.Sprintf("Хост недоступен: только %d из %d попыток подключения к %s прошли успешно (требуется минимум %d)", n, total, addr, need)
	}
	return true, ""
}

func (s *Server) handleCollectProbe(w http.ResponseWriter, r *http.Request) {
	var req CollectRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
@@ -1608,6 +1658,11 @@ func (s *Server) handleCollectProbe(w http.ResponseWriter, r *http.Request) {
		return
	}

	if ok, msg := pingHost(req.Host, req.Port, 10, 3); !ok {
		jsonError(w, msg, http.StatusBadRequest)
		return
	}

	ctx, cancel := context.WithTimeout(r.Context(), 20*time.Second)
	defer cancel()

@@ -1706,6 +1761,51 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
			status = CollectStatusRunning
		}
		s.jobManager.UpdateJobStatus(jobID, status, update.Progress, "")
+		if update.CurrentPhase != "" || update.ETASeconds > 0 {
+			s.jobManager.UpdateJobETA(jobID, update.CurrentPhase, update.ETASeconds)
+		}
+		if update.DebugInfo != nil {
+			debugInfo := &CollectDebugInfo{
+				AdaptiveThrottled: update.DebugInfo.AdaptiveThrottled,
+				SnapshotWorkers:   update.DebugInfo.SnapshotWorkers,
+				PrefetchWorkers:   update.DebugInfo.PrefetchWorkers,
+				PrefetchEnabled:   update.DebugInfo.PrefetchEnabled,
+			}
+			if len(update.DebugInfo.PhaseTelemetry) > 0 {
+				debugInfo.PhaseTelemetry = make([]CollectPhaseTelemetry, 0, len(update.DebugInfo.PhaseTelemetry))
+				for _, item := range update.DebugInfo.PhaseTelemetry {
+					debugInfo.PhaseTelemetry = append(debugInfo.PhaseTelemetry, CollectPhaseTelemetry{
+						Phase:     item.Phase,
+						Requests:  item.Requests,
+						Errors:    item.Errors,
+						ErrorRate: item.ErrorRate,
+						AvgMS:     item.AvgMS,
+						P95MS:     item.P95MS,
+					})
+				}
+			}
+			s.jobManager.UpdateJobDebugInfo(jobID, debugInfo)
+		}
+		if len(update.ActiveModules) > 0 || len(update.ModuleScores) > 0 {
+			activeModules := make([]CollectModuleStatus, 0, len(update.ActiveModules))
+			for _, module := range update.ActiveModules {
+				activeModules = append(activeModules, CollectModuleStatus{
+					Name:   module.Name,
+					Score:  module.Score,
+					Active: true,
+				})
+			}
+			moduleScores := make([]CollectModuleStatus, 0, len(update.ModuleScores))
+			for _, module := range update.ModuleScores {
+				moduleScores = append(moduleScores, CollectModuleStatus{
+					Name:     module.Name,
+					Score:    module.Score,
+					Active:   module.Active,
+					Priority: module.Priority,
+				})
+			}
+			s.jobManager.UpdateJobModules(jobID, activeModules, moduleScores)
+		}
		if update.Message != "" {
			s.jobManager.AppendJobLog(jobID, update.Message)
		}
@@ -1927,15 +2027,17 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectRequest

func toCollectorRequest(req CollectRequest) collector.Request {
	return collector.Request{
-		Host:     req.Host,
-		Protocol: req.Protocol,
-		Port:     req.Port,
-		Username: req.Username,
-		AuthType: req.AuthType,
-		Password: req.Password,
-		Token:    req.Token,
-		TLSMode:  req.TLSMode,
-		PowerOnIfHostOff: req.PowerOnIfHostOff,
+		Host:                 req.Host,
+		Protocol:             req.Protocol,
+		Port:                 req.Port,
+		Username:             req.Username,
+		AuthType:             req.AuthType,
+		Password:             req.Password,
+		Token:                req.Token,
+		TLSMode:              req.TLSMode,
+		PowerOnIfHostOff:     req.PowerOnIfHostOff,
+		StopHostAfterCollect: req.StopHostAfterCollect,
+		DebugPayloads:        req.DebugPayloads,
	}
}

@@ -62,3 +62,22 @@ func TestBuildFirmwareEntries_IncludesGPUFirmwareFallback(t *testing.T) {
		t.Fatalf("expected GPU firmware entry from hardware.gpus fallback")
	}
}
+
+func TestBuildFirmwareEntries_SkipsPlaceholderVersions(t *testing.T) {
+	hw := &models.HardwareConfig{
+		Firmware: []models.FirmwareInfo{
+			{DeviceName: "BMC", Version: "3.13.42P13"},
+			{DeviceName: "Front_BP_1", Version: "NA"},
+			{DeviceName: "Rear_BP_0", Version: "N/A"},
+			{DeviceName: "HDD_BP", Version: "-"},
+		},
+	}
+
+	entries := buildFirmwareEntries(hw)
+	if len(entries) != 1 {
+		t.Fatalf("expected only usable firmware entries, got %#v", entries)
+	}
+	if entries[0].Component != "BMC" || entries[0].Version != "3.13.42P13" {
+		t.Fatalf("unexpected remaining firmware entry: %#v", entries[0])
+	}
+}
@@ -128,6 +128,53 @@ func (m *JobManager) AppendJobLog(id, message string) (*Job, bool) {
	return cloned, true
}

+func (m *JobManager) UpdateJobModules(id string, activeModules, moduleScores []CollectModuleStatus) (*Job, bool) {
+	m.mu.Lock()
+	job, ok := m.jobs[id]
+	if !ok || job == nil {
+		m.mu.Unlock()
+		return nil, false
+	}
+	job.ActiveModules = append([]CollectModuleStatus(nil), activeModules...)
+	job.ModuleScores = append([]CollectModuleStatus(nil), moduleScores...)
+	job.UpdatedAt = time.Now().UTC()
+
+	cloned := cloneJob(job)
+	m.mu.Unlock()
+	return cloned, true
+}
+
+func (m *JobManager) UpdateJobETA(id, phase string, etaSeconds int) (*Job, bool) {
+	m.mu.Lock()
+	job, ok := m.jobs[id]
+	if !ok || job == nil {
+		m.mu.Unlock()
+		return nil, false
+	}
+	job.CurrentPhase = phase
+	job.ETASeconds = etaSeconds
+	job.UpdatedAt = time.Now().UTC()
+
+	cloned := cloneJob(job)
+	m.mu.Unlock()
+	return cloned, true
+}
+
+func (m *JobManager) UpdateJobDebugInfo(id string, info *CollectDebugInfo) (*Job, bool) {
+	m.mu.Lock()
+	job, ok := m.jobs[id]
+	if !ok || job == nil {
+		m.mu.Unlock()
+		return nil, false
+	}
+	job.DebugInfo = cloneCollectDebugInfo(info)
+	job.UpdatedAt = time.Now().UTC()
+
+	cloned := cloneJob(job)
+	m.mu.Unlock()
+	return cloned, true
+}
+
func (m *JobManager) AttachJobCancel(id string, cancelFn context.CancelFunc) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
@@ -176,6 +223,11 @@ func cloneJob(job *Job) *Job {
	}
	cloned := *job
	cloned.Logs = append([]string(nil), job.Logs...)
+	cloned.ActiveModules = append([]CollectModuleStatus(nil), job.ActiveModules...)
+	cloned.ModuleScores = append([]CollectModuleStatus(nil), job.ModuleScores...)
+	cloned.DebugInfo = cloneCollectDebugInfo(job.DebugInfo)
+	cloned.CurrentPhase = job.CurrentPhase
+	cloned.ETASeconds = job.ETASeconds
	cloned.cancel = nil
	return &cloned
}

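The cloneJob hunk above copies every slice field with `append([]T(nil), s...)` before the snapshot escapes the mutex. A minimal sketch of why the extra copy matters (the `job` type and field names here are illustrative):

```go
package main

import "fmt"

type job struct{ logs []string }

func main() {
	j := job{logs: []string{"start"}}

	shallow := j // struct copy still shares j.logs' backing array
	deep := j
	deep.logs = append([]string(nil), j.logs...) // independent copy, as in cloneJob

	j.logs[0] = "mutated"
	fmt.Println(shallow.logs[0]) // mutated — the "snapshot" changed under us
	fmt.Println(deep.logs[0])    // start
}
```

This is why every new Job field that holds a slice or pointer (ActiveModules, ModuleScores, DebugInfo) gets its own clone line in cloneJob.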
72
internal/server/manual_input_inspect_test.go
Normal file
@@ -0,0 +1,72 @@
+package server
+
+import (
+	"os"
+	"strings"
+	"testing"
+
+	"git.mchus.pro/mchus/logpile/internal/models"
+)
+
+// TestManualInspectInput is a persistent local debugging harness for checking
+// how the current server code analyzes a real input file. It is skipped unless
+// LOGPILE_MANUAL_INPUT points to a file on disk.
+//
+// Usage:
+//
+//	LOGPILE_MANUAL_INPUT=/abs/path/to/file.zip go test ./internal/server -run TestManualInspectInput -v
+func TestManualInspectInput(t *testing.T) {
+	path := strings.TrimSpace(os.Getenv("LOGPILE_MANUAL_INPUT"))
+	if path == "" {
+		t.Skip("set LOGPILE_MANUAL_INPUT to inspect a real input file")
+	}
+
+	payload, err := os.ReadFile(path)
+	if err != nil {
+		t.Fatalf("read input: %v", err)
+	}
+
+	s := &Server{}
+	filename := path
+
+	if rawPkg, ok, err := parseRawExportBundle(payload); err != nil {
+		t.Fatalf("parseRawExportBundle: %v", err)
+	} else if ok {
+		result, vendor, err := s.reanalyzeRawExportPackage(rawPkg)
+		if err != nil {
+			t.Fatalf("reanalyzeRawExportPackage: %v", err)
+		}
+		logManualAnalysisResult(t, "raw_export_bundle", vendor, result)
+		return
+	}
+
+	result, vendor, err := s.parseUploadedPayload(filename, payload)
+	if err != nil {
+		t.Fatalf("parseUploadedPayload: %v", err)
+	}
+	logManualAnalysisResult(t, "uploaded_payload", vendor, result)
+}
+
+func logManualAnalysisResult(t *testing.T, mode, vendor string, result *models.AnalysisResult) {
+	t.Helper()
+	if result == nil || result.Hardware == nil {
+		t.Fatalf("missing hardware result")
+	}
+
+	t.Logf("mode=%s vendor=%s source_type=%s protocol=%s target=%s", mode, vendor, result.SourceType, result.Protocol, result.TargetHost)
+	t.Logf("counts: gpus=%d pcie=%d cpus=%d memory=%d storage=%d nics=%d psus=%d",
+		len(result.Hardware.GPUs),
+		len(result.Hardware.PCIeDevices),
+		len(result.Hardware.CPUs),
+		len(result.Hardware.Memory),
+		len(result.Hardware.Storage),
+		len(result.Hardware.NetworkAdapters),
+		len(result.Hardware.PowerSupply),
+	)
+	for i, g := range result.Hardware.GPUs {
+		t.Logf("gpu[%d]: slot=%s model=%s bdf=%s serial=%s status=%s", i, g.Slot, g.Model, g.BDF, g.SerialNumber, g.Status)
+	}
+	for i, p := range result.Hardware.PCIeDevices {
+		t.Logf("pcie[%d]: slot=%s class=%s model=%s bdf=%s serial=%s vendor=%s", i, p.Slot, p.DeviceClass, p.PartNumber, p.BDF, p.SerialNumber, p.Manufacturer)
+	}
+}
@@ -10,6 +10,7 @@ import (
	"time"

	"git.mchus.pro/mchus/logpile/internal/collector"
+	"git.mchus.pro/mchus/logpile/internal/ingest"
	"git.mchus.pro/mchus/logpile/internal/models"
	chartviewer "reanimator/chart/viewer"
)
@@ -38,6 +39,7 @@ type Server struct {

	jobManager *JobManager
	collectors *collector.Registry
+	ingest     *ingest.Service
}

type ConvertArtifact struct {
@@ -51,6 +53,7 @@ func New(cfg Config) *Server {
		mux:           http.NewServeMux(),
		jobManager:    NewJobManager(),
		collectors:    collector.NewDefaultRegistry(),
+		ingest:        ingest.NewService(),
		convertJobs:   make(map[string]struct{}),
		convertOutput: make(map[string]ConvertArtifact),
	}
@@ -160,6 +163,17 @@ func (s *Server) ClientVersionString() string {
	return fmt.Sprintf("LOGPile %s (commit: %s)", v, c)
}

+func (s *Server) ingestService() *ingest.Service {
+	if s != nil && s.ingest != nil {
+		return s.ingest
+	}
+	svc := ingest.NewService()
+	if s != nil {
+		s.ingest = svc
+	}
+	return svc
+}
+
// SetDetectedVendor sets the detected vendor name
func (s *Server) SetDetectedVendor(vendor string) {
	s.mu.Lock()
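ingestService above is a lazy, nil-tolerant accessor: return the cached dependency when present, otherwise build one and cache it only if there is a receiver to cache it on. A sketch of the pattern under illustrative names (like the method above, it does no locking, so concurrent first calls may each build an instance):

```go
package main

import "fmt"

type service struct{ name string }

type server struct{ svc *service }

// serviceOrNew mirrors (*Server).ingestService: safe on a nil receiver,
// constructs on first use, caches on the receiver when one exists.
func (s *server) serviceOrNew() *service {
	if s != nil && s.svc != nil {
		return s.svc
	}
	svc := &service{name: "default"}
	if s != nil {
		s.svc = svc
	}
	return svc
}

func main() {
	var nilSrv *server
	fmt.Println(nilSrv.serviceOrNew().name) // works even on a nil receiver

	srv := &server{}
	a := srv.serviceOrNew()
	b := srv.serviceOrNew()
	fmt.Println(a == b) // cached after the first call
}
```

Calling a pointer-receiver method on a nil `*server` is legal in Go as long as the body guards the dereference, which is exactly what the `s != nil` checks do.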
@@ -210,7 +210,6 @@ main {
  pointer-events: none;
}

-#api-check-btn,
#api-connect-btn,
#api-power-on-collect-btn,
#api-collect-off-btn,
@@ -229,7 +228,6 @@ main {
  transition: background-color 0.2s ease, opacity 0.2s ease;
}

-#api-check-btn:hover,
#api-connect-btn:hover,
#api-power-on-collect-btn:hover,
#api-collect-off-btn:hover,
@@ -242,7 +240,6 @@ main {

#convert-run-btn:disabled,
#convert-folder-btn:disabled,
-#api-check-btn:disabled,
#api-connect-btn:disabled,
#api-power-on-collect-btn:disabled,
#api-collect-off-btn:disabled,
@@ -252,6 +249,127 @@ main {
  cursor: not-allowed;
}

+#api-collect-btn {
+  background: #1f8f4c;
+  color: #fff;
+  border: none;
+  border-radius: 6px;
+  padding: 0.6rem 1.25rem;
+  font-size: 0.95rem;
+  font-weight: 600;
+  cursor: pointer;
+  transition: background-color 0.2s ease, opacity 0.2s ease;
+}
+
+#api-collect-btn:hover {
+  background: #176e3a;
+}
+
+#api-collect-btn:disabled {
+  opacity: 0.6;
+  cursor: not-allowed;
+}
+
+.api-probe-options {
+  margin-top: 0.9rem;
+  display: flex;
+  flex-direction: column;
+  gap: 0.45rem;
+}
+
+.api-form-checkbox {
+  display: flex;
+  align-items: center;
+  gap: 0.45rem;
+  font-size: 0.9rem;
+  cursor: pointer;
+  user-select: none;
+}
+
+.api-form-checkbox input[type="checkbox"] {
+  width: 1rem;
+  height: 1rem;
+  cursor: pointer;
+}
+
+.api-form-checkbox input[type="checkbox"]:disabled {
+  cursor: not-allowed;
+  opacity: 0.6;
+}
+
+.api-form-checkbox span {
+  color: #444;
+}
+
+.api-form-checkbox-sub {
+  padding-left: 0.25rem;
+  opacity: 0.8;
+}
+
+.api-probe-options-separator {
+  margin: 0.5rem 0;
+  border-top: 1px solid #e2e8f0;
+}
+
+.api-confirm-modal-backdrop {
+  position: fixed;
+  inset: 0;
+  background: rgba(0, 0, 0, 0.45);
+  display: flex;
+  align-items: center;
+  justify-content: center;
+  z-index: 1000;
+}
+
+.api-confirm-modal {
+  background: #fff;
+  border-radius: 10px;
+  padding: 1.5rem 1.75rem;
+  max-width: 380px;
+  width: 90%;
+  box-shadow: 0 8px 32px rgba(0,0,0,0.18);
+}
+
+.api-confirm-modal p {
+  margin-bottom: 1.1rem;
+  font-size: 0.95rem;
+  color: #333;
+  line-height: 1.5;
+}
+
+.api-confirm-modal-actions {
+  display: flex;
+  gap: 0.6rem;
+  justify-content: flex-end;
+}
+
+.api-confirm-modal-actions button {
+  border: none;
+  border-radius: 6px;
+  padding: 0.5rem 1rem;
+  font-size: 0.9rem;
+  font-weight: 600;
+  cursor: pointer;
+}
+
+.api-confirm-modal-actions .btn-cancel {
+  background: #e2e8f0;
+  color: #333;
+}
+
+.api-confirm-modal-actions .btn-cancel:hover {
+  background: #cbd5e1;
+}
+
+.api-confirm-modal-actions .btn-confirm {
+  background: #dc3545;
+  color: #fff;
+}
+
+.api-confirm-modal-actions .btn-confirm:hover {
+  background: #b02a37;
+}
+
.api-connect-status {
  margin-top: 0.75rem;
  font-size: 0.85rem;
@@ -357,6 +475,82 @@ main {
  transition: width 0.25s ease;
}

+.job-active-modules {
+  margin-bottom: 0.85rem;
+}
+
+.job-module-chips {
+  display: flex;
+  flex-wrap: wrap;
+  gap: 0.45rem;
+  margin-top: 0.35rem;
+}
+
+.job-module-chip {
+  display: inline-flex;
+  align-items: center;
+  gap: 0.4rem;
+  background: #eef6ff;
+  border: 1px solid #bfdcff;
+  border-radius: 999px;
+  padding: 0.32rem 0.68rem;
+  line-height: 1;
+}
+
+.job-module-chip-name {
+  font-size: 0.82rem;
+  color: #1f2937;
+  font-weight: 600;
+}
+
+.job-module-chip-score {
+  font-size: 0.72rem;
+  color: #1d4ed8;
+  background: #dbeafe;
+  border: 1px solid #bfdbfe;
+  border-radius: 999px;
+  padding: 0.1rem 0.38rem;
+  font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace;
+}
+
+.job-debug-info {
+  margin-bottom: 0.85rem;
+  border: 1px solid #dbe5f0;
+  background: #f8fbff;
+  border-radius: 8px;
+  padding: 0.75rem;
+}
+
+.job-debug-summary {
+  font-size: 0.82rem;
+  color: #334155;
+  margin-top: 0.35rem;
+}
+
+.job-phase-telemetry {
+  margin-top: 0.55rem;
+  display: grid;
+  gap: 0.35rem;
+}
+
+.job-phase-row {
+  display: grid;
+  grid-template-columns: minmax(120px, 180px) repeat(4, minmax(60px, auto));
+  gap: 0.5rem;
+  align-items: center;
+  font-size: 0.8rem;
+}
+
+.job-phase-name {
+  color: #0f172a;
+  font-weight: 600;
+}
+
+.job-phase-metric {
+  color: #475569;
+  font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, monospace;
+}
+
.meta-label {
  color: #64748b;
  font-weight: 600;
@@ -91,14 +91,18 @@ function initApiSource() {
  }

  const cancelJobButton = document.getElementById('cancel-job-btn');
-  const checkButton = document.getElementById('api-check-btn');
-  const collectOffButton = document.getElementById('api-collect-off-btn');
-  const powerOnCollectButton = document.getElementById('api-power-on-collect-btn');
+  const connectButton = document.getElementById('api-connect-btn');
+  const collectButton = document.getElementById('api-collect-btn');
+  const powerOffCheckbox = document.getElementById('api-power-off');
  const fieldNames = ['host', 'port', 'username', 'password'];

  apiForm.addEventListener('submit', (event) => {
    event.preventDefault();
-    startCollectionFromCurrentProbe(false);
+    if (apiProbeResult && apiProbeResult.reachable) {
+      startCollectionWithOptions();
+    } else {
+      startApiProbe();
+    }
  });

  if (cancelJobButton) {
@@ -106,21 +110,29 @@ function initApiSource() {
      cancelCollectionJob();
    });
  }
-  if (checkButton) {
-    checkButton.addEventListener('click', () => {
+  if (connectButton) {
+    connectButton.addEventListener('click', () => {
      startApiProbe();
    });
  }
-  if (collectOffButton) {
-    collectOffButton.addEventListener('click', () => {
-      clearApiPowerDecisionTimer();
-      startCollectionFromCurrentProbe(false);
+  if (collectButton) {
+    collectButton.addEventListener('click', () => {
+      startCollectionWithOptions();
    });
  }
-  if (powerOnCollectButton) {
-    powerOnCollectButton.addEventListener('click', () => {
-      clearApiPowerDecisionTimer();
-      startCollectionFromCurrentProbe(true);
+  if (powerOffCheckbox) {
+    powerOffCheckbox.addEventListener('change', () => {
+      if (!powerOffCheckbox.checked) {
+        return;
+      }
+      // If host was already on when probed, warn before enabling shutdown
+      if (apiProbeResult && apiProbeResult.host_powered_on) {
+        showConfirmModal(
+          'Хост был включён до начала сбора. Вы уверены, что хотите выключить его после завершения сбора?',
+          () => { /* confirmed — leave checked */ },
+          () => { powerOffCheckbox.checked = false; }
+        );
+      }
    });
  }
@@ -151,11 +163,42 @@ function initApiSource() {
  renderCollectionJob();
}

+function showConfirmModal(message, onConfirm, onCancel) {
+  const backdrop = document.createElement('div');
+  backdrop.className = 'api-confirm-modal-backdrop';
+  backdrop.innerHTML = `
+    <div class="api-confirm-modal" role="dialog" aria-modal="true">
+      <p>${escapeHtml(message)}</p>
+      <div class="api-confirm-modal-actions">
+        <button class="btn-cancel">Отмена</button>
+        <button class="btn-confirm">Да, выключить</button>
+      </div>
+    </div>
+  `;
+  document.body.appendChild(backdrop);
+
+  const close = () => document.body.removeChild(backdrop);
+  backdrop.querySelector('.btn-cancel').addEventListener('click', () => {
+    close();
+    if (onCancel) onCancel();
+  });
+  backdrop.querySelector('.btn-confirm').addEventListener('click', () => {
+    close();
+    if (onConfirm) onConfirm();
+  });
+  backdrop.addEventListener('click', (e) => {
+    if (e.target === backdrop) {
+      close();
+      if (onCancel) onCancel();
+    }
+  });
+}
+
function startApiProbe() {
  const { isValid, payload, errors } = validateCollectForm();
  renderFormErrors(errors);
  if (!isValid) {
-    renderApiConnectStatus(false, null);
+    renderApiConnectStatus(false);
    resetApiProbeState();
    return;
  }
@@ -163,7 +206,7 @@
  apiConnectPayload = payload;
  resetApiProbeState();
  setApiFormBlocked(true);
-  renderApiConnectStatus(true, { ...payload, password: '***' });
+  renderApiConnectStatus(true);

  fetch('/api/collect/probe', {
    method: 'POST',
@@ -181,7 +224,7 @@
    })
    .catch((err) => {
      resetApiProbeState();
-      renderApiConnectStatus(false, null);
+      renderApiConnectStatus(false);
      const status = document.getElementById('api-connect-status');
      if (status) {
        status.textContent = err.message || 'Проверка подключения не удалась';
@@ -195,12 +238,11 @@
  });
}

-function startCollectionFromCurrentProbe(powerOnIfHostOff) {
+function startCollectionWithOptions() {
  const { isValid, payload, errors } = validateCollectForm();
  renderFormErrors(errors);
  if (!isValid) {
-    renderApiConnectStatus(false, null);
    resetApiProbeState();
+    renderApiConnectStatus(false);
    return;
  }

@@ -213,71 +255,78 @@ function startCollectionFromCurrentProbe(powerOnIfHostOff) {
    return;
  }

-  clearApiPowerDecisionTimer();
-  payload.power_on_if_host_off = Boolean(powerOnIfHostOff);
+  const powerOnCheckbox = document.getElementById('api-power-on');
+  const powerOffCheckbox = document.getElementById('api-power-off');
+  const debugPayloads = document.getElementById('api-debug-payloads');
+  payload.power_on_if_host_off = powerOnCheckbox ? powerOnCheckbox.checked : false;
+  payload.stop_host_after_collect = powerOffCheckbox ? powerOffCheckbox.checked : false;
+  payload.debug_payloads = debugPayloads ? debugPayloads.checked : false;
  startCollectionJob(payload);
}

function renderApiProbeState() {
-  const collectButton = document.getElementById('api-connect-btn');
+  const connectButton = document.getElementById('api-connect-btn');
+  const probeOptions = document.getElementById('api-probe-options');
  const status = document.getElementById('api-connect-status');
-  const decision = document.getElementById('api-power-decision');
-  const decisionText = document.getElementById('api-power-decision-text');
-  if (!collectButton || !status || !decision || !decisionText) {
+  const powerOnCheckbox = document.getElementById('api-power-on');
+  const powerOffCheckbox = document.getElementById('api-power-off');
+  if (!connectButton || !probeOptions || !status) {
    return;
  }

-  decision.classList.add('hidden');
-  clearApiPowerDecisionTimer();
-  collectButton.disabled = !apiProbeResult || !apiProbeResult.reachable;
-
  if (!apiProbeResult || !apiProbeResult.reachable) {
    status.textContent = 'Проверка подключения не пройдена.';
    status.className = 'api-connect-status error';
+    probeOptions.classList.add('hidden');
+    connectButton.textContent = 'Подключиться';
    return;
  }

-  if (apiProbeResult.host_powered_on) {
-    status.textContent = apiProbeResult.message || 'Связь с BMC есть, host включен.';
+  const hostOn = apiProbeResult.host_powered_on;
+  const powerControlAvailable = apiProbeResult.power_control_available;
+
+  if (hostOn) {
+    status.textContent = apiProbeResult.message || 'Связь с BMC есть, host включён.';
    status.className = 'api-connect-status success';
-    collectButton.disabled = false;
-    return;
+  } else {
+    status.textContent = apiProbeResult.message || 'Связь с BMC есть, host выключен.';
+    status.className = 'api-connect-status warning';
  }

-  status.textContent = apiProbeResult.message || 'Связь с BMC есть, host выключен.';
-  status.className = 'api-connect-status warning';
-  if (!apiProbeResult.power_control_available) {
-    collectButton.disabled = false;
-    return;
-  }
+  probeOptions.classList.remove('hidden');

-  decision.classList.remove('hidden');
-  let secondsLeft = 5;
-  const updateDecisionText = () => {
-    decisionText.textContent = `Если не выбрать действие, сбор начнется без включения через ${secondsLeft} сек.`;
-  };
-  updateDecisionText();
-  apiPowerDecisionTimer = window.setInterval(() => {
-    secondsLeft -= 1;
-    if (secondsLeft <= 0) {
-      clearApiPowerDecisionTimer();
-      startCollectionFromCurrentProbe(false);
-      return;
+  // "Включить" checkbox
+  if (powerOnCheckbox) {
+    if (hostOn) {
+      // Host already on — checkbox is checked and disabled
+      powerOnCheckbox.checked = true;
+      powerOnCheckbox.disabled = true;
+    } else {
+      // Host off — default: checked (will power on), enabled
+      powerOnCheckbox.checked = true;
+      powerOnCheckbox.disabled = !powerControlAvailable;
    }
-    updateDecisionText();
-  }, 1000);
  }

+  // "Выключить" checkbox — default: unchecked
+  if (powerOffCheckbox) {
+    powerOffCheckbox.checked = false;
+    powerOffCheckbox.disabled = !powerControlAvailable;
+  }
+
+  connectButton.textContent = 'Переподключиться';
}

function resetApiProbeState() {
  apiProbeResult = null;
  clearApiPowerDecisionTimer();
-  const collectButton = document.getElementById('api-connect-btn');
-  const decision = document.getElementById('api-power-decision');
-  if (collectButton) {
-    collectButton.disabled = true;
+  const connectButton = document.getElementById('api-connect-btn');
+  const probeOptions = document.getElementById('api-probe-options');
+  if (connectButton) {
+    connectButton.textContent = 'Подключиться';
  }
-  if (decision) {
-    decision.classList.add('hidden');
+  if (probeOptions) {
+    probeOptions.classList.add('hidden');
  }
}

@@ -368,7 +417,7 @@ function renderFormErrors(errors) {
  summary.innerHTML = `<strong>Исправьте ошибки в форме:</strong><ul>${messages.map(msg => `<li>${escapeHtml(msg)}</li>`).join('')}</ul>`;
}

-function renderApiConnectStatus(isValid, payload) {
+function renderApiConnectStatus(isValid) {
  const status = document.getElementById('api-connect-status');
  if (!status) {
    return;
@@ -380,16 +429,8 @@ function renderApiConnectStatus(isValid, payload) {
    return;
  }

-  const payloadPreview = { ...payload };
-  if (payloadPreview.password) {
-    payloadPreview.password = '***';
-  }
-  if (payloadPreview.token) {
-    payloadPreview.token = '***';
-  }
-
-  status.textContent = `Payload сформирован: ${JSON.stringify(payloadPreview)}`;
-  status.className = 'api-connect-status success';
+  status.textContent = 'Подключение...';
+  status.className = 'api-connect-status info';
}

function clearApiConnectStatus() {
@@ -422,7 +463,12 @@ function startCollectionJob(payload) {
    id: body.job_id,
    status: normalizeJobStatus(body.status || 'queued'),
    progress: 0,
+    currentPhase: '',
+    etaSeconds: null,
    logs: [],
+    activeModules: [],
+    moduleScores: [],
+    debugInfo: null,
    payload
  };
  appendJobLog(body.message || 'Задача поставлена в очередь');
@@ -435,7 +481,7 @@
  .catch((err) => {
    setApiFormBlocked(false);
    clearApiConnectStatus();
-    renderApiConnectStatus(false, null);
+    renderApiConnectStatus(false);
    const status = document.getElementById('api-connect-status');
    if (status) {
      status.textContent = err.message || 'Ошибка запуска задачи';
@@ -460,7 +506,12 @@ function pollCollectionJobStatus() {
  const prevStatus = collectionJob.status;
  collectionJob.status = normalizeJobStatus(body.status || collectionJob.status);
  collectionJob.progress = Number.isFinite(body.progress) ? body.progress : collectionJob.progress;
+  collectionJob.currentPhase = body.current_phase || collectionJob.currentPhase || '';
+  collectionJob.etaSeconds = Number.isFinite(body.eta_seconds) ? body.eta_seconds : collectionJob.etaSeconds;
  collectionJob.error = body.error || '';
+  collectionJob.activeModules = Array.isArray(body.active_modules) ? body.active_modules : collectionJob.activeModules;
+  collectionJob.moduleScores = Array.isArray(body.module_scores) ? body.module_scores : collectionJob.moduleScores;
+  collectionJob.debugInfo = body.debug_info || collectionJob.debugInfo || null;
  syncServerLogs(body.logs);
  renderCollectionJob();

@@ -513,14 +564,61 @@ function appendJobLog(message) {
    return;
  }

-  const time = new Date().toLocaleTimeString('ru-RU', { hour12: false });
+  const parsed = parseServerLogLine(message);
+  if (isCollectLogNoise(parsed.message)) {
+    // Still count toward log length so syncServerLogs offset stays correct,
+    // but mark as hidden so renderCollectionJob skips it.
+    collectionJob.logs.push({
+      id: ++collectionJobLogCounter,
+      time: parsed.time || new Date().toLocaleTimeString('ru-RU', { hour12: false }),
+      message: parsed.message,
+      hidden: true
+    });
+    return;
+  }
+
  collectionJob.logs.push({
    id: ++collectionJobLogCounter,
-    time,
-    message
+    time: parsed.time || new Date().toLocaleTimeString('ru-RU', { hour12: false }),
+    message: humanizeCollectLogMessage(parsed.message)
  });
}

+// Transform technical log messages into human-readable form for the UI.
+// The original messages are preserved in collect.log / raw_export.
+function humanizeCollectLogMessage(msg) {
+  // "Redfish snapshot: документов=520, ETA≈16s, корни=Chassis(294), Systems(114), последний=/redfish/v1/..."
+  // → "Snapshot: /Chassis/Self/PCIeDevices/00_34_04"
+  let m = msg.match(/snapshot:\s+документов=\d+[^,]*,.*последний=(\S+)/i);
+  if (m) {
+    const path = m[1].replace(/^\.\.\./, '').replace(/^\/redfish\/v1/, '') || m[1];
+    return `Snapshot: ${path}`;
+  }
+
+  // "Redfish snapshot: собрано N документов"
+  m = msg.match(/snapshot:\s+собрано\s+(\d+)\s+документов/i);
+  if (m) {
+    return `Snapshot: итого ${m[1]} документов`;
+  }
+
+  // "Redfish: plan-B завершен за 30s (targets=18, recovered=0)"
+  m = msg.match(/plan-B завершен за ([^(]+)\(targets=(\d+),\s*recovered=(\d+)\)/i);
+  if (m) {
+    const recovered = parseInt(m[3], 10);
+    const suffix = recovered > 0 ? `, восстановлено ${m[3]}` : '';
+    return `Plan-B: завершен за ${m[1].trim()}${suffix}`;
+  }
+
+  // "Redfish: prefetch критичных endpoint (адаптивно 9/72)..."
+  m = msg.match(/prefetch критичных endpoint[^(]*\(([^)]+)\)/i);
+  if (m) {
+    return `Prefetch критичных endpoint (${m[1]})`;
+  }
+
+  // Strip "Redfish: " / "Redfish snapshot: " prefix — redundant in context
+  return msg.replace(/^Redfish(?:\s+snapshot)?:\s+/i, '');
+}
+
function renderCollectionJob() {
  const jobStatusBlock = document.getElementById('api-job-status');
  const jobIdValue = document.getElementById('job-id-value');
@@ -528,9 +626,14 @@ function renderCollectionJob() {
  const progressValue = document.getElementById('job-progress-value');
  const etaValue = document.getElementById('job-eta-value');
  const progressBar = document.getElementById('job-progress-bar');
  const activeModulesBlock = document.getElementById('job-active-modules');
  const activeModulesList = document.getElementById('job-active-modules-list');
  const debugInfoBlock = document.getElementById('job-debug-info');
  const debugSummary = document.getElementById('job-debug-summary');
  const phaseTelemetryNode = document.getElementById('job-phase-telemetry');
  const logsList = document.getElementById('job-logs-list');
  const cancelButton = document.getElementById('cancel-job-btn');
  if (!jobStatusBlock || !jobIdValue || !statusValue || !progressValue || !etaValue || !progressBar || !logsList || !cancelButton) {
  if (!jobStatusBlock || !jobIdValue || !statusValue || !progressValue || !etaValue || !progressBar || !activeModulesBlock || !activeModulesList || !debugInfoBlock || !debugSummary || !phaseTelemetryNode || !logsList || !cancelButton) {
    return;
  }
@@ -558,16 +661,23 @@ function renderCollectionJob() {
  etaValue.textContent = eta;
  progressBar.style.width = `${progressPercent}%`;
  progressBar.textContent = `${progressPercent}%`;
  renderJobActiveModules(activeModulesBlock, activeModulesList);
  renderJobDebugInfo(debugInfoBlock, debugSummary, phaseTelemetryNode);

  logsList.innerHTML = [...collectionJob.logs].reverse().map((log) => (
    `<li><span class="log-time">${escapeHtml(log.time)}</span><span class="log-message">${escapeHtml(log.message)}</span></li>`
  )).join('');
  logsList.innerHTML = [...collectionJob.logs].reverse()
    .filter((log) => !log.hidden)
    .map((log) => (
      `<li><span class="log-time">${escapeHtml(log.time)}</span><span class="log-message">${escapeHtml(log.message)}</span></li>`
    )).join('');

  cancelButton.disabled = isTerminal;
  setApiFormBlocked(!isTerminal);
}

function latestCollectionActivityMessage() {
  if (collectionJob && collectionJob.currentPhase) {
    return humanizeCollectionPhase(collectionJob.currentPhase);
  }
  if (!collectionJob || !Array.isArray(collectionJob.logs) || collectionJob.logs.length === 0) {
    return 'Сбор данных...';
  }
@@ -584,6 +694,9 @@ function latestCollectionActivityMessage() {
}

function latestCollectionETA() {
  if (collectionJob && Number.isFinite(collectionJob.etaSeconds) && collectionJob.etaSeconds > 0) {
    return formatDurationSeconds(collectionJob.etaSeconds);
  }
  if (!collectionJob || !Array.isArray(collectionJob.logs) || collectionJob.logs.length === 0) {
    return '-';
  }
@@ -645,10 +758,130 @@ function syncServerLogs(logs) {
  }
}

// Patterns for log lines that are internal debug noise and should not be shown in the UI.
const _collectLogNoisePatterns = [
  /plan-B \(\d+\/\d+/, // individual plan-B step lines
  /plan-B топ веток/,
  /snapshot: heartbeat/,
  /snapshot: post-probe коллекций \(/,
  /snapshot: топ веток/,
  /prefetch завершен/,
  /cooldown перед повторным добором/,
  /Redfish telemetry:/,
  /redfish-postprobe-metrics:/,
  /redfish-prefetch-metrics:/,
  /redfish-collect:/,
  /redfish-profile-plan:/,
  /redfish replay:/,
];

function isCollectLogNoise(message) {
  return _collectLogNoisePatterns.some((re) => re.test(message));
}

// Strip the server-side RFC3339Nano timestamp prefix from a log line and return {time, message}.
function parseServerLogLine(raw) {
  const m = String(raw).match(/^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z)\s+(.*)/s);
  if (!m) {
    return { time: null, message: String(raw).trim() };
  }
  const d = new Date(m[1]);
  const time = isNaN(d) ? null : d.toLocaleTimeString('ru-RU', { hour12: false });
  return { time, message: m[2].trim() };
}

function normalizeJobStatus(status) {
  return String(status || '').trim().toLowerCase();
}

function humanizeCollectionPhase(phase) {
  const value = String(phase || '').trim().toLowerCase();
  return {
    discovery: 'Discovery',
    snapshot: 'Snapshot',
    snapshot_postprobe_nvme: 'Snapshot NVMe post-probe',
    snapshot_postprobe_collections: 'Snapshot collection post-probe',
    prefetch: 'Prefetch critical endpoints',
    critical_plan_b: 'Critical plan-B',
    profile_plan_b: 'Profile plan-B'
  }[value] || value || 'Сбор данных...';
}

function formatDurationSeconds(totalSeconds) {
  const seconds = Math.max(0, Math.round(Number(totalSeconds) || 0));
  if (seconds <= 0) {
    return '-';
  }
  const minutes = Math.floor(seconds / 60);
  const remainingSeconds = seconds % 60;
  if (minutes === 0) {
    return `${remainingSeconds}s`;
  }
  if (remainingSeconds === 0) {
    return `${minutes}m`;
  }
  return `${minutes}m ${remainingSeconds}s`;
}

function renderJobActiveModules(activeModulesBlock, activeModulesList) {
  const activeModules = collectionJob && Array.isArray(collectionJob.activeModules) ? collectionJob.activeModules : [];
  if (activeModules.length === 0) {
    activeModulesBlock.classList.add('hidden');
    activeModulesList.innerHTML = '';
    return;
  }

  activeModulesBlock.classList.remove('hidden');
  activeModulesList.innerHTML = activeModules.map((module) => {
    const score = Number.isFinite(module.score) ? module.score : 0;
    return `<span class="job-module-chip" title="${escapeHtml(moduleTitle(module))}">
      <span class="job-module-chip-name">${escapeHtml(module.name || '-')}</span>
      <span class="job-module-chip-score">${escapeHtml(String(score))}</span>
    </span>`;
  }).join('');
}

function renderJobDebugInfo(debugInfoBlock, debugSummary, phaseTelemetryNode) {
  const debug = collectionJob && collectionJob.debugInfo ? collectionJob.debugInfo : null;
  if (!debug) {
    debugInfoBlock.classList.add('hidden');
    debugSummary.innerHTML = '';
    phaseTelemetryNode.innerHTML = '';
    return;
  }

  debugInfoBlock.classList.remove('hidden');
  const throttled = debug.adaptive_throttled ? 'on' : 'off';
  const prefetchEnabled = typeof debug.prefetch_enabled === 'boolean' ? String(debug.prefetch_enabled) : 'auto';
  debugSummary.innerHTML = `adaptive_throttling=<strong>${escapeHtml(throttled)}</strong>, snapshot_workers=<strong>${escapeHtml(String(debug.snapshot_workers || 0))}</strong>, prefetch_workers=<strong>${escapeHtml(String(debug.prefetch_workers || 0))}</strong>, prefetch_enabled=<strong>${escapeHtml(prefetchEnabled)}</strong>`;

  const phases = Array.isArray(debug.phase_telemetry) ? debug.phase_telemetry : [];
  if (phases.length === 0) {
    phaseTelemetryNode.innerHTML = '';
    return;
  }
  phaseTelemetryNode.innerHTML = phases.map((item) => (
    `<div class="job-phase-row">
      <span class="job-phase-name">${escapeHtml(humanizeCollectionPhase(item.phase || ''))}</span>
      <span class="job-phase-metric">req=${escapeHtml(String(item.requests || 0))}</span>
      <span class="job-phase-metric">err=${escapeHtml(String(item.errors || 0))}</span>
      <span class="job-phase-metric">avg=${escapeHtml(String(item.avg_ms || 0))}ms</span>
      <span class="job-phase-metric">p95=${escapeHtml(String(item.p95_ms || 0))}ms</span>
    </div>`
  )).join('');
}

function moduleTitle(activeModule) {
  const name = String(activeModule && activeModule.name || '').trim();
  const scores = collectionJob && Array.isArray(collectionJob.moduleScores) ? collectionJob.moduleScores : [];
  const full = scores.find((item) => String(item && item.name || '').trim() === name);
  if (!full) {
    return name;
  }
  const state = full.active ? 'active' : 'inactive';
  return `${name}: score=${Number.isFinite(full.score) ? full.score : 0}, priority=${Number.isFinite(full.priority) ? full.priority : 0}, ${state}`;
}

async function loadDataFromStatus() {
  try {
    const response = await fetch('/api/status');
@@ -36,9 +36,9 @@
      <div id="archive-source-content">
        <div class="upload-area" id="drop-zone">
          <p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
          <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
          <input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
          <button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
          <p class="hint">Поддерживаемые форматы: tar.gz, tar, tgz, sds, zip, json, txt, log</p>
          <p class="hint">Поддерживаемые форматы: ahs, tar.gz, tar, tgz, sds, zip, json, txt, log</p>
        </div>
        <div id="upload-status"></div>
        <div id="parsers-info" class="parsers-info"></div>
@@ -76,16 +76,25 @@
        </div>

        <div class="api-form-actions">
          <button id="api-check-btn" type="button">Проверить</button>
          <button id="api-connect-btn" type="submit" disabled>Собрать</button>
          <button id="api-connect-btn" type="button">Подключиться</button>
        </div>
        <div id="api-connect-status" class="api-connect-status"></div>
        <div id="api-power-decision" class="api-connect-status hidden">
          <strong>Host выключен.</strong>
          <p id="api-power-decision-text">Если не выбрать действие, сбор начнется без включения через 5 секунд.</p>
        <div id="api-probe-options" class="api-probe-options hidden">
          <label class="api-form-checkbox" for="api-power-on">
            <input id="api-power-on" name="power_on_if_host_off" type="checkbox">
            <span>Включить перед сбором</span>
          </label>
          <label class="api-form-checkbox" for="api-power-off">
            <input id="api-power-off" name="stop_host_after_collect" type="checkbox">
            <span>Выключить после сбора</span>
          </label>
          <div class="api-probe-options-separator"></div>
          <label class="api-form-checkbox" for="api-debug-payloads">
            <input id="api-debug-payloads" name="debug_payloads" type="checkbox">
            <span>Сбор расширенных метрик для отладки</span>
          </label>
          <div class="api-form-actions">
            <button id="api-power-on-collect-btn" type="button">Включить и собрать</button>
            <button id="api-collect-off-btn" type="button">Собирать выключенный</button>
            <button id="api-collect-btn" type="submit">Собрать</button>
          </div>
        </div>
      </form>
@@ -107,6 +116,15 @@
        <div class="job-progress" aria-label="Прогресс задачи">
          <div id="job-progress-bar" class="job-progress-bar" style="width: 0%">0%</div>
        </div>
        <div id="job-active-modules" class="job-active-modules hidden">
          <p class="meta-label">Активные модули:</p>
          <div id="job-active-modules-list" class="job-module-chips"></div>
        </div>
        <div id="job-debug-info" class="job-debug-info hidden">
          <p class="meta-label">Redfish debug:</p>
          <div id="job-debug-summary" class="job-debug-summary"></div>
          <div id="job-phase-telemetry" class="job-phase-telemetry"></div>
        </div>
        <div class="job-status-logs">
          <p class="meta-label">Журнал шагов:</p>
          <ul id="job-logs-list"></ul>
@@ -156,7 +174,7 @@
      <div class="footer-buttons">
      </div>
      <div class="footer-info">
        <p>Автор: <a href="https://mchus.pro" target="_blank">mchus.pro</a> | <a href="https://git.mchus.pro/mchus/logpile" target="_blank">Git Repository</a></p>
        <p>Автор: <a href="https://mchus.pro" target="_blank">mchus.pro</a> | <a href="https://git.mchus.pro/mchus/logpile" target="_blank">Git Repository</a>{{if .AppVersion}} | v{{.AppVersion}}{{end}}</p>
      </div>
    </footer>