17 Commits

Author SHA1 Message Date
Mikhail Chusavitin
ca457ac72b fix(exporter): propagate iommu_group through PCIe export pipeline
IOMMUGroup was added to models.PCIeDevice but never wired into the
converter — missing from Details in buildDevicesFromLegacy, no field
in ReanimatorPCIe, and convertPCIeFromDevices never read it.

Add IOMMUGroup *int to ReanimatorPCIe, propagate through Details,
add intPtrFromDetailMap helper.
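A minimal sketch of what such a helper could look like, assuming `Details` is a `map[string]any` populated from parsed JSON (the signature is inferred from the commit message, not taken from the codebase):

```go
package main

import "fmt"

// intPtrFromDetailMap reads a numeric detail (e.g. "iommu_group") from a
// generic details map and returns it as *int, or nil when the key is
// absent or not numeric. Hypothetical sketch; field names are assumptions.
func intPtrFromDetailMap(details map[string]any, key string) *int {
	v, ok := details[key]
	if !ok {
		return nil
	}
	switch n := v.(type) {
	case int:
		return &n
	case float64: // JSON numbers decode as float64
		i := int(n)
		return &i
	}
	return nil
}

func main() {
	details := map[string]any{"iommu_group": float64(7)}
	if p := intPtrFromDetailMap(details, "iommu_group"); p != nil {
		fmt.Println(*p)
	}
}
```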

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 16:05:30 +03:00
Mikhail Chusavitin
78d0e26fd0 feat: sync with hardware ingest contract v2.10
- PCIeDevice: add model, firmware, present, iommu_group, telemetry fields
  (temperature_c, power_w, ecc_corrected_total, ecc_uncorrected_total,
  hw_slowdown) — were silently dropped on JSON parse, breaking bee audit display
- buildDevicesFromLegacy: use pcie.Model as fallback (PartNumber > Model >
  Description), copy MACAddresses/Present/Firmware, propagate telemetry into
  Details so convertPCIeFromDevices picks them up
- Storage: add logical_block_size_bytes, physical_block_size_bytes,
  metadata_bytes_per_block (contract v2.10, 2026-04-29) to models, exporter
  struct and converter pipeline
- ReanimatorHardware: add platform_config map[string]any (contract v2.9)
- Update internal/chart submodule to v2.0 (contract 2.10 viewer support:
  event_logs section, platform_config section, storage block size columns)
- Update bible submodule

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-30 15:55:26 +03:00
Mikhail Chusavitin
88e4e8dd49 Trim noisy Lenovo Redfish collection paths 2026-04-30 15:55:26 +03:00
Mikhail Chusavitin
cf9cf5d0cf Improve Lenovo XCC inventory enrichment 2026-04-30 15:55:26 +03:00
aba7a54990 feat(parser): lenovo xcc vroc volume parsing - v1.2
Parse inventory_volume.log: Intel VROC (VMD) RAID volumes including
RAID level, capacity (GiB/TiB support added), status and member drives.
Add Drives []string to StorageVolume model.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-24 17:00:27 +03:00
835df2676c docs: defer generic IPMI live collector
Closes #12
2026-04-22 20:51:58 +03:00
b86d51c921 chore: close completed Redfish profile framework issue
Fixes #14
2026-04-22 20:45:42 +03:00
Mikhail Chusavitin
a82fb227e5 submodule update 2026-04-16 15:33:48 +03:00
c9969fc3da feat(parser): lenovo xcc warnings and redfish logs - v1.1 2026-04-13 20:34:04 +03:00
89b6701f43 feat(parser): add Lenovo XCC mini-log parser 2026-04-13 20:20:37 +03:00
b04877549a feat(collector): add Lenovo XCC profile to skip noisy snapshot paths
Lenovo ThinkSystem SR650 V3 (and similar XCC-based servers) caused
collection runs of 23+ minutes because the BMC exposes two large high-
error-rate subtrees in the snapshot BFS:

  - Chassis/1/Sensors: 315 individual sensor members, 282/315 failing,
    ~3.7s per request → ~19 minutes wasted. These documents are never
    read by any LOGPile parser (thermal/power data comes from aggregate
    Chassis/*/Thermal and Chassis/*/Power endpoints).

  - Chassis/1/Oem/Lenovo: 75 requests (LEDs×47, Slots×26, etc.),
    68/75 failing → 8+ minutes wasted on non-inventory data.

Add a Lenovo profile (matched on SystemManufacturer/OEMNamespace "Lenovo")
that sets SnapshotExcludeContains to block individual sensor documents and
non-inventory Lenovo OEM subtrees from the snapshot BFS queue. Also sets
rate policy thresholds appropriate for XCC BMC latency (p95 often 3-5s).

Add SnapshotExcludeContains []string to AcquisitionTuning and check it
in the snapshot enqueue closure in redfish.go.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 19:29:04 +03:00
8ca173c99b fix(exporter): preserve all HGX GPUs with generic PCIe slot name
Supermicro HGX BMC reports all 8 B200 GPU PCIe devices with Name
"PCIe Device" — a generic label shared by every GPU, not a unique
hardware position. pcieDedupKey used slot as the primary key, so all
8 GPUs collapsed to one entry in the UI (the first, serial 1654925165720).

Add isGenericPCIeSlotName to detect non-positional slot labels and fall
through to serial/BDF for dedup instead, preserving each GPU separately.
Positional slots (#GPU0, SLOT-NIC1, etc.) continue to use slot-first dedup.
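The detection could be as simple as the following sketch; the exact label set is an assumption, not the shipped list:

```go
package main

import (
	"fmt"
	"strings"
)

// isGenericPCIeSlotName reports whether a slot label is a non-positional
// placeholder shared by many devices (e.g. "PCIe Device" on Supermicro
// HGX BMCs), in which case dedup should fall through to serial/BDF.
// Positional labels like "#GPU0" or "SLOT-NIC1" return false.
func isGenericPCIeSlotName(name string) bool {
	switch strings.ToLower(strings.TrimSpace(name)) {
	case "", "pcie device", "pcie slot", "unknown":
		return true
	}
	return false
}

func main() {
	fmt.Println(isGenericPCIeSlotName("PCIe Device")) // generic: dedup by serial/BDF
	fmt.Println(isGenericPCIeSlotName("#GPU0"))       // positional: slot-first dedup
}
```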

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 16:05:49 +03:00
f19a3454fa fix(redfish): gate hgx diagnostic plan-b by debug toggle 2026-04-13 14:45:41 +03:00
Mikhail Chusavitin
becdca1d7e fix(redfish): read PCIeInterface link width for GPU PCIe devices
parseGPUWithSupplementalDocs did not read PCIeInterface from the device
doc, only from function docs. xFusion GPU PCIeCard entries carry link
width/speed in PCIeInterface (LanesInUse/Maxlanes/PCIeType/MaxPCIeType)
so GPU link width was always empty for xFusion servers.

Also apply the xFusion OEM function-level fallback for GPU function docs,
consistent with the NIC and PCIeDevice paths.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 13:35:29 +03:00
Mikhail Chusavitin
e10440ae32 fix(redfish): collect PCIe link width from xFusion servers
xFusion iBMC exposes PCIe link width in two non-standard ways:
- PCIeInterface uses "Maxlanes" (lowercase 'l') instead of "MaxLanes"
- PCIeFunction docs carry width/speed in Oem.xFusion.LinkWidth ("X8"),
  Oem.xFusion.LinkWidthAbility, Oem.xFusion.LinkSpeed, and
  Oem.xFusion.LinkSpeedAbility rather than the standard CurrentLinkWidth int

Add redfishEnrichFromOEMxFusionPCIeLink and parseXFusionLinkWidth helpers,
apply them as fallbacks in NIC and PCIeDevice enrichment paths.
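A sketch of the width-string parsing this implies, assuming inputs like `"X8"` or `"x16"` (behavior on malformed input is an assumption):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseXFusionLinkWidth converts an xFusion OEM link-width string such
// as "X8" into a lane count. Returns false for anything that does not
// match the "X<lanes>" shape.
func parseXFusionLinkWidth(s string) (int, bool) {
	s = strings.TrimSpace(s)
	if len(s) < 2 || (s[0] != 'X' && s[0] != 'x') {
		return 0, false
	}
	n, err := strconv.Atoi(s[1:])
	if err != nil || n <= 0 {
		return 0, false
	}
	return n, true
}

func main() {
	w, ok := parseXFusionLinkWidth("X8")
	fmt.Println(w, ok)
}
```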

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 13:35:29 +03:00
5c2a21aff1 chore: update bible and chart submodules
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-11 12:17:40 +03:00
Mikhail Chusavitin
9df13327aa feat(collect): remove power-on/off, add skip-hung for Redfish collection
Remove power-on and power-off functionality from the Redfish collector;
keep host power-state detection and show a warning in the UI when the
host is powered off before collection starts.

Add a "Пропустить зависшие" (skip hung) button that lets the user abort
stuck Redfish collection phases without losing already-collected data.
Introduces a two-level context model in Collect(): the outer job context
covers the full lifecycle including replay; an inner collectCtx covers
snapshot, prefetch, and plan-B phases only. Closing the skipCh cancels
collectCtx immediately — aborts all in-flight HTTP requests and exits
plan-B loops — then replay runs on whatever rawTree was collected.

Signal path: UI → POST /api/collect/{id}/skip → JobManager.SkipJob()
→ close(skipCh) → goroutine in Collect() → cancelCollect().

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 13:12:38 +03:00
37 changed files with 3600 additions and 3053 deletions

bible

Submodule bible updated: 52444350c1...d2600f1279


@@ -58,6 +58,7 @@ Responses:
Optional request field:
- `power_on_if_host_off`: when `true`, Redfish collection may power on the host before collection if preflight found it powered off
+- `debug_payloads`: when `true`, collector keeps extra diagnostic payloads and enables extended plan-B retries for slow HGX component inventory branches (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`)
### `POST /api/collect/probe`


@@ -27,6 +27,7 @@ Request fields passed from the server:
- credential field (`password` or token)
- `tls_mode`
- optional `power_on_if_host_off`
+- optional `debug_payloads` for extended diagnostics
### Core rule
@@ -35,18 +36,38 @@ If the collector adds a fallback, probe, or normalization rule, replay must mirr
### Preflight and host power
- `Probe()` may be used before collection to verify API connectivity and current host `PowerState`
- if the host is off and the user chose power-on, the collector may issue `ComputerSystem.Reset`
with `ResetType=On`
- power-on attempts are bounded and logged
- after a successful power-on, the collector waits an extra stabilization window, then checks
`PowerState` again and only starts collection if the host is still on
- if the collector powered on the host itself for collection, it must attempt to power it back off
after collection completes
- if the host was already on before collection, the collector must not power it off afterward
- if power-on fails, collection still continues against the powered-off host
- all power-control decisions and attempts must be visible in the collection log so they are
preserved in raw-export bundles
- `Probe()` is used before collection to verify API connectivity and report current host `PowerState`
- if the host is off, the collector logs a warning and proceeds with collection; inventory data may
be incomplete when the host is powered off
- power-on and power-off are not performed by the collector
### Skip hung requests
Redfish collection uses a two-level context model:
- `ctx` — job lifetime context, cancelled only on explicit job cancel
- `collectCtx` — collection phase context, derived from `ctx`; covers snapshot, prefetch, and plan-B
`collectCtx` is cancelled when the user presses "Пропустить зависшие" (skip hung).
On skip, all in-flight HTTP requests in the current phase are aborted immediately via context
cancellation, the crawler and plan-B loops exit, and execution proceeds to the replay phase using
whatever was collected in `rawTree`. The result is partial but valid.
The skip signal travels: UI button → `POST /api/collect/{id}/skip` → `JobManager.SkipJob()`
closes `skipCh` → goroutine in `Collect()` → `cancelCollect()`.
The skip button is visible during `running` state and hidden once the job reaches a terminal state.
### Extended diagnostics toggle
The live collect form exposes a user-facing checkbox for extended diagnostics.
- default collection prioritizes inventory completeness and bounded runtime
- when extended diagnostics is off, heavy HGX component-chassis critical plan-B retries
(`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`) are skipped
- when extended diagnostics is on, those retries are allowed and extra debug payloads are collected
This toggle is intended for operator-driven deep diagnostics on problematic hosts, not for the default path.
### Discovery model
@@ -159,3 +180,10 @@ When changing collection logic:
Status: mock scaffold only.
It remains registered for protocol completeness, but it is not a real collection path.
The project is Redfish-first for live collection:
- Redfish already covers the current product goals for inventory, sensors, and hardware event logs
- the live architecture depends on replayable `raw_payloads.redfish_tree`
- a generic IPMI collector would require a separate raw snapshot and replay contract
IPMI should be reconsidered only as a narrow fallback for real field cases where Redfish is
missing or unreliable for a specific capability such as SEL, FRU, or sensors.


@@ -55,6 +55,7 @@ When `vendor_id` and `device_id` are known but the model name is missing or gene
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
+| `lenovo_xcc` | Lenovo XCC mini-log ZIP archives | JSON inventory + platform event logs |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |
@@ -194,6 +195,7 @@ and `LogDump/` trees.
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
+| Lenovo XCC mini-log | `lenovo_xcc` | Ready | ThinkSystem SR650 V3 XCC mini-log ZIP |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |


@@ -57,6 +57,11 @@ Current behavior:
7. Packages any already-present binaries from `bin/`
8. Generates `SHA256SUMS.txt`
Release tag format:
- project release tags use `vN.M`
- do not create `vN.M.P` tags for LOGPile releases
- release artifacts and `main.version` inherit the exact git tag string
Important limitation:
- `scripts/release.sh` does not run `make build-all` for you
- if you want Linux or additional macOS archives in the release directory, build them before running the script


@@ -1120,3 +1120,81 @@ incomplete for UI and Reanimator consumers.
- System firmware such as BIOS and iBMC versions survives xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI
cases.
---
## ADL-043 — Extended HGX diagnostic plan-B is opt-in from the live collect form
**Date:** 2026-04-13
**Context:** Some Supermicro HGX Redfish targets expose slow or hanging component-chassis inventory
collections during critical plan-B, especially under `Chassis/HGX_*` for `Assembly`,
`Accelerators`, `Drives`, `NetworkAdapters`, and `PCIeDevices`. Default collection should not
block operators on deep diagnostic retries that are useful mainly for troubleshooting.
**Decision:** Keep the normal snapshot/replay path unchanged, but gate those heavy HGX
component-chassis critical plan-B retries behind the existing live-collect `debug_payloads` flag,
presented in the UI as "Сбор расширенных данных для диагностики" (collect extended diagnostic data).
**Consequences:**
- Default live collection skips those heavy diagnostic plan-B retries and reaches replay faster.
- Operators can explicitly opt into the slower diagnostic path when they need deeper collection.
- The same user-facing toggle continues to enable extra debug payload capture for troubleshooting.
---
## ADL-044 — LOGPile project release tags use `vN.M`
**Date:** 2026-04-13
**Context:** The repository accumulated release tags in `vN.M.P` form, while the shared module
versioning contract in `bible/rules/patterns/module-versioning/contract.md` standardizes version
shape as `N.M`. Release tooling reads the git tag verbatim into build metadata and release
artifacts, so inconsistent tag shape leaks directly into packaged versions.
**Decision:** Use `vN.M` for LOGPile project release tags going forward. Do not create new
`vN.M.P` tags for repository releases. Build metadata, release directory names, and release notes
continue to inherit the exact git tag string from `git describe --tags`.
**Consequences:**
- Future project releases have a two-component version string such as `v1.12`.
- Release artifacts and `--version` output stay aligned with the tag shape without extra mapping.
- Existing historical `vN.M.P` tags remain as-is unless explicitly rewritten.
---
## ADL-045 — Generic live IPMI collector is deferred; Redfish remains the only production live path
**Date:** 2026-04-22
**Context:** Sprint issue `#12` proposed a generic IPMI collector for SEL/FRU/sensors. By this
point LOGPile already has a production Redfish pipeline with replayable raw snapshots, profile-
driven acquisition, and normalized event/sensor/inventory extraction. Redfish also already covers
the current product goals better than IPMI for live collection: richer inventory, structured
resource relationships, and vendor log access via `LogServices`, including SEL-style logs on many
implementations.
**Decision:** Do not build a generic live IPMI collector now. Keep `ipmi_mock.go` only as a
protocol placeholder in the registry and UI/API contract. Treat Redfish as the only production
live collection path. Revisit IPMI only if real field evidence shows that a specific target class
cannot provide required data over Redfish. If revisited, prefer a narrow fallback scope such as
`IPMI SEL fallback`, `IPMI FRU fallback`, or `IPMI sensor fallback` rather than a second full
collector architecture.
**Consequences:**
- Issue `#12` is closed as deferred/not planned, not as implemented.
- Live collection architecture stays centered on replayable `raw_payloads.redfish_tree`.
- The codebase avoids introducing a second generic live-ingest/replay contract for IPMI data.
- Future IPMI work must be justified by concrete Redfish gaps on real hardware, not by protocol
symmetry alone.
---
## ADL-046 — The web shell delegates report rendering to `internal/chart`
**Date:** 2026-04-22
**Context:** The frontend had two competing report paths: the embedded `internal/chart` viewer and
an older client-side renderer in `web/static/js/app.js` for config, firmware, sensors, serials,
events, and parse errors. That duplication left dead controls in the shell and made the report
source of truth ambiguous.
**Decision:** The `web/` frontend shell is responsible only for data intake, job control, and
top-level actions. The report itself must be rendered exclusively through `internal/chart`.
Do not keep parallel report sections, filters, or table renderers in shell JavaScript.
**Consequences:**
- The browser UI has a single report rendering path: `/chart/current` inside the embedded viewer.
- Report-level filtering or extra report sections must be implemented in `internal/chart`, not in
`web/static/js/app.js`.
- Removing legacy DOM renderers from the shell is a correctness fix, not a behavior regression.


@@ -8,6 +8,7 @@ import (
"os"
"os/exec"
"runtime"
"strings"
"time"
"git.mchus.pro/mchus/logpile/internal/parser"
@@ -38,10 +39,11 @@ func main() {
server.WebFS = web.FS
cfg := server.Config{
-Port: *port,
-PreloadFile: *file,
-AppVersion: version,
-AppCommit: commit,
+Port: *port,
+PreloadFile: *file,
+AppVersion: version,
+AppCommit: commit,
+ChartVersion: detectChartVersion(),
}
srv := server.New(cfg)
@@ -92,6 +94,15 @@ func openBrowser(url string) {
}
}
func detectChartVersion() string {
cmd := exec.Command("git", "-C", "internal/chart", "describe", "--tags", "--always", "--dirty", "--abbrev=7")
out, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(out))
}
func maybeWaitForCrashInput(enabled bool) {
if !enabled || !isInteractiveConsole() {
return


@@ -112,12 +112,11 @@ func (c *RedfishConnector) Probe(ctx context.Context, req Request) (*ProbeResult
}
powerState := redfishSystemPowerState(systemDoc)
return &ProbeResult{
-Reachable: true,
-Protocol: "redfish",
-HostPowerState: powerState,
-HostPoweredOn: isRedfishHostPoweredOn(powerState),
-PowerControlAvailable: redfishResetActionTarget(systemDoc) != "",
-SystemPath: primarySystem,
+Reachable: true,
+Protocol: "redfish",
+HostPowerState: powerState,
+HostPoweredOn: isRedfishHostPoweredOn(powerState),
+SystemPath: primarySystem,
}, nil
}
@@ -160,17 +159,6 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
systemPaths := c.discoverMemberPaths(discoveryCtx, snapshotClient, req, baseURL, "/redfish/v1/Systems", "/redfish/v1/Systems/1")
primarySystem := firstNonEmptyPath(systemPaths, "/redfish/v1/Systems/1")
-if primarySystem != "" {
-c.ensureHostPowerForCollection(ctx, snapshotClient, req, baseURL, primarySystem, emit)
-}
-defer func() {
-if primarySystem == "" || !req.StopHostAfterCollect {
-return
-}
-shutdownCtx, cancel := context.WithTimeout(context.Background(), 45*time.Second)
-defer cancel()
-c.restoreHostPowerAfterCollection(shutdownCtx, snapshotClient, req, baseURL, primarySystem, emit)
-}()
chassisPaths := c.discoverMemberPaths(discoveryCtx, snapshotClient, req, baseURL, "/redfish/v1/Chassis", "/redfish/v1/Chassis/1")
managerPaths := c.discoverMemberPaths(discoveryCtx, snapshotClient, req, baseURL, "/redfish/v1/Managers", "/redfish/v1/Managers/1")
primaryChassis := firstNonEmptyPath(chassisPaths, "/redfish/v1/Chassis/1")
@@ -269,12 +257,35 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
emit(Progress{Status: "running", Progress: 80, Message: "Redfish: подготовка расширенного snapshot...", CurrentPhase: "snapshot", ETASeconds: acquisitionPlan.Tuning.ETABaseline.SnapshotSeconds})
emit(Progress{Status: "running", Progress: 90, Message: "Redfish: сбор расширенного snapshot...", CurrentPhase: "snapshot", ETASeconds: acquisitionPlan.Tuning.ETABaseline.SnapshotSeconds})
}
// collectCtx covers all data-fetching phases (snapshot, prefetch, plan-B).
// Cancelling it via the skip signal aborts only the collection phases while
// leaving the replay phase intact so results from already-fetched data are preserved.
collectCtx, cancelCollect := context.WithCancel(ctx)
defer cancelCollect()
if req.SkipHungCh != nil {
go func() {
select {
case <-req.SkipHungCh:
if emit != nil {
emit(Progress{
Status: "running",
Progress: 97,
Message: "Redfish: пропуск зависших запросов, анализ уже собранных данных...",
})
}
log.Printf("redfish: skip-hung triggered, cancelling collection phases")
cancelCollect()
case <-ctx.Done():
}
}()
}
c.debugSnapshotf("snapshot crawl start host=%s port=%d", req.Host, req.Port)
-rawTree, fetchErrors, postProbeMetrics, snapshotTimingSummary := c.collectRawRedfishTree(withRedfishTelemetryPhase(ctx, "snapshot"), snapshotClient, req, baseURL, seedPaths, acquisitionPlan.Tuning, emit)
+rawTree, fetchErrors, postProbeMetrics, snapshotTimingSummary := c.collectRawRedfishTree(withRedfishTelemetryPhase(collectCtx, "snapshot"), snapshotClient, req, baseURL, seedPaths, acquisitionPlan.Tuning, emit)
c.debugSnapshotf("snapshot crawl done docs=%d", len(rawTree))
fetchErrMap := redfishFetchErrorListToMap(fetchErrors)
-prefetchedCritical, prefetchMetrics := c.prefetchCriticalRedfishDocs(withRedfishTelemetryPhase(ctx, "prefetch"), prefetchClient, req, baseURL, criticalPaths, rawTree, fetchErrMap, acquisitionPlan.Tuning, emit)
+prefetchedCritical, prefetchMetrics := c.prefetchCriticalRedfishDocs(withRedfishTelemetryPhase(collectCtx, "prefetch"), prefetchClient, req, baseURL, criticalPaths, rawTree, fetchErrMap, acquisitionPlan.Tuning, emit)
for p, doc := range prefetchedCritical {
if _, exists := rawTree[p]; exists {
continue
@@ -295,10 +306,10 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
prefetchMetrics.Duration.Round(time.Millisecond),
firstNonEmpty(prefetchMetrics.SkipReason, "-"),
)
-if recoveredN := c.recoverCriticalRedfishDocsPlanB(withRedfishTelemetryPhase(ctx, "critical_plan_b"), criticalClient, req, baseURL, criticalPaths, rawTree, fetchErrMap, acquisitionPlan.Tuning, emit); recoveredN > 0 {
+if recoveredN := c.recoverCriticalRedfishDocsPlanB(withRedfishTelemetryPhase(collectCtx, "critical_plan_b"), criticalClient, req, baseURL, criticalPaths, rawTree, fetchErrMap, acquisitionPlan.Tuning, emit); recoveredN > 0 {
c.debugSnapshotf("critical plan-b recovered docs=%d", recoveredN)
}
-if recoveredN := c.recoverProfilePlanBDocs(withRedfishTelemetryPhase(ctx, "profile_plan_b"), criticalClient, req, baseURL, acquisitionPlan, rawTree, emit); recoveredN > 0 {
+if recoveredN := c.recoverProfilePlanBDocs(withRedfishTelemetryPhase(collectCtx, "profile_plan_b"), criticalClient, req, baseURL, acquisitionPlan, rawTree, emit); recoveredN > 0 {
c.debugSnapshotf("profile plan-b recovered docs=%d", recoveredN)
}
// Hide transient fetch errors for endpoints that were eventually recovered into rawTree.
@@ -334,8 +345,9 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
"manager_critical_suffixes": acquisitionPlan.ScopedPaths.ManagerCriticalSuffixes,
},
"tuning": map[string]any{
-"snapshot_max_documents": acquisitionPlan.Tuning.SnapshotMaxDocuments,
-"snapshot_workers": acquisitionPlan.Tuning.SnapshotWorkers,
+"snapshot_max_documents": acquisitionPlan.Tuning.SnapshotMaxDocuments,
+"snapshot_workers": acquisitionPlan.Tuning.SnapshotWorkers,
+"snapshot_exclude_contains": acquisitionPlan.Tuning.SnapshotExcludeContains,
"prefetch_workers": acquisitionPlan.Tuning.PrefetchWorkers,
"prefetch_enabled": boolPointerValue(acquisitionPlan.Tuning.PrefetchEnabled),
"nvme_post_probe": boolPointerValue(acquisitionPlan.Tuning.NVMePostProbeEnabled),
@@ -485,231 +497,6 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
return result, nil
}
func (c *RedfishConnector) ensureHostPowerForCollection(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string, emit ProgressFn) (hostOn bool, poweredOnByCollector bool) {
systemDoc, err := c.getJSON(ctx, client, req, baseURL, systemPath)
if err != nil {
if emit != nil {
emit(Progress{Status: "running", Progress: 18, Message: "Redfish: не удалось проверить PowerState host, сбор продолжается без power-control"})
}
return false, false
}
powerState := redfishSystemPowerState(systemDoc)
if isRedfishHostPoweredOn(powerState) {
if emit != nil {
emit(Progress{Status: "running", Progress: 18, Message: fmt.Sprintf("Redfish: host включен (%s)", firstNonEmpty(powerState, "On"))})
}
return true, false
}
if emit != nil {
emit(Progress{Status: "running", Progress: 18, Message: fmt.Sprintf("Redfish: host выключен (%s)", firstNonEmpty(powerState, "Off"))})
}
if !req.PowerOnIfHostOff {
if emit != nil {
emit(Progress{Status: "running", Progress: 19, Message: "Redfish: включение host не запрошено, сбор продолжается на выключенном host"})
}
return false, false
}
// Invalidate all inventory CRC groups before powering on so the BMC accepts
// fresh inventory from the host after boot. Best-effort: failure is logged but
// does not block power-on.
c.invalidateRedfishInventory(ctx, client, req, baseURL, systemPath, emit)
resetTarget := redfishResetActionTarget(systemDoc)
resetType := redfishPickResetType(systemDoc, "On", "ForceOn")
if resetTarget == "" || resetType == "" {
if emit != nil {
emit(Progress{Status: "running", Progress: 19, Message: "Redfish: action ComputerSystem.Reset недоступен, сбор продолжается на выключенном host"})
}
return false, false
}
waitWindows := []time.Duration{5 * time.Second, 10 * time.Second, 30 * time.Second}
for i, waitFor := range waitWindows {
if emit != nil {
emit(Progress{Status: "running", Progress: 19, Message: fmt.Sprintf("Redfish: попытка включения host (%d/%d), ожидание %s", i+1, len(waitWindows), waitFor)})
}
if err := c.postJSON(ctx, client, req, baseURL, resetTarget, map[string]any{"ResetType": resetType}); err != nil {
if emit != nil {
emit(Progress{Status: "running", Progress: 19, Message: fmt.Sprintf("Redfish: включение host не удалось (%v)", err)})
}
continue
}
if c.waitForHostPowerState(ctx, client, req, baseURL, systemPath, true, waitFor) {
if !c.waitForStablePoweredOnHost(ctx, client, req, baseURL, systemPath, emit) {
if emit != nil {
emit(Progress{Status: "running", Progress: 20, Message: "Redfish: host включился, но не подтвердил стабильное состояние; сбор продолжается на выключенном host"})
}
return false, false
}
if emit != nil {
emit(Progress{Status: "running", Progress: 20, Message: "Redfish: host успешно включен и стабилен перед сбором"})
}
return true, true
}
if emit != nil {
emit(Progress{Status: "running", Progress: 20, Message: fmt.Sprintf("Redfish: host не включился за %s", waitFor)})
}
}
if emit != nil {
emit(Progress{Status: "running", Progress: 20, Message: "Redfish: host не удалось включить, сбор продолжается на выключенном host"})
}
return false, false
}
func (c *RedfishConnector) waitForStablePoweredOnHost(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string, emit ProgressFn) bool {
stabilizationDelay := redfishPowerOnStabilizationDelay()
if stabilizationDelay > 0 {
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: host включен, ожидание стабилизации %s перед началом сбора", stabilizationDelay),
})
}
timer := time.NewTimer(stabilizationDelay)
select {
case <-ctx.Done():
timer.Stop()
return false
case <-timer.C:
timer.Stop()
}
}
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: "Redfish: повторная проверка PowerState после стабилизации host",
})
}
if !c.waitForHostPowerState(ctx, client, req, baseURL, systemPath, true, 5*time.Second) {
return false
}
// After the initial stabilization wait, the BMC may still be populating its
// hardware inventory (PCIeDevices, memory summary). Poll readiness with
// increasing back-off (default: +60s, +120s), then warn and proceed.
readinessWaits := redfishBMCReadinessWaits()
for attempt, extraWait := range readinessWaits {
ready, reason := c.isBMCInventoryReady(ctx, client, req, baseURL, systemPath)
if ready {
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: BMC готов (%s)", reason),
})
}
return true
}
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: BMC не готов (%s), ожидание %s (попытка %d/%d)", reason, extraWait, attempt+1, len(readinessWaits)),
})
}
timer := time.NewTimer(extraWait)
select {
case <-ctx.Done():
timer.Stop()
return false
case <-timer.C:
timer.Stop()
}
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: повторная проверка готовности BMC (%d/%d)...", attempt+1, len(readinessWaits)),
})
}
}
ready, reason := c.isBMCInventoryReady(ctx, client, req, baseURL, systemPath)
if !ready {
if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: WARNING — BMC не подтвердил готовность (%s), сбор может быть неполным", reason),
})
}
} else if emit != nil {
emit(Progress{
Status: "running",
Progress: 20,
Message: fmt.Sprintf("Redfish: BMC готов (%s)", reason),
})
}
return true
}
// isBMCInventoryReady checks whether the BMC has finished populating its
// hardware inventory after a power-on. Returns (ready, reason).
// It considers the BMC ready if either the system memory summary reports
// a non-zero total or the PCIeDevices collection is non-empty.
func (c *RedfishConnector) isBMCInventoryReady(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string) (bool, string) {
systemDoc, err := c.getJSON(ctx, client, req, baseURL, systemPath)
if err != nil {
return false, "не удалось прочитать System"
}
if summary, ok := systemDoc["MemorySummary"].(map[string]interface{}); ok {
if asFloat(summary["TotalSystemMemoryGiB"]) > 0 {
return true, "MemorySummary заполнен"
}
}
pcieDoc, err := c.getJSON(ctx, client, req, baseURL, joinPath(systemPath, "/PCIeDevices"))
if err == nil {
if asInt(pcieDoc["Members@odata.count"]) > 0 {
return true, "PCIeDevices не пуст"
}
if members, ok := pcieDoc["Members"].([]interface{}); ok && len(members) > 0 {
return true, "PCIeDevices не пуст"
}
}
return false, "MemorySummary=0, PCIeDevices пуст"
}
func (c *RedfishConnector) restoreHostPowerAfterCollection(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string, emit ProgressFn) {
systemDoc, err := c.getJSON(ctx, client, req, baseURL, systemPath)
if err != nil {
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: "Redfish: не удалось повторно прочитать system перед выключением host"})
}
return
}
resetTarget := redfishResetActionTarget(systemDoc)
resetType := redfishPickResetType(systemDoc, "GracefulShutdown", "ForceOff", "PushPowerButton")
if resetTarget == "" || resetType == "" {
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: "Redfish: выключение host после сбора недоступно"})
}
return
}
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: "Redfish: выключаем host после завершения сбора"})
}
if err := c.postJSON(ctx, client, req, baseURL, resetTarget, map[string]any{"ResetType": resetType}); err != nil {
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: fmt.Sprintf("Redfish: не удалось выключить host после сбора (%v)", err)})
}
return
}
if c.waitForHostPowerState(ctx, client, req, baseURL, systemPath, false, 20*time.Second) {
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: "Redfish: host powered off after collection finished"})
}
return
}
if emit != nil {
emit(Progress{Status: "running", Progress: 100, Message: "Redfish: could not confirm host power-off after collection"})
}
}
// collectDebugPayloads fetches vendor-specific diagnostic endpoints on a best-effort basis.
// Results are stored in rawPayloads["redfish_debug_payloads"] and exported with the bundle.
// Enabled only when Request.DebugPayloads is true.
@@ -724,50 +511,6 @@ func (c *RedfishConnector) collectDebugPayloads(ctx context.Context, client *htt
return out
}
// invalidateRedfishInventory POSTs to the AMI/MSI InventoryCrc endpoint to zero out
// all known CRC groups before a host power-on. This causes the BMC to accept fresh
// inventory from the host after boot, preventing stale inventory (ghost GPUs, wrong
// BIOS version, etc.) from persisting across hardware changes.
// Best-effort: on failure the error is logged and the call returns without aborting collection.
func (c *RedfishConnector) invalidateRedfishInventory(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string, emit ProgressFn) {
crcPath := joinPath(systemPath, "/Oem/Ami/Inventory/Crc")
body := map[string]any{
"GroupCrcList": []map[string]any{
{"CPU": 0},
{"DIMM": 0},
{"PCIE": 0},
},
}
if err := c.postJSON(ctx, client, req, baseURL, crcPath, body); err != nil {
log.Printf("redfish: inventory invalidation skipped (not AMI/MSI or endpoint unavailable): %v", err)
return
}
log.Printf("redfish: inventory CRC groups invalidated at %s before host power-on", crcPath)
if emit != nil {
emit(Progress{Status: "running", Progress: 19, Message: "Redfish: BMC inventory invalidated before host power-on (all CRC groups reset)"})
}
}
func (c *RedfishConnector) waitForHostPowerState(ctx context.Context, client *http.Client, req Request, baseURL, systemPath string, wantOn bool, timeout time.Duration) bool {
deadline := time.Now().Add(timeout)
for {
systemDoc, err := c.getJSON(ctx, client, req, baseURL, systemPath)
if err == nil {
if isRedfishHostPoweredOn(redfishSystemPowerState(systemDoc)) == wantOn {
return true
}
}
if time.Now().After(deadline) {
return false
}
select {
case <-ctx.Done():
return false
case <-time.After(1 * time.Second):
}
}
}
func firstNonEmptyPath(paths []string, fallback string) string {
for _, p := range paths {
if strings.TrimSpace(p) != "" {
@@ -799,49 +542,6 @@ func redfishSystemPowerState(systemDoc map[string]interface{}) string {
return ""
}
func redfishResetActionTarget(systemDoc map[string]interface{}) string {
if systemDoc == nil {
return ""
}
actions, _ := systemDoc["Actions"].(map[string]interface{})
reset, _ := actions["#ComputerSystem.Reset"].(map[string]interface{})
target := strings.TrimSpace(asString(reset["target"]))
if target != "" {
return target
}
odataID := strings.TrimSpace(asString(systemDoc["@odata.id"]))
if odataID == "" {
return ""
}
return joinPath(odataID, "/Actions/ComputerSystem.Reset")
}
func redfishPickResetType(systemDoc map[string]interface{}, preferred ...string) string {
actions, _ := systemDoc["Actions"].(map[string]interface{})
reset, _ := actions["#ComputerSystem.Reset"].(map[string]interface{})
allowedRaw, _ := reset["ResetType@Redfish.AllowableValues"].([]interface{})
if len(allowedRaw) == 0 {
if len(preferred) > 0 {
return preferred[0]
}
return ""
}
allowed := make([]string, 0, len(allowedRaw))
for _, item := range allowedRaw {
if v := strings.TrimSpace(asString(item)); v != "" {
allowed = append(allowed, v)
}
}
for _, want := range preferred {
for _, have := range allowed {
if strings.EqualFold(want, have) {
return have
}
}
}
return ""
}
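The selection step of redfishPickResetType — first preferred value that the BMC advertises, compared case-insensitively — can be shown in isolation. `pickPreferred` is a hypothetical name for this sketch, applied after AllowableValues have already been extracted:

```go
package main

import (
	"fmt"
	"strings"
)

// pickPreferred returns the first entry of preferred that appears in
// allowed, matching case-insensitively and returning the BMC's own
// spelling of the value.
func pickPreferred(allowed, preferred []string) string {
	for _, want := range preferred {
		for _, have := range allowed {
			if strings.EqualFold(want, have) {
				return have
			}
		}
	}
	return ""
}

func main() {
	allowed := []string{"ForceOff", "On", "PushPowerButton"}
	fmt.Println(pickPreferred(allowed, []string{"GracefulShutdown", "ForceOff", "PushPowerButton"}))
	// → ForceOff
}
```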
func (c *RedfishConnector) postJSON(ctx context.Context, client *http.Client, req Request, baseURL, resourcePath string, payload map[string]any) error {
body, err := json.Marshal(payload)
if err != nil {
@@ -1644,6 +1344,11 @@ func (c *RedfishConnector) collectRawRedfishTree(ctx context.Context, client *ht
if !shouldCrawlPath(path) {
return
}
for _, pattern := range tuning.SnapshotExcludeContains {
if pattern != "" && strings.Contains(path, pattern) {
return
}
}
mu.Lock()
if len(seen) >= maxDocuments {
mu.Unlock()
@@ -2597,34 +2302,6 @@ func redfishCriticalSlowGap() time.Duration {
return 1200 * time.Millisecond
}
func redfishPowerOnStabilizationDelay() time.Duration {
if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_POWERON_STABILIZATION")); v != "" {
if d, err := time.ParseDuration(v); err == nil && d >= 0 {
return d
}
}
return 60 * time.Second
}
// redfishBMCReadinessWaits returns the extra wait durations used when polling
// BMC inventory readiness after power-on. Defaults: [60s, 120s].
// Override with LOGPILE_REDFISH_BMC_READY_WAITS (comma-separated durations,
// e.g. "60s,120s").
func redfishBMCReadinessWaits() []time.Duration {
if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_BMC_READY_WAITS")); v != "" {
var out []time.Duration
for _, part := range strings.Split(v, ",") {
if d, err := time.ParseDuration(strings.TrimSpace(part)); err == nil && d >= 0 {
out = append(out, d)
}
}
if len(out) > 0 {
return out
}
}
return []time.Duration{60 * time.Second, 120 * time.Second}
}
func redfishSnapshotMemoryRequestTimeout() time.Duration {
if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_MEMORY_TIMEOUT")); v != "" {
if d, err := time.ParseDuration(v); err == nil && d > 0 {
@@ -3203,11 +2880,16 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
timings := newRedfishPathTimingCollector(4)
var targets []string
seenTargets := make(map[string]struct{})
skippedDiagnosticTargets := 0
addTarget := func(path string) {
path = normalizeRedfishPath(path)
if path == "" {
return
}
if !shouldIncludeCriticalPlanBPath(req, path) {
skippedDiagnosticTargets++
return
}
if _, ok := seenTargets[path]; ok {
return
}
@@ -3293,6 +2975,13 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
return 0
}
if emit != nil {
if skippedDiagnosticTargets > 0 {
emit(Progress{
Status: "running",
Progress: 97,
Message: fmt.Sprintf("Redfish: extended diagnostics disabled, skipped %d heavy diagnostic endpoints", skippedDiagnosticTargets),
})
}
totalETA := redfishCriticalCooldown() + estimatePlanBETA(len(targets))
emit(Progress{
Status: "running",
@@ -3398,6 +3087,39 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
return recovered
}
func shouldIncludeCriticalPlanBPath(req Request, path string) bool {
if req.DebugPayloads {
return true
}
return !isExtendedDiagnosticCriticalPlanBPath(path)
}
func isExtendedDiagnosticCriticalPlanBPath(path string) bool {
path = normalizeRedfishPath(path)
if path == "" {
return false
}
parts := strings.Split(strings.Trim(path, "/"), "/")
if len(parts) < 5 || parts[0] != "redfish" || parts[1] != "v1" || parts[2] != "Chassis" {
return false
}
if !strings.HasPrefix(parts[3], "HGX_") {
return false
}
for _, suffix := range []string{
"/Accelerators",
"/Assembly",
"/Drives",
"/NetworkAdapters",
"/PCIeDevices",
} {
if strings.HasSuffix(path, suffix) {
return true
}
}
return false
}
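The path classification above can be restated as a self-contained function for illustration; `isHGXDiagnosticPath` is a hypothetical standalone twin of isExtendedDiagnosticCriticalPlanBPath, without the package-local normalizeRedfishPath step:

```go
package main

import (
	"fmt"
	"strings"
)

// isHGXDiagnosticPath reports whether a path is a heavy HGX diagnostic
// endpoint: /redfish/v1/Chassis/HGX_*/<suffix> for one of the suffixes
// skipped when extended diagnostics are disabled.
func isHGXDiagnosticPath(path string) bool {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) < 5 || parts[0] != "redfish" || parts[1] != "v1" || parts[2] != "Chassis" {
		return false
	}
	if !strings.HasPrefix(parts[3], "HGX_") {
		return false
	}
	for _, suffix := range []string{"/Accelerators", "/Assembly", "/Drives", "/NetworkAdapters", "/PCIeDevices"} {
		if strings.HasSuffix(path, suffix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isHGXDiagnosticPath("/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices"))
	// → true
	fmt.Println(isHGXDiagnosticPath("/redfish/v1/Chassis/1/PCIeDevices"))
	// → false
}
```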
func (c *RedfishConnector) recoverProfilePlanBDocs(ctx context.Context, client *http.Client, req Request, baseURL string, plan redfishprofile.AcquisitionPlan, rawTree map[string]interface{}, emit ProgressFn) int {
if len(plan.PlanBPaths) == 0 || plan.Mode == redfishprofile.ModeFallback || !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
return 0
@@ -3917,7 +3639,7 @@ func parseNIC(doc map[string]interface{}) models.NetworkAdapter {
}
if pcieIf, ok := ctrl["PCIeInterface"].(map[string]interface{}); ok && linkWidth == 0 && maxLinkWidth == 0 && linkSpeed == "" && maxLinkSpeed == "" {
linkWidth = asInt(pcieIf["LanesInUse"])
maxLinkWidth = asInt(pcieIf["MaxLanes"])
maxLinkWidth = firstNonZeroInt(asInt(pcieIf["MaxLanes"]), asInt(pcieIf["Maxlanes"]))
linkSpeed = firstNonEmpty(asString(pcieIf["PCIeType"]), asString(pcieIf["CurrentLinkSpeedGTs"]), asString(pcieIf["CurrentLinkSpeed"]))
maxLinkSpeed = firstNonEmpty(asString(pcieIf["MaxPCIeType"]), asString(pcieIf["MaxLinkSpeedGTs"]), asString(pcieIf["MaxLinkSpeed"]))
}
@@ -4030,6 +3752,9 @@ func enrichNICFromPCIe(nic *models.NetworkAdapter, pcieDoc map[string]interface{
if strings.TrimSpace(nic.MaxLinkSpeed) == "" {
nic.MaxLinkSpeed = firstNonEmpty(asString(pcieDoc["MaxLinkSpeedGTs"]), asString(pcieDoc["MaxLinkSpeed"]))
}
if nic.LinkWidth == 0 || nic.MaxLinkWidth == 0 || nic.LinkSpeed == "" || nic.MaxLinkSpeed == "" {
redfishEnrichFromOEMxFusionPCIeLink(pcieDoc, &nic.LinkWidth, &nic.MaxLinkWidth, &nic.LinkSpeed, &nic.MaxLinkSpeed)
}
if normalizeRedfishIdentityField(nic.SerialNumber) == "" {
nic.SerialNumber = findFirstNormalizedStringByKeys(pcieDoc, "SerialNumber")
}
@@ -4061,6 +3786,9 @@ func enrichNICFromPCIe(nic *models.NetworkAdapter, pcieDoc map[string]interface{
if strings.TrimSpace(nic.MaxLinkSpeed) == "" {
nic.MaxLinkSpeed = firstNonEmpty(asString(fn["MaxLinkSpeedGTs"]), asString(fn["MaxLinkSpeed"]))
}
if nic.LinkWidth == 0 || nic.MaxLinkWidth == 0 || nic.LinkSpeed == "" || nic.MaxLinkSpeed == "" {
redfishEnrichFromOEMxFusionPCIeLink(fn, &nic.LinkWidth, &nic.MaxLinkWidth, &nic.LinkSpeed, &nic.MaxLinkSpeed)
}
if normalizeRedfishIdentityField(nic.SerialNumber) == "" {
nic.SerialNumber = findFirstNormalizedStringByKeys(fn, "SerialNumber")
}
@@ -4627,6 +4355,21 @@ func parseGPUWithSupplementalDocs(doc map[string]interface{}, functionDocs []map
gpu.DeviceID = asHexOrInt(doc["DeviceId"])
}
if pcieIf, ok := doc["PCIeInterface"].(map[string]interface{}); ok {
if gpu.CurrentLinkWidth == 0 {
gpu.CurrentLinkWidth = asInt(pcieIf["LanesInUse"])
}
if gpu.MaxLinkWidth == 0 {
gpu.MaxLinkWidth = firstNonZeroInt(asInt(pcieIf["MaxLanes"]), asInt(pcieIf["Maxlanes"]))
}
if gpu.CurrentLinkSpeed == "" {
gpu.CurrentLinkSpeed = firstNonEmpty(asString(pcieIf["PCIeType"]), asString(pcieIf["CurrentLinkSpeedGTs"]), asString(pcieIf["CurrentLinkSpeed"]))
}
if gpu.MaxLinkSpeed == "" {
gpu.MaxLinkSpeed = firstNonEmpty(asString(pcieIf["MaxPCIeType"]), asString(pcieIf["MaxLinkSpeedGTs"]), asString(pcieIf["MaxLinkSpeed"]))
}
}
for _, fn := range functionDocs {
if gpu.BDF == "" {
gpu.BDF = sanitizeRedfishBDF(asString(fn["FunctionId"]))
@@ -4649,6 +4392,9 @@ func parseGPUWithSupplementalDocs(doc map[string]interface{}, functionDocs []map
if gpu.CurrentLinkSpeed == "" {
gpu.CurrentLinkSpeed = firstNonEmpty(asString(fn["CurrentLinkSpeedGTs"]), asString(fn["CurrentLinkSpeed"]))
}
if gpu.CurrentLinkWidth == 0 || gpu.MaxLinkWidth == 0 || gpu.CurrentLinkSpeed == "" || gpu.MaxLinkSpeed == "" {
redfishEnrichFromOEMxFusionPCIeLink(fn, &gpu.CurrentLinkWidth, &gpu.MaxLinkWidth, &gpu.CurrentLinkSpeed, &gpu.MaxLinkSpeed)
}
}
if isMissingOrRawPCIModel(gpu.Model) {
@@ -4709,6 +4455,9 @@ func parsePCIeDeviceWithSupplementalDocs(doc map[string]interface{}, functionDoc
if dev.MaxLinkSpeed == "" {
dev.MaxLinkSpeed = firstNonEmpty(asString(fn["MaxLinkSpeedGTs"]), asString(fn["MaxLinkSpeed"]))
}
if dev.LinkWidth == 0 || dev.MaxLinkWidth == 0 || dev.LinkSpeed == "" || dev.MaxLinkSpeed == "" {
redfishEnrichFromOEMxFusionPCIeLink(fn, &dev.LinkWidth, &dev.MaxLinkWidth, &dev.LinkSpeed, &dev.MaxLinkSpeed)
}
}
if dev.DeviceClass == "" || isGenericPCIeClassLabel(dev.DeviceClass) {
dev.DeviceClass = firstNonEmpty(redfishFirstStringAcrossDocs(supplementalDocs, "DeviceType"), dev.DeviceClass)
@@ -4958,6 +4707,59 @@ func buildBDFfromOemPublic(doc map[string]interface{}) string {
return fmt.Sprintf("%04x:%02x:%02x.%x", segment, bus, dev, fn)
}
// redfishEnrichFromOEMxFusionPCIeLink fills in missing PCIe link width/speed
// from the xFusion OEM namespace. xFusion reports link width as a string like
// "X8" in Oem.xFusion.LinkWidth / Oem.xFusion.LinkWidthAbility, and link speed
// as a string like "Gen4 (16.0GT/s)" in Oem.xFusion.LinkSpeed /
// Oem.xFusion.LinkSpeedAbility. These fields appear on PCIeFunction docs.
func redfishEnrichFromOEMxFusionPCIeLink(doc map[string]interface{}, linkWidth, maxLinkWidth *int, linkSpeed, maxLinkSpeed *string) {
oem, _ := doc["Oem"].(map[string]interface{})
if oem == nil {
return
}
xf, _ := oem["xFusion"].(map[string]interface{})
if xf == nil {
return
}
if *linkWidth == 0 {
*linkWidth = parseXFusionLinkWidth(asString(xf["LinkWidth"]))
}
if *maxLinkWidth == 0 {
*maxLinkWidth = parseXFusionLinkWidth(asString(xf["LinkWidthAbility"]))
}
if strings.TrimSpace(*linkSpeed) == "" {
*linkSpeed = strings.TrimSpace(asString(xf["LinkSpeed"]))
}
if strings.TrimSpace(*maxLinkSpeed) == "" {
*maxLinkSpeed = strings.TrimSpace(asString(xf["LinkSpeedAbility"]))
}
}
// parseXFusionLinkWidth converts an xFusion link-width string like "X8" or
// "x16" to the integer lane count. Returns 0 for unrecognised values.
func parseXFusionLinkWidth(s string) int {
s = strings.TrimSpace(s)
if s == "" {
return 0
}
s = strings.TrimPrefix(strings.ToUpper(s), "X")
v := asInt(s)
if v <= 0 {
return 0
}
return v
}
// firstNonZeroInt returns the first argument that is non-zero.
func firstNonZeroInt(vals ...int) int {
for _, v := range vals {
if v != 0 {
return v
}
}
return 0
}
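The width-string parsing above can be demonstrated standalone; `linkWidthFromXFusion` is a hypothetical twin of parseXFusionLinkWidth that uses `strconv.Atoi` in place of the package-local asInt helper:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// linkWidthFromXFusion converts xFusion width strings like "X8" or "x16"
// to a lane count, returning 0 for anything unrecognised.
func linkWidthFromXFusion(s string) int {
	s = strings.TrimPrefix(strings.ToUpper(strings.TrimSpace(s)), "X")
	v, err := strconv.Atoi(s)
	if err != nil || v <= 0 {
		return 0
	}
	return v
}

func main() {
	fmt.Println(linkWidthFromXFusion("X8"), linkWidthFromXFusion("x16"), linkWidthFromXFusion("Gen4"))
	// → 8 16 0
}
```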
func normalizeRedfishIdentityField(v string) string {
v = strings.TrimSpace(v)
if v == "" {

View File

@@ -50,11 +50,15 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
}
for _, systemPath := range systemPaths {
collectFrom(joinPath(systemPath, "/LogServices"), isHardwareLogService)
for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, systemPath, "LogServices") {
collectFrom(logServicesPath, isHardwareLogService)
}
}
// Managers hold the IPMI SEL on AMI/MSI BMCs — include only the "SEL" service.
for _, managerPath := range managerPaths {
collectFrom(joinPath(managerPath, "/LogServices"), isManagerSELService)
for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, managerPath, "LogServices") {
collectFrom(logServicesPath, isManagerSELService)
}
}
if len(out) > 0 {
@@ -63,6 +67,42 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
return out
}
func (c *RedfishConnector) redfishLinkedCollectionPaths(
ctx context.Context,
client *http.Client,
req Request,
baseURL, resourcePath, linkKey string,
) []string {
resourcePath = normalizeRedfishPath(resourcePath)
if resourcePath == "" || strings.TrimSpace(linkKey) == "" {
return nil
}
seen := make(map[string]struct{}, 2)
var out []string
add := func(path string) {
path = normalizeRedfishPath(path)
if path == "" {
return
}
if _, ok := seen[path]; ok {
return
}
seen[path] = struct{}{}
out = append(out, path)
}
add(joinPath(resourcePath, "/"+strings.TrimSpace(linkKey)))
resourceDoc, err := c.getJSON(ctx, client, req, baseURL, resourcePath)
if err == nil {
if linked := redfishLinkedPath(resourceDoc, linkKey); linked != "" {
add(linked)
}
}
return out
}
// fetchRedfishLogEntriesWithPaging fetches entries from a LogEntry collection,
// following nextLink pages. Stops early when entries older than cutoff are encountered
// (assumes BMC returns entries newest-first, which is typical).
@@ -182,7 +222,7 @@ func redfishLogServiceEntriesPath(svc map[string]interface{}) string {
// Audit, authentication, and session events are excluded.
func isHardwareLogEntry(entry map[string]interface{}) bool {
entryType := strings.TrimSpace(asString(entry["EntryType"]))
if strings.EqualFold(entryType, "Oem") {
if strings.EqualFold(entryType, "Oem") && !strings.EqualFold(strings.TrimSpace(asString(entry["OemRecordFormat"])), "Lenovo") {
return false
}
@@ -362,6 +402,9 @@ func parseIPMIDumpKV(message string) map[string]string {
// AMI/MSI BMCs often set Severity="OK" on all SEL records regardless of content,
// so we fall back to inferring severity from SensorType when the explicit field is unhelpful.
func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
if redfishLogEntryLooksLikeWarning(entry) {
return models.SeverityWarning
}
// Newer Redfish uses MessageSeverity; older uses Severity.
raw := strings.ToLower(firstNonEmpty(
strings.TrimSpace(asString(entry["MessageSeverity"])),
@@ -380,6 +423,16 @@ func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
}
}
func redfishLogEntryLooksLikeWarning(entry map[string]interface{}) bool {
joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
asString(entry["Message"]),
asString(entry["Name"]),
asString(entry["SensorType"]),
asString(entry["EntryCode"]),
}, " ")))
return strings.Contains(joined, "unqualified dimm")
}
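The substring heuristic above — join the free-text fields, lowercase, and scan for the "unqualified dimm" marker that AMI/MSI BMCs report with Severity="OK" — can be sketched on its own. `looksLikeUnqualifiedDIMM` is a hypothetical name for this standalone version:

```go
package main

import (
	"fmt"
	"strings"
)

// looksLikeUnqualifiedDIMM joins the entry's free-text fields and scans
// them case-insensitively for the unqualified-DIMM marker, the same way
// redfishLogEntryLooksLikeWarning does.
func looksLikeUnqualifiedDIMM(fields ...string) bool {
	joined := strings.ToLower(strings.Join(fields, " "))
	return strings.Contains(joined, "unqualified dimm")
}

func main() {
	fmt.Println(looksLikeUnqualifiedDIMM("System found Unqualified DIMM in slot DIMM A1", "Memory"))
	// → true
	fmt.Println(looksLikeUnqualifiedDIMM("Correctable ECC threshold exceeded", "Memory"))
	// → false
}
```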
// redfishSeverityFromSensorType infers event severity from the IPMI/Redfish SensorType string.
func redfishSeverityFromSensorType(sensorType string) models.Severity {
switch strings.ToLower(sensorType) {

View File

@@ -0,0 +1,125 @@
package collector
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
)
func TestCollectRedfishLogEntries_UsesLinkedManagerLogServicesPath(t *testing.T) {
mux := http.NewServeMux()
register := func(path string, payload interface{}) {
mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(payload)
})
}
register("/redfish/v1/Managers/1", map[string]interface{}{
"Id": "1",
"LogServices": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1/LogServices",
},
})
register("/redfish/v1/Systems/1/LogServices", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL"},
},
})
register("/redfish/v1/Systems/1/LogServices/SEL", map[string]interface{}{
"Id": "SEL",
"Entries": map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries",
},
})
register("/redfish/v1/Systems/1/LogServices/SEL/Entries", map[string]interface{}{
"Members": []map[string]string{
{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries/1"},
},
})
register("/redfish/v1/Systems/1/LogServices/SEL/Entries/1", map[string]interface{}{
"Id": "1",
"Created": time.Now().UTC().Format(time.RFC3339),
"Message": "System found Unqualified DIMM in slot DIMM A1",
"MessageSeverity": "OK",
"SensorType": "Memory",
"EntryType": "Event",
})
ts := httptest.NewServer(mux)
defer ts.Close()
c := NewRedfishConnector()
got := c.collectRedfishLogEntries(context.Background(), ts.Client(), Request{
Host: ts.URL,
Port: 443,
Protocol: "redfish",
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "strict",
}, ts.URL, nil, []string{"/redfish/v1/Managers/1"})
if len(got) != 1 {
t.Fatalf("expected 1 collected log entry, got %d", len(got))
}
if got[0]["Message"] != "System found Unqualified DIMM in slot DIMM A1" {
t.Fatalf("unexpected collected message: %#v", got[0]["Message"])
}
}
func TestParseRedfishLogEntries_UnqualifiedDIMMBecomesWarning(t *testing.T) {
rawPayloads := map[string]any{
"redfish_log_entries": []any{
map[string]any{
"Id": "sel-1",
"Created": "2026-04-13T12:00:00Z",
"Message": "System found Unqualified DIMM in slot DIMM A1",
"MessageSeverity": "OK",
"SensorType": "Memory",
"EntryType": "Event",
},
},
}
events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
if len(events) != 1 {
t.Fatalf("expected 1 event, got %d", len(events))
}
if events[0].Severity != models.SeverityWarning {
t.Fatalf("expected warning severity, got %q", events[0].Severity)
}
if events[0].Description != "System found Unqualified DIMM in slot DIMM A1" {
t.Fatalf("unexpected description: %q", events[0].Description)
}
}
func TestParseRedfishLogEntries_LenovoOEMEntryIsKept(t *testing.T) {
rawPayloads := map[string]any{
"redfish_log_entries": []any{
map[string]any{
"Id": "plat-55",
"Created": "2026-04-13T12:00:00Z",
"Message": "DIMM A1 is unqualified",
"MessageSeverity": "Warning",
"SensorType": "Memory",
"EntryType": "Oem",
"OemRecordFormat": "Lenovo",
"EntryCode": "Assert",
},
},
}
events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
if len(events) != 1 {
t.Fatalf("expected 1 Lenovo OEM event, got %d", len(events))
}
if events[0].Severity != models.SeverityWarning {
t.Fatalf("expected warning severity, got %q", events[0].Severity)
}
}

View File

@@ -0,0 +1,57 @@
package collector
import "testing"
func TestShouldIncludeCriticalPlanBPath(t *testing.T) {
tests := []struct {
name string
req Request
path string
want bool
}{
{
name: "skip hgx erot pcie without extended diagnostics",
req: Request{},
path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
want: false,
},
{
name: "skip hgx chassis assembly without extended diagnostics",
req: Request{},
path: "/redfish/v1/Chassis/HGX_Chassis_0/Assembly",
want: false,
},
{
name: "keep standard chassis inventory without extended diagnostics",
req: Request{},
path: "/redfish/v1/Chassis/1/PCIeDevices",
want: true,
},
{
name: "keep nvme storage backplane drives without extended diagnostics",
req: Request{},
path: "/redfish/v1/Chassis/NVMeSSD.0.Group.0.StorageBackplane/Drives",
want: true,
},
{
name: "keep system processors without extended diagnostics",
req: Request{},
path: "/redfish/v1/Systems/HGX_Baseboard_0/Processors",
want: true,
},
{
name: "include hgx erot pcie when extended diagnostics enabled",
req: Request{DebugPayloads: true},
path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
want: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := shouldIncludeCriticalPlanBPath(tt.req, tt.path); got != tt.want {
t.Fatalf("shouldIncludeCriticalPlanBPath(%q) = %v, want %v", tt.path, got, tt.want)
}
})
}
}

View File

@@ -265,9 +265,6 @@ func TestRedfishConnectorProbe(t *testing.T) {
if got.HostPowerState != "Off" {
t.Fatalf("expected power state Off, got %q", got.HostPowerState)
}
if !got.PowerControlAvailable {
t.Fatalf("expected power control available")
}
}
func TestRedfishConnectorProbe_FallsBackToPowerSummary(t *testing.T) {
@@ -330,225 +327,6 @@ func TestRedfishConnectorProbe_FallsBackToPowerSummary(t *testing.T) {
if got.HostPowerState != "On" {
t.Fatalf("expected power state On, got %q", got.HostPowerState)
}
if !got.PowerControlAvailable {
t.Fatalf("expected power control available")
}
}
func TestEnsureHostPowerForCollection_WaitsForStablePowerOn(t *testing.T) {
t.Setenv("LOGPILE_REDFISH_POWERON_STABILIZATION", "1ms")
t.Setenv("LOGPILE_REDFISH_BMC_READY_WAITS", "1ms,1ms")
powerState := "Off"
resetCalls := 0
mux := http.NewServeMux()
mux.HandleFunc("/redfish/v1/Systems/1", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1",
"PowerState": powerState,
"MemorySummary": map[string]interface{}{
"TotalSystemMemoryGiB": 128,
},
"Actions": map[string]interface{}{
"#ComputerSystem.Reset": map[string]interface{}{
"target": "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
"ResetType@Redfish.AllowableValues": []interface{}{"On"},
},
},
})
})
mux.HandleFunc("/redfish/v1/Systems/1/Actions/ComputerSystem.Reset", func(w http.ResponseWriter, r *http.Request) {
resetCalls++
powerState = "On"
w.WriteHeader(http.StatusOK)
})
ts := httptest.NewTLSServer(mux)
defer ts.Close()
u, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("parse server url: %v", err)
}
port := 443
if u.Port() != "" {
fmt.Sscanf(u.Port(), "%d", &port)
}
c := NewRedfishConnector()
hostOn, changed := c.ensureHostPowerForCollection(context.Background(), c.httpClientWithTimeout(Request{TLSMode: "insecure"}, 5*time.Second), Request{
Host: u.Hostname(),
Protocol: "redfish",
Port: port,
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "insecure",
PowerOnIfHostOff: true,
}, ts.URL, "/redfish/v1/Systems/1", nil)
if !hostOn || !changed {
t.Fatalf("expected stable power-on result, got hostOn=%v changed=%v", hostOn, changed)
}
if resetCalls != 1 {
t.Fatalf("expected one reset call, got %d", resetCalls)
}
}
func TestEnsureHostPowerForCollection_FailsIfHostDoesNotStayOnAfterStabilization(t *testing.T) {
t.Setenv("LOGPILE_REDFISH_POWERON_STABILIZATION", "1ms")
t.Setenv("LOGPILE_REDFISH_BMC_READY_WAITS", "1ms,1ms")
powerState := "Off"
mux := http.NewServeMux()
mux.HandleFunc("/redfish/v1/Systems/1", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
current := powerState
if powerState == "On" {
powerState = "Off"
}
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1",
"PowerState": current,
"Actions": map[string]interface{}{
"#ComputerSystem.Reset": map[string]interface{}{
"target": "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
"ResetType@Redfish.AllowableValues": []interface{}{"On"},
},
},
})
})
mux.HandleFunc("/redfish/v1/Systems/1/Actions/ComputerSystem.Reset", func(w http.ResponseWriter, r *http.Request) {
powerState = "On"
w.WriteHeader(http.StatusOK)
})
ts := httptest.NewTLSServer(mux)
defer ts.Close()
u, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("parse server url: %v", err)
}
port := 443
if u.Port() != "" {
fmt.Sscanf(u.Port(), "%d", &port)
}
c := NewRedfishConnector()
hostOn, changed := c.ensureHostPowerForCollection(context.Background(), c.httpClientWithTimeout(Request{TLSMode: "insecure"}, 5*time.Second), Request{
Host: u.Hostname(),
Protocol: "redfish",
Port: port,
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "insecure",
PowerOnIfHostOff: true,
}, ts.URL, "/redfish/v1/Systems/1", nil)
if hostOn || changed {
t.Fatalf("expected unstable power-on result to fail, got hostOn=%v changed=%v", hostOn, changed)
}
}
func TestEnsureHostPowerForCollection_UsesPowerSummaryState(t *testing.T) {
t.Setenv("LOGPILE_REDFISH_POWERON_STABILIZATION", "1ms")
t.Setenv("LOGPILE_REDFISH_BMC_READY_WAITS", "1ms,1ms")
powerState := "On"
mux := http.NewServeMux()
mux.HandleFunc("/redfish/v1/Systems/1", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1",
"PowerSummary": map[string]interface{}{
"PowerState": powerState,
},
"MemorySummary": map[string]interface{}{
"TotalSystemMemoryGiB": 128,
},
"Actions": map[string]interface{}{
"#ComputerSystem.Reset": map[string]interface{}{
"target": "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
"ResetType@Redfish.AllowableValues": []interface{}{"On"},
},
},
})
})
ts := httptest.NewTLSServer(mux)
defer ts.Close()
u, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("parse server url: %v", err)
}
port := 443
if u.Port() != "" {
fmt.Sscanf(u.Port(), "%d", &port)
}
c := NewRedfishConnector()
hostOn, changed := c.ensureHostPowerForCollection(context.Background(), c.httpClientWithTimeout(Request{TLSMode: "insecure"}, 5*time.Second), Request{
Host: u.Hostname(),
Protocol: "redfish",
Port: port,
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "insecure",
PowerOnIfHostOff: true,
}, ts.URL, "/redfish/v1/Systems/1", nil)
if !hostOn || changed {
t.Fatalf("expected already-on host from PowerSummary, got hostOn=%v changed=%v", hostOn, changed)
}
}
func TestWaitForHostPowerState_UsesPowerSummaryState(t *testing.T) {
powerState := "Off"
mux := http.NewServeMux()
mux.HandleFunc("/redfish/v1/Systems/1", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
current := powerState
if powerState == "Off" {
powerState = "On"
}
_ = json.NewEncoder(w).Encode(map[string]interface{}{
"@odata.id": "/redfish/v1/Systems/1",
"PowerSummary": map[string]interface{}{
"PowerState": current,
},
})
})
ts := httptest.NewTLSServer(mux)
defer ts.Close()
u, err := url.Parse(ts.URL)
if err != nil {
t.Fatalf("parse server url: %v", err)
}
port := 443
if u.Port() != "" {
fmt.Sscanf(u.Port(), "%d", &port)
}
c := NewRedfishConnector()
ok := c.waitForHostPowerState(context.Background(), c.httpClientWithTimeout(Request{TLSMode: "insecure"}, 5*time.Second), Request{
Host: u.Hostname(),
Protocol: "redfish",
Port: port,
Username: "admin",
AuthType: "password",
Password: "secret",
TLSMode: "insecure",
}, ts.URL, "/redfish/v1/Systems/1", true, 3*time.Second)
if !ok {
t.Fatalf("expected waitForHostPowerState to use PowerSummary")
}
}
func TestParsePCIeDeviceSlot_FromNestedRedfishSlotLocation(t *testing.T) {
@@ -1563,6 +1341,48 @@ func TestParseNIC_PrefersControllerSlotLabelAndPCIeInterface(t *testing.T) {
}
}
func TestParseNIC_xFusionMaxlanesAndOEMLinkWidth(t *testing.T) {
// xFusion uses "Maxlanes" (lowercase 'l') in PCIeInterface, not "MaxLanes".
// xFusion also stores per-function link width as Oem.xFusion.LinkWidth = "X8".
nic := parseNIC(map[string]interface{}{
"Id": "OCPCard1",
"Model": "ConnectX-6 Lx",
"Controllers": []interface{}{
map[string]interface{}{
"PCIeInterface": map[string]interface{}{
"LanesInUse": 8,
"Maxlanes": 8, // xFusion uses lowercase 'l'
"PCIeType": "Gen4",
"MaxPCIeType": "Gen4",
},
},
},
})
if nic.LinkWidth != 8 || nic.MaxLinkWidth != 8 {
t.Fatalf("expected link widths 8/8 from xFusion Maxlanes, got current=%d max=%d", nic.LinkWidth, nic.MaxLinkWidth)
}
// enrichNICFromPCIe: OEM xFusion LinkWidth on a PCIeFunction doc.
nic2 := models.NetworkAdapter{}
fnDoc := map[string]interface{}{
"Oem": map[string]interface{}{
"xFusion": map[string]interface{}{
"LinkWidth": "X8",
"LinkWidthAbility": "X8",
"LinkSpeed": "Gen4 (16.0GT/s)",
"LinkSpeedAbility": "Gen4 (16.0GT/s)",
},
},
}
enrichNICFromPCIe(&nic2, map[string]interface{}{}, []map[string]interface{}{fnDoc}, nil)
if nic2.LinkWidth != 8 || nic2.MaxLinkWidth != 8 {
t.Fatalf("expected link width 8 from xFusion OEM LinkWidth, got current=%d max=%d", nic2.LinkWidth, nic2.MaxLinkWidth)
}
if nic2.LinkSpeed != "Gen4 (16.0GT/s)" || nic2.MaxLinkSpeed != "Gen4 (16.0GT/s)" {
t.Fatalf("expected link speed from xFusion OEM LinkSpeed, got current=%q max=%q", nic2.LinkSpeed, nic2.MaxLinkSpeed)
}
}
func TestParseNIC_DropsUnrealisticPortCount(t *testing.T) {
nic := parseNIC(map[string]interface{}{
"Id": "1",
@@ -2995,6 +2815,28 @@ func TestReplayCollectGPUs_DedupUsesRedfishPathBeforeHeuristics(t *testing.T) {
}
}
func TestParseGPU_xFusionPCIeInterfaceMaxlanes(t *testing.T) {
// xFusion GPU PCIeDevices (PCIeCard1..N) carry link width in PCIeInterface
// with "Maxlanes" (lowercase 'l') rather than "MaxLanes".
doc := map[string]interface{}{
"Id": "PCIeCard1",
"Model": "RTX PRO 6000",
"PCIeInterface": map[string]interface{}{
"LanesInUse": 16,
"Maxlanes": 16,
"PCIeType": "Gen5",
"MaxPCIeType": "Gen5",
},
}
gpu := parseGPU(doc, nil, 1)
if gpu.CurrentLinkWidth != 16 || gpu.MaxLinkWidth != 16 {
t.Fatalf("expected link widths 16/16 from PCIeInterface, got current=%d max=%d", gpu.CurrentLinkWidth, gpu.MaxLinkWidth)
}
if gpu.CurrentLinkSpeed != "Gen5" || gpu.MaxLinkSpeed != "Gen5" {
t.Fatalf("expected link speeds Gen5/Gen5 from PCIeInterface, got current=%q max=%q", gpu.CurrentLinkSpeed, gpu.MaxLinkSpeed)
}
}
func TestParseGPU_UsesNestedOemSerialNumber(t *testing.T) {
doc := map[string]interface{}{
"Id": "GPU4",

View File

@@ -326,6 +326,95 @@ func TestBuildAnalysisDirectives_SupermicroEnablesStorageRecovery(t *testing.T)
}
}
func TestMatchProfiles_LenovoXCCSelectsMatchedModeAndExcludesSensors(t *testing.T) {
match := MatchProfiles(MatchSignals{
SystemManufacturer: "Lenovo",
ChassisManufacturer: "Lenovo",
OEMNamespaces: []string{"Lenovo"},
})
if match.Mode != ModeMatched {
t.Fatalf("expected matched mode, got %q", match.Mode)
}
found := false
for _, profile := range match.Profiles {
if profile.Name() == "lenovo" {
found = true
break
}
}
if !found {
t.Fatal("expected lenovo profile to be selected")
}
// Verify the acquisition plan excludes noisy Lenovo-specific snapshot paths.
plan := BuildAcquisitionPlan(MatchSignals{
SystemManufacturer: "Lenovo",
ChassisManufacturer: "Lenovo",
OEMNamespaces: []string{"Lenovo"},
})
wantExcluded := []string{
"/Sensors/",
"/Oem/Lenovo/LEDs/",
"/Oem/Lenovo/Slots/",
"/Oem/Lenovo/Configuration",
"/NetworkProtocol/Oem/Lenovo/",
"/VirtualMedia/",
"/ThermalSubsystem/Fans/",
}
for _, want := range wantExcluded {
found := false
for _, ex := range plan.Tuning.SnapshotExcludeContains {
if ex == want {
found = true
break
}
}
if !found {
t.Errorf("expected SnapshotExcludeContains to include %q, got %v", want, plan.Tuning.SnapshotExcludeContains)
}
}
}
func TestResolveAcquisitionPlan_LenovoFiltersNonInventoryChassisBranches(t *testing.T) {
signals := MatchSignals{
SystemManufacturer: "Lenovo",
ChassisManufacturer: "Lenovo",
OEMNamespaces: []string{"Lenovo"},
ResourceHints: []string{
"/redfish/v1/Chassis/1/Power",
"/redfish/v1/Chassis/1/Thermal",
"/redfish/v1/Chassis/1/NetworkAdapters",
"/redfish/v1/Chassis/3",
"/redfish/v1/Chassis/IO_Board",
},
}
match := MatchProfiles(signals)
plan := BuildAcquisitionPlan(signals)
resolved := ResolveAcquisitionPlan(match, plan, DiscoveredResources{
ChassisPaths: []string{
"/redfish/v1/Chassis/1",
"/redfish/v1/Chassis/3",
"/redfish/v1/Chassis/IO_Board",
},
}, signals)
if !containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/1/Power") {
t.Fatal("expected primary Lenovo chassis power path to remain critical")
}
if containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/3/Power") {
t.Fatal("did not expect non-inventory Lenovo backplane chassis power path")
}
if containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/IO_Board/Assembly") {
t.Fatal("did not expect IO board assembly path without inventory hints")
}
if containsString(resolved.Plan.PlanBPaths, "/redfish/v1/Chassis/3/Assembly") {
t.Fatal("did not expect non-inventory Lenovo chassis plan-b target")
}
if !containsString(resolved.CriticalPaths, "/redfish/v1/Chassis/3") {
t.Fatal("expected chassis root to remain discoverable even when suffixes are filtered")
}
}
func TestMatchProfiles_OrderingIsDeterministic(t *testing.T) {
signals := MatchSignals{
SystemManufacturer: "Micro-Star International Co., Ltd.",

View File

@@ -0,0 +1,175 @@
package redfishprofile
import "strings"
func lenovoProfile() Profile {
return staticProfile{
name: "lenovo",
priority: 20,
safeForFallback: true,
matchFn: func(s MatchSignals) int {
score := 0
if containsFold(s.SystemManufacturer, "lenovo") ||
containsFold(s.ChassisManufacturer, "lenovo") {
score += 80
}
for _, ns := range s.OEMNamespaces {
if containsFold(ns, "lenovo") {
score += 30
break
}
}
// Lenovo XClarity Controller (XCC) is the BMC product line.
if containsFold(s.ServiceRootProduct, "xclarity") ||
containsFold(s.ServiceRootProduct, "xcc") {
score += 30
}
return min(score, 100)
},
extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
// Lenovo XCC BMC exposes Chassis/1/Sensors with hundreds of individual
// sensor member documents (e.g. Chassis/1/Sensors/101L1). These are
// not used by any LOGPile parser — thermal/power data is read from
// the aggregate Chassis/*/Thermal and Chassis/*/Power endpoints. On
// real servers these sensor requests mostly return errors anyway,
// wasting many minutes of acquisition time.
// Lenovo OEM subtrees under Oem/Lenovo/LEDs and Oem/Lenovo/Slots also
// enumerate dozens of individual documents not relevant to inventory.
ensureSnapshotExcludeContains(plan,
"/Sensors/", // individual sensor docs (Chassis/1/Sensors/NNN)
"/Oem/Lenovo/LEDs/", // individual LED status entries (~47 per server)
"/Oem/Lenovo/Slots/", // individual slot detail entries (~26 per server)
"/Oem/Lenovo/Metrics/", // operational metrics, not inventory
"/Oem/Lenovo/History", // historical telemetry
"/Oem/Lenovo/Configuration", // BMC config service, not inventory
"/Oem/Lenovo/DateTimeService", // BMC time service config
"/Oem/Lenovo/GroupService", // XCC fleet/group management state
"/Oem/Lenovo/Recipients", // alert recipient config
"/Oem/Lenovo/RemoteControl", // remote-media/session management
"/Oem/Lenovo/RemoteMap", // remote-media mapping config
"/Oem/Lenovo/SecureKeyLifecycleService", // key lifecycle/cert config
"/Oem/Lenovo/ServerProfile", // profile export/import config
"/Oem/Lenovo/ServiceData", // support/service metadata
"/Oem/Lenovo/SsoCertificates", // SSO certificate config
"/Oem/Lenovo/SystemGuard", // snapshot/history service
"/Oem/Lenovo/Watchdogs", // watchdog config
"/Oem/Lenovo/ScheduledPower", // power scheduling config
"/Oem/Lenovo/BootSettings/BootOrder", // individual boot order lists
"/NetworkProtocol/Oem/Lenovo/", // DNS/LDAP/SMTP/SNMP manager config
"/PortForwardingMap/", // network port forwarding config
"/VirtualMedia/", // virtual media inventory/config, not hardware
"/Boot/Certificates", // secure boot certificate stores, not inventory
"/ThermalSubsystem/Fans/", // per-fan member docs; replay uses aggregate Thermal only
)
// Lenovo XCC BMC is typically slow (p95 latency often 3-5s even under
// normal load). Set rate thresholds that don't over-throttle on the
// first few requests, and give the ETA estimator a realistic baseline.
ensureRatePolicy(plan, AcquisitionRatePolicy{
TargetP95LatencyMS: 2000,
ThrottleP95LatencyMS: 4000,
MinSnapshotWorkers: 2,
MinPrefetchWorkers: 1,
DisablePrefetchOnErrors: true,
})
ensureETABaseline(plan, AcquisitionETABaseline{
DiscoverySeconds: 15,
SnapshotSeconds: 120,
PrefetchSeconds: 30,
CriticalPlanBSeconds: 40,
ProfilePlanBSeconds: 20,
})
addPlanNote(plan, "lenovo xcc acquisition extensions enabled: noisy sensor/oem paths excluded from snapshot")
},
refineAcquisition: func(resolved *ResolvedAcquisitionPlan, discovered DiscoveredResources, signals MatchSignals) {
allowedChassis := lenovoAllowedInventoryChassis(discovered.ChassisPaths, signals.ResourceHints)
resolved.SeedPaths = filterLenovoChassisInventoryPaths(resolved.SeedPaths, allowedChassis)
resolved.CriticalPaths = filterLenovoChassisInventoryPaths(resolved.CriticalPaths, allowedChassis)
resolved.Plan.SeedPaths = filterLenovoChassisInventoryPaths(resolved.Plan.SeedPaths, allowedChassis)
resolved.Plan.CriticalPaths = filterLenovoChassisInventoryPaths(resolved.Plan.CriticalPaths, allowedChassis)
resolved.Plan.PlanBPaths = filterLenovoChassisInventoryPaths(resolved.Plan.PlanBPaths, allowedChassis)
},
}
}
func lenovoAllowedInventoryChassis(chassisPaths, resourceHints []string) map[string]struct{} {
allowed := make(map[string]struct{}, len(chassisPaths))
for _, chassisPath := range chassisPaths {
normalized := normalizePath(chassisPath)
if normalized == "" {
continue
}
if normalized == "/redfish/v1/Chassis/1" {
allowed[normalized] = struct{}{}
continue
}
for _, hint := range resourceHints {
hint = normalizePath(hint)
if !strings.HasPrefix(hint, normalized+"/") {
continue
}
if lenovoHintLooksLikeChassisInventory(hint) {
allowed[normalized] = struct{}{}
break
}
}
}
return allowed
}
func lenovoHintLooksLikeChassisInventory(path string) bool {
for _, suffix := range []string{
"/Power",
"/PowerSubsystem",
"/PowerSubsystem/PowerSupplies",
"/Thermal",
"/ThresholdSensors",
"/DiscreteSensors",
"/SensorsList",
"/NetworkAdapters",
"/PCIeDevices",
"/Drives",
"/Assembly",
} {
if strings.HasSuffix(path, suffix) || strings.Contains(path, suffix+"/") {
return true
}
}
return false
}
func filterLenovoChassisInventoryPaths(paths []string, allowedChassis map[string]struct{}) []string {
if len(paths) == 0 {
return nil
}
out := make([]string, 0, len(paths))
for _, path := range paths {
normalized := normalizePath(path)
chassis := lenovoPathChassisRoot(normalized)
if chassis == "" {
out = append(out, normalized)
continue
}
if normalized == chassis {
out = append(out, normalized)
continue
}
if _, ok := allowedChassis[chassis]; ok {
out = append(out, normalized)
}
}
return dedupeSorted(out)
}
func lenovoPathChassisRoot(path string) string {
const prefix = "/redfish/v1/Chassis/"
if !strings.HasPrefix(path, prefix) {
return ""
}
rest := strings.TrimPrefix(path, prefix)
if rest == "" {
return ""
}
if idx := strings.IndexByte(rest, '/'); idx >= 0 {
return prefix + rest[:idx]
}
return prefix + rest
}
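`lenovoPathChassisRoot` above anchors the whole chassis filter; exercised standalone (self-contained copy, for illustration only) it behaves like this:

```go
package main

import (
	"fmt"
	"strings"
)

// lenovoPathChassisRoot returns the chassis root ("/redfish/v1/Chassis/<id>")
// for any path at or under it, or "" for paths outside the Chassis tree.
func lenovoPathChassisRoot(path string) string {
	const prefix = "/redfish/v1/Chassis/"
	if !strings.HasPrefix(path, prefix) {
		return ""
	}
	rest := strings.TrimPrefix(path, prefix)
	if rest == "" {
		return ""
	}
	if idx := strings.IndexByte(rest, '/'); idx >= 0 {
		return prefix + rest[:idx]
	}
	return prefix + rest
}

func main() {
	for _, p := range []string{
		"/redfish/v1/Chassis/1/Power",
		"/redfish/v1/Chassis/IO_Board",
		"/redfish/v1/Systems/1",
	} {
		fmt.Printf("%s -> %q\n", p, lenovoPathChassisRoot(p))
	}
}
```

A chassis root maps to itself, which is why `filterLenovoChassisInventoryPaths` can keep roots discoverable while dropping non-inventory suffixes.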

View File

@@ -56,6 +56,7 @@ func BuiltinProfiles() []Profile {
supermicroProfile(),
dellProfile(),
hpeProfile(),
lenovoProfile(),
inspurGroupOEMPlatformsProfile(),
hgxProfile(),
xfusionProfile(),
@@ -226,6 +227,10 @@ func ensurePrefetchPolicy(plan *AcquisitionPlan, policy AcquisitionPrefetchPolic
addPlanPaths(&plan.Tuning.PrefetchPolicy.ExcludeContains, policy.ExcludeContains...)
}
func ensureSnapshotExcludeContains(plan *AcquisitionPlan, patterns ...string) {
addPlanPaths(&plan.Tuning.SnapshotExcludeContains, patterns...)
}
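The field name `SnapshotExcludeContains` suggests plain substring matching on document paths. A hedged sketch of how a snapshot walker might consume the patterns — `shouldSkipSnapshot` is illustrative; the actual walker is not shown in this diff:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldSkipSnapshot reports whether path matches any exclude pattern,
// using substring matching as the Contains suffix on the field name implies.
func shouldSkipSnapshot(path string, excludeContains []string) bool {
	for _, pat := range excludeContains {
		if strings.Contains(path, pat) {
			return true
		}
	}
	return false
}

func main() {
	excludes := []string{"/Sensors/", "/Oem/Lenovo/LEDs/", "/VirtualMedia/"}
	for _, p := range []string{
		"/redfish/v1/Chassis/1/Sensors/101L1",
		"/redfish/v1/Chassis/1/Thermal",
	} {
		fmt.Println(p, "skip:", shouldSkipSnapshot(p, excludes))
	}
}
```

Substring patterns like `/Sensors/` (with both slashes) skip the per-sensor member documents while leaving the `Sensors` collection root itself reachable.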
func min(a, b int) int {
if a < b {
return a

View File

@@ -53,16 +53,17 @@ type AcquisitionScopedPathPolicy struct {
}
type AcquisitionTuning struct {
SnapshotMaxDocuments int
SnapshotWorkers int
PrefetchEnabled *bool
PrefetchWorkers int
NVMePostProbeEnabled *bool
RatePolicy AcquisitionRatePolicy
ETABaseline AcquisitionETABaseline
PostProbePolicy AcquisitionPostProbePolicy
RecoveryPolicy AcquisitionRecoveryPolicy
PrefetchPolicy AcquisitionPrefetchPolicy
SnapshotMaxDocuments int
SnapshotWorkers int
SnapshotExcludeContains []string
PrefetchEnabled *bool
PrefetchWorkers int
NVMePostProbeEnabled *bool
RatePolicy AcquisitionRatePolicy
ETABaseline AcquisitionETABaseline
PostProbePolicy AcquisitionPostProbePolicy
RecoveryPolicy AcquisitionRecoveryPolicy
PrefetchPolicy AcquisitionPrefetchPolicy
}
type AcquisitionRatePolicy struct {

View File

@@ -15,9 +15,8 @@ type Request struct {
Password string
Token string
TLSMode string
PowerOnIfHostOff bool
StopHostAfterCollect bool
DebugPayloads bool
DebugPayloads bool
SkipHungCh <-chan struct{}
}
type Progress struct {
@@ -65,10 +64,9 @@ type PhaseTelemetry struct {
type ProbeResult struct {
Reachable bool
Protocol string
HostPowerState string
HostPoweredOn bool
PowerControlAvailable bool
SystemPath string
HostPowerState string
HostPoweredOn bool
SystemPath string
}
type Connector interface {

View File

@@ -159,6 +159,16 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
}
for _, stor := range hw.Storage {
present := stor.Present
storDetails := mergeDetailMaps(nil, stor.Details)
if stor.LogicalBlockSizeBytes != 0 {
storDetails = mergeDetailMaps(storDetails, map[string]any{"logical_block_size_bytes": stor.LogicalBlockSizeBytes})
}
if stor.PhysicalBlockSizeBytes != 0 {
storDetails = mergeDetailMaps(storDetails, map[string]any{"physical_block_size_bytes": stor.PhysicalBlockSizeBytes})
}
if stor.MetadataBytesPerBlock != 0 {
storDetails = mergeDetailMaps(storDetails, map[string]any{"metadata_bytes_per_block": stor.MetadataBytesPerBlock})
}
appendDevice(models.HardwareDevice{
Kind: models.DeviceKindStorage,
Slot: stor.Slot,
@@ -177,27 +187,41 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
StatusAtCollect: stor.StatusAtCollect,
StatusHistory: stor.StatusHistory,
ErrorDescription: stor.ErrorDescription,
Details: mergeDetailMaps(nil, stor.Details),
Details: storDetails,
})
}
for _, pcie := range hw.PCIeDevices {
// Use PartNumber as model when available; fall back to chip description.
// Description contains the chip/product name (e.g. "BCM57414 NetXtreme-E …")
// while PartNumber is a part/product code. Prefer PartNumber when set.
pcieModel := pcie.PartNumber
if pcieModel == "" {
pcieModel = pcie.Description
}
// Priority: PartNumber (vendor P/N) > Model (product name) > Description (chip label).
pcieModel := firstNonEmptyString(pcie.PartNumber, pcie.Model, pcie.Description)
details := mergeDetailMaps(nil, pcie.Details)
pcieFirmware := stringFromDetailMap(details, "firmware")
// Firmware: prefer direct field, fall back to details, then NVSwitch lookup.
pcieFirmware := firstNonEmptyString(pcie.Firmware, stringFromDetailMap(details, "firmware"))
if pcieFirmware == "" && isNVSwitchPCIeDevice(pcie) {
pcieFirmware = nvswitchFirmwareBySlot[normalizeNVSwitchSlotForLookup(pcie.Slot)]
if pcieFirmware != "" {
details = mergeDetailMaps(details, map[string]any{
"firmware": pcieFirmware,
})
}
}
if pcieFirmware != "" {
details = mergeDetailMaps(details, map[string]any{"firmware": pcieFirmware})
}
// Telemetry fields: put into details so convertPCIeFromDevices can pick them up.
if pcie.TemperatureC != nil {
details = mergeDetailMaps(details, map[string]any{"temperature_c": *pcie.TemperatureC})
}
if pcie.PowerW != nil {
details = mergeDetailMaps(details, map[string]any{"power_w": *pcie.PowerW})
}
if pcie.ECCCorrectedTotal != nil {
details = mergeDetailMaps(details, map[string]any{"ecc_corrected_total": *pcie.ECCCorrectedTotal})
}
if pcie.ECCUncorrectedTotal != nil {
details = mergeDetailMaps(details, map[string]any{"ecc_uncorrected_total": *pcie.ECCUncorrectedTotal})
}
if pcie.HWSlowdown != nil {
details = mergeDetailMaps(details, map[string]any{"hw_slowdown": *pcie.HWSlowdown})
}
if pcie.IOMMUGroup != nil {
details = mergeDetailMaps(details, map[string]any{"iommu_group": *pcie.IOMMUGroup})
}
present := pcie.Present
appendDevice(models.HardwareDevice{
Kind: models.DeviceKindPCIe,
Slot: pcie.Slot,
@@ -209,11 +233,13 @@ func buildDevicesFromLegacy(hw *models.HardwareConfig) []models.HardwareDevice {
PartNumber: pcie.PartNumber,
Manufacturer: pcie.Manufacturer,
SerialNumber: pcie.SerialNumber,
MACAddresses: append([]string(nil), pcie.MACAddresses...),
LinkWidth: pcie.LinkWidth,
LinkSpeed: pcie.LinkSpeed,
MaxLinkWidth: pcie.MaxLinkWidth,
MaxLinkSpeed: pcie.MaxLinkSpeed,
NUMANode: pcie.NUMANode,
Present: present,
Status: pcie.Status,
StatusCheckedAt: pcie.StatusCheckedAt,
StatusChangedAt: pcie.StatusChangedAt,
@@ -738,36 +764,39 @@ func convertStorageFromDevices(devices []models.HardwareDevice, collectedAt stri
meta := buildStatusMeta(status, d.StatusCheckedAt, d.StatusChangedAt, d.StatusHistory, d.ErrorDescription, collectedAt)
presentValue := present
result = append(result, ReanimatorStorage{
Slot: d.Slot,
Type: d.Type,
Model: d.Model,
SizeGB: d.SizeGB,
SerialNumber: d.SerialNumber,
Manufacturer: d.Manufacturer,
Firmware: d.Firmware,
Interface: d.Interface,
Present: &presentValue,
TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
PowerCycles: int64FromDetailMap(d.Details, "power_cycles"),
UnsafeShutdowns: int64FromDetailMap(d.Details, "unsafe_shutdowns"),
MediaErrors: int64FromDetailMap(d.Details, "media_errors"),
ErrorLogEntries: int64FromDetailMap(d.Details, "error_log_entries"),
WrittenBytes: int64FromDetailMap(d.Details, "written_bytes"),
ReadBytes: int64FromDetailMap(d.Details, "read_bytes"),
LifeUsedPct: floatFromDetailMap(d.Details, "life_used_pct"),
RemainingEndurancePct: d.RemainingEndurancePct,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
AvailableSparePct: floatFromDetailMap(d.Details, "available_spare_pct"),
ReallocatedSectors: int64FromDetailMap(d.Details, "reallocated_sectors"),
CurrentPendingSectors: int64FromDetailMap(d.Details, "current_pending_sectors"),
OfflineUncorrectable: int64FromDetailMap(d.Details, "offline_uncorrectable"),
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
StatusChangedAt: meta.StatusChangedAt,
ManufacturedYearWeek: manufacturedYearWeekFromDetails(d.Details),
StatusHistory: meta.StatusHistory,
ErrorDescription: meta.ErrorDescription,
Slot: d.Slot,
Type: d.Type,
Model: d.Model,
SizeGB: d.SizeGB,
SerialNumber: d.SerialNumber,
Manufacturer: d.Manufacturer,
Firmware: d.Firmware,
Interface: d.Interface,
Present: &presentValue,
LogicalBlockSizeBytes: int64FromDetailMap(d.Details, "logical_block_size_bytes"),
PhysicalBlockSizeBytes: int64FromDetailMap(d.Details, "physical_block_size_bytes"),
MetadataBytesPerBlock: int64FromDetailMap(d.Details, "metadata_bytes_per_block"),
TemperatureC: floatFromDetailMap(d.Details, "temperature_c"),
PowerOnHours: int64FromDetailMap(d.Details, "power_on_hours"),
PowerCycles: int64FromDetailMap(d.Details, "power_cycles"),
UnsafeShutdowns: int64FromDetailMap(d.Details, "unsafe_shutdowns"),
MediaErrors: int64FromDetailMap(d.Details, "media_errors"),
ErrorLogEntries: int64FromDetailMap(d.Details, "error_log_entries"),
WrittenBytes: int64FromDetailMap(d.Details, "written_bytes"),
ReadBytes: int64FromDetailMap(d.Details, "read_bytes"),
LifeUsedPct: floatFromDetailMap(d.Details, "life_used_pct"),
RemainingEndurancePct: d.RemainingEndurancePct,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
AvailableSparePct: floatFromDetailMap(d.Details, "available_spare_pct"),
ReallocatedSectors: int64FromDetailMap(d.Details, "reallocated_sectors"),
CurrentPendingSectors: int64FromDetailMap(d.Details, "current_pending_sectors"),
OfflineUncorrectable: int64FromDetailMap(d.Details, "offline_uncorrectable"),
Status: status,
StatusCheckedAt: meta.StatusCheckedAt,
StatusChangedAt: meta.StatusChangedAt,
ManufacturedYearWeek: manufacturedYearWeekFromDetails(d.Details),
StatusHistory: meta.StatusHistory,
ErrorDescription: meta.ErrorDescription,
})
}
return result
@@ -818,6 +847,7 @@ func convertPCIeFromDevices(devices []models.HardwareDevice, collectedAt string)
VendorID: d.VendorID,
DeviceID: d.DeviceID,
NUMANode: d.NUMANode,
IOMMUGroup: intPtrFromDetailMap(d.Details, "iommu_group"),
TemperatureC: temperatureC,
PowerW: powerW,
LifeRemainingPct: floatFromDetailMap(d.Details, "life_remaining_pct"),
@@ -1961,7 +1991,10 @@ func pcieDedupKey(item ReanimatorPCIe) string {
slot := strings.ToLower(strings.TrimSpace(item.Slot))
serial := strings.ToLower(strings.TrimSpace(item.SerialNumber))
bdf := strings.ToLower(strings.TrimSpace(item.BDF))
if slot != "" {
// Generic slot names (e.g. "PCIe Device" from HGX BMC) are not unique
// hardware positions — multiple distinct devices share the same name.
// Fall through to serial/BDF so they are not incorrectly collapsed.
if slot != "" && !isGenericPCIeSlotName(slot) {
return "slot:" + slot
}
if serial != "" {
@@ -1970,9 +2003,22 @@ func pcieDedupKey(item ReanimatorPCIe) string {
if bdf != "" {
return "bdf:" + bdf
}
if slot != "" {
return "slot:" + slot
}
return strings.ToLower(strings.TrimSpace(item.DeviceClass)) + "|" + strings.ToLower(strings.TrimSpace(item.Model))
}
// isGenericPCIeSlotName reports whether slot is a generic device-type label
// rather than a unique hardware position identifier.
func isGenericPCIeSlotName(slot string) bool {
switch slot {
case "pcie device", "pcie slot", "pcie":
return true
}
return false
}
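The resulting key precedence is: non-generic slot, then serial, then BDF, then generic slot as a last resort, then a class|model fallback. A condensed standalone sketch mirroring that order, with the struct trimmed to just the fields the key reads:

```go
package main

import (
	"fmt"
	"strings"
)

type pcieID struct{ Slot, SerialNumber, BDF, DeviceClass, Model string }

func isGenericPCIeSlotName(slot string) bool {
	switch slot {
	case "pcie device", "pcie slot", "pcie":
		return true
	}
	return false
}

// dedupKey mirrors pcieDedupKey: a unique slot wins outright; generic slot
// names fall through to serial, then BDF, then the generic slot itself.
func dedupKey(d pcieID) string {
	slot := strings.ToLower(strings.TrimSpace(d.Slot))
	serial := strings.ToLower(strings.TrimSpace(d.SerialNumber))
	bdf := strings.ToLower(strings.TrimSpace(d.BDF))
	if slot != "" && !isGenericPCIeSlotName(slot) {
		return "slot:" + slot
	}
	if serial != "" {
		return "serial:" + serial
	}
	if bdf != "" {
		return "bdf:" + bdf
	}
	if slot != "" {
		return "slot:" + slot
	}
	return strings.ToLower(strings.TrimSpace(d.DeviceClass)) + "|" +
		strings.ToLower(strings.TrimSpace(d.Model))
}

func main() {
	a := pcieID{Slot: "PCIe Device", SerialNumber: "SN-1"}
	b := pcieID{Slot: "PCIe Device", SerialNumber: "SN-2"}
	fmt.Println(dedupKey(a), dedupKey(b)) // distinct keys despite identical generic slots
}
```

This is exactly why the HGX test later in this change can keep eight B200 GPUs that all report Name "PCIe Device": the serials, not the shared slot label, become the dedup keys.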
func pcieQualityScore(item ReanimatorPCIe) int {
score := 0
if strings.TrimSpace(item.SerialNumber) != "" {
@@ -2077,6 +2123,17 @@ func parseSocketFromSlot(slot string) int {
return v
}
func intPtrFromDetailMap(details map[string]any, key string) *int {
if details == nil {
return nil
}
if _, ok := details[key]; !ok {
return nil
}
v := intFromDetailMap(details, key)
return &v
}
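One detail behind `intPtrFromDetailMap`: values that have round-tripped through `encoding/json` land in `map[string]any` as `float64`, never `int`, so the underlying conversion must accept both. An illustrative standalone version of that conversion — `intFromAny` is hypothetical, a sketch of what `intFromDetailMap` presumably handles:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// intFromAny converts the numeric representations that can appear in a
// details map: native ints set in-process, and float64 produced when the
// map has been decoded from JSON.
func intFromAny(v any) (int, bool) {
	switch n := v.(type) {
	case int:
		return n, true
	case int64:
		return int(n), true
	case float64:
		return int(n), true
	}
	return 0, false
}

func main() {
	var details map[string]any
	_ = json.Unmarshal([]byte(`{"iommu_group": 7}`), &details)
	n, ok := intFromAny(details["iommu_group"])
	fmt.Println(n, ok) // 7 true — the JSON 7 arrived as float64
}
```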
func intFromDetailMap(details map[string]any, key string) int {
if details == nil {
return 0

View File

@@ -733,6 +733,42 @@ func TestConvertPCIeDevices_SkipsDisplayControllerDuplicates(t *testing.T) {
}
}
func TestConvertPCIeDevices_PreservesAllGPUsWithGenericSlot(t *testing.T) {
// Supermicro HGX BMC reports all GPU PCIe devices with Name "PCIe Device" —
// a generic label that is not a unique hardware position. All 8 GPUs must
// be preserved; dedup by generic slot name must not collapse them into one.
gpus := make([]models.GPU, 8)
serials := []string{
"1654925165720", "1654925166160", "1654925165942", "1654925165271",
"1654925165719", "1654925165252", "1654925165304", "1654925165587",
}
for i, sn := range serials {
gpus[i] = models.GPU{
Slot: "PCIe Device",
Model: "B200 180GB HBM3e",
Manufacturer: "NVIDIA",
SerialNumber: sn,
PartNumber: "2901-886-A1",
Status: "OK",
}
}
hw := &models.HardwareConfig{GPUs: gpus}
result := convertPCIeDevices(hw, "2026-04-13T10:00:00Z")
if len(result) != 8 {
t.Fatalf("expected 8 GPU entries (one per serial), got %d", len(result))
}
seen := make(map[string]bool)
for _, r := range result {
if seen[r.SerialNumber] {
t.Fatalf("duplicate serial %q in PCIe result", r.SerialNumber)
}
seen[r.SerialNumber] = true
if r.DeviceClass != "VideoController" {
t.Fatalf("expected VideoController device class, got %q", r.DeviceClass)
}
}
}
func TestConvertPCIeDevices_MapsGPUStatusHistory(t *testing.T) {
hw := &models.HardwareConfig{
GPUs: []models.GPU{

View File

@@ -12,15 +12,16 @@ type ReanimatorExport struct {
// ReanimatorHardware contains all hardware components
type ReanimatorHardware struct {
Board ReanimatorBoard `json:"board"`
Firmware []ReanimatorFirmware `json:"firmware,omitempty"`
CPUs []ReanimatorCPU `json:"cpus,omitempty"`
Memory []ReanimatorMemory `json:"memory,omitempty"`
Storage []ReanimatorStorage `json:"storage,omitempty"`
PCIeDevices []ReanimatorPCIe `json:"pcie_devices,omitempty"`
PowerSupplies []ReanimatorPSU `json:"power_supplies,omitempty"`
Sensors *ReanimatorSensors `json:"sensors,omitempty"`
EventLogs []ReanimatorEventLog `json:"event_logs,omitempty"`
Board ReanimatorBoard `json:"board"`
Firmware []ReanimatorFirmware `json:"firmware,omitempty"`
CPUs []ReanimatorCPU `json:"cpus,omitempty"`
Memory []ReanimatorMemory `json:"memory,omitempty"`
Storage []ReanimatorStorage `json:"storage,omitempty"`
PCIeDevices []ReanimatorPCIe `json:"pcie_devices,omitempty"`
PowerSupplies []ReanimatorPSU `json:"power_supplies,omitempty"`
Sensors *ReanimatorSensors `json:"sensors,omitempty"`
EventLogs []ReanimatorEventLog `json:"event_logs,omitempty"`
PlatformConfig map[string]any `json:"platform_config,omitempty"`
}
// ReanimatorBoard represents motherboard/server information
@@ -101,17 +102,20 @@ type ReanimatorMemory struct {
// ReanimatorStorage represents a storage device
type ReanimatorStorage struct {
Slot string `json:"slot"`
Type string `json:"type,omitempty"`
Model string `json:"model"`
SizeGB int `json:"size_gb,omitempty"`
SerialNumber string `json:"serial_number"`
Manufacturer string `json:"manufacturer,omitempty"`
Firmware string `json:"firmware,omitempty"`
Interface string `json:"interface,omitempty"`
Present *bool `json:"present,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerOnHours int64 `json:"power_on_hours,omitempty"`
Slot string `json:"slot"`
Type string `json:"type,omitempty"`
Model string `json:"model"`
SizeGB int `json:"size_gb,omitempty"`
SerialNumber string `json:"serial_number"`
Manufacturer string `json:"manufacturer,omitempty"`
Firmware string `json:"firmware,omitempty"`
Interface string `json:"interface,omitempty"`
Present *bool `json:"present,omitempty"`
LogicalBlockSizeBytes int64 `json:"logical_block_size_bytes,omitempty"`
PhysicalBlockSizeBytes int64 `json:"physical_block_size_bytes,omitempty"`
MetadataBytesPerBlock int64 `json:"metadata_bytes_per_block,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerOnHours int64 `json:"power_on_hours,omitempty"`
PowerCycles int64 `json:"power_cycles,omitempty"`
UnsafeShutdowns int64 `json:"unsafe_shutdowns,omitempty"`
MediaErrors int64 `json:"media_errors,omitempty"`
@@ -139,6 +143,7 @@ type ReanimatorPCIe struct {
VendorID int `json:"vendor_id,omitempty"`
DeviceID int `json:"device_id,omitempty"`
NUMANode int `json:"numa_node,omitempty"`
IOMMUGroup *int `json:"iommu_group,omitempty"`
TemperatureC float64 `json:"temperature_c,omitempty"`
PowerW float64 `json:"power_w,omitempty"`
LifeRemainingPct float64 `json:"life_remaining_pct,omitempty"`

View File

@@ -245,6 +245,9 @@ type Storage struct {
Location string `json:"location,omitempty"` // Front/Rear
BackplaneID int `json:"backplane_id,omitempty"`
RemainingEndurancePct *int `json:"remaining_endurance_pct,omitempty"` // 0-100 %; nil = not reported
LogicalBlockSizeBytes int64 `json:"logical_block_size_bytes,omitempty"`
PhysicalBlockSizeBytes int64 `json:"physical_block_size_bytes,omitempty"`
MetadataBytesPerBlock int64 `json:"metadata_bytes_per_block,omitempty"`
Status string `json:"status,omitempty"`
Details map[string]any `json:"details,omitempty"`
@@ -257,15 +260,16 @@ type Storage struct {
// StorageVolume represents a logical storage volume (RAID/VROC/etc.).
type StorageVolume struct {
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Controller string `json:"controller,omitempty"`
RAIDLevel string `json:"raid_level,omitempty"`
SizeGB int `json:"size_gb,omitempty"`
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
Status string `json:"status,omitempty"`
Bootable bool `json:"bootable,omitempty"`
Encrypted bool `json:"encrypted,omitempty"`
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
Controller string `json:"controller,omitempty"`
RAIDLevel string `json:"raid_level,omitempty"`
SizeGB int `json:"size_gb,omitempty"`
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
Status string `json:"status,omitempty"`
Bootable bool `json:"bootable,omitempty"`
Encrypted bool `json:"encrypted,omitempty"`
Drives []string `json:"drives,omitempty"` // member drive names/labels
}
// PCIeDevice represents a PCIe device
@@ -277,6 +281,8 @@ type PCIeDevice struct {
BDF string `json:"bdf"`
DeviceClass string `json:"device_class"`
Manufacturer string `json:"manufacturer,omitempty"`
Model string `json:"model,omitempty"`
Firmware string `json:"firmware,omitempty"`
LinkWidth int `json:"link_width"`
LinkSpeed string `json:"link_speed"`
MaxLinkWidth int `json:"max_link_width"`
@@ -285,8 +291,17 @@ type PCIeDevice struct {
SerialNumber string `json:"serial_number,omitempty"`
MACAddresses []string `json:"mac_addresses,omitempty"`
NUMANode int `json:"numa_node,omitempty"` // 0 = not reported/N/A
Present *bool `json:"present,omitempty"`
IOMMUGroup *int `json:"iommu_group,omitempty"`
Status string `json:"status,omitempty"`
// GPU telemetry fields (populated by bee audit for GPU devices)
TemperatureC *float64 `json:"temperature_c,omitempty"`
PowerW *float64 `json:"power_w,omitempty"`
ECCCorrectedTotal *int64 `json:"ecc_corrected_total,omitempty"`
ECCUncorrectedTotal *int64 `json:"ecc_uncorrected_total,omitempty"`
HWSlowdown *bool `json:"hw_slowdown,omitempty"`
StatusCheckedAt *time.Time `json:"status_checked_at,omitempty"`
StatusChangedAt *time.Time `json:"status_changed_at,omitempty"`
StatusAtCollect *StatusAtCollection `json:"status_at_collection,omitempty"`

File diff suppressed because it is too large


View File

@@ -0,0 +1,506 @@
package lenovo_xcc
import (
"testing"
"time"
"git.mchus.pro/mchus/logpile/internal/models"
"git.mchus.pro/mchus/logpile/internal/parser"
)
const exampleArchive = "/Users/mchusavitin/Documents/git/logpile/example/7D76CTO1WW_JF0002KT_xcc_mini-log_20260413-122150.zip"
func TestDetect_LenovoXCCMiniLog(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
score := p.Detect(files)
if score < 80 {
t.Errorf("expected Detect score >= 80 for XCC mini-log archive, got %d", score)
}
}
func TestParse_LenovoXCCMiniLog_BasicSysInfo(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, err := p.Parse(files)
if err != nil {
t.Fatalf("Parse returned error: %v", err)
}
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil result or hardware")
}
hw := result.Hardware
if hw.BoardInfo.SerialNumber == "" {
t.Error("BoardInfo.SerialNumber is empty")
}
if hw.BoardInfo.ProductName == "" {
t.Error("BoardInfo.ProductName is empty")
}
t.Logf("BoardInfo: serial=%s model=%s uuid=%s", hw.BoardInfo.SerialNumber, hw.BoardInfo.ProductName, hw.BoardInfo.UUID)
}
func TestParse_LenovoXCCMiniLog_CPUs(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
if len(result.Hardware.CPUs) == 0 {
t.Error("expected at least one CPU, got none")
}
for i, cpu := range result.Hardware.CPUs {
t.Logf("CPU[%d]: socket=%d model=%q cores=%d threads=%d freq=%dMHz", i, cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz)
}
}
func TestParse_LenovoXCCMiniLog_Memory(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
if len(result.Hardware.Memory) == 0 {
t.Error("expected memory DIMMs, got none")
}
t.Logf("Memory: %d DIMMs", len(result.Hardware.Memory))
for i, m := range result.Hardware.Memory {
t.Logf("DIMM[%d]: slot=%s present=%v size=%dMB sn=%s", i, m.Slot, m.Present, m.SizeMB, m.SerialNumber)
}
}
func TestParse_LenovoXCCMiniLog_Storage(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
t.Logf("Storage: %d disks", len(result.Hardware.Storage))
for i, s := range result.Hardware.Storage {
t.Logf("Disk[%d]: slot=%s model=%q size=%dGB sn=%s", i, s.Slot, s.Model, s.SizeGB, s.SerialNumber)
}
}
func TestParse_LenovoXCCMiniLog_PCIeCards(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
t.Logf("PCIe cards: %d", len(result.Hardware.PCIeDevices))
for i, c := range result.Hardware.PCIeDevices {
t.Logf("Card[%d]: slot=%s desc=%q bdf=%s", i, c.Slot, c.Description, c.BDF)
}
}
func TestParse_LenovoXCCMiniLog_PSUs(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
if len(result.Hardware.PowerSupply) == 0 {
t.Error("expected PSUs, got none")
}
for i, p := range result.Hardware.PowerSupply {
t.Logf("PSU[%d]: slot=%s wattage=%dW status=%s sn=%s", i, p.Slot, p.WattageW, p.Status, p.SerialNumber)
}
}
func TestParse_LenovoXCCMiniLog_Sensors(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil {
t.Fatal("Parse returned nil")
}
if len(result.Sensors) == 0 {
t.Error("expected sensors, got none")
}
t.Logf("Sensors: %d", len(result.Sensors))
}
func TestParse_LenovoXCCMiniLog_Events(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil {
t.Fatal("Parse returned nil")
}
if len(result.Events) == 0 {
t.Error("expected events, got none")
}
t.Logf("Events: %d", len(result.Events))
for i, e := range result.Events {
if i >= 5 {
break
}
t.Logf("Event[%d]: severity=%s ts=%s desc=%q", i, e.Severity, e.Timestamp.Format("2006-01-02T15:04:05"), e.Description)
}
}
func TestParse_LenovoXCCMiniLog_FRU(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil {
t.Fatal("Parse returned nil")
}
t.Logf("FRU: %d entries", len(result.FRU))
for i, f := range result.FRU {
t.Logf("FRU[%d]: desc=%q product=%q serial=%q", i, f.Description, f.ProductName, f.SerialNumber)
}
}
func TestParse_LenovoXCCMiniLog_Firmware(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil")
}
if len(result.Hardware.Firmware) == 0 {
t.Error("expected firmware entries, got none")
}
for i, f := range result.Hardware.Firmware {
t.Logf("FW[%d]: name=%q version=%q buildtime=%q", i, f.DeviceName, f.Version, f.BuildTime)
}
}
func TestParse_LenovoXCCMiniLog_VROCVolumes(t *testing.T) {
files, err := parser.ExtractArchive(exampleArchive)
if err != nil {
t.Skipf("example archive not available: %v", err)
}
p := &Parser{}
result, _ := p.Parse(files)
if result == nil || result.Hardware == nil {
t.Fatal("Parse returned nil result or nil Hardware")
}
if len(result.Hardware.Volumes) == 0 {
t.Error("expected at least one VROC volume, got none")
}
for i, v := range result.Hardware.Volumes {
t.Logf("Volume[%d]: id=%s controller=%q raid=%s size=%dGB status=%s drives=%v",
i, v.ID, v.Controller, v.RAIDLevel, v.SizeGB, v.Status, v.Drives)
if v.RAIDLevel == "" {
t.Errorf("Volume[%d]: RAIDLevel is empty", i)
}
if v.Status == "" {
t.Errorf("Volume[%d]: Status is empty", i)
}
}
}
func TestParseVolumes_IntelVROC(t *testing.T) {
content := []byte(`{
"identifier": "storage.id",
"items": [{
"volumes": [{
"id": 1,
"name": "",
"drives": "M.2 Drive 0, M.2 Drive 1",
"rdlvlstr": "RAID 1",
"capacityStr": "893.750 GiB",
"status": 3,
"statusStr": "Optimal"
}],
"totalCapacityStr": "893.750 GiB"
}]
}`)
vols := parseVolumes(content)
if len(vols) != 1 {
t.Fatalf("expected 1 volume, got %d", len(vols))
}
v := vols[0]
if v.ID != "1" {
t.Errorf("expected ID=1, got %q", v.ID)
}
if v.RAIDLevel != "RAID 1" {
t.Errorf("expected RAIDLevel=RAID 1, got %q", v.RAIDLevel)
}
if v.Status != "Optimal" {
t.Errorf("expected Status=Optimal, got %q", v.Status)
}
if v.Controller != "Intel VROC" {
t.Errorf("expected Controller=Intel VROC, got %q", v.Controller)
}
if len(v.Drives) != 2 {
t.Errorf("expected 2 drives, got %d: %v", len(v.Drives), v.Drives)
}
if v.SizeGB < 900 || v.SizeGB > 1000 {
t.Errorf("expected SizeGB ~960, got %d", v.SizeGB)
}
}
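The volume test accepts `SizeGB` between 900 and 1000 for a `"893.750 GiB"` capacity string because XCC reports binary gibibytes while `SizeGB` is decimal gigabytes (893.75 × 2³⁰ bytes ≈ 959.7 GB). A minimal conversion sketch, assuming a hypothetical `capacityStrToGB` helper (the parser's real conversion and rounding may differ):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// capacityStrToGB converts XCC capacity strings such as "893.750 GiB"
// or "1.000 TiB" (binary units) into decimal gigabytes.
// Hypothetical helper illustrating the GiB/TiB -> GB arithmetic.
func capacityStrToGB(s string) (int, error) {
	fields := strings.Fields(strings.TrimSpace(s))
	if len(fields) != 2 {
		return 0, fmt.Errorf("unexpected capacity string %q", s)
	}
	val, err := strconv.ParseFloat(fields[0], 64)
	if err != nil {
		return 0, err
	}
	var bytes float64
	switch fields[1] {
	case "GiB":
		bytes = val * (1 << 30)
	case "TiB":
		bytes = val * (1 << 40)
	default:
		return 0, fmt.Errorf("unknown capacity unit %q", fields[1])
	}
	// Decimal GB, rounded to the nearest whole gigabyte.
	return int(math.Round(bytes / 1e9)), nil
}

func main() {
	gb, _ := capacityStrToGB("893.750 GiB")
	fmt.Println(gb) // 960
}
```

This is why the assertion uses a range rather than an exact value: the unit conversion lands at 960, not 894.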
func TestParseDIMMs_UnqualifiedDIMMAddsWarningEvent(t *testing.T) {
content := []byte(`{
"items": [{
"memory": [{
"memory_name": "DIMM A1",
"memory_status": "Unqualified DIMM",
"memory_type": "DDR5",
"memory_capacity": 32
}]
}]
}`)
memory, events := parseDIMMs(content)
if len(memory) != 1 {
t.Fatalf("expected 1 DIMM, got %d", len(memory))
}
if len(events) != 1 {
t.Fatalf("expected 1 warning event, got %d", len(events))
}
if events[0].Severity != models.SeverityWarning {
t.Fatalf("expected warning severity, got %q", events[0].Severity)
}
if events[0].SensorName != "DIMM A1" {
t.Fatalf("unexpected sensor name: %q", events[0].SensorName)
}
}
func TestSeverity_UnqualifiedDIMMMessageBecomesWarning(t *testing.T) {
if got := xccSeverity("I", "System found Unqualified DIMM in slot DIMM A1"); got != models.SeverityWarning {
t.Fatalf("expected warning severity, got %q", got)
}
}
func TestApplyDIMMWarningsFromEvents_UpdatesDIMMStatusForExport(t *testing.T) {
result := &models.AnalysisResult{
Events: []models.Event{
{
Timestamp: time.Date(2026, 4, 13, 11, 37, 38, 0, time.UTC),
Severity: models.SeverityWarning,
Description: "Unqualified DIMM 3 has been detected, the DIMM serial number is 80CE042328460C5D88-V20.",
},
},
Hardware: &models.HardwareConfig{
Memory: []models.MemoryDIMM{
{
Slot: "DIMM 3",
Present: true,
SerialNumber: "80CE042328460C5D88",
Status: "Normal",
},
},
},
}
applyDIMMWarningsFromEvents(result)
dimm := result.Hardware.Memory[0]
if dimm.Status != "Warning" {
t.Fatalf("expected DIMM status Warning, got %q", dimm.Status)
}
if dimm.ErrorDescription == "" || dimm.ErrorDescription != result.Events[0].Description {
t.Fatalf("expected DIMM error description to be populated, got %q", dimm.ErrorDescription)
}
if dimm.StatusChangedAt == nil || !dimm.StatusChangedAt.Equal(result.Events[0].Timestamp) {
t.Fatalf("expected status_changed_at from event timestamp, got %#v", dimm.StatusChangedAt)
}
if len(dimm.StatusHistory) != 1 || dimm.StatusHistory[0].Status != "Warning" {
t.Fatalf("expected warning status history entry, got %#v", dimm.StatusHistory)
}
}
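The test above feeds an event whose description names the slot ("Unqualified DIMM 3 has been detected, ...") and expects the matching `MemoryDIMM` to flip to `Warning`. One plausible way to pull the slot out of such a message is a regexp over the description — a sketch only, with a hypothetical pattern and helper name (the production matcher may also key on the serial number embedded in the message):

```go
package main

import (
	"fmt"
	"regexp"
)

// dimmSlotRe extracts the slot name from an unqualified-DIMM event
// description of the shape "Unqualified DIMM 3 has been detected, ...".
// Hypothetical pattern for illustration; it does not cover every
// message variant XCC can emit.
var dimmSlotRe = regexp.MustCompile(`Unqualified (DIMM \S+)`)

func dimmSlotFromEvent(desc string) (string, bool) {
	m := dimmSlotRe.FindStringSubmatch(desc)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	slot, ok := dimmSlotFromEvent("Unqualified DIMM 3 has been detected, the DIMM serial number is 80CE042328460C5D88-V20.")
	fmt.Println(slot, ok) // DIMM 3 true
}
```

The extracted slot can then be compared against `MemoryDIMM.Slot` to decide which module inherits the event's description and timestamp.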
func TestParseBasicSysInfo_CleansPlaceholderValuesAndSetsTargetHost(t *testing.T) {
result := &models.AnalysisResult{Hardware: &models.HardwareConfig{}}
content := []byte(`{
"items": [{
"machine_name": " sr650v3-node01 ",
"machine_typemodel": " 7D76CTO1WW ",
"serial_number": " Not Specified ",
"uuid": "N/A"
}]
}`)
parseBasicSysInfo(content, result)
if result.TargetHost != "sr650v3-node01" {
t.Fatalf("unexpected target host: %q", result.TargetHost)
}
if result.Hardware.BoardInfo.ProductName != "7D76CTO1WW" {
t.Fatalf("unexpected product name: %q", result.Hardware.BoardInfo.ProductName)
}
if result.Hardware.BoardInfo.SerialNumber != "" {
t.Fatalf("expected serial number to be cleaned, got %q", result.Hardware.BoardInfo.SerialNumber)
}
if result.Hardware.BoardInfo.UUID != "" {
t.Fatalf("expected UUID to be cleaned, got %q", result.Hardware.BoardInfo.UUID)
}
}
func TestEnrichBoardFromFRU_SystemBoardManufacturerOnly(t *testing.T) {
result := &models.AnalysisResult{
Hardware: &models.HardwareConfig{},
FRU: []models.FRUInfo{
{Description: "Power Supply 1", Manufacturer: "Ignore Me"},
{Description: "System Board", Manufacturer: " Lenovo "},
},
}
enrichBoardFromFRU(result)
if result.Hardware.BoardInfo.Manufacturer != "Lenovo" {
t.Fatalf("unexpected manufacturer: %q", result.Hardware.BoardInfo.Manufacturer)
}
}
func TestEnrichPSUsFromSensors_AssignsTelemetryBySlot(t *testing.T) {
psus := []models.PSU{
{Slot: "1"},
{Slot: "2"},
}
sensors := []models.SensorReading{
{Name: "PSU1 Input Power", Value: 430},
{Name: "Power Supply 1 Output Power", Value: 390},
{Name: "PWS1 AC Voltage", Value: 230.5},
{Name: "PSU2 Input Power", Value: 0},
{Name: "PSU3 Input Power", Value: 999},
{Name: "Fan 1", Value: 12000},
}
got := enrichPSUsFromSensors(psus, sensors)
if got[0].InputPowerW != 430 {
t.Fatalf("unexpected PSU1 input power: %d", got[0].InputPowerW)
}
if got[0].OutputPowerW != 390 {
t.Fatalf("unexpected PSU1 output power: %d", got[0].OutputPowerW)
}
if got[0].InputVoltage != 230.5 {
t.Fatalf("unexpected PSU1 input voltage: %v", got[0].InputVoltage)
}
if got[1].InputPowerW != 0 || got[1].OutputPowerW != 0 || got[1].InputVoltage != 0 {
t.Fatalf("unexpected telemetry assigned to PSU2: %+v", got[1])
}
}
func TestMapDiskHealthStatus(t *testing.T) {
tests := []struct {
name string
code int
stateStr string
want string
}{
{name: "normal", code: 2, stateStr: "Online", want: "OK"},
{name: "warning", code: 1, stateStr: "Online", want: "Warning"},
{name: "predictive failure", code: 4, stateStr: "Online", want: "Warning"},
{name: "critical", code: 3, stateStr: "Failed", want: "Critical"},
{name: "fallback state", code: 0, stateStr: "Rebuilding", want: "Rebuilding"},
{name: "unknown", code: 0, stateStr: "", want: "Unknown"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := mapDiskHealthStatus(tt.code, tt.stateStr); got != tt.want {
t.Fatalf("got %q, want %q", got, tt.want)
}
})
}
}
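The table above pins the mapping down: known state codes win, an unknown code falls back to the raw state string, and an empty string becomes `Unknown`. A minimal sketch consistent with the table (hypothetical reconstruction; the real `mapDiskHealthStatus` lives in the parser and may handle more codes):

```go
package main

import "fmt"

// mapDiskHealthStatusSketch maps an XCC disk state code plus its
// textual state to the exporter's health string. Codes 1 and 4
// (warning / predictive failure) both normalize to "Warning".
func mapDiskHealthStatusSketch(code int, stateStr string) string {
	switch code {
	case 2:
		return "OK"
	case 1, 4:
		return "Warning"
	case 3:
		return "Critical"
	}
	if stateStr != "" {
		return stateStr // unknown code: surface the raw state string
	}
	return "Unknown"
}

func main() {
	fmt.Println(mapDiskHealthStatusSketch(4, "Online")) // Warning
	fmt.Println(mapDiskHealthStatusSketch(0, ""))       // Unknown
}
```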
func TestClassifySensorType(t *testing.T) {
tests := []struct {
name string
in string
unit string
want string
}{
{name: "unit rpm", in: "Fan 1", unit: "RPM", want: "fan"},
{name: "unit celsius", in: "CPU Temp", unit: "C", want: "temperature"},
{name: "unit watts", in: "PSU1 Input Power", unit: "W", want: "power"},
{name: "unit volts", in: "PWS1 AC Voltage", unit: "V", want: "voltage"},
{name: "unit amps", in: "PSU1 Current", unit: "A", want: "current"},
{name: "name fallback", in: "GPU Temp", unit: "", want: "temperature"},
{name: "other", in: "Presence", unit: "", want: "other"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := classifySensorType(tt.in, tt.unit); got != tt.want {
t.Fatalf("got %q, want %q", got, tt.want)
}
})
}
}
func TestCleanXCCValue(t *testing.T) {
tests := []struct {
in string
want string
}{
{in: " Lenovo ", want: "Lenovo"},
{in: "N/A", want: ""},
{in: " not specified ", want: ""},
{in: "-", want: ""},
}
for _, tt := range tests {
if got := cleanXCCValue(tt.in); got != tt.want {
t.Fatalf("cleanXCCValue(%q) = %q, want %q", tt.in, got, tt.want)
}
}
}
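The table-driven test fully specifies the behavior: trim whitespace, then collapse common BMC placeholder values to the empty string, case-insensitively. A sketch matching those pairs (hypothetical; the real `cleanXCCValue` may recognize additional placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// cleanXCCValueSketch trims a raw XCC inventory field and maps
// placeholder values ("N/A", "Not Specified", "-") to "", so that
// downstream consumers see either a real value or an empty string.
func cleanXCCValueSketch(s string) string {
	s = strings.TrimSpace(s)
	switch strings.ToLower(s) {
	case "", "n/a", "not specified", "-":
		return ""
	}
	return s
}

func main() {
	fmt.Printf("%q\n", cleanXCCValueSketch(" Lenovo ")) // "Lenovo"
	fmt.Printf("%q\n", cleanXCCValueSketch("N/A"))      // ""
}
```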


@@ -14,6 +14,7 @@ import (
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xfusion"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo_xcc"
// Generic fallback parser (must be last for lowest priority)
_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"


@@ -24,6 +24,7 @@ func newCollectTestServer() (*Server, *httptest.Server) {
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
return s, httptest.NewServer(mux)
}
@@ -65,9 +66,6 @@ func TestCollectProbe(t *testing.T) {
if payload.HostPowerState != "Off" {
t.Fatalf("expected host power state Off, got %q", payload.HostPowerState)
}
if !payload.PowerControlAvailable {
t.Fatalf("expected power control to be available")
}
}
func TestCollectLifecycleToTerminal(t *testing.T) {


@@ -26,12 +26,11 @@ func (c *mockConnector) Probe(ctx context.Context, req collector.Request) (*coll
hostPoweredOn = false
}
return &collector.ProbeResult{
Reachable: true,
Protocol: c.protocol,
HostPowerState: map[bool]string{true: "On", false: "Off"}[hostPoweredOn],
HostPoweredOn: hostPoweredOn,
PowerControlAvailable: true,
SystemPath: "/redfish/v1/Systems/1",
Reachable: true,
Protocol: c.protocol,
HostPowerState: map[bool]string{true: "On", false: "Off"}[hostPoweredOn],
HostPoweredOn: hostPoweredOn,
SystemPath: "/redfish/v1/Systems/1",
}, nil
}


@@ -19,18 +19,15 @@ type CollectRequest struct {
Password string `json:"password,omitempty"`
Token string `json:"token,omitempty"`
TLSMode string `json:"tls_mode"`
PowerOnIfHostOff bool `json:"power_on_if_host_off,omitempty"`
StopHostAfterCollect bool `json:"stop_host_after_collect,omitempty"`
DebugPayloads bool `json:"debug_payloads,omitempty"`
DebugPayloads bool `json:"debug_payloads,omitempty"`
}
type CollectProbeResponse struct {
Reachable bool `json:"reachable"`
Protocol string `json:"protocol,omitempty"`
HostPowerState string `json:"host_power_state,omitempty"`
HostPoweredOn bool `json:"host_powered_on"`
PowerControlAvailable bool `json:"power_control_available"`
Message string `json:"message,omitempty"`
HostPowerState string `json:"host_power_state,omitempty"`
HostPoweredOn bool `json:"host_powered_on"`
Message string `json:"message,omitempty"`
}
type CollectJobResponse struct {
@@ -78,7 +75,8 @@ type Job struct {
CreatedAt time.Time
UpdatedAt time.Time
RequestMeta CollectRequestMeta
cancel func()
cancel func()
skipFn func()
}
type CollectModuleStatus struct {


@@ -18,6 +18,7 @@ import (
"sort"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
@@ -49,11 +50,20 @@ func (s *Server) handleIndex(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/html; charset=utf-8")
tmpl.Execute(w, map[string]string{
"AppVersion": s.config.AppVersion,
"AppCommit": s.config.AppCommit,
"AppVersion": normalizeDisplayVersion(s.config.AppVersion),
"AppCommit": s.config.AppCommit,
"ChartVersion": normalizeDisplayVersion(s.config.ChartVersion),
})
}
func normalizeDisplayVersion(v string) string {
v = strings.TrimSpace(v)
if v == "" {
return ""
}
return strings.TrimPrefix(v, "v")
}
func (s *Server) handleChartCurrent(w http.ResponseWriter, r *http.Request) {
result := s.GetResult()
title := chartTitle(result)
@@ -1674,34 +1684,28 @@ func (s *Server) handleCollectProbe(w http.ResponseWriter, r *http.Request) {
message := "Связь с BMC установлена"
if result != nil {
switch {
case !result.HostPoweredOn && result.PowerControlAvailable:
message = "Связь с BMC установлена, host выключен. Можно включить перед сбором."
case !result.HostPoweredOn:
message = "Связь с BMC установлена, host выключен."
default:
message = "Связь с BMC установлена, host включен."
if result.HostPoweredOn {
message = "Связь с BMC установлена, host включён."
} else {
message = "Связь с BMC установлена, host выключен. Данные инвентаря могут быть неполными."
}
}
hostPowerState := ""
hostPoweredOn := false
powerControlAvailable := false
reachable := false
if result != nil {
reachable = result.Reachable
hostPowerState = strings.TrimSpace(result.HostPowerState)
hostPoweredOn = result.HostPoweredOn
powerControlAvailable = result.PowerControlAvailable
}
jsonResponse(w, CollectProbeResponse{
Reachable: reachable,
Protocol: req.Protocol,
HostPowerState: hostPowerState,
HostPoweredOn: hostPoweredOn,
PowerControlAvailable: powerControlAvailable,
Message: message,
Reachable: reachable,
Protocol: req.Protocol,
HostPowerState: hostPowerState,
HostPoweredOn: hostPoweredOn,
Message: message,
})
}
@@ -1737,6 +1741,22 @@ func (s *Server) handleCollectCancel(w http.ResponseWriter, r *http.Request) {
jsonResponse(w, job.toStatusResponse())
}
func (s *Server) handleCollectSkip(w http.ResponseWriter, r *http.Request) {
jobID := strings.TrimSpace(r.PathValue("id"))
if !isValidCollectJobID(jobID) {
jsonError(w, "Invalid collect job id", http.StatusBadRequest)
return
}
job, ok := s.jobManager.SkipJob(jobID)
if !ok {
jsonError(w, "Collect job not found", http.StatusNotFound)
return
}
jsonResponse(w, job.toStatusResponse())
}
func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
ctx, cancel := context.WithCancel(context.Background())
if attached := s.jobManager.AttachJobCancel(jobID, cancel); !attached {
@@ -1744,6 +1764,11 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
return
}
skipCh := make(chan struct{})
var skipOnce sync.Once
skipFn := func() { skipOnce.Do(func() { close(skipCh) }) }
s.jobManager.AttachJobSkip(jobID, skipFn)
go func() {
connector, ok := s.getCollector(req.Protocol)
if !ok {
@@ -1811,7 +1836,9 @@ func (s *Server) startCollectionJob(jobID string, req CollectRequest) {
}
}
result, err := connector.Collect(ctx, toCollectorRequest(req), emitProgress)
collectorReq := toCollectorRequest(req)
collectorReq.SkipHungCh = skipCh
result, err := connector.Collect(ctx, collectorReq, emitProgress)
if err != nil {
if ctx.Err() != nil {
return
@@ -2027,17 +2054,15 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectReques
func toCollectorRequest(req CollectRequest) collector.Request {
return collector.Request{
Host: req.Host,
Protocol: req.Protocol,
Port: req.Port,
Username: req.Username,
AuthType: req.AuthType,
Password: req.Password,
Token: req.Token,
TLSMode: req.TLSMode,
PowerOnIfHostOff: req.PowerOnIfHostOff,
StopHostAfterCollect: req.StopHostAfterCollect,
DebugPayloads: req.DebugPayloads,
Host: req.Host,
Protocol: req.Protocol,
Port: req.Port,
Username: req.Username,
AuthType: req.AuthType,
Password: req.Password,
Token: req.Token,
TLSMode: req.TLSMode,
DebugPayloads: req.DebugPayloads,
}
}


@@ -175,6 +175,43 @@ func (m *JobManager) UpdateJobDebugInfo(id string, info *CollectDebugInfo) (*Job
return cloned, true
}
func (m *JobManager) AttachJobSkip(id string, skipFn func()) bool {
m.mu.Lock()
defer m.mu.Unlock()
job, ok := m.jobs[id]
if !ok || job == nil || isTerminalCollectStatus(job.Status) {
return false
}
job.skipFn = skipFn
return true
}
func (m *JobManager) SkipJob(id string) (*Job, bool) {
m.mu.Lock()
job, ok := m.jobs[id]
if !ok || job == nil {
m.mu.Unlock()
return nil, false
}
if isTerminalCollectStatus(job.Status) {
cloned := cloneJob(job)
m.mu.Unlock()
return cloned, true
}
skipFn := job.skipFn
job.skipFn = nil
job.UpdatedAt = time.Now().UTC()
job.Logs = append(job.Logs, formatCollectLogLine(job.UpdatedAt, "Пропуск зависших запросов по команде пользователя"))
cloned := cloneJob(job)
m.mu.Unlock()
if skipFn != nil {
skipFn()
}
return cloned, true
}
func (m *JobManager) AttachJobCancel(id string, cancelFn context.CancelFunc) bool {
m.mu.Lock()
defer m.mu.Unlock()
@@ -229,5 +266,6 @@ func cloneJob(job *Job) *Job {
cloned.CurrentPhase = job.CurrentPhase
cloned.ETASeconds = job.ETASeconds
cloned.cancel = nil
cloned.skipFn = nil
return &cloned
}


@@ -19,10 +19,11 @@ import (
var WebFS embed.FS
type Config struct {
Port int
PreloadFile string
AppVersion string
AppCommit string
Port int
PreloadFile string
AppVersion string
AppCommit string
ChartVersion string
}
type Server struct {
@@ -99,6 +100,7 @@ func (s *Server) setupRoutes() {
s.mux.HandleFunc("POST /api/collect/probe", s.handleCollectProbe)
s.mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
s.mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
s.mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
}
func (s *Server) Run() error {


@@ -24,6 +24,7 @@ func newFlowTestServer() (*Server, *httptest.Server) {
mux.HandleFunc("POST /api/collect", s.handleCollectStart)
mux.HandleFunc("GET /api/collect/{id}", s.handleCollectStatus)
mux.HandleFunc("POST /api/collect/{id}/cancel", s.handleCollectCancel)
mux.HandleFunc("POST /api/collect/{id}/skip", s.handleCollectSkip)
return s, httptest.NewServer(mux)
}

BIN
logpile

Binary file not shown.


@@ -128,6 +128,7 @@ echo ""
# Show next steps
echo -e "${YELLOW}Next steps:${NC}"
echo " 1. Create git tag:"
echo " # LOGPile release tags use vN.M, for example: v1.12"
echo " git tag -a ${VERSION} -m \"Release ${VERSION}\""
echo ""
echo " 2. Push tag to remote:"

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,5 +1,5 @@
<!DOCTYPE html>
<html lang="ru">
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
@@ -7,57 +7,63 @@
<link rel="stylesheet" href="/static/css/style.css">
</head>
<body>
<header>
<div class="app-header-row">
<div class="app-header-brand">
<h1>LOGPile <span class="header-domain">mchus.pro</span></h1>
<p>Анализатор диагностических данных BMC/IPMI</p>
</div>
<div id="header-log-meta" class="header-log-meta hidden">
<div class="header-actions">
<button id="clear-btn" class="hidden" onclick="clearData()">Очистить данные</button>
<button id="header-raw-btn" class="hidden" onclick="exportData('json')">Export Raw Data</button>
<button id="header-reanimator-btn" class="hidden" onclick="exportData('reanimator')">Экспорт Reanimator</button>
<button id="restart-btn" onclick="restartApp()">Перезапуск</button>
<button id="exit-btn" onclick="exitApp()">Выход</button>
</div>
<header class="page-header">
<div class="page-header-brand">
<p class="page-eyebrow">Diagnostic Workbench</p>
<h1>LOGPile</h1>
<p class="page-subtitle">BMC diagnostic data analyzer</p>
</div>
<div id="header-log-meta" class="header-log-meta hidden">
<div class="header-actions">
<button id="clear-btn" class="header-action hidden" onclick="clearData()">Clear Data</button>
<button id="header-raw-btn" class="header-action hidden" onclick="exportData('json')">Raw Data</button>
<button id="header-reanimator-btn" class="header-action hidden" onclick="exportData('reanimator')">Reanimator</button>
<button id="restart-btn" class="header-action" onclick="restartApp()">Restart</button>
<button id="exit-btn" class="header-action" onclick="exitApp()">Exit</button>
</div>
</div>
</header>
<main>
<section id="upload-section">
<div class="source-switch" role="tablist" aria-label="Источник данных">
<button type="button" class="source-switch-btn active" data-source-type="archive">Архив</button>
<main class="page-main">
<section id="upload-section" class="control-deck">
<div class="source-switch" role="tablist" aria-label="Data source">
<button type="button" class="source-switch-btn active" data-source-type="archive">Archive</button>
<button type="button" class="source-switch-btn" data-source-type="api">API</button>
<button type="button" class="source-switch-btn" data-source-type="convert">Convert</button>
</div>
<div id="archive-source-content">
<div class="upload-area" id="drop-zone">
<p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
<div id="archive-source-content" class="surface-panel upload-panel">
<h2>Open Archive</h2>
<p>Upload a support archive, plain log, or raw JSON snapshot to open the hardware report.</p>
<div class="upload-area upload-dropzone" id="drop-zone">
<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
<p class="hint">Поддерживаемые форматы: ahs, tar.gz, tar, tgz, sds, zip, json, txt, log</p>
<span class="upload-kicker">Archive Import</span>
<strong>Drop a file here</strong>
<span class="upload-copy">LOGPile will parse it and open the report immediately.</span>
<div class="upload-actions">
<button type="button" onclick="document.getElementById('file-input').click()">Select File</button>
</div>
<p class="hint">Supported formats: `.ahs`, `.tar.gz`, `.tar`, `.tgz`, `.sds`, `.zip`, `.json`, `.txt`, `.log`</p>
</div>
<div id="upload-status"></div>
<div id="parsers-info" class="parsers-info"></div>
</div>
<div id="api-source-content" class="api-placeholder hidden">
<div id="api-source-content" class="surface-panel upload-panel hidden">
<h2>BMC API</h2>
<p>Validate access and start live collection through the production Redfish pipeline.</p>
<form id="api-connect-form" novalidate>
<h3>Подключение к BMC API</h3>
<div id="api-form-errors" class="form-errors hidden"></div>
<div class="api-form-grid">
<label class="api-form-field" for="api-host">
<span>Host</span>
<input id="api-host" name="host" type="text" placeholder="10.0.0.10 или bmc.example.local">
<input id="api-host" name="host" type="text" placeholder="10.0.0.10 or bmc.example.local">
<span class="field-error" data-error-for="host"></span>
</label>
<label class="api-form-field" for="api-port">
<span>Порт</span>
<span>Port</span>
<input id="api-port" name="port" type="number" min="1" max="65535" value="443" placeholder="443">
<span class="field-error" data-error-for="port"></span>
</label>
@@ -69,55 +75,52 @@
</label>
<label class="api-form-field" id="api-password-field" for="api-password">
<span>Пароль</span>
<span>Password</span>
<input id="api-password" name="password" type="password" autocomplete="current-password">
<span class="field-error" data-error-for="password"></span>
</label>
</div>
<div class="api-form-actions">
<button id="api-connect-btn" type="button">Подключиться</button>
<button id="api-connect-btn" type="button">Connect</button>
</div>
<div id="api-connect-status" class="api-connect-status"></div>
<div id="api-probe-options" class="api-probe-options hidden">
<label class="api-form-checkbox" for="api-power-on">
<input id="api-power-on" name="power_on_if_host_off" type="checkbox">
<span>Включить перед сбором</span>
</label>
<label class="api-form-checkbox" for="api-power-off">
<input id="api-power-off" name="stop_host_after_collect" type="checkbox">
<span>Выключить после сбора</span>
</label>
<div class="api-probe-options-separator"></div>
<div id="api-host-off-warning" class="api-host-off-warning hidden">
&#9888; Host is powered off. Inventory data may be incomplete.
</div>
<label class="api-form-checkbox" for="api-debug-payloads">
<input id="api-debug-payloads" name="debug_payloads" type="checkbox">
<span>Сбор расширенных метрик для отладки</span>
<span>Collect extended diagnostics</span>
</label>
<div class="api-form-actions">
<button id="api-collect-btn" type="submit">Собрать</button>
<button id="api-collect-btn" type="submit">Collect</button>
</div>
</div>
</form>
<section id="api-job-status" class="job-status hidden" aria-live="polite">
<div class="job-status-header">
<h4>Статус задачи сбора</h4>
<button id="cancel-job-btn" type="button">Отменить</button>
<h4>Collection Job Status</h4>
<div class="job-status-actions">
<button id="skip-hung-btn" type="button" class="hidden" title="Abort hung requests and continue with analysis of collected data">Skip Hung Requests</button>
<button id="cancel-job-btn" type="button">Cancel</button>
</div>
</div>
<div class="job-status-meta">
<div><span class="meta-label">jobId:</span> <code id="job-id-value">-</code></div>
<div>
<span class="meta-label">Статус:</span>
<span class="meta-label">Status:</span>
<span id="job-status-value" class="job-status-badge">Queued</span>
</div>
<div><span class="meta-label">Этап:</span> <span id="job-progress-value">Сбор данных...</span></div>
<div><span class="meta-label">Stage:</span> <span id="job-progress-value">Collecting data...</span></div>
<div><span class="meta-label">ETA:</span> <span id="job-eta-value">-</span></div>
</div>
<div class="job-progress" aria-label="Прогресс задачи">
<div class="job-progress" aria-label="Job progress">
<div id="job-progress-bar" class="job-progress-bar" style="width: 0%">0%</div>
</div>
<div id="job-active-modules" class="job-active-modules hidden">
<p class="meta-label">Активные модули:</p>
<p class="meta-label">Active modules:</p>
<div id="job-active-modules-list" class="job-module-chips"></div>
</div>
<div id="job-debug-info" class="job-debug-info hidden">
@@ -126,23 +129,23 @@
<div id="job-phase-telemetry" class="job-phase-telemetry"></div>
</div>
<div class="job-status-logs">
<p class="meta-label">Журнал шагов:</p>
<p class="meta-label">Step log:</p>
<ul id="job-logs-list"></ul>
</div>
</section>
</div>
<div id="convert-source-content" class="api-placeholder hidden">
<h3>Пакетная выгрузка Reanimator</h3>
<p>Выберите папку с файлами поддерживаемого типа. Для каждого файла будет создан отдельный экспорт Reanimator.</p>
<div id="convert-source-content" class="surface-panel upload-panel hidden">
<h2>Batch Convert</h2>
<p>Select a folder with supported files. A separate Reanimator export will be produced for each file.</p>
<div class="api-form-actions">
<input type="file" id="convert-folder-input" webkitdirectory directory multiple hidden>
<button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Выбрать папку</button>
<button id="convert-run-btn" type="button">Конвертировать в Reanimator</button>
<button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Choose Folder</button>
<button id="convert-run-btn" type="button">Convert to Reanimator</button>
</div>
<div id="convert-progress" class="convert-progress hidden" aria-live="polite">
<div class="convert-progress-meta">
<span id="convert-progress-label">Подготовка...</span>
<span id="convert-progress-label">Preparing...</span>
<span id="convert-progress-value">0%</span>
</div>
<div class="convert-progress-track">
@@ -155,12 +158,12 @@
</section>
<section id="data-section" class="hidden">
<section class="result-panel">
<section class="viewer-panel">
<div class="audit-viewer-shell">
<iframe
id="audit-viewer-frame"
class="audit-viewer-frame"
title="Reanimator chart viewer"
title="Hardware report"
loading="eager"
scrolling="no"
referrerpolicy="same-origin">
@@ -170,11 +173,9 @@
</section>
</main>
<footer>
<div class="footer-buttons">
</div>
<footer class="page-footer">
<div class="footer-info">
<p>Автор: <a href="https://mchus.pro" target="_blank">mchus.pro</a> | <a href="https://git.mchus.pro/mchus/logpile" target="_blank">Git Repository</a>{{if .AppVersion}} | v{{.AppVersion}}{{end}}</p>
<p>{{if .AppVersion}}LOGPile {{.AppVersion}}{{end}}{{if and .AppVersion .ChartVersion}} · {{end}}{{if .ChartVersion}}Chart {{.ChartVersion}}{{end}}</p>
</div>
</footer>