Compare commits

7 Commits

| SHA1 |
|---|
| c9969fc3da |
| 89b6701f43 |
| b04877549a |
| 8ca173c99b |
| f19a3454fa |
| becdca1d7e |
| e10440ae32 |
@@ -58,6 +58,7 @@ Responses:

Optional request fields:

- `power_on_if_host_off`: when `true`, Redfish collection may power on the host before collection if preflight found it powered off
- `debug_payloads`: when `true`, the collector keeps extra diagnostic payloads and enables extended plan-B retries for slow HGX component inventory branches (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`)

### `POST /api/collect/probe`
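For reference, a minimal request body combining the optional fields above might look like this (the `host` and `protocol` fields and the exact schema are assumptions for illustration, not taken from the API spec):

```json
{
  "host": "10.0.0.21",
  "protocol": "redfish",
  "power_on_if_host_off": true,
  "debug_payloads": false
}
```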
@@ -27,6 +27,7 @@ Request fields passed from the server:

- credential field (`password` or token)
- `tls_mode`
- optional `power_on_if_host_off`
- optional `debug_payloads` for extended diagnostics

### Core rule
@@ -57,6 +58,17 @@ closes `skipCh` → goroutine in `Collect()` → `cancelCollect()`.

The skip button is visible during the `running` state and hidden once the job reaches a terminal state.
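The `skipCh` → cancel flow above can be sketched as a minimal runnable program (`runCollect` and its body are illustrative stand-ins; the real `Collect()` loop differs):

```go
package main

import (
	"context"
	"fmt"
)

// Sketch of the documented skip flow: closing skipCh is observed by a
// goroutine that calls cancelCollect(), which aborts the in-flight collection.
func runCollect(skipCh <-chan struct{}) string {
	ctx, cancelCollect := context.WithCancel(context.Background())
	defer cancelCollect()

	go func() {
		<-skipCh        // skip button closes the channel
		cancelCollect() // cancels the context the collection loop watches
	}()

	<-ctx.Done() // a real collection loop would check ctx.Err() between steps
	return "skipped"
}

func main() {
	skipCh := make(chan struct{})
	close(skipCh) // simulate the operator pressing Skip
	fmt.Println(runCollect(skipCh))
}
```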
### Extended diagnostics toggle

The live collect form exposes a user-facing checkbox for extended diagnostics.

- default collection prioritizes inventory completeness and bounded runtime
- when extended diagnostics is off, the heavy HGX component-chassis critical plan-B retries (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`) are skipped
- when extended diagnostics is on, those retries are allowed and extra debug payloads are collected

This toggle is intended for operator-driven deep diagnostics on problematic hosts, not for the default path.
### Discovery model

The collector does not rely on one fixed vendor tree.

@@ -55,6 +55,7 @@ When `vendor_id` and `device_id` are known but the model name is missing or generic

| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `lenovo_xcc` | Lenovo XCC mini-log ZIP archives | JSON inventory + platform event logs |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |
@@ -194,6 +195,7 @@ and `LogDump/` trees.

| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| Lenovo XCC mini-log | `lenovo_xcc` | Ready | ThinkSystem SR650 V3 XCC mini-log ZIP |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |
@@ -57,6 +57,11 @@ Current behavior:

7. Packages any already-present binaries from `bin/`
8. Generates `SHA256SUMS.txt`

Release tag format:

- project release tags use `vN.M`
- do not create `vN.M.P` tags for LOGPile releases
- release artifacts and `main.version` inherit the exact git tag string

Important limitation:

- `scripts/release.sh` does not run `make build-all` for you
- if you want Linux or additional macOS archives in the release directory, build them before running the script
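The `vN.M` shape above can be enforced with a small guard before tagging; the helper below is an illustrative sketch, not something shipped in `scripts/release.sh`:

```go
package main

import (
	"fmt"
	"regexp"
)

// releaseTagRe accepts exactly two numeric components after "v" (e.g. v1.12)
// and rejects three-component vN.M.P tags.
var releaseTagRe = regexp.MustCompile(`^v[0-9]+\.[0-9]+$`)

func isValidReleaseTag(tag string) bool {
	return releaseTagRe.MatchString(tag)
}

func main() {
	fmt.Println(isValidReleaseTag("v1.12"))   // two components: allowed
	fmt.Println(isValidReleaseTag("v1.12.3")) // vN.M.P: rejected
}
```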
@@ -1120,3 +1120,37 @@ incomplete for UI and Reanimator consumers.

- System firmware such as BIOS and iBMC versions survives xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI cases.
---

## ADL-043 — Extended HGX diagnostic plan-B is opt-in from the live collect form

**Date:** 2026-04-13

**Context:** Some Supermicro HGX Redfish targets expose slow or hanging component-chassis inventory collections during critical plan-B, especially under `Chassis/HGX_*` for `Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, and `PCIeDevices`. Default collection should not block operators on deep diagnostic retries that are useful mainly for troubleshooting.

**Decision:** Keep the normal snapshot/replay path unchanged, but gate those heavy HGX component-chassis critical plan-B retries behind the existing live-collect `debug_payloads` flag, presented in the UI as "Сбор расширенных данных для диагностики" ("Collect extended data for diagnostics").

**Consequences:**

- Default live collection skips those heavy diagnostic plan-B retries and reaches replay faster.
- Operators can explicitly opt into the slower diagnostic path when they need deeper collection.
- The same user-facing toggle continues to enable extra debug payload capture for troubleshooting.
---

## ADL-044 — LOGPile project release tags use `vN.M`

**Date:** 2026-04-13

**Context:** The repository accumulated release tags in `vN.M.P` form, while the shared module versioning contract in `bible/rules/patterns/module-versioning/contract.md` standardizes version shape as `N.M`. Release tooling reads the git tag verbatim into build metadata and release artifacts, so inconsistent tag shape leaks directly into packaged versions.

**Decision:** Use `vN.M` for LOGPile project release tags going forward. Do not create new `vN.M.P` tags for repository releases. Build metadata, release directory names, and release notes continue to inherit the exact git tag string from `git describe --tags`.

**Consequences:**

- Future project releases have a two-component version string such as `v1.12`.
- Release artifacts and `--version` output stay aligned with the tag shape without extra mapping.
- Existing historical `vN.M.P` tags remain as-is unless explicitly rewritten.
@@ -345,8 +345,9 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit ProgressFn
			"manager_critical_suffixes": acquisitionPlan.ScopedPaths.ManagerCriticalSuffixes,
		},
		"tuning": map[string]any{
			"snapshot_max_documents":    acquisitionPlan.Tuning.SnapshotMaxDocuments,
			"snapshot_workers":          acquisitionPlan.Tuning.SnapshotWorkers,
			"snapshot_exclude_contains": acquisitionPlan.Tuning.SnapshotExcludeContains,
			"prefetch_workers":          acquisitionPlan.Tuning.PrefetchWorkers,
			"prefetch_enabled":          boolPointerValue(acquisitionPlan.Tuning.PrefetchEnabled),
			"nvme_post_probe":           boolPointerValue(acquisitionPlan.Tuning.NVMePostProbeEnabled),
@@ -496,7 +497,6 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit ProgressFn
	return result, nil
}

// collectDebugPayloads fetches vendor-specific diagnostic endpoints on a best-effort basis.
// Results are stored in rawPayloads["redfish_debug_payloads"] and exported with the bundle.
// Enabled only when Request.DebugPayloads is true.
@@ -511,7 +511,6 @@ func (c *RedfishConnector) collectDebugPayloads(ctx context.Context, client *http.Client
	return out
}

func firstNonEmptyPath(paths []string, fallback string) string {
	for _, p := range paths {
		if strings.TrimSpace(p) != "" {
@@ -543,7 +542,6 @@ func redfishSystemPowerState(systemDoc map[string]interface{}) string {
	return ""
}

func (c *RedfishConnector) postJSON(ctx context.Context, client *http.Client, req Request, baseURL, resourcePath string, payload map[string]any) error {
	body, err := json.Marshal(payload)
	if err != nil {
@@ -1346,6 +1344,11 @@ func (c *RedfishConnector) collectRawRedfishTree(ctx context.Context, client *http.Client
		if !shouldCrawlPath(path) {
			return
		}
		for _, pattern := range tuning.SnapshotExcludeContains {
			if pattern != "" && strings.Contains(path, pattern) {
				return
			}
		}
		mu.Lock()
		if len(seen) >= maxDocuments {
			mu.Unlock()
@@ -2299,7 +2302,6 @@ func redfishCriticalSlowGap() time.Duration {
	return 1200 * time.Millisecond
}

func redfishSnapshotMemoryRequestTimeout() time.Duration {
	if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_MEMORY_TIMEOUT")); v != "" {
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
@@ -2878,11 +2880,16 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
	timings := newRedfishPathTimingCollector(4)
	var targets []string
	seenTargets := make(map[string]struct{})
	skippedDiagnosticTargets := 0
	addTarget := func(path string) {
		path = normalizeRedfishPath(path)
		if path == "" {
			return
		}
		if !shouldIncludeCriticalPlanBPath(req, path) {
			skippedDiagnosticTargets++
			return
		}
		if _, ok := seenTargets[path]; ok {
			return
		}
@@ -2968,6 +2975,13 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
		return 0
	}
	if emit != nil {
		if skippedDiagnosticTargets > 0 {
			emit(Progress{
				Status:   "running",
				Progress: 97,
				Message:  fmt.Sprintf("Redfish: расширенная диагностика выключена, пропущено %d тяжелых diagnostic endpoint", skippedDiagnosticTargets),
			})
		}
		totalETA := redfishCriticalCooldown() + estimatePlanBETA(len(targets))
		emit(Progress{
			Status: "running",
@@ -3073,6 +3087,39 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
	return recovered
}

func shouldIncludeCriticalPlanBPath(req Request, path string) bool {
	if req.DebugPayloads {
		return true
	}
	return !isExtendedDiagnosticCriticalPlanBPath(path)
}

func isExtendedDiagnosticCriticalPlanBPath(path string) bool {
	path = normalizeRedfishPath(path)
	if path == "" {
		return false
	}
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) < 5 || parts[0] != "redfish" || parts[1] != "v1" || parts[2] != "Chassis" {
		return false
	}
	if !strings.HasPrefix(parts[3], "HGX_") {
		return false
	}
	for _, suffix := range []string{
		"/Accelerators",
		"/Assembly",
		"/Drives",
		"/NetworkAdapters",
		"/PCIeDevices",
	} {
		if strings.HasSuffix(path, suffix) {
			return true
		}
	}
	return false
}
func (c *RedfishConnector) recoverProfilePlanBDocs(ctx context.Context, client *http.Client, req Request, baseURL string, plan redfishprofile.AcquisitionPlan, rawTree map[string]interface{}, emit ProgressFn) int {
	if len(plan.PlanBPaths) == 0 || plan.Mode == redfishprofile.ModeFallback || !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
		return 0
@@ -3592,7 +3639,7 @@ func parseNIC(doc map[string]interface{}) models.NetworkAdapter {
	}
	if pcieIf, ok := ctrl["PCIeInterface"].(map[string]interface{}); ok && linkWidth == 0 && maxLinkWidth == 0 && linkSpeed == "" && maxLinkSpeed == "" {
		linkWidth = asInt(pcieIf["LanesInUse"])
		maxLinkWidth = firstNonZeroInt(asInt(pcieIf["MaxLanes"]), asInt(pcieIf["Maxlanes"]))
		linkSpeed = firstNonEmpty(asString(pcieIf["PCIeType"]), asString(pcieIf["CurrentLinkSpeedGTs"]), asString(pcieIf["CurrentLinkSpeed"]))
		maxLinkSpeed = firstNonEmpty(asString(pcieIf["MaxPCIeType"]), asString(pcieIf["MaxLinkSpeedGTs"]), asString(pcieIf["MaxLinkSpeed"]))
	}
@@ -3705,6 +3752,9 @@ func enrichNICFromPCIe(nic *models.NetworkAdapter, pcieDoc map[string]interface{
	if strings.TrimSpace(nic.MaxLinkSpeed) == "" {
		nic.MaxLinkSpeed = firstNonEmpty(asString(pcieDoc["MaxLinkSpeedGTs"]), asString(pcieDoc["MaxLinkSpeed"]))
	}
	if nic.LinkWidth == 0 || nic.MaxLinkWidth == 0 || nic.LinkSpeed == "" || nic.MaxLinkSpeed == "" {
		redfishEnrichFromOEMxFusionPCIeLink(pcieDoc, &nic.LinkWidth, &nic.MaxLinkWidth, &nic.LinkSpeed, &nic.MaxLinkSpeed)
	}
	if normalizeRedfishIdentityField(nic.SerialNumber) == "" {
		nic.SerialNumber = findFirstNormalizedStringByKeys(pcieDoc, "SerialNumber")
	}
@@ -3736,6 +3786,9 @@ func enrichNICFromPCIe(nic *models.NetworkAdapter, pcieDoc map[string]interface{
	if strings.TrimSpace(nic.MaxLinkSpeed) == "" {
		nic.MaxLinkSpeed = firstNonEmpty(asString(fn["MaxLinkSpeedGTs"]), asString(fn["MaxLinkSpeed"]))
	}
	if nic.LinkWidth == 0 || nic.MaxLinkWidth == 0 || nic.LinkSpeed == "" || nic.MaxLinkSpeed == "" {
		redfishEnrichFromOEMxFusionPCIeLink(fn, &nic.LinkWidth, &nic.MaxLinkWidth, &nic.LinkSpeed, &nic.MaxLinkSpeed)
	}
	if normalizeRedfishIdentityField(nic.SerialNumber) == "" {
		nic.SerialNumber = findFirstNormalizedStringByKeys(fn, "SerialNumber")
	}
@@ -4302,6 +4355,21 @@ func parseGPUWithSupplementalDocs(doc map[string]interface{}, functionDocs []map
		gpu.DeviceID = asHexOrInt(doc["DeviceId"])
	}

	if pcieIf, ok := doc["PCIeInterface"].(map[string]interface{}); ok {
		if gpu.CurrentLinkWidth == 0 {
			gpu.CurrentLinkWidth = asInt(pcieIf["LanesInUse"])
		}
		if gpu.MaxLinkWidth == 0 {
			gpu.MaxLinkWidth = firstNonZeroInt(asInt(pcieIf["MaxLanes"]), asInt(pcieIf["Maxlanes"]))
		}
		if gpu.CurrentLinkSpeed == "" {
			gpu.CurrentLinkSpeed = firstNonEmpty(asString(pcieIf["PCIeType"]), asString(pcieIf["CurrentLinkSpeedGTs"]), asString(pcieIf["CurrentLinkSpeed"]))
		}
		if gpu.MaxLinkSpeed == "" {
			gpu.MaxLinkSpeed = firstNonEmpty(asString(pcieIf["MaxPCIeType"]), asString(pcieIf["MaxLinkSpeedGTs"]), asString(pcieIf["MaxLinkSpeed"]))
		}
	}

	for _, fn := range functionDocs {
		if gpu.BDF == "" {
			gpu.BDF = sanitizeRedfishBDF(asString(fn["FunctionId"]))
@@ -4324,6 +4392,9 @@ func parseGPUWithSupplementalDocs(doc map[string]interface{}, functionDocs []map
		if gpu.CurrentLinkSpeed == "" {
			gpu.CurrentLinkSpeed = firstNonEmpty(asString(fn["CurrentLinkSpeedGTs"]), asString(fn["CurrentLinkSpeed"]))
		}
		if gpu.CurrentLinkWidth == 0 || gpu.MaxLinkWidth == 0 || gpu.CurrentLinkSpeed == "" || gpu.MaxLinkSpeed == "" {
			redfishEnrichFromOEMxFusionPCIeLink(fn, &gpu.CurrentLinkWidth, &gpu.MaxLinkWidth, &gpu.CurrentLinkSpeed, &gpu.MaxLinkSpeed)
		}
	}

	if isMissingOrRawPCIModel(gpu.Model) {
@@ -4384,6 +4455,9 @@ func parsePCIeDeviceWithSupplementalDocs(doc map[string]interface{}, functionDoc
		if dev.MaxLinkSpeed == "" {
			dev.MaxLinkSpeed = firstNonEmpty(asString(fn["MaxLinkSpeedGTs"]), asString(fn["MaxLinkSpeed"]))
		}
		if dev.LinkWidth == 0 || dev.MaxLinkWidth == 0 || dev.LinkSpeed == "" || dev.MaxLinkSpeed == "" {
			redfishEnrichFromOEMxFusionPCIeLink(fn, &dev.LinkWidth, &dev.MaxLinkWidth, &dev.LinkSpeed, &dev.MaxLinkSpeed)
		}
	}
	if dev.DeviceClass == "" || isGenericPCIeClassLabel(dev.DeviceClass) {
		dev.DeviceClass = firstNonEmpty(redfishFirstStringAcrossDocs(supplementalDocs, "DeviceType"), dev.DeviceClass)
@@ -4633,6 +4707,59 @@ func buildBDFfromOemPublic(doc map[string]interface{}) string {
	return fmt.Sprintf("%04x:%02x:%02x.%x", segment, bus, dev, fn)
}

// redfishEnrichFromOEMxFusionPCIeLink fills in missing PCIe link width/speed
// from the xFusion OEM namespace. xFusion reports link width as a string like
// "X8" in Oem.xFusion.LinkWidth / Oem.xFusion.LinkWidthAbility, and link speed
// as a string like "Gen4 (16.0GT/s)" in Oem.xFusion.LinkSpeed /
// Oem.xFusion.LinkSpeedAbility. These fields appear on PCIeFunction docs.
func redfishEnrichFromOEMxFusionPCIeLink(doc map[string]interface{}, linkWidth, maxLinkWidth *int, linkSpeed, maxLinkSpeed *string) {
	oem, _ := doc["Oem"].(map[string]interface{})
	if oem == nil {
		return
	}
	xf, _ := oem["xFusion"].(map[string]interface{})
	if xf == nil {
		return
	}
	if *linkWidth == 0 {
		*linkWidth = parseXFusionLinkWidth(asString(xf["LinkWidth"]))
	}
	if *maxLinkWidth == 0 {
		*maxLinkWidth = parseXFusionLinkWidth(asString(xf["LinkWidthAbility"]))
	}
	if strings.TrimSpace(*linkSpeed) == "" {
		*linkSpeed = strings.TrimSpace(asString(xf["LinkSpeed"]))
	}
	if strings.TrimSpace(*maxLinkSpeed) == "" {
		*maxLinkSpeed = strings.TrimSpace(asString(xf["LinkSpeedAbility"]))
	}
}

// parseXFusionLinkWidth converts an xFusion link-width string like "X8" or
// "x16" to the integer lane count. Returns 0 for unrecognised values.
func parseXFusionLinkWidth(s string) int {
	s = strings.TrimSpace(s)
	if s == "" {
		return 0
	}
	s = strings.TrimPrefix(strings.ToUpper(s), "X")
	v := asInt(s)
	if v <= 0 {
		return 0
	}
	return v
}

// firstNonZeroInt returns the first argument that is non-zero.
func firstNonZeroInt(vals ...int) int {
	for _, v := range vals {
		if v != 0 {
			return v
		}
	}
	return 0
}

func normalizeRedfishIdentityField(v string) string {
	v = strings.TrimSpace(v)
	if v == "" {
@@ -50,11 +50,15 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
	}

	for _, systemPath := range systemPaths {
		for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, systemPath, "LogServices") {
			collectFrom(logServicesPath, isHardwareLogService)
		}
	}
	// Managers hold the IPMI SEL on AMI/MSI BMCs — include only the "SEL" service.
	for _, managerPath := range managerPaths {
		for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, managerPath, "LogServices") {
			collectFrom(logServicesPath, isManagerSELService)
		}
	}

	if len(out) > 0 {
@@ -63,6 +67,42 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
	return out
}

func (c *RedfishConnector) redfishLinkedCollectionPaths(
	ctx context.Context,
	client *http.Client,
	req Request,
	baseURL, resourcePath, linkKey string,
) []string {
	resourcePath = normalizeRedfishPath(resourcePath)
	if resourcePath == "" || strings.TrimSpace(linkKey) == "" {
		return nil
	}

	seen := make(map[string]struct{}, 2)
	var out []string
	add := func(path string) {
		path = normalizeRedfishPath(path)
		if path == "" {
			return
		}
		if _, ok := seen[path]; ok {
			return
		}
		seen[path] = struct{}{}
		out = append(out, path)
	}

	add(joinPath(resourcePath, "/"+strings.TrimSpace(linkKey)))

	resourceDoc, err := c.getJSON(ctx, client, req, baseURL, resourcePath)
	if err == nil {
		if linked := redfishLinkedPath(resourceDoc, linkKey); linked != "" {
			add(linked)
		}
	}
	return out
}

// fetchRedfishLogEntriesWithPaging fetches entries from a LogEntry collection,
// following nextLink pages. Stops early when entries older than cutoff are encountered
// (assumes BMC returns entries newest-first, which is typical).
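The cutoff behaviour described in this comment can be sketched independently of the connector types (the `entry` struct and integer timestamps below are illustrative stand-ins for real LogEntry documents and nextLink fetches):

```go
package main

import "fmt"

// entry stands in for a Redfish LogEntry; created is a fake timestamp.
type entry struct{ created int }

// collectUntilCutoff walks pages newest-first and stops at the first entry
// older than cutoff, since everything after it is older still.
func collectUntilCutoff(pages [][]entry, cutoff int) []entry {
	var out []entry
	for _, page := range pages { // each page would come from following nextLink
		for _, e := range page {
			if e.created < cutoff {
				return out // early stop: the rest of the log is older
			}
			out = append(out, e)
		}
	}
	return out
}

func main() {
	pages := [][]entry{{{10}, {9}}, {{8}, {2}}}
	fmt.Println(len(collectUntilCutoff(pages, 5))) // keeps entries 10, 9, 8
}
```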
@@ -182,7 +222,7 @@ func redfishLogServiceEntriesPath(svc map[string]interface{}) string {
// Audit, authentication, and session events are excluded.
func isHardwareLogEntry(entry map[string]interface{}) bool {
	entryType := strings.TrimSpace(asString(entry["EntryType"]))
	if strings.EqualFold(entryType, "Oem") && !strings.EqualFold(strings.TrimSpace(asString(entry["OemRecordFormat"])), "Lenovo") {
		return false
	}

@@ -362,6 +402,9 @@ func parseIPMIDumpKV(message string) map[string]string {

// AMI/MSI BMCs often set Severity="OK" on all SEL records regardless of content,
// so we fall back to inferring severity from SensorType when the explicit field is unhelpful.
func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
	if redfishLogEntryLooksLikeWarning(entry) {
		return models.SeverityWarning
	}
	// Newer Redfish uses MessageSeverity; older uses Severity.
	raw := strings.ToLower(firstNonEmpty(
		strings.TrimSpace(asString(entry["MessageSeverity"])),
@@ -380,6 +423,16 @@ func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
	}
}

func redfishLogEntryLooksLikeWarning(entry map[string]interface{}) bool {
	joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
		asString(entry["Message"]),
		asString(entry["Name"]),
		asString(entry["SensorType"]),
		asString(entry["EntryCode"]),
	}, " ")))
	return strings.Contains(joined, "unqualified dimm")
}

// redfishSeverityFromSensorType infers event severity from the IPMI/Redfish SensorType string.
func redfishSeverityFromSensorType(sensorType string) models.Severity {
	switch strings.ToLower(sensorType) {
internal/collector/redfish_logentries_test.go (new file, 125 lines)

@@ -0,0 +1,125 @@
package collector

import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"git.mchus.pro/mchus/logpile/internal/models"
)

func TestCollectRedfishLogEntries_UsesLinkedManagerLogServicesPath(t *testing.T) {
	mux := http.NewServeMux()
	register := func(path string, payload interface{}) {
		mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
			w.Header().Set("Content-Type", "application/json")
			_ = json.NewEncoder(w).Encode(payload)
		})
	}

	register("/redfish/v1/Managers/1", map[string]interface{}{
		"Id": "1",
		"LogServices": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1/LogServices",
		},
	})
	register("/redfish/v1/Systems/1/LogServices", map[string]interface{}{
		"Members": []map[string]string{
			{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL"},
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL", map[string]interface{}{
		"Id": "SEL",
		"Entries": map[string]interface{}{
			"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries",
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL/Entries", map[string]interface{}{
		"Members": []map[string]string{
			{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries/1"},
		},
	})
	register("/redfish/v1/Systems/1/LogServices/SEL/Entries/1", map[string]interface{}{
		"Id":              "1",
		"Created":         time.Now().UTC().Format(time.RFC3339),
		"Message":         "System found Unqualified DIMM in slot DIMM A1",
		"MessageSeverity": "OK",
		"SensorType":      "Memory",
		"EntryType":       "Event",
	})

	ts := httptest.NewServer(mux)
	defer ts.Close()

	c := NewRedfishConnector()
	got := c.collectRedfishLogEntries(context.Background(), ts.Client(), Request{
		Host:     ts.URL,
		Port:     443,
		Protocol: "redfish",
		Username: "admin",
		AuthType: "password",
		Password: "secret",
		TLSMode:  "strict",
	}, ts.URL, nil, []string{"/redfish/v1/Managers/1"})

	if len(got) != 1 {
		t.Fatalf("expected 1 collected log entry, got %d", len(got))
	}
	if got[0]["Message"] != "System found Unqualified DIMM in slot DIMM A1" {
		t.Fatalf("unexpected collected message: %#v", got[0]["Message"])
	}
}

func TestParseRedfishLogEntries_UnqualifiedDIMMBecomesWarning(t *testing.T) {
	rawPayloads := map[string]any{
		"redfish_log_entries": []any{
			map[string]any{
				"Id":              "sel-1",
				"Created":         "2026-04-13T12:00:00Z",
				"Message":         "System found Unqualified DIMM in slot DIMM A1",
				"MessageSeverity": "OK",
				"SensorType":      "Memory",
				"EntryType":       "Event",
			},
		},
	}

	events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
	if len(events) != 1 {
		t.Fatalf("expected 1 event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
	if events[0].Description != "System found Unqualified DIMM in slot DIMM A1" {
		t.Fatalf("unexpected description: %q", events[0].Description)
	}
}

func TestParseRedfishLogEntries_LenovoOEMEntryIsKept(t *testing.T) {
	rawPayloads := map[string]any{
		"redfish_log_entries": []any{
			map[string]any{
				"Id":              "plat-55",
				"Created":         "2026-04-13T12:00:00Z",
				"Message":         "DIMM A1 is unqualified",
				"MessageSeverity": "Warning",
				"SensorType":      "Memory",
				"EntryType":       "Oem",
				"OemRecordFormat": "Lenovo",
				"EntryCode":       "Assert",
			},
		},
	}

	events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
	if len(events) != 1 {
		t.Fatalf("expected 1 Lenovo OEM event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
}
internal/collector/redfish_planb_test.go (new file, 57 lines)

@@ -0,0 +1,57 @@
package collector

import "testing"

func TestShouldIncludeCriticalPlanBPath(t *testing.T) {
	tests := []struct {
		name string
		req  Request
		path string
		want bool
	}{
		{
			name: "skip hgx erot pcie without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
			want: false,
		},
		{
			name: "skip hgx chassis assembly without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/HGX_Chassis_0/Assembly",
			want: false,
		},
		{
			name: "keep standard chassis inventory without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/1/PCIeDevices",
			want: true,
		},
		{
			name: "keep nvme storage backplane drives without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Chassis/NVMeSSD.0.Group.0.StorageBackplane/Drives",
			want: true,
		},
		{
			name: "keep system processors without extended diagnostics",
			req:  Request{},
			path: "/redfish/v1/Systems/HGX_Baseboard_0/Processors",
			want: true,
		},
		{
			name: "include hgx erot pcie when extended diagnostics enabled",
			req:  Request{DebugPayloads: true},
			path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
			want: true,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := shouldIncludeCriticalPlanBPath(tt.req, tt.path); got != tt.want {
				t.Fatalf("shouldIncludeCriticalPlanBPath(%q) = %v, want %v", tt.path, got, tt.want)
			}
		})
	}
}
@@ -1341,6 +1341,48 @@ func TestParseNIC_PrefersControllerSlotLabelAndPCIeInterface(t *testing.T) {
	}
}

func TestParseNIC_xFusionMaxlanesAndOEMLinkWidth(t *testing.T) {
	// xFusion uses "Maxlanes" (lowercase 'l') in PCIeInterface, not "MaxLanes".
	// xFusion also stores per-function link width as Oem.xFusion.LinkWidth = "X8".
	nic := parseNIC(map[string]interface{}{
		"Id":    "OCPCard1",
		"Model": "ConnectX-6 Lx",
		"Controllers": []interface{}{
			map[string]interface{}{
				"PCIeInterface": map[string]interface{}{
					"LanesInUse":  8,
					"Maxlanes":    8, // xFusion uses lowercase 'l'
					"PCIeType":    "Gen4",
					"MaxPCIeType": "Gen4",
				},
			},
		},
	})
	if nic.LinkWidth != 8 || nic.MaxLinkWidth != 8 {
		t.Fatalf("expected link widths 8/8 from xFusion Maxlanes, got current=%d max=%d", nic.LinkWidth, nic.MaxLinkWidth)
	}

	// enrichNICFromPCIe: OEM xFusion LinkWidth on a PCIeFunction doc.
	nic2 := models.NetworkAdapter{}
	fnDoc := map[string]interface{}{
		"Oem": map[string]interface{}{
			"xFusion": map[string]interface{}{
				"LinkWidth":        "X8",
				"LinkWidthAbility": "X8",
				"LinkSpeed":        "Gen4 (16.0GT/s)",
				"LinkSpeedAbility": "Gen4 (16.0GT/s)",
			},
		},
	}
	enrichNICFromPCIe(&nic2, map[string]interface{}{}, []map[string]interface{}{fnDoc}, nil)
	if nic2.LinkWidth != 8 || nic2.MaxLinkWidth != 8 {
		t.Fatalf("expected link width 8 from xFusion OEM LinkWidth, got current=%d max=%d", nic2.LinkWidth, nic2.MaxLinkWidth)
	}
	if nic2.LinkSpeed != "Gen4 (16.0GT/s)" || nic2.MaxLinkSpeed != "Gen4 (16.0GT/s)" {
		t.Fatalf("expected link speed from xFusion OEM LinkSpeed, got current=%q max=%q", nic2.LinkSpeed, nic2.MaxLinkSpeed)
	}
}

func TestParseNIC_DropsUnrealisticPortCount(t *testing.T) {
	nic := parseNIC(map[string]interface{}{
		"Id": "1",
@@ -2773,6 +2815,28 @@ func TestReplayCollectGPUs_DedupUsesRedfishPathBeforeHeuristics(t *testing.T) {
	}
}

func TestParseGPU_xFusionPCIeInterfaceMaxlanes(t *testing.T) {
	// xFusion GPU PCIeDevices (PCIeCard1..N) carry link width in PCIeInterface
	// with "Maxlanes" (lowercase 'l') rather than "MaxLanes".
	doc := map[string]interface{}{
		"Id":    "PCIeCard1",
		"Model": "RTX PRO 6000",
		"PCIeInterface": map[string]interface{}{
			"LanesInUse":  16,
			"Maxlanes":    16,
			"PCIeType":    "Gen5",
			"MaxPCIeType": "Gen5",
		},
	}
	gpu := parseGPU(doc, nil, 1)
	if gpu.CurrentLinkWidth != 16 || gpu.MaxLinkWidth != 16 {
		t.Fatalf("expected link widths 16/16 from PCIeInterface, got current=%d max=%d", gpu.CurrentLinkWidth, gpu.MaxLinkWidth)
	}
	if gpu.CurrentLinkSpeed != "Gen5" || gpu.MaxLinkSpeed != "Gen5" {
		t.Fatalf("expected link speeds Gen5/Gen5 from PCIeInterface, got current=%q max=%q", gpu.CurrentLinkSpeed, gpu.MaxLinkSpeed)
	}
}

func TestParseGPU_UsesNestedOemSerialNumber(t *testing.T) {
	doc := map[string]interface{}{
		"Id": "GPU4",
@@ -326,6 +326,47 @@ func TestBuildAnalysisDirectives_SupermicroEnablesStorageRecovery(t *testing.T)
	}
}

func TestMatchProfiles_LenovoXCCSelectsMatchedModeAndExcludesSensors(t *testing.T) {
	match := MatchProfiles(MatchSignals{
		SystemManufacturer:  "Lenovo",
		ChassisManufacturer: "Lenovo",
		OEMNamespaces:       []string{"Lenovo"},
	})
	if match.Mode != ModeMatched {
		t.Fatalf("expected matched mode, got %q", match.Mode)
	}
	found := false
	for _, profile := range match.Profiles {
		if profile.Name() == "lenovo" {
			found = true
			break
		}
	}
	if !found {
		t.Fatal("expected lenovo profile to be selected")
	}

	// Verify the acquisition plan excludes noisy Lenovo-specific snapshot paths.
	plan := BuildAcquisitionPlan(MatchSignals{
		SystemManufacturer:  "Lenovo",
		ChassisManufacturer: "Lenovo",
		OEMNamespaces:       []string{"Lenovo"},
	})
	wantExcluded := []string{"/Sensors/", "/Oem/Lenovo/LEDs/", "/Oem/Lenovo/Slots/"}
	for _, want := range wantExcluded {
		found := false
		for _, ex := range plan.Tuning.SnapshotExcludeContains {
			if ex == want {
				found = true
				break
			}
		}
		if !found {
			t.Errorf("expected SnapshotExcludeContains to include %q, got %v", want, plan.Tuning.SnapshotExcludeContains)
		}
	}
}

func TestMatchProfiles_OrderingIsDeterministic(t *testing.T) {
	signals := MatchSignals{
		SystemManufacturer: "Micro-Star International Co., Ltd.",
65
internal/collector/redfishprofile/profile_lenovo.go
Normal file
@@ -0,0 +1,65 @@
package redfishprofile

func lenovoProfile() Profile {
	return staticProfile{
		name:            "lenovo",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "lenovo") ||
				containsFold(s.ChassisManufacturer, "lenovo") {
				score += 80
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "lenovo") {
					score += 30
					break
				}
			}
			// Lenovo XClarity Controller (XCC) is the BMC product line.
			if containsFold(s.ServiceRootProduct, "xclarity") ||
				containsFold(s.ServiceRootProduct, "xcc") {
				score += 30
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			// Lenovo XCC BMC exposes Chassis/1/Sensors with hundreds of individual
			// sensor member documents (e.g. Chassis/1/Sensors/101L1). These are
			// not used by any LOGPile parser — thermal/power data is read from
			// the aggregate Chassis/*/Thermal and Chassis/*/Power endpoints. On
			// a real server they largely return errors, wasting many minutes.
			// Lenovo OEM subtrees under Oem/Lenovo/LEDs and Oem/Lenovo/Slots also
			// enumerate dozens of individual documents not relevant to inventory.
			ensureSnapshotExcludeContains(plan,
				"/Sensors/",                          // individual sensor docs (Chassis/1/Sensors/NNN)
				"/Oem/Lenovo/LEDs/",                  // individual LED status entries (~47 per server)
				"/Oem/Lenovo/Slots/",                 // individual slot detail entries (~26 per server)
				"/Oem/Lenovo/Metrics/",               // operational metrics, not inventory
				"/Oem/Lenovo/History",                // historical telemetry
				"/Oem/Lenovo/ScheduledPower",         // power scheduling config
				"/Oem/Lenovo/BootSettings/BootOrder", // individual boot order lists
				"/PortForwardingMap/",                // network port forwarding config
			)
			// Lenovo XCC BMC is typically slow (p95 latency often 3-5s even under
			// normal load). Set rate thresholds that don't over-throttle on the
			// first few requests, and give the ETA estimator a realistic baseline.
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      2000,
				ThrottleP95LatencyMS:    4000,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     15,
				SnapshotSeconds:      120,
				PrefetchSeconds:      30,
				CriticalPlanBSeconds: 40,
				ProfilePlanBSeconds:  20,
			})
			addPlanNote(plan, "lenovo xcc acquisition extensions enabled: noisy sensor/oem paths excluded from snapshot")
		},
	}
}
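The `matchFn` above is additive signal scoring with a clamp: manufacturer match is the strong signal, OEM namespace and BMC product name each add a smaller increment, and the total is capped at 100. A minimal standalone sketch of the same shape (function and signal names here are simplified stand-ins, not the real `MatchSignals` API):

```go
package main

import (
	"fmt"
	"strings"
)

// score mirrors the profile's additive matching: manufacturer is worth 80,
// an OEM namespace hit adds 30 (once), the BMC product name adds 30, and
// the sum is clamped to 100 so no profile can exceed full confidence.
func score(manufacturer string, oemNamespaces []string, product string) int {
	s := 0
	if strings.Contains(strings.ToLower(manufacturer), "lenovo") {
		s += 80
	}
	for _, ns := range oemNamespaces {
		if strings.Contains(strings.ToLower(ns), "lenovo") {
			s += 30
			break // count the namespace signal at most once
		}
	}
	if strings.Contains(strings.ToLower(product), "xclarity") {
		s += 30
	}
	if s > 100 {
		s = 100
	}
	return s
}

func main() {
	// 80 + 30 + 30 = 140, clamped to 100.
	fmt.Println(score("Lenovo", []string{"Lenovo"}, "XClarity Controller"))
}
```

The clamp keeps scores comparable across vendor profiles so the dispatcher can rank them on one scale.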
@@ -56,6 +56,7 @@ func BuiltinProfiles() []Profile {
		supermicroProfile(),
		dellProfile(),
		hpeProfile(),
		lenovoProfile(),
		inspurGroupOEMPlatformsProfile(),
		hgxProfile(),
		xfusionProfile(),
@@ -226,6 +227,10 @@ func ensurePrefetchPolicy(plan *AcquisitionPlan, policy AcquisitionPrefetchPolic
	addPlanPaths(&plan.Tuning.PrefetchPolicy.ExcludeContains, policy.ExcludeContains...)
}

func ensureSnapshotExcludeContains(plan *AcquisitionPlan, patterns ...string) {
	addPlanPaths(&plan.Tuning.SnapshotExcludeContains, patterns...)
}

func min(a, b int) int {
	if a < b {
		return a
@@ -53,16 +53,17 @@ type AcquisitionScopedPathPolicy struct {
}

type AcquisitionTuning struct {
	SnapshotMaxDocuments int
	SnapshotWorkers      int
	PrefetchEnabled      *bool
	PrefetchWorkers      int
	NVMePostProbeEnabled *bool
	RatePolicy           AcquisitionRatePolicy
	ETABaseline          AcquisitionETABaseline
	PostProbePolicy      AcquisitionPostProbePolicy
	RecoveryPolicy       AcquisitionRecoveryPolicy
	PrefetchPolicy       AcquisitionPrefetchPolicy
	SnapshotMaxDocuments    int
	SnapshotWorkers         int
	SnapshotExcludeContains []string
	PrefetchEnabled         *bool
	PrefetchWorkers         int
	NVMePostProbeEnabled    *bool
	RatePolicy              AcquisitionRatePolicy
	ETABaseline             AcquisitionETABaseline
	PostProbePolicy         AcquisitionPostProbePolicy
	RecoveryPolicy          AcquisitionRecoveryPolicy
	PrefetchPolicy          AcquisitionPrefetchPolicy
}

type AcquisitionRatePolicy struct {
@@ -1961,7 +1961,10 @@ func pcieDedupKey(item ReanimatorPCIe) string {
	slot := strings.ToLower(strings.TrimSpace(item.Slot))
	serial := strings.ToLower(strings.TrimSpace(item.SerialNumber))
	bdf := strings.ToLower(strings.TrimSpace(item.BDF))
	if slot != "" {
	// Generic slot names (e.g. "PCIe Device" from HGX BMC) are not unique
	// hardware positions — multiple distinct devices share the same name.
	// Fall through to serial/BDF so they are not incorrectly collapsed.
	if slot != "" && !isGenericPCIeSlotName(slot) {
		return "slot:" + slot
	}
	if serial != "" {
@@ -1970,9 +1973,22 @@ func pcieDedupKey(item ReanimatorPCIe) string {
	if bdf != "" {
		return "bdf:" + bdf
	}
	if slot != "" {
		return "slot:" + slot
	}
	return strings.ToLower(strings.TrimSpace(item.DeviceClass)) + "|" + strings.ToLower(strings.TrimSpace(item.Model))
}

// isGenericPCIeSlotName reports whether slot is a generic device-type label
// rather than a unique hardware position identifier.
func isGenericPCIeSlotName(slot string) bool {
	switch slot {
	case "pcie device", "pcie slot", "pcie":
		return true
	}
	return false
}

func pcieQualityScore(item ReanimatorPCIe) int {
	score := 0
	if strings.TrimSpace(item.SerialNumber) != "" {
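The dedup hunk above establishes a priority order for PCIe identity keys: a specific slot name wins, then serial, then BDF, and a generic slot label is only used as the last resort. A minimal standalone sketch of that order (the `item` type and helper names are simplified stand-ins for `ReanimatorPCIe`, not the real API):

```go
package main

import (
	"fmt"
	"strings"
)

// item is a simplified stand-in for ReanimatorPCIe (illustrative only).
type item struct {
	Slot, SerialNumber, BDF, DeviceClass, Model string
}

// isGenericSlot flags device-type labels that are not unique positions.
func isGenericSlot(slot string) bool {
	switch slot {
	case "pcie device", "pcie slot", "pcie":
		return true
	}
	return false
}

// dedupKey mirrors the priority order: specific slot, then serial, then BDF;
// a generic slot name is demoted to the last-resort position so distinct
// devices sharing a label like "PCIe Device" are not collapsed into one.
func dedupKey(it item) string {
	slot := strings.ToLower(strings.TrimSpace(it.Slot))
	if slot != "" && !isGenericSlot(slot) {
		return "slot:" + slot
	}
	if s := strings.ToLower(strings.TrimSpace(it.SerialNumber)); s != "" {
		return "serial:" + s
	}
	if b := strings.ToLower(strings.TrimSpace(it.BDF)); b != "" {
		return "bdf:" + b
	}
	if slot != "" {
		return "slot:" + slot
	}
	return strings.ToLower(it.DeviceClass) + "|" + strings.ToLower(it.Model)
}

func main() {
	a := item{Slot: "PCIe Device", SerialNumber: "SN1"}
	b := item{Slot: "PCIe Device", SerialNumber: "SN2"}
	// Generic slot falls through to serial, so the keys differ.
	fmt.Println(dedupKey(a) != dedupKey(b))
}
```

This is why the HGX test later in this diff expects all eight GPUs named "PCIe Device" to survive dedup.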
@@ -733,6 +733,42 @@ func TestConvertPCIeDevices_SkipsDisplayControllerDuplicates(t *testing.T) {
	}
}

func TestConvertPCIeDevices_PreservesAllGPUsWithGenericSlot(t *testing.T) {
	// Supermicro HGX BMC reports all GPU PCIe devices with Name "PCIe Device" —
	// a generic label that is not a unique hardware position. All 8 GPUs must
	// be preserved; dedup by generic slot name must not collapse them into one.
	gpus := make([]models.GPU, 8)
	serials := []string{
		"1654925165720", "1654925166160", "1654925165942", "1654925165271",
		"1654925165719", "1654925165252", "1654925165304", "1654925165587",
	}
	for i, sn := range serials {
		gpus[i] = models.GPU{
			Slot:         "PCIe Device",
			Model:        "B200 180GB HBM3e",
			Manufacturer: "NVIDIA",
			SerialNumber: sn,
			PartNumber:   "2901-886-A1",
			Status:       "OK",
		}
	}
	hw := &models.HardwareConfig{GPUs: gpus}
	result := convertPCIeDevices(hw, "2026-04-13T10:00:00Z")
	if len(result) != 8 {
		t.Fatalf("expected 8 GPU entries (one per serial), got %d", len(result))
	}
	seen := make(map[string]bool)
	for _, r := range result {
		if seen[r.SerialNumber] {
			t.Fatalf("duplicate serial %q in PCIe result", r.SerialNumber)
		}
		seen[r.SerialNumber] = true
		if r.DeviceClass != "VideoController" {
			t.Fatalf("expected VideoController device class, got %q", r.DeviceClass)
		}
	}
}

func TestConvertPCIeDevices_MapsGPUStatusHistory(t *testing.T) {
	hw := &models.HardwareConfig{
		GPUs: []models.GPU{
710
internal/parser/vendors/lenovo_xcc/parser.go
vendored
Normal file
@@ -0,0 +1,710 @@
// Package lenovo_xcc provides parser for Lenovo XCC mini-log archives.
|
||||
// Tested with: ThinkSystem SR650 V3 (XCC mini-log zip, exported via XCC UI)
|
||||
//
|
||||
// Archive structure: zip with tmp/ directory containing JSON .log files.
|
||||
//
|
||||
// IMPORTANT: Increment parserVersion when modifying parser logic!
|
||||
package lenovo_xcc
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"git.mchus.pro/mchus/logpile/internal/models"
|
||||
"git.mchus.pro/mchus/logpile/internal/parser"
|
||||
)
|
||||
|
||||
const parserVersion = "1.1"
|
||||
|
||||
func init() {
|
||||
parser.Register(&Parser{})
|
||||
}
|
||||
|
||||
// Parser implements VendorParser for Lenovo XCC mini-log archives.
|
||||
type Parser struct{}
|
||||
|
||||
func (p *Parser) Name() string { return "Lenovo XCC Mini-Log Parser" }
|
||||
func (p *Parser) Vendor() string { return "lenovo_xcc" }
|
||||
func (p *Parser) Version() string { return parserVersion }
|
||||
|
||||
// Detect checks if files match the Lenovo XCC mini-log archive format.
|
||||
// Returns confidence score 0-100.
|
||||
func (p *Parser) Detect(files []parser.ExtractedFile) int {
|
||||
confidence := 0
|
||||
for _, f := range files {
|
||||
path := strings.ToLower(f.Path)
|
||||
switch {
|
||||
case strings.HasSuffix(path, "tmp/basic_sys_info.log"):
|
||||
confidence += 60
|
||||
case strings.HasSuffix(path, "tmp/inventory_cpu.log"):
|
||||
confidence += 20
|
||||
case strings.HasSuffix(path, "tmp/xcc_plat_events1.log"):
|
||||
confidence += 20
|
||||
case strings.HasSuffix(path, "tmp/inventory_dimm.log"):
|
||||
confidence += 10
|
||||
case strings.HasSuffix(path, "tmp/inventory_fw.log"):
|
||||
confidence += 10
|
||||
}
|
||||
if confidence >= 100 {
|
||||
return 100
|
||||
}
|
||||
}
|
||||
return confidence
|
||||
}
|
||||
|
||||
// Parse parses the Lenovo XCC mini-log archive and returns an analysis result.
|
||||
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
|
||||
result := &models.AnalysisResult{
|
||||
Events: make([]models.Event, 0),
|
||||
FRU: make([]models.FRUInfo, 0),
|
||||
Sensors: make([]models.SensorReading, 0),
|
||||
Hardware: &models.HardwareConfig{
|
||||
Firmware: make([]models.FirmwareInfo, 0),
|
||||
CPUs: make([]models.CPU, 0),
|
||||
Memory: make([]models.MemoryDIMM, 0),
|
||||
Storage: make([]models.Storage, 0),
|
||||
PCIeDevices: make([]models.PCIeDevice, 0),
|
||||
PowerSupply: make([]models.PSU, 0),
|
||||
},
|
||||
}
|
||||
|
||||
if f := findByPath(files, "tmp/basic_sys_info.log"); f != nil {
|
||||
parseBasicSysInfo(f.Content, result)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_fw.log"); f != nil {
|
||||
result.Hardware.Firmware = append(result.Hardware.Firmware, parseFirmware(f.Content)...)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_cpu.log"); f != nil {
|
||||
result.Hardware.CPUs = parseCPUs(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_dimm.log"); f != nil {
|
||||
memory, events := parseDIMMs(f.Content)
|
||||
result.Hardware.Memory = memory
|
||||
result.Events = append(result.Events, events...)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_disk.log"); f != nil {
|
||||
result.Hardware.Storage = parseDisks(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_card.log"); f != nil {
|
||||
result.Hardware.PCIeDevices = parseCards(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_psu.log"); f != nil {
|
||||
result.Hardware.PowerSupply = parsePSUs(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_ipmi_fru.log"); f != nil {
|
||||
result.FRU = parseFRU(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_ipmi_sensor.log"); f != nil {
|
||||
result.Sensors = parseSensors(f.Content)
|
||||
}
|
||||
for _, f := range findEventFiles(files) {
|
||||
result.Events = append(result.Events, parseEvents(f.Content)...)
|
||||
}
|
||||
|
||||
result.Protocol = "ipmi"
|
||||
result.SourceType = models.SourceTypeArchive
|
||||
parser.ApplyManufacturedYearWeekFromFRU(result.FRU, result.Hardware)
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// findByPath returns the first file whose lowercased path ends with the given suffix.
|
||||
func findByPath(files []parser.ExtractedFile, suffix string) *parser.ExtractedFile {
|
||||
for i := range files {
|
||||
if strings.HasSuffix(strings.ToLower(files[i].Path), suffix) {
|
||||
return &files[i]
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// findEventFiles returns all xcc_plat_eventsN.log files.
|
||||
func findEventFiles(files []parser.ExtractedFile) []parser.ExtractedFile {
|
||||
var out []parser.ExtractedFile
|
||||
for _, f := range files {
|
||||
path := strings.ToLower(f.Path)
|
||||
if strings.Contains(path, "tmp/xcc_plat_events") && strings.HasSuffix(path, ".log") {
|
||||
out = append(out, f)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// --- JSON structures ---
|
||||
|
||||
type xccBasicSysInfoDoc struct {
|
||||
Items []xccBasicSysInfoItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccBasicSysInfoItem struct {
|
||||
MachineName string `json:"machine_name"`
|
||||
MachineTypeModel string `json:"machine_typemodel"`
|
||||
SerialNumber string `json:"serial_number"`
|
||||
UUID string `json:"uuid"`
|
||||
PowerState string `json:"power_state"`
|
||||
ServerState string `json:"server_state"`
|
||||
CurrentTime string `json:"current_time"`
|
||||
}
|
||||
|
||||
// xccFWEntry covers both basic_sys_info firmware (no type_str) and inventory_fw (has type_str).
|
||||
type xccFWEntry struct {
|
||||
Index int `json:"index"`
|
||||
TypeCode int `json:"type"`
|
||||
TypeStr string `json:"type_str"` // only in inventory_fw.log
|
||||
Version string `json:"version"`
|
||||
Build string `json:"build"`
|
||||
ReleaseDate string `json:"release_date"`
|
||||
}
|
||||
|
||||
type xccFirmwareDoc struct {
|
||||
Items []xccFWEntry `json:"items"`
|
||||
}
|
||||
|
||||
type xccCPUDoc struct {
|
||||
Items []xccCPUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccCPUItem struct {
|
||||
Processors []xccCPU `json:"processors"`
|
||||
}
|
||||
|
||||
type xccCPU struct {
|
||||
Name int `json:"processors_name"`
|
||||
Model string `json:"processors_cpu_model"`
|
||||
Cores json.RawMessage `json:"processors_cores"` // may be int or string
|
||||
Threads json.RawMessage `json:"processors_threads"` // may be int or string
|
||||
ClockSpeed string `json:"processors_clock_speed"`
|
||||
L1DataCache string `json:"processors_l1datacache"`
|
||||
L2Cache string `json:"processors_l2cache"`
|
||||
L3Cache string `json:"processors_l3cache"`
|
||||
Status string `json:"processors_status"`
|
||||
SerialNumber string `json:"processors_serial_number"`
|
||||
}
|
||||
|
||||
type xccDIMMDoc struct {
|
||||
Items []xccDIMMItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccDIMMItem struct {
|
||||
Memory []xccDIMM `json:"memory"`
|
||||
}
|
||||
|
||||
type xccDIMM struct {
|
||||
Index int `json:"memory_index"`
|
||||
Status string `json:"memory_status"`
|
||||
Name string `json:"memory_name"`
|
||||
Type string `json:"memory_type"`
|
||||
Capacity json.RawMessage `json:"memory_capacity"` // int (GB) or string
|
||||
PartNumber string `json:"memory_part_number"`
|
||||
SerialNumber string `json:"memory_serial_number"`
|
||||
Manufacturer string `json:"memory_manufacturer"`
|
||||
MemSpeed json.RawMessage `json:"memory_mem_speed"` // int or string
|
||||
ConfigSpeed json.RawMessage `json:"memory_config_speed"` // int or string
|
||||
}
|
||||
|
||||
type xccDiskDoc struct {
|
||||
Items []xccDiskItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccDiskItem struct {
|
||||
Disks []xccDisk `json:"disks"`
|
||||
}
|
||||
|
||||
type xccDisk struct {
|
||||
ID int `json:"id"`
|
||||
SlotNo int `json:"slotNo"`
|
||||
Type string `json:"type"`
|
||||
Interface string `json:"interface"`
|
||||
Media string `json:"media"`
|
||||
SerialNo string `json:"serialNo"`
|
||||
PartNo string `json:"partNo"`
|
||||
CapacityStr string `json:"capacityStr"` // e.g. "3.20 TB"
|
||||
Manufacture string `json:"manufacture"`
|
||||
ProductName string `json:"productName"`
|
||||
RemainLife int `json:"remainLife"` // 0-100
|
||||
FWVersion string `json:"fwVersion"`
|
||||
Temperature int `json:"temperature"`
|
||||
HealthStatus int `json:"healthStatus"` // int code: 2=Normal
|
||||
State int `json:"state"`
|
||||
StateStr string `json:"statestr"`
|
||||
}
|
||||
|
||||
type xccCardDoc struct {
|
||||
Items []xccCard `json:"items"`
|
||||
}
|
||||
|
||||
type xccCard struct {
|
||||
Key int `json:"key"`
|
||||
SlotNo int `json:"slotNo"`
|
||||
AdapterName string `json:"adapterName"`
|
||||
ConnectorLabel string `json:"connectorLabel"`
|
||||
OOBSupported int `json:"oobSupported"`
|
||||
Location int `json:"location"`
|
||||
Functions []xccCardFunc `json:"functions"`
|
||||
}
|
||||
|
||||
type xccCardFunc struct {
|
||||
FunType int `json:"funType"`
|
||||
BusNo int `json:"generic_busNo"`
|
||||
DevNo int `json:"generic_devNo"`
|
||||
FunNo int `json:"generic_funNo"`
|
||||
VendorID int `json:"generic_vendorId"` // direct int
|
||||
DeviceID int `json:"generic_devId"` // direct int
|
||||
SlotDesignation string `json:"generic_slotDesignation"`
|
||||
}
|
||||
|
||||
type xccPSUDoc struct {
|
||||
Items []xccPSUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccPSUItem struct {
|
||||
Power []xccPSU `json:"power"`
|
||||
}
|
||||
|
||||
type xccPSU struct {
|
||||
Name int `json:"name"`
|
||||
Status string `json:"status"`
|
||||
RatedPower int `json:"rated_power"`
|
||||
PartNumber string `json:"part_number"`
|
||||
FRUNumber string `json:"fru_number"`
|
||||
SerialNumber string `json:"serial_number"`
|
||||
ManufID string `json:"manuf_id"`
|
||||
}
|
||||
|
||||
type xccFRUDoc struct {
|
||||
Items []xccFRUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccFRUItem struct {
|
||||
BuiltinFRU []map[string]string `json:"builtin_fru_device"`
|
||||
}
|
||||
|
||||
type xccSensorDoc struct {
|
||||
Items []xccSensor `json:"items"`
|
||||
}
|
||||
|
||||
type xccSensor struct {
|
||||
Name string `json:"Sensor Name"`
|
||||
Value string `json:"Value"`
|
||||
Status string `json:"status"`
|
||||
Unit string `json:"unit"`
|
||||
}
|
||||
|
||||
type xccEventDoc struct {
|
||||
Items []xccEvent `json:"items"`
|
||||
}
|
||||
|
||||
type xccEvent struct {
|
||||
Severity string `json:"severity"` // "I", "W", "E", "C"
|
||||
Source string `json:"source"`
|
||||
Date string `json:"date"` // "2025-12-22T13:24:02.070"
|
||||
Index int `json:"index"`
|
||||
EventID string `json:"eventid"`
|
||||
CmnID string `json:"cmnid"`
|
||||
Message string `json:"message"`
|
||||
}
|
||||
|
||||
// --- Parsers ---
|
||||
|
||||
func parseBasicSysInfo(content []byte, result *models.AnalysisResult) {
|
||||
var doc xccBasicSysInfoDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return
|
||||
}
|
||||
item := doc.Items[0]
|
||||
|
||||
result.Hardware.BoardInfo = models.BoardInfo{
|
||||
ProductName: strings.TrimSpace(item.MachineTypeModel),
|
||||
SerialNumber: strings.TrimSpace(item.SerialNumber),
|
||||
UUID: strings.TrimSpace(item.UUID),
|
||||
}
|
||||
|
||||
if t, err := parseXCCTime(item.CurrentTime); err == nil {
|
||||
result.CollectedAt = t.UTC()
|
||||
}
|
||||
}
|
||||
|
||||
func parseFirmware(content []byte) []models.FirmwareInfo {
|
||||
var doc xccFirmwareDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.FirmwareInfo
|
||||
for _, fw := range doc.Items {
|
||||
if fi := xccFWEntryToModel(fw); fi != nil {
|
||||
out = append(out, *fi)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func xccFWEntryToModel(fw xccFWEntry) *models.FirmwareInfo {
|
||||
name := strings.TrimSpace(fw.TypeStr)
|
||||
version := strings.TrimSpace(fw.Version)
|
||||
if name == "" && version == "" {
|
||||
return nil
|
||||
}
|
||||
build := strings.TrimSpace(fw.Build)
|
||||
v := version
|
||||
if build != "" {
|
||||
v = version + " (" + build + ")"
|
||||
}
|
||||
return &models.FirmwareInfo{
|
||||
DeviceName: name,
|
||||
Version: v,
|
||||
BuildTime: strings.TrimSpace(fw.ReleaseDate),
|
||||
}
|
||||
}
|
||||
|
||||
func parseCPUs(content []byte) []models.CPU {
|
||||
var doc xccCPUDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.CPU
|
||||
for _, item := range doc.Items {
|
||||
for _, c := range item.Processors {
|
||||
cpu := models.CPU{
|
||||
Socket: c.Name,
|
||||
Model: strings.TrimSpace(c.Model),
|
||||
Cores: rawJSONToInt(c.Cores),
|
||||
Threads: rawJSONToInt(c.Threads),
|
||||
FrequencyMHz: parseMHz(c.ClockSpeed),
|
||||
L1CacheKB: parseKB(c.L1DataCache),
|
||||
L2CacheKB: parseKB(c.L2Cache),
|
||||
L3CacheKB: parseKB(c.L3Cache),
|
||||
Status: strings.TrimSpace(c.Status),
|
||||
SerialNumber: strings.TrimSpace(c.SerialNumber),
|
||||
}
|
||||
out = append(out, cpu)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseDIMMs(content []byte) ([]models.MemoryDIMM, []models.Event) {
|
||||
var doc xccDIMMDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
var out []models.MemoryDIMM
|
||||
var events []models.Event
|
||||
for _, item := range doc.Items {
|
||||
for _, m := range item.Memory {
|
||||
status := strings.TrimSpace(m.Status)
|
||||
present := !strings.EqualFold(status, "not present") &&
|
||||
!strings.EqualFold(status, "absent")
|
||||
// memory_capacity is in GB (int); convert to MB
|
||||
capacityGB := rawJSONToInt(m.Capacity)
|
||||
dimm := models.MemoryDIMM{
|
||||
Slot: strings.TrimSpace(m.Name),
|
||||
Location: strings.TrimSpace(m.Name),
|
||||
Present: present,
|
||||
SizeMB: capacityGB * 1024,
|
||||
Type: strings.TrimSpace(m.Type),
|
||||
MaxSpeedMHz: rawJSONToInt(m.MemSpeed),
|
||||
CurrentSpeedMHz: rawJSONToInt(m.ConfigSpeed),
|
||||
Manufacturer: strings.TrimSpace(m.Manufacturer),
|
||||
SerialNumber: strings.TrimSpace(m.SerialNumber),
|
||||
PartNumber: strings.TrimSpace(strings.TrimRight(m.PartNumber, " ")),
|
||||
Status: status,
|
||||
}
|
||||
out = append(out, dimm)
|
||||
if isUnqualifiedDIMM(status) {
|
||||
events = append(events, models.Event{
|
||||
Source: "Memory",
|
||||
SensorType: "Memory",
|
||||
SensorName: dimm.Slot,
|
||||
EventType: "DIMM Qualification",
|
||||
Severity: models.SeverityWarning,
|
||||
Description: status,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
return out, events
|
||||
}
|
||||
|
||||
func parseDisks(content []byte) []models.Storage {
|
||||
var doc xccDiskDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.Storage
|
||||
for _, item := range doc.Items {
|
||||
for _, d := range item.Disks {
|
||||
sizeGB := parseCapacityToGB(d.CapacityStr)
|
||||
stateStr := strings.TrimSpace(d.StateStr)
|
||||
present := !strings.EqualFold(stateStr, "absent") &&
|
||||
!strings.EqualFold(stateStr, "not present")
|
||||
disk := models.Storage{
|
||||
Slot: fmt.Sprintf("%d", d.SlotNo),
|
||||
Type: strings.TrimSpace(d.Media),
|
||||
Model: strings.TrimSpace(d.ProductName),
|
||||
SizeGB: sizeGB,
|
||||
SerialNumber: strings.TrimSpace(d.SerialNo),
|
||||
Manufacturer: strings.TrimSpace(d.Manufacture),
|
||||
Firmware: strings.TrimSpace(d.FWVersion),
|
||||
Interface: strings.TrimSpace(d.Interface),
|
||||
Present: present,
|
||||
Status: stateStr,
|
||||
}
|
||||
if d.RemainLife >= 0 && d.RemainLife <= 100 {
|
||||
v := d.RemainLife
|
||||
disk.RemainingEndurancePct = &v
|
||||
}
|
||||
out = append(out, disk)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseCards(content []byte) []models.PCIeDevice {
|
||||
var doc xccCardDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.PCIeDevice
|
||||
for _, card := range doc.Items {
|
||||
slot := strings.TrimSpace(card.ConnectorLabel)
|
||||
if slot == "" {
|
||||
slot = fmt.Sprintf("%d", card.SlotNo)
|
||||
}
|
||||
dev := models.PCIeDevice{
|
||||
Slot: slot,
|
||||
Description: strings.TrimSpace(card.AdapterName),
|
||||
}
|
||||
if len(card.Functions) > 0 {
|
||||
fn := card.Functions[0]
|
||||
dev.BDF = fmt.Sprintf("%02x:%02x.%x", fn.BusNo, fn.DevNo, fn.FunNo)
|
||||
dev.VendorID = fn.VendorID
|
||||
dev.DeviceID = fn.DeviceID
|
||||
}
|
||||
out = append(out, dev)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parsePSUs(content []byte) []models.PSU {
|
||||
var doc xccPSUDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.PSU
|
||||
for _, item := range doc.Items {
|
||||
for _, p := range item.Power {
|
||||
psu := models.PSU{
|
||||
Slot: fmt.Sprintf("%d", p.Name),
|
||||
Present: true,
|
||||
WattageW: p.RatedPower,
|
||||
SerialNumber: strings.TrimSpace(p.SerialNumber),
|
||||
PartNumber: strings.TrimSpace(p.PartNumber),
|
||||
Vendor: strings.TrimSpace(p.ManufID),
|
||||
Status: strings.TrimSpace(p.Status),
|
||||
}
|
||||
out = append(out, psu)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseFRU(content []byte) []models.FRUInfo {
|
||||
var doc xccFRUDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.FRUInfo
|
||||
for _, item := range doc.Items {
|
||||
for _, entry := range item.BuiltinFRU {
|
||||
fru := models.FRUInfo{
|
||||
Description: entry["FRU Device Description"],
|
||||
Manufacturer: entry["Board Mfg"],
|
||||
ProductName: entry["Board Product"],
|
||||
SerialNumber: entry["Board Serial"],
|
||||
PartNumber: entry["Board Part Number"],
|
||||
MfgDate: entry["Board Mfg Date"],
|
||||
}
|
||||
if fru.ProductName == "" {
|
||||
fru.ProductName = entry["Product Name"]
|
||||
}
|
||||
if fru.SerialNumber == "" {
|
||||
fru.SerialNumber = entry["Product Serial"]
|
||||
}
|
||||
if fru.PartNumber == "" {
|
||||
fru.PartNumber = entry["Product Part Number"]
|
||||
}
|
||||
if fru.Description == "" && fru.ProductName == "" && fru.SerialNumber == "" {
|
||||
continue
|
||||
}
|
||||
out = append(out, fru)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseSensors(content []byte) []models.SensorReading {
|
||||
var doc xccSensorDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.SensorReading
|
||||
for _, s := range doc.Items {
|
||||
name := strings.TrimSpace(s.Name)
|
||||
if name == "" {
|
||||
continue
|
||||
}
|
||||
sr := models.SensorReading{
|
||||
Name: name,
|
||||
RawValue: strings.TrimSpace(s.Value),
|
||||
Unit: strings.TrimSpace(s.Unit),
|
||||
Status: strings.TrimSpace(s.Status),
|
||||
}
|
||||
if v, err := strconv.ParseFloat(sr.RawValue, 64); err == nil {
|
||||
sr.Value = v
|
||||
}
|
||||
out = append(out, sr)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseEvents(content []byte) []models.Event {
|
||||
var doc xccEventDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.Event
|
||||
for _, e := range doc.Items {
|
||||
ev := models.Event{
|
||||
ID: e.EventID,
|
||||
Source: strings.TrimSpace(e.Source),
|
||||
Description: strings.TrimSpace(e.Message),
|
||||
Severity: xccSeverity(e.Severity, e.Message),
|
||||
}
|
||||
if t, err := parseXCCTime(e.Date); err == nil {
|
||||
ev.Timestamp = t.UTC()
|
||||
}
|
||||
out = append(out, ev)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// --- Helpers ---
|
||||
|
||||
func xccSeverity(s, message string) models.Severity {
|
||||
if isUnqualifiedDIMM(message) {
|
||||
return models.SeverityWarning
|
||||
}
|
||||
switch strings.ToUpper(strings.TrimSpace(s)) {
|
||||
case "C":
|
||||
return models.SeverityCritical
|
||||
case "E":
|
||||
return models.SeverityCritical
|
||||
case "W":
|
||||
return models.SeverityWarning
|
||||
default:
|
||||
return models.SeverityInfo
|
||||
}
|
||||
}
|
||||
|
||||
func isUnqualifiedDIMM(value string) bool {
|
||||
return strings.Contains(strings.ToLower(strings.TrimSpace(value)), "unqualified dimm")
|
||||
}
|
||||
|
||||
func parseXCCTime(s string) (time.Time, error) {
|
||||
s = strings.TrimSpace(s)
|
||||
formats := []string{
|
||||
"2006-01-02T15:04:05.000",
|
||||
"2006-01-02T15:04:05",
|
||||
"2006-01-02 15:04:05",
|
||||
}
|
||||
for _, f := range formats {
|
||||
if t, err := time.Parse(f, s); err == nil {
|
||||
return t, nil
|
||||
}
|
||||
}
|
||||
return time.Time{}, fmt.Errorf("unparseable time: %q", s)
|
||||
}
|
||||
|
||||
// parseMHz parses "4100 MHz" → 4100
func parseMHz(s string) int {
	s = strings.TrimSpace(s)
	parts := strings.Fields(s)
	if len(parts) == 0 {
		return 0
	}
	v, _ := strconv.Atoi(parts[0])
	return v
}

// parseKB parses "384 KB" → 384
func parseKB(s string) int {
	s = strings.TrimSpace(s)
	parts := strings.Fields(s)
	if len(parts) == 0 {
		return 0
	}
	v, _ := strconv.Atoi(parts[0])
	return v
}

// parseMB parses "32768 MB" → 32768 (same leading-integer format as parseKB)
func parseMB(s string) int {
	return parseKB(s)
}

// parseMTs parses "4800 MT/s" → 4800 (treated as MHz equivalent)
func parseMTs(s string) int {
	return parseKB(s)
}

// parseCapacityToGB parses "3.20 TB" or "480 GB" → GB integer
func parseCapacityToGB(s string) int {
	s = strings.TrimSpace(s)
	parts := strings.Fields(s)
	if len(parts) < 2 {
		return 0
	}
	v, err := strconv.ParseFloat(parts[0], 64)
	if err != nil {
		return 0
	}
	switch strings.ToUpper(parts[1]) {
	case "TB":
		return int(v * 1000)
	case "GB":
		return int(v)
	case "MB":
		return int(v / 1024)
	}
	return int(v)
}

// rawJSONToInt parses a json.RawMessage that may be an int or a quoted string → int
func rawJSONToInt(raw json.RawMessage) int {
	if len(raw) == 0 {
		return 0
	}
	// try direct int
	var n int
	if err := json.Unmarshal(raw, &n); err == nil {
		return n
	}
	// try string
	var s string
	if err := json.Unmarshal(raw, &s); err == nil {
		v, _ := strconv.Atoi(strings.TrimSpace(s))
		return v
	}
	return 0
}

// parseHexID parses "0x15b3" → 5555
func parseHexID(s string) int {
	s = strings.TrimSpace(strings.ToLower(s))
	s = strings.TrimPrefix(s, "0x")
	v, _ := strconv.ParseInt(s, 16, 32)
	return int(v)
}
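To make the helper semantics concrete, here is a self-contained sketch that inlines parseCapacityToGB and parseHexID as defined above and exercises them. Note that TB→GB scales by 1000 while MB→GB divides by 1024, exactly as in the original:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCapacityToGB mirrors the helper above: "3.20 TB" or "480 GB" → GB integer.
func parseCapacityToGB(s string) int {
	parts := strings.Fields(strings.TrimSpace(s))
	if len(parts) < 2 {
		return 0
	}
	v, err := strconv.ParseFloat(parts[0], 64)
	if err != nil {
		return 0
	}
	switch strings.ToUpper(parts[1]) {
	case "TB":
		return int(v * 1000) // decimal TB→GB
	case "GB":
		return int(v)
	case "MB":
		return int(v / 1024) // binary MB→GB, as in the original
	}
	return int(v)
}

// parseHexID mirrors the helper above: "0x15b3" → 5555.
func parseHexID(s string) int {
	s = strings.TrimPrefix(strings.TrimSpace(strings.ToLower(s)), "0x")
	v, _ := strconv.ParseInt(s, 16, 32)
	return int(v)
}

func main() {
	fmt.Println(parseCapacityToGB("3.20 TB")) // 3200
	fmt.Println(parseCapacityToGB("480 GB"))  // 480
	fmt.Println(parseHexID("0x15b3"))         // 5555
}
```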
258
internal/parser/vendors/lenovo_xcc/parser_test.go
vendored
Normal file
@@ -0,0 +1,258 @@
package lenovo_xcc

import (
	"testing"

	"git.mchus.pro/mchus/logpile/internal/models"
	"git.mchus.pro/mchus/logpile/internal/parser"
)

const exampleArchive = "/Users/mchusavitin/Documents/git/logpile/example/7D76CTO1WW_JF0002KT_xcc_mini-log_20260413-122150.zip"
func TestDetect_LenovoXCCMiniLog(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	score := p.Detect(files)
	if score < 80 {
		t.Errorf("expected Detect score >= 80 for XCC mini-log archive, got %d", score)
	}
}

func TestParse_LenovoXCCMiniLog_BasicSysInfo(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, err := p.Parse(files)
	if err != nil {
		t.Fatalf("Parse returned error: %v", err)
	}
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil result or hardware")
	}

	hw := result.Hardware
	if hw.BoardInfo.SerialNumber == "" {
		t.Error("BoardInfo.SerialNumber is empty")
	}
	if hw.BoardInfo.ProductName == "" {
		t.Error("BoardInfo.ProductName is empty")
	}
	t.Logf("BoardInfo: serial=%s model=%s uuid=%s", hw.BoardInfo.SerialNumber, hw.BoardInfo.ProductName, hw.BoardInfo.UUID)
}
func TestParse_LenovoXCCMiniLog_CPUs(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.CPUs) == 0 {
		t.Error("expected at least one CPU, got none")
	}
	for i, cpu := range result.Hardware.CPUs {
		t.Logf("CPU[%d]: socket=%d model=%q cores=%d threads=%d freq=%dMHz", i, cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz)
	}
}

func TestParse_LenovoXCCMiniLog_Memory(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.Memory) == 0 {
		t.Error("expected memory DIMMs, got none")
	}
	t.Logf("Memory: %d DIMMs", len(result.Hardware.Memory))
	for i, m := range result.Hardware.Memory {
		t.Logf("DIMM[%d]: slot=%s present=%v size=%dMB sn=%s", i, m.Slot, m.Present, m.SizeMB, m.SerialNumber)
	}
}
func TestParse_LenovoXCCMiniLog_Storage(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("Storage: %d disks", len(result.Hardware.Storage))
	for i, s := range result.Hardware.Storage {
		t.Logf("Disk[%d]: slot=%s model=%q size=%dGB sn=%s", i, s.Slot, s.Model, s.SizeGB, s.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_PCIeCards(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("PCIe cards: %d", len(result.Hardware.PCIeDevices))
	for i, c := range result.Hardware.PCIeDevices {
		t.Logf("Card[%d]: slot=%s desc=%q bdf=%s", i, c.Slot, c.Description, c.BDF)
	}
}
func TestParse_LenovoXCCMiniLog_PSUs(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.PowerSupply) == 0 {
		t.Error("expected PSUs, got none")
	}
	for i, psu := range result.Hardware.PowerSupply {
		t.Logf("PSU[%d]: slot=%s wattage=%dW status=%s sn=%s", i, psu.Slot, psu.WattageW, psu.Status, psu.SerialNumber)
	}
}

func TestParse_LenovoXCCMiniLog_Sensors(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Sensors) == 0 {
		t.Error("expected sensors, got none")
	}
	t.Logf("Sensors: %d", len(result.Sensors))
}
func TestParse_LenovoXCCMiniLog_Events(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Events) == 0 {
		t.Error("expected events, got none")
	}
	t.Logf("Events: %d", len(result.Events))
	for i, e := range result.Events {
		if i >= 5 {
			break
		}
		t.Logf("Event[%d]: severity=%s ts=%s desc=%q", i, e.Severity, e.Timestamp.Format("2006-01-02T15:04:05"), e.Description)
	}
}

func TestParse_LenovoXCCMiniLog_FRU(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil {
		t.Fatal("Parse returned nil")
	}

	t.Logf("FRU: %d entries", len(result.FRU))
	for i, f := range result.FRU {
		t.Logf("FRU[%d]: desc=%q product=%q serial=%q", i, f.Description, f.ProductName, f.SerialNumber)
	}
}
func TestParse_LenovoXCCMiniLog_Firmware(t *testing.T) {
	files, err := parser.ExtractArchive(exampleArchive)
	if err != nil {
		t.Skipf("example archive not available: %v", err)
	}

	p := &Parser{}
	result, _ := p.Parse(files)
	if result == nil || result.Hardware == nil {
		t.Fatal("Parse returned nil")
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Error("expected firmware entries, got none")
	}
	for i, f := range result.Hardware.Firmware {
		t.Logf("FW[%d]: name=%q version=%q buildtime=%q", i, f.DeviceName, f.Version, f.BuildTime)
	}
}
func TestParseDIMMs_UnqualifiedDIMMAddsWarningEvent(t *testing.T) {
	content := []byte(`{
		"items": [{
			"memory": [{
				"memory_name": "DIMM A1",
				"memory_status": "Unqualified DIMM",
				"memory_type": "DDR5",
				"memory_capacity": 32
			}]
		}]
	}`)

	memory, events := parseDIMMs(content)
	if len(memory) != 1 {
		t.Fatalf("expected 1 DIMM, got %d", len(memory))
	}
	if len(events) != 1 {
		t.Fatalf("expected 1 warning event, got %d", len(events))
	}
	if events[0].Severity != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", events[0].Severity)
	}
	if events[0].SensorName != "DIMM A1" {
		t.Fatalf("unexpected sensor name: %q", events[0].SensorName)
	}
}

func TestSeverity_UnqualifiedDIMMMessageBecomesWarning(t *testing.T) {
	if got := xccSeverity("I", "System found Unqualified DIMM in slot DIMM A1"); got != models.SeverityWarning {
		t.Fatalf("expected warning severity, got %q", got)
	}
}
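The last two tests pin down a severity-upgrade rule: an informational XCC event whose message mentions an unqualified DIMM is promoted to a warning so it is not lost in routine log noise. A minimal standalone sketch of such a rule (the real xccSeverity and the models.Severity* constants live elsewhere in the package; stand-in string constants are used here):

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-in severity labels for models.SeverityInfo / models.SeverityWarning.
const (
	sevInfo    = "info"
	sevWarning = "warning"
)

// isUnqualifiedDIMM mirrors the parser helper above.
func isUnqualifiedDIMM(value string) bool {
	return strings.Contains(strings.ToLower(strings.TrimSpace(value)), "unqualified dimm")
}

// severity sketches the upgrade rule the tests exercise: the message check
// runs before the code mapping, so an "I" (informational) event about an
// unqualified DIMM still comes out as a warning.
func severity(code, message string) string {
	if isUnqualifiedDIMM(message) {
		return sevWarning
	}
	if code == "I" {
		return sevInfo
	}
	return sevWarning
}

func main() {
	fmt.Println(severity("I", "System found Unqualified DIMM in slot DIMM A1")) // warning
	fmt.Println(severity("I", "System boot completed"))                         // info
}
```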
1
internal/parser/vendors/vendors.go
vendored
@@ -14,6 +14,7 @@ import (
 	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
 	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xfusion"
 	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
+	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo_xcc"

 	// Generic fallback parser (must be last for lowest priority)
 	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"
@@ -128,6 +128,7 @@ echo ""
 # Show next steps
 echo -e "${YELLOW}Next steps:${NC}"
 echo " 1. Create git tag:"
+echo " # LOGPile release tags use vN.M, for example: v1.12"
 echo " git tag -a ${VERSION} -m \"Release ${VERSION}\""
 echo ""
 echo " 2. Push tag to remote:"
@@ -85,7 +85,7 @@
|
||||
</div>
|
||||
<label class="api-form-checkbox" for="api-debug-payloads">
|
||||
<input id="api-debug-payloads" name="debug_payloads" type="checkbox">
|
||||
<span>Сбор расширенных метрик для отладки</span>
|
||||
<span>Сбор расширенных данных для диагностики</span>
|
||||
</label>
|
||||
<div class="api-form-actions">
|
||||
<button id="api-collect-btn" type="submit">Собрать</button>
|
||||
|
||||