Compare commits

9 Commits

| SHA1 |
|---|
| aba7a54990 |
| 835df2676c |
| b86d51c921 |
| a82fb227e5 |
| c9969fc3da |
| 89b6701f43 |
| b04877549a |
| 8ca173c99b |
| f19a3454fa |

Submodule bible updated: 456c1f022c...52444350c1
@@ -58,6 +58,7 @@ Responses:

Optional request fields:
- `power_on_if_host_off`: when `true`, Redfish collection may power on the host before collection if preflight found it powered off
- `debug_payloads`: when `true`, the collector keeps extra diagnostic payloads and enables extended plan-B retries for slow HGX component inventory branches (`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`)

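For orientation, here is a minimal sketch of a live-collect request that opts into both optional fields. The endpoint route and the `host`/`protocol` field names are illustrative assumptions; only `power_on_if_host_off` and `debug_payloads` are taken from the contract above.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Only the two optional flags come from the documented contract;
	// the other fields and the route are assumptions for this sketch.
	body, _ := json.Marshal(map[string]any{
		"host":                 "10.0.0.42", // hypothetical field name
		"protocol":             "redfish",   // hypothetical field name
		"power_on_if_host_off": true,        // allow power-on before collection
		"debug_payloads":       true,        // enable extended HGX plan-B retries
	})
	resp, err := http.Post("http://localhost:8080/api/collect", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```
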
### `POST /api/collect/probe`

@@ -27,6 +27,7 @@ Request fields passed from the server:
- credential field (`password` or token)
- `tls_mode`
- optional `power_on_if_host_off`
- optional `debug_payloads` for extended diagnostics

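These fields correspond to the connector `Request` the server builds before calling `Collect()`. As a hedged sketch, the struct below mirrors the field set exercised in `redfish_logentries_test.go` and `redfish_planb_test.go`; values are placeholders, and the Go field name for the optional power-on flag is not shown in this diff, so it is omitted.

```go
package example

import "git.mchus.pro/mchus/logpile/internal/collector"

// exampleRequest sketches the Request handed to the Redfish connector.
// Field names are taken from the tests in this change set; values are fake.
func exampleRequest() collector.Request {
	return collector.Request{
		Host:          "10.0.0.42",
		Port:          443,
		Protocol:      "redfish",
		Username:      "admin",
		AuthType:      "password",
		Password:      "secret",
		TLSMode:       "strict",
		DebugPayloads: true, // optional: enables extended HGX diagnostic plan-B
	}
}
```
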
### Core rule

@@ -57,6 +58,17 @@ closes `skipCh` → goroutine in `Collect()` → `cancelCollect()`.

The skip button is visible during the `running` state and hidden once the job reaches a terminal state.

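The skip wiring named above (the UI closes `skipCh`, a goroutine inside `Collect()` notices, and `cancelCollect()` stops the in-flight job) can be pictured with the minimal sketch below; apart from `skipCh` and `cancelCollect`, the names and signature are assumptions rather than the real code.

```go
package example

import "context"

// runWithSkip sketches the skip path: closing skipCh cancels the collection
// context, and the collector unwinds as soon as it next observes ctx.Done().
func runWithSkip(parent context.Context, skipCh <-chan struct{}, collect func(context.Context) error) error {
	ctx, cancelCollect := context.WithCancel(parent)
	defer cancelCollect()

	go func() {
		select {
		case <-skipCh: // skip button pressed in the UI
			cancelCollect()
		case <-ctx.Done(): // collection finished on its own
		}
	}()

	return collect(ctx)
}
```
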
### Extended diagnostics toggle

The live collect form exposes a user-facing checkbox for extended diagnostics.

- default collection prioritizes inventory completeness and bounded runtime
- when extended diagnostics is off, heavy HGX component-chassis critical plan-B retries
(`Assembly`, `Accelerators`, `Drives`, `NetworkAdapters`, `PCIeDevices`) are skipped
- when extended diagnostics is on, those retries are allowed and extra debug payloads are collected

This toggle is intended for operator-driven deep diagnostics on problematic hosts, not for the default path.

### Discovery model

The collector does not rely on one fixed vendor tree.

@@ -168,3 +180,10 @@ When changing collection logic:
Status: mock scaffold only.

It remains registered for protocol completeness, but it is not a real collection path.
The project is Redfish-first for live collection:
- Redfish already covers the current product goals for inventory, sensors, and hardware event logs
- the live architecture depends on replayable `raw_payloads.redfish_tree`
- a generic IPMI collector would require a separate raw snapshot and replay contract

IPMI should be reconsidered only as a narrow fallback for real field cases where Redfish is
missing or unreliable for a specific capability such as SEL, FRU, or sensors.

@@ -55,6 +55,7 @@ When `vendor_id` and `device_id` are known but the model name is missing or gene
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `lenovo_xcc` | Lenovo XCC mini-log ZIP archives | JSON inventory + platform event logs |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
| `unraid` | Unraid diagnostics/log bundles | Server and storage-focused parsing |

@@ -194,6 +195,7 @@ and `LogDump/` trees.
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| Lenovo XCC mini-log | `lenovo_xcc` | Ready | ThinkSystem SR650 V3 XCC mini-log ZIP |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
| Unraid | `unraid` | Ready | Unraid diagnostics archives |

@@ -57,6 +57,11 @@ Current behavior:
7. Packages any already-present binaries from `bin/`
8. Generates `SHA256SUMS.txt`

Release tag format:
- project release tags use `vN.M`
- do not create `vN.M.P` tags for LOGPile releases
- release artifacts and `main.version` inherit the exact git tag string

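To illustrate how the tag string ends up in `main.version`, the sketch below shows a conventional linker-flag injection; the exact build wiring is not part of this change, so the `-ldflags` invocation and package path are assumptions, not the project's verbatim rule.

```go
package main

import "fmt"

// version is overridden at build time, for example (illustrative only):
//
//	go build -ldflags "-X main.version=$(git describe --tags)" ./cmd/...
var version = "dev"

func main() {
	// With a vN.M release tag this prints something like "version: v1.12".
	fmt.Println("version:", version)
}
```
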
Important limitation:
- `scripts/release.sh` does not run `make build-all` for you
- if you want Linux or additional macOS archives in the release directory, build them before running the script

@@ -1120,3 +1120,81 @@ incomplete for UI and Reanimator consumers.
- System firmware versions such as BIOS and iBMC survive xFusion file exports.
- xFusion archives participate more reliably in canonical device/export flows without special UI cases.

---

## ADL-043 — Extended HGX diagnostic plan-B is opt-in from the live collect form

**Date:** 2026-04-13
**Context:** Some Supermicro HGX Redfish targets expose slow or hanging component-chassis inventory
collections during critical plan-B, especially under `Chassis/HGX_*` for `Assembly`,
`Accelerators`, `Drives`, `NetworkAdapters`, and `PCIeDevices`. Default collection should not
block operators on deep diagnostic retries that are useful mainly for troubleshooting.
**Decision:** Keep the normal snapshot/replay path unchanged, but gate those heavy HGX
component-chassis critical plan-B retries behind the existing live-collect `debug_payloads` flag,
presented in the UI as "Сбор расширенных данных для диагностики" ("collect extended diagnostic data").
**Consequences:**
- Default live collection skips those heavy diagnostic plan-B retries and reaches replay faster.
- Operators can explicitly opt into the slower diagnostic path when they need deeper collection.
- The same user-facing toggle continues to enable extra debug payload capture for troubleshooting.

---

## ADL-044 — LOGPile project release tags use `vN.M`

**Date:** 2026-04-13
**Context:** The repository accumulated release tags in `vN.M.P` form, while the shared module
versioning contract in `bible/rules/patterns/module-versioning/contract.md` standardizes version
shape as `N.M`. Release tooling reads the git tag verbatim into build metadata and release
artifacts, so inconsistent tag shape leaks directly into packaged versions.
**Decision:** Use `vN.M` for LOGPile project release tags going forward. Do not create new
`vN.M.P` tags for repository releases. Build metadata, release directory names, and release notes
continue to inherit the exact git tag string from `git describe --tags`.
**Consequences:**
- Future project releases have a two-component version string such as `v1.12`.
- Release artifacts and `--version` output stay aligned with the tag shape without extra mapping.
- Existing historical `vN.M.P` tags remain as-is unless explicitly rewritten.

---

## ADL-045 — Generic live IPMI collector is deferred; Redfish remains the only production live path

**Date:** 2026-04-22
**Context:** Sprint issue `#12` proposed a generic IPMI collector for SEL/FRU/sensors. By this
point LOGPile already has a production Redfish pipeline with replayable raw snapshots, profile-
driven acquisition, and normalized event/sensor/inventory extraction. Redfish also already covers
the current product goals better than IPMI for live collection: richer inventory, structured
resource relationships, and vendor log access via `LogServices`, including SEL-style logs on many
implementations.

**Decision:** Do not build a generic live IPMI collector now. Keep `ipmi_mock.go` only as a
protocol placeholder in the registry and UI/API contract. Treat Redfish as the only production
live collection path. Revisit IPMI only if real field evidence shows that a specific target class
cannot provide required data over Redfish. If revisited, prefer a narrow fallback scope such as
`IPMI SEL fallback`, `IPMI FRU fallback`, or `IPMI sensor fallback` rather than a second full
collector architecture.

**Consequences:**
- Issue `#12` is closed as deferred/not planned, not as implemented.
- Live collection architecture stays centered on replayable `raw_payloads.redfish_tree`.
- The codebase avoids introducing a second generic live-ingest/replay contract for IPMI data.
- Future IPMI work must be justified by concrete Redfish gaps on real hardware, not by protocol
symmetry alone.

---

## ADL-046 — The web shell delegates report rendering to `internal/chart`

**Date:** 2026-04-22
**Context:** The frontend had two competing report paths: the embedded `internal/chart` viewer and
an older client-side renderer in `web/static/js/app.js` for config, firmware, sensors, serials,
events, and parse errors. That duplication left dead controls in the shell and made the report
source of truth ambiguous.
**Decision:** The `web/` frontend shell is responsible only for data intake, job control, and
top-level actions. The report itself must be rendered exclusively through `internal/chart`.
Do not keep parallel report sections, filters, or table renderers in shell JavaScript.
**Consequences:**
- The browser UI has a single report rendering path: `/chart/current` inside the embedded viewer.
- Report-level filtering or extra report sections must be implemented in `internal/chart`, not in
`web/static/js/app.js`.
- Removing legacy DOM renderers from the shell is a correctness fix, not a behavior regression.

@@ -8,6 +8,7 @@ import (
"os"
"os/exec"
"runtime"
"strings"
"time"

"git.mchus.pro/mchus/logpile/internal/parser"
@@ -38,10 +39,11 @@ func main() {
server.WebFS = web.FS

cfg := server.Config{
Port: *port,
PreloadFile: *file,
AppVersion: version,
AppCommit: commit,
Port: *port,
PreloadFile: *file,
AppVersion: version,
AppCommit: commit,
ChartVersion: detectChartVersion(),
}

srv := server.New(cfg)
@@ -92,6 +94,15 @@ func openBrowser(url string) {
}
}

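// detectChartVersion reports the version of the embedded internal/chart
// submodule via `git describe --tags --always --dirty --abbrev=7`; it returns
// "" when git or the submodule checkout is unavailable.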
func detectChartVersion() string {
cmd := exec.Command("git", "-C", "internal/chart", "describe", "--tags", "--always", "--dirty", "--abbrev=7")
out, err := cmd.Output()
if err != nil {
return ""
}
return strings.TrimSpace(string(out))
}

func maybeWaitForCrashInput(enabled bool) {
if !enabled || !isInteractiveConsole() {
return

Submodule internal/chart updated: c025ae0477...f6517987b3
@@ -345,8 +345,9 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
|
||||
"manager_critical_suffixes": acquisitionPlan.ScopedPaths.ManagerCriticalSuffixes,
|
||||
},
|
||||
"tuning": map[string]any{
|
||||
"snapshot_max_documents": acquisitionPlan.Tuning.SnapshotMaxDocuments,
|
||||
"snapshot_workers": acquisitionPlan.Tuning.SnapshotWorkers,
|
||||
"snapshot_max_documents": acquisitionPlan.Tuning.SnapshotMaxDocuments,
|
||||
"snapshot_workers": acquisitionPlan.Tuning.SnapshotWorkers,
|
||||
"snapshot_exclude_contains": acquisitionPlan.Tuning.SnapshotExcludeContains,
|
||||
"prefetch_workers": acquisitionPlan.Tuning.PrefetchWorkers,
|
||||
"prefetch_enabled": boolPointerValue(acquisitionPlan.Tuning.PrefetchEnabled),
|
||||
"nvme_post_probe": boolPointerValue(acquisitionPlan.Tuning.NVMePostProbeEnabled),
|
||||
@@ -496,7 +497,6 @@ func (c *RedfishConnector) Collect(ctx context.Context, req Request, emit Progre
|
||||
return result, nil
|
||||
}
|
||||
|
||||
|
||||
// collectDebugPayloads fetches vendor-specific diagnostic endpoints on a best-effort basis.
|
||||
// Results are stored in rawPayloads["redfish_debug_payloads"] and exported with the bundle.
|
||||
// Enabled only when Request.DebugPayloads is true.
|
||||
@@ -511,7 +511,6 @@ func (c *RedfishConnector) collectDebugPayloads(ctx context.Context, client *htt
|
||||
return out
|
||||
}
|
||||
|
||||
|
||||
func firstNonEmptyPath(paths []string, fallback string) string {
|
||||
for _, p := range paths {
|
||||
if strings.TrimSpace(p) != "" {
|
||||
@@ -543,7 +542,6 @@ func redfishSystemPowerState(systemDoc map[string]interface{}) string {
|
||||
return ""
|
||||
}
|
||||
|
||||
|
||||
func (c *RedfishConnector) postJSON(ctx context.Context, client *http.Client, req Request, baseURL, resourcePath string, payload map[string]any) error {
|
||||
body, err := json.Marshal(payload)
|
||||
if err != nil {
|
||||
@@ -1346,6 +1344,11 @@ func (c *RedfishConnector) collectRawRedfishTree(ctx context.Context, client *ht
|
||||
if !shouldCrawlPath(path) {
|
||||
return
|
||||
}
|
||||
for _, pattern := range tuning.SnapshotExcludeContains {
|
||||
if pattern != "" && strings.Contains(path, pattern) {
|
||||
return
|
||||
}
|
||||
}
|
||||
mu.Lock()
|
||||
if len(seen) >= maxDocuments {
|
||||
mu.Unlock()
|
||||
@@ -2299,7 +2302,6 @@ func redfishCriticalSlowGap() time.Duration {
|
||||
return 1200 * time.Millisecond
|
||||
}
|
||||
|
||||
|
||||
func redfishSnapshotMemoryRequestTimeout() time.Duration {
|
||||
if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_MEMORY_TIMEOUT")); v != "" {
|
||||
if d, err := time.ParseDuration(v); err == nil && d > 0 {
|
||||
@@ -2878,11 +2880,16 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
|
||||
timings := newRedfishPathTimingCollector(4)
|
||||
var targets []string
|
||||
seenTargets := make(map[string]struct{})
|
||||
skippedDiagnosticTargets := 0
|
||||
addTarget := func(path string) {
|
||||
path = normalizeRedfishPath(path)
|
||||
if path == "" {
|
||||
return
|
||||
}
|
||||
if !shouldIncludeCriticalPlanBPath(req, path) {
|
||||
skippedDiagnosticTargets++
|
||||
return
|
||||
}
|
||||
if _, ok := seenTargets[path]; ok {
|
||||
return
|
||||
}
|
||||
@@ -2968,6 +2975,13 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
|
||||
return 0
|
||||
}
|
||||
if emit != nil {
|
||||
if skippedDiagnosticTargets > 0 {
|
||||
emit(Progress{
|
||||
Status: "running",
|
||||
Progress: 97,
|
||||
Message: fmt.Sprintf("Redfish: расширенная диагностика выключена, пропущено %d тяжелых diagnostic endpoint", skippedDiagnosticTargets),
|
||||
})
|
||||
}
|
||||
totalETA := redfishCriticalCooldown() + estimatePlanBETA(len(targets))
|
||||
emit(Progress{
|
||||
Status: "running",
|
||||
@@ -3073,6 +3087,39 @@ func (c *RedfishConnector) recoverCriticalRedfishDocsPlanB(ctx context.Context,
|
||||
return recovered
|
||||
}
|
||||
|
||||
func shouldIncludeCriticalPlanBPath(req Request, path string) bool {
|
||||
if req.DebugPayloads {
|
||||
return true
|
||||
}
|
||||
return !isExtendedDiagnosticCriticalPlanBPath(path)
|
||||
}
|
||||
|
||||
func isExtendedDiagnosticCriticalPlanBPath(path string) bool {
|
||||
path = normalizeRedfishPath(path)
|
||||
if path == "" {
|
||||
return false
|
||||
}
|
||||
parts := strings.Split(strings.Trim(path, "/"), "/")
|
||||
if len(parts) < 5 || parts[0] != "redfish" || parts[1] != "v1" || parts[2] != "Chassis" {
|
||||
return false
|
||||
}
|
||||
if !strings.HasPrefix(parts[3], "HGX_") {
|
||||
return false
|
||||
}
|
||||
for _, suffix := range []string{
|
||||
"/Accelerators",
|
||||
"/Assembly",
|
||||
"/Drives",
|
||||
"/NetworkAdapters",
|
||||
"/PCIeDevices",
|
||||
} {
|
||||
if strings.HasSuffix(path, suffix) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (c *RedfishConnector) recoverProfilePlanBDocs(ctx context.Context, client *http.Client, req Request, baseURL string, plan redfishprofile.AcquisitionPlan, rawTree map[string]interface{}, emit ProgressFn) int {
|
||||
if len(plan.PlanBPaths) == 0 || plan.Mode == redfishprofile.ModeFallback || !plan.Tuning.RecoveryPolicy.EnableProfilePlanB {
|
||||
return 0
|
||||
|
||||
@@ -50,11 +50,15 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
|
||||
}
|
||||
|
||||
for _, systemPath := range systemPaths {
|
||||
collectFrom(joinPath(systemPath, "/LogServices"), isHardwareLogService)
|
||||
for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, systemPath, "LogServices") {
|
||||
collectFrom(logServicesPath, isHardwareLogService)
|
||||
}
|
||||
}
|
||||
// Managers hold the IPMI SEL on AMI/MSI BMCs — include only the "SEL" service.
|
||||
for _, managerPath := range managerPaths {
|
||||
collectFrom(joinPath(managerPath, "/LogServices"), isManagerSELService)
|
||||
for _, logServicesPath := range c.redfishLinkedCollectionPaths(ctx, client, req, baseURL, managerPath, "LogServices") {
|
||||
collectFrom(logServicesPath, isManagerSELService)
|
||||
}
|
||||
}
|
||||
|
||||
if len(out) > 0 {
|
||||
@@ -63,6 +67,42 @@ func (c *RedfishConnector) collectRedfishLogEntries(ctx context.Context, client
|
||||
return out
|
||||
}
|
||||
|
||||
func (c *RedfishConnector) redfishLinkedCollectionPaths(
|
||||
ctx context.Context,
|
||||
client *http.Client,
|
||||
req Request,
|
||||
baseURL, resourcePath, linkKey string,
|
||||
) []string {
|
||||
resourcePath = normalizeRedfishPath(resourcePath)
|
||||
if resourcePath == "" || strings.TrimSpace(linkKey) == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
seen := make(map[string]struct{}, 2)
|
||||
var out []string
|
||||
add := func(path string) {
|
||||
path = normalizeRedfishPath(path)
|
||||
if path == "" {
|
||||
return
|
||||
}
|
||||
if _, ok := seen[path]; ok {
|
||||
return
|
||||
}
|
||||
seen[path] = struct{}{}
|
||||
out = append(out, path)
|
||||
}
|
||||
|
||||
add(joinPath(resourcePath, "/"+strings.TrimSpace(linkKey)))
|
||||
|
||||
resourceDoc, err := c.getJSON(ctx, client, req, baseURL, resourcePath)
|
||||
if err == nil {
|
||||
if linked := redfishLinkedPath(resourceDoc, linkKey); linked != "" {
|
||||
add(linked)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// fetchRedfishLogEntriesWithPaging fetches entries from a LogEntry collection,
|
||||
// following nextLink pages. Stops early when entries older than cutoff are encountered
|
||||
// (assumes BMC returns entries newest-first, which is typical).
|
||||
@@ -182,7 +222,7 @@ func redfishLogServiceEntriesPath(svc map[string]interface{}) string {
|
||||
// Audit, authentication, and session events are excluded.
|
||||
func isHardwareLogEntry(entry map[string]interface{}) bool {
|
||||
entryType := strings.TrimSpace(asString(entry["EntryType"]))
|
||||
if strings.EqualFold(entryType, "Oem") {
|
||||
if strings.EqualFold(entryType, "Oem") && !strings.EqualFold(strings.TrimSpace(asString(entry["OemRecordFormat"])), "Lenovo") {
|
||||
return false
|
||||
}
|
||||
|
||||
@@ -362,6 +402,9 @@ func parseIPMIDumpKV(message string) map[string]string {
|
||||
// AMI/MSI BMCs often set Severity="OK" on all SEL records regardless of content,
|
||||
// so we fall back to inferring severity from SensorType when the explicit field is unhelpful.
|
||||
func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
|
||||
if redfishLogEntryLooksLikeWarning(entry) {
|
||||
return models.SeverityWarning
|
||||
}
|
||||
// Newer Redfish uses MessageSeverity; older uses Severity.
|
||||
raw := strings.ToLower(firstNonEmpty(
|
||||
strings.TrimSpace(asString(entry["MessageSeverity"])),
|
||||
@@ -380,6 +423,16 @@ func redfishLogEntrySeverity(entry map[string]interface{}) models.Severity {
|
||||
}
|
||||
}
|
||||
|
||||
func redfishLogEntryLooksLikeWarning(entry map[string]interface{}) bool {
|
||||
joined := strings.ToLower(strings.TrimSpace(strings.Join([]string{
|
||||
asString(entry["Message"]),
|
||||
asString(entry["Name"]),
|
||||
asString(entry["SensorType"]),
|
||||
asString(entry["EntryCode"]),
|
||||
}, " ")))
|
||||
return strings.Contains(joined, "unqualified dimm")
|
||||
}
|
||||
|
||||
// redfishSeverityFromSensorType infers event severity from the IPMI/Redfish SensorType string.
|
||||
func redfishSeverityFromSensorType(sensorType string) models.Severity {
|
||||
switch strings.ToLower(sensorType) {
|
||||
|
||||
internal/collector/redfish_logentries_test.go (new file, 125 lines)
@@ -0,0 +1,125 @@
|
||||
package collector
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"git.mchus.pro/mchus/logpile/internal/models"
|
||||
)
|
||||
|
||||
func TestCollectRedfishLogEntries_UsesLinkedManagerLogServicesPath(t *testing.T) {
|
||||
mux := http.NewServeMux()
|
||||
register := func(path string, payload interface{}) {
|
||||
mux.HandleFunc(path, func(w http.ResponseWriter, r *http.Request) {
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_ = json.NewEncoder(w).Encode(payload)
|
||||
})
|
||||
}
|
||||
|
||||
register("/redfish/v1/Managers/1", map[string]interface{}{
|
||||
"Id": "1",
|
||||
"LogServices": map[string]interface{}{
|
||||
"@odata.id": "/redfish/v1/Systems/1/LogServices",
|
||||
},
|
||||
})
|
||||
register("/redfish/v1/Systems/1/LogServices", map[string]interface{}{
|
||||
"Members": []map[string]string{
|
||||
{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL"},
|
||||
},
|
||||
})
|
||||
register("/redfish/v1/Systems/1/LogServices/SEL", map[string]interface{}{
|
||||
"Id": "SEL",
|
||||
"Entries": map[string]interface{}{
|
||||
"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries",
|
||||
},
|
||||
})
|
||||
register("/redfish/v1/Systems/1/LogServices/SEL/Entries", map[string]interface{}{
|
||||
"Members": []map[string]string{
|
||||
{"@odata.id": "/redfish/v1/Systems/1/LogServices/SEL/Entries/1"},
|
||||
},
|
||||
})
|
||||
register("/redfish/v1/Systems/1/LogServices/SEL/Entries/1", map[string]interface{}{
|
||||
"Id": "1",
|
||||
"Created": time.Now().UTC().Format(time.RFC3339),
|
||||
"Message": "System found Unqualified DIMM in slot DIMM A1",
|
||||
"MessageSeverity": "OK",
|
||||
"SensorType": "Memory",
|
||||
"EntryType": "Event",
|
||||
})
|
||||
|
||||
ts := httptest.NewServer(mux)
|
||||
defer ts.Close()
|
||||
|
||||
c := NewRedfishConnector()
|
||||
got := c.collectRedfishLogEntries(context.Background(), ts.Client(), Request{
|
||||
Host: ts.URL,
|
||||
Port: 443,
|
||||
Protocol: "redfish",
|
||||
Username: "admin",
|
||||
AuthType: "password",
|
||||
Password: "secret",
|
||||
TLSMode: "strict",
|
||||
}, ts.URL, nil, []string{"/redfish/v1/Managers/1"})
|
||||
|
||||
if len(got) != 1 {
|
||||
t.Fatalf("expected 1 collected log entry, got %d", len(got))
|
||||
}
|
||||
if got[0]["Message"] != "System found Unqualified DIMM in slot DIMM A1" {
|
||||
t.Fatalf("unexpected collected message: %#v", got[0]["Message"])
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseRedfishLogEntries_UnqualifiedDIMMBecomesWarning(t *testing.T) {
|
||||
rawPayloads := map[string]any{
|
||||
"redfish_log_entries": []any{
|
||||
map[string]any{
|
||||
"Id": "sel-1",
|
||||
"Created": "2026-04-13T12:00:00Z",
|
||||
"Message": "System found Unqualified DIMM in slot DIMM A1",
|
||||
"MessageSeverity": "OK",
|
||||
"SensorType": "Memory",
|
||||
"EntryType": "Event",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
|
||||
if len(events) != 1 {
|
||||
t.Fatalf("expected 1 event, got %d", len(events))
|
||||
}
|
||||
if events[0].Severity != models.SeverityWarning {
|
||||
t.Fatalf("expected warning severity, got %q", events[0].Severity)
|
||||
}
|
||||
if events[0].Description != "System found Unqualified DIMM in slot DIMM A1" {
|
||||
t.Fatalf("unexpected description: %q", events[0].Description)
|
||||
}
|
||||
}
|
||||
|
||||
func TestParseRedfishLogEntries_LenovoOEMEntryIsKept(t *testing.T) {
|
||||
rawPayloads := map[string]any{
|
||||
"redfish_log_entries": []any{
|
||||
map[string]any{
|
||||
"Id": "plat-55",
|
||||
"Created": "2026-04-13T12:00:00Z",
|
||||
"Message": "DIMM A1 is unqualified",
|
||||
"MessageSeverity": "Warning",
|
||||
"SensorType": "Memory",
|
||||
"EntryType": "Oem",
|
||||
"OemRecordFormat": "Lenovo",
|
||||
"EntryCode": "Assert",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
events := parseRedfishLogEntries(rawPayloads, time.Date(2026, 4, 13, 12, 30, 0, 0, time.UTC))
|
||||
if len(events) != 1 {
|
||||
t.Fatalf("expected 1 Lenovo OEM event, got %d", len(events))
|
||||
}
|
||||
if events[0].Severity != models.SeverityWarning {
|
||||
t.Fatalf("expected warning severity, got %q", events[0].Severity)
|
||||
}
|
||||
}
|
||||
internal/collector/redfish_planb_test.go (new file, 57 lines)
@@ -0,0 +1,57 @@
|
||||
package collector
|
||||
|
||||
import "testing"
|
||||
|
||||
func TestShouldIncludeCriticalPlanBPath(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
req Request
|
||||
path string
|
||||
want bool
|
||||
}{
|
||||
{
|
||||
name: "skip hgx erot pcie without extended diagnostics",
|
||||
req: Request{},
|
||||
path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
|
||||
want: false,
|
||||
},
|
||||
{
|
||||
name: "skip hgx chassis assembly without extended diagnostics",
|
||||
req: Request{},
|
||||
path: "/redfish/v1/Chassis/HGX_Chassis_0/Assembly",
|
||||
want: false,
|
||||
},
|
||||
{
|
||||
name: "keep standard chassis inventory without extended diagnostics",
|
||||
req: Request{},
|
||||
path: "/redfish/v1/Chassis/1/PCIeDevices",
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "keep nvme storage backplane drives without extended diagnostics",
|
||||
req: Request{},
|
||||
path: "/redfish/v1/Chassis/NVMeSSD.0.Group.0.StorageBackplane/Drives",
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "keep system processors without extended diagnostics",
|
||||
req: Request{},
|
||||
path: "/redfish/v1/Systems/HGX_Baseboard_0/Processors",
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "include hgx erot pcie when extended diagnostics enabled",
|
||||
req: Request{DebugPayloads: true},
|
||||
path: "/redfish/v1/Chassis/HGX_ERoT_NVSwitch_0/PCIeDevices",
|
||||
want: true,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
if got := shouldIncludeCriticalPlanBPath(tt.req, tt.path); got != tt.want {
|
||||
t.Fatalf("shouldIncludeCriticalPlanBPath(%q) = %v, want %v", tt.path, got, tt.want)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -326,6 +326,47 @@ func TestBuildAnalysisDirectives_SupermicroEnablesStorageRecovery(t *testing.T)
|
||||
}
|
||||
}
|
||||
|
||||
func TestMatchProfiles_LenovoXCCSelectsMatchedModeAndExcludesSensors(t *testing.T) {
|
||||
match := MatchProfiles(MatchSignals{
|
||||
SystemManufacturer: "Lenovo",
|
||||
ChassisManufacturer: "Lenovo",
|
||||
OEMNamespaces: []string{"Lenovo"},
|
||||
})
|
||||
if match.Mode != ModeMatched {
|
||||
t.Fatalf("expected matched mode, got %q", match.Mode)
|
||||
}
|
||||
found := false
|
||||
for _, profile := range match.Profiles {
|
||||
if profile.Name() == "lenovo" {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
t.Fatal("expected lenovo profile to be selected")
|
||||
}
|
||||
|
||||
// Verify the acquisition plan excludes noisy Lenovo-specific snapshot paths.
|
||||
plan := BuildAcquisitionPlan(MatchSignals{
|
||||
SystemManufacturer: "Lenovo",
|
||||
ChassisManufacturer: "Lenovo",
|
||||
OEMNamespaces: []string{"Lenovo"},
|
||||
})
|
||||
wantExcluded := []string{"/Sensors/", "/Oem/Lenovo/LEDs/", "/Oem/Lenovo/Slots/"}
|
||||
for _, want := range wantExcluded {
|
||||
found := false
|
||||
for _, ex := range plan.Tuning.SnapshotExcludeContains {
|
||||
if ex == want {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
t.Errorf("expected SnapshotExcludeContains to include %q, got %v", want, plan.Tuning.SnapshotExcludeContains)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestMatchProfiles_OrderingIsDeterministic(t *testing.T) {
|
||||
signals := MatchSignals{
|
||||
SystemManufacturer: "Micro-Star International Co., Ltd.",
|
||||
|
||||
internal/collector/redfishprofile/profile_lenovo.go (new file, 65 lines)
@@ -0,0 +1,65 @@
|
||||
package redfishprofile
|
||||
|
||||
func lenovoProfile() Profile {
|
||||
return staticProfile{
|
||||
name: "lenovo",
|
||||
priority: 20,
|
||||
safeForFallback: true,
|
||||
matchFn: func(s MatchSignals) int {
|
||||
score := 0
|
||||
if containsFold(s.SystemManufacturer, "lenovo") ||
|
||||
containsFold(s.ChassisManufacturer, "lenovo") {
|
||||
score += 80
|
||||
}
|
||||
for _, ns := range s.OEMNamespaces {
|
||||
if containsFold(ns, "lenovo") {
|
||||
score += 30
|
||||
break
|
||||
}
|
||||
}
|
||||
// Lenovo XClarity Controller (XCC) is the BMC product line.
|
||||
if containsFold(s.ServiceRootProduct, "xclarity") ||
|
||||
containsFold(s.ServiceRootProduct, "xcc") {
|
||||
score += 30
|
||||
}
|
||||
return min(score, 100)
|
||||
},
|
||||
extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
|
||||
// Lenovo XCC BMC exposes Chassis/1/Sensors with hundreds of individual
|
||||
// sensor member documents (e.g. Chassis/1/Sensors/101L1). These are
|
||||
// not used by any LOGPile parser — thermal/power data is read from
|
||||
// the aggregate Chassis/*/Thermal and Chassis/*/Power endpoints. On
|
||||
// a real server they largely return errors, wasting many minutes.
|
||||
// Lenovo OEM subtrees under Oem/Lenovo/LEDs and Oem/Lenovo/Slots also
|
||||
// enumerate dozens of individual documents not relevant to inventory.
|
||||
ensureSnapshotExcludeContains(plan,
|
||||
"/Sensors/", // individual sensor docs (Chassis/1/Sensors/NNN)
|
||||
"/Oem/Lenovo/LEDs/", // individual LED status entries (~47 per server)
|
||||
"/Oem/Lenovo/Slots/", // individual slot detail entries (~26 per server)
|
||||
"/Oem/Lenovo/Metrics/", // operational metrics, not inventory
|
||||
"/Oem/Lenovo/History", // historical telemetry
|
||||
"/Oem/Lenovo/ScheduledPower", // power scheduling config
|
||||
"/Oem/Lenovo/BootSettings/BootOrder", // individual boot order lists
|
||||
"/PortForwardingMap/", // network port forwarding config
|
||||
)
|
||||
// Lenovo XCC BMC is typically slow (p95 latency often 3-5s even under
|
||||
// normal load). Set rate thresholds that don't over-throttle on the
|
||||
// first few requests, and give the ETA estimator a realistic baseline.
|
||||
ensureRatePolicy(plan, AcquisitionRatePolicy{
|
||||
TargetP95LatencyMS: 2000,
|
||||
ThrottleP95LatencyMS: 4000,
|
||||
MinSnapshotWorkers: 2,
|
||||
MinPrefetchWorkers: 1,
|
||||
DisablePrefetchOnErrors: true,
|
||||
})
|
||||
ensureETABaseline(plan, AcquisitionETABaseline{
|
||||
DiscoverySeconds: 15,
|
||||
SnapshotSeconds: 120,
|
||||
PrefetchSeconds: 30,
|
||||
CriticalPlanBSeconds: 40,
|
||||
ProfilePlanBSeconds: 20,
|
||||
})
|
||||
addPlanNote(plan, "lenovo xcc acquisition extensions enabled: noisy sensor/oem paths excluded from snapshot")
|
||||
},
|
||||
}
|
||||
}
|
||||
@@ -56,6 +56,7 @@ func BuiltinProfiles() []Profile {
|
||||
supermicroProfile(),
|
||||
dellProfile(),
|
||||
hpeProfile(),
|
||||
lenovoProfile(),
|
||||
inspurGroupOEMPlatformsProfile(),
|
||||
hgxProfile(),
|
||||
xfusionProfile(),
|
||||
@@ -226,6 +227,10 @@ func ensurePrefetchPolicy(plan *AcquisitionPlan, policy AcquisitionPrefetchPolic
|
||||
addPlanPaths(&plan.Tuning.PrefetchPolicy.ExcludeContains, policy.ExcludeContains...)
|
||||
}
|
||||
|
||||
func ensureSnapshotExcludeContains(plan *AcquisitionPlan, patterns ...string) {
|
||||
addPlanPaths(&plan.Tuning.SnapshotExcludeContains, patterns...)
|
||||
}
|
||||
|
||||
func min(a, b int) int {
|
||||
if a < b {
|
||||
return a
|
||||
|
||||
@@ -53,16 +53,17 @@ type AcquisitionScopedPathPolicy struct {
|
||||
}
|
||||
|
||||
type AcquisitionTuning struct {
|
||||
SnapshotMaxDocuments int
|
||||
SnapshotWorkers int
|
||||
PrefetchEnabled *bool
|
||||
PrefetchWorkers int
|
||||
NVMePostProbeEnabled *bool
|
||||
RatePolicy AcquisitionRatePolicy
|
||||
ETABaseline AcquisitionETABaseline
|
||||
PostProbePolicy AcquisitionPostProbePolicy
|
||||
RecoveryPolicy AcquisitionRecoveryPolicy
|
||||
PrefetchPolicy AcquisitionPrefetchPolicy
|
||||
SnapshotMaxDocuments int
|
||||
SnapshotWorkers int
|
||||
SnapshotExcludeContains []string
|
||||
PrefetchEnabled *bool
|
||||
PrefetchWorkers int
|
||||
NVMePostProbeEnabled *bool
|
||||
RatePolicy AcquisitionRatePolicy
|
||||
ETABaseline AcquisitionETABaseline
|
||||
PostProbePolicy AcquisitionPostProbePolicy
|
||||
RecoveryPolicy AcquisitionRecoveryPolicy
|
||||
PrefetchPolicy AcquisitionPrefetchPolicy
|
||||
}
|
||||
|
||||
type AcquisitionRatePolicy struct {
|
||||
|
||||
@@ -1961,7 +1961,10 @@ func pcieDedupKey(item ReanimatorPCIe) string {
|
||||
slot := strings.ToLower(strings.TrimSpace(item.Slot))
|
||||
serial := strings.ToLower(strings.TrimSpace(item.SerialNumber))
|
||||
bdf := strings.ToLower(strings.TrimSpace(item.BDF))
|
||||
if slot != "" {
|
||||
// Generic slot names (e.g. "PCIe Device" from HGX BMC) are not unique
|
||||
// hardware positions — multiple distinct devices share the same name.
|
||||
// Fall through to serial/BDF so they are not incorrectly collapsed.
|
||||
if slot != "" && !isGenericPCIeSlotName(slot) {
|
||||
return "slot:" + slot
|
||||
}
|
||||
if serial != "" {
|
||||
@@ -1970,9 +1973,22 @@ func pcieDedupKey(item ReanimatorPCIe) string {
|
||||
if bdf != "" {
|
||||
return "bdf:" + bdf
|
||||
}
|
||||
if slot != "" {
|
||||
return "slot:" + slot
|
||||
}
|
||||
return strings.ToLower(strings.TrimSpace(item.DeviceClass)) + "|" + strings.ToLower(strings.TrimSpace(item.Model))
|
||||
}
|
||||
|
||||
// isGenericPCIeSlotName reports whether slot is a generic device-type label
|
||||
// rather than a unique hardware position identifier.
|
||||
func isGenericPCIeSlotName(slot string) bool {
|
||||
switch slot {
|
||||
case "pcie device", "pcie slot", "pcie":
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func pcieQualityScore(item ReanimatorPCIe) int {
|
||||
score := 0
|
||||
if strings.TrimSpace(item.SerialNumber) != "" {
|
||||
|
||||
@@ -733,6 +733,42 @@ func TestConvertPCIeDevices_SkipsDisplayControllerDuplicates(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestConvertPCIeDevices_PreservesAllGPUsWithGenericSlot(t *testing.T) {
|
||||
// Supermicro HGX BMC reports all GPU PCIe devices with Name "PCIe Device" —
|
||||
// a generic label that is not a unique hardware position. All 8 GPUs must
|
||||
// be preserved; dedup by generic slot name must not collapse them into one.
|
||||
gpus := make([]models.GPU, 8)
|
||||
serials := []string{
|
||||
"1654925165720", "1654925166160", "1654925165942", "1654925165271",
|
||||
"1654925165719", "1654925165252", "1654925165304", "1654925165587",
|
||||
}
|
||||
for i, sn := range serials {
|
||||
gpus[i] = models.GPU{
|
||||
Slot: "PCIe Device",
|
||||
Model: "B200 180GB HBM3e",
|
||||
Manufacturer: "NVIDIA",
|
||||
SerialNumber: sn,
|
||||
PartNumber: "2901-886-A1",
|
||||
Status: "OK",
|
||||
}
|
||||
}
|
||||
hw := &models.HardwareConfig{GPUs: gpus}
|
||||
result := convertPCIeDevices(hw, "2026-04-13T10:00:00Z")
|
||||
if len(result) != 8 {
|
||||
t.Fatalf("expected 8 GPU entries (one per serial), got %d", len(result))
|
||||
}
|
||||
seen := make(map[string]bool)
|
||||
for _, r := range result {
|
||||
if seen[r.SerialNumber] {
|
||||
t.Fatalf("duplicate serial %q in PCIe result", r.SerialNumber)
|
||||
}
|
||||
seen[r.SerialNumber] = true
|
||||
if r.DeviceClass != "VideoController" {
|
||||
t.Fatalf("expected VideoController device class, got %q", r.DeviceClass)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConvertPCIeDevices_MapsGPUStatusHistory(t *testing.T) {
|
||||
hw := &models.HardwareConfig{
|
||||
GPUs: []models.GPU{
|
||||
|
||||
@@ -257,15 +257,16 @@ type Storage struct {
|
||||
|
||||
// StorageVolume represents a logical storage volume (RAID/VROC/etc.).
|
||||
type StorageVolume struct {
|
||||
ID string `json:"id,omitempty"`
|
||||
Name string `json:"name,omitempty"`
|
||||
Controller string `json:"controller,omitempty"`
|
||||
RAIDLevel string `json:"raid_level,omitempty"`
|
||||
SizeGB int `json:"size_gb,omitempty"`
|
||||
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
|
||||
Status string `json:"status,omitempty"`
|
||||
Bootable bool `json:"bootable,omitempty"`
|
||||
Encrypted bool `json:"encrypted,omitempty"`
|
||||
ID string `json:"id,omitempty"`
|
||||
Name string `json:"name,omitempty"`
|
||||
Controller string `json:"controller,omitempty"`
|
||||
RAIDLevel string `json:"raid_level,omitempty"`
|
||||
SizeGB int `json:"size_gb,omitempty"`
|
||||
CapacityBytes int64 `json:"capacity_bytes,omitempty"`
|
||||
Status string `json:"status,omitempty"`
|
||||
Bootable bool `json:"bootable,omitempty"`
|
||||
Encrypted bool `json:"encrypted,omitempty"`
|
||||
Drives []string `json:"drives,omitempty"` // member drive names/labels
|
||||
}
|
||||
|
||||
// PCIeDevice represents a PCIe device
|
||||
|
||||
internal/parser/vendors/lenovo_xcc/parser.go (new file, 859 lines)
@@ -0,0 +1,859 @@
|
||||
// Package lenovo_xcc provides parser for Lenovo XCC mini-log archives.
|
||||
// Tested with: ThinkSystem SR650 V3 (XCC mini-log zip, exported via XCC UI)
|
||||
//
|
||||
// Archive structure: zip with tmp/ directory containing JSON .log files.
|
||||
//
|
||||
// IMPORTANT: Increment parserVersion when modifying parser logic!
|
||||
package lenovo_xcc
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"git.mchus.pro/mchus/logpile/internal/models"
|
||||
"git.mchus.pro/mchus/logpile/internal/parser"
|
||||
)
|
||||
|
||||
const parserVersion = "1.2"
|
||||
|
||||
func init() {
|
||||
parser.Register(&Parser{})
|
||||
}
|
||||
|
||||
// Parser implements VendorParser for Lenovo XCC mini-log archives.
|
||||
type Parser struct{}
|
||||
|
||||
func (p *Parser) Name() string { return "Lenovo XCC Mini-Log Parser" }
|
||||
func (p *Parser) Vendor() string { return "lenovo_xcc" }
|
||||
func (p *Parser) Version() string { return parserVersion }
|
||||
|
||||
// Detect checks if files match the Lenovo XCC mini-log archive format.
|
||||
// Returns confidence score 0-100.
|
||||
func (p *Parser) Detect(files []parser.ExtractedFile) int {
|
||||
confidence := 0
|
||||
for _, f := range files {
|
||||
path := strings.ToLower(f.Path)
|
||||
switch {
|
||||
case strings.HasSuffix(path, "tmp/basic_sys_info.log"):
|
||||
confidence += 60
|
||||
case strings.HasSuffix(path, "tmp/inventory_cpu.log"):
|
||||
confidence += 20
|
||||
case strings.HasSuffix(path, "tmp/xcc_plat_events1.log"):
|
||||
confidence += 20
|
||||
case strings.HasSuffix(path, "tmp/inventory_dimm.log"):
|
||||
confidence += 10
|
||||
case strings.HasSuffix(path, "tmp/inventory_fw.log"):
|
||||
confidence += 10
|
||||
}
|
||||
if confidence >= 100 {
|
||||
return 100
|
||||
}
|
||||
}
|
||||
return confidence
|
||||
}
|
||||
|
||||
// Parse parses the Lenovo XCC mini-log archive and returns an analysis result.
|
||||
func (p *Parser) Parse(files []parser.ExtractedFile) (*models.AnalysisResult, error) {
|
||||
result := &models.AnalysisResult{
|
||||
Events: make([]models.Event, 0),
|
||||
FRU: make([]models.FRUInfo, 0),
|
||||
Sensors: make([]models.SensorReading, 0),
|
||||
Hardware: &models.HardwareConfig{
|
||||
Firmware: make([]models.FirmwareInfo, 0),
|
||||
CPUs: make([]models.CPU, 0),
|
||||
Memory: make([]models.MemoryDIMM, 0),
|
||||
Storage: make([]models.Storage, 0),
|
||||
PCIeDevices: make([]models.PCIeDevice, 0),
|
||||
PowerSupply: make([]models.PSU, 0),
|
||||
},
|
||||
}
|
||||
|
||||
if f := findByPath(files, "tmp/basic_sys_info.log"); f != nil {
|
||||
parseBasicSysInfo(f.Content, result)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_fw.log"); f != nil {
|
||||
result.Hardware.Firmware = append(result.Hardware.Firmware, parseFirmware(f.Content)...)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_cpu.log"); f != nil {
|
||||
result.Hardware.CPUs = parseCPUs(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_dimm.log"); f != nil {
|
||||
memory, events := parseDIMMs(f.Content)
|
||||
result.Hardware.Memory = memory
|
||||
result.Events = append(result.Events, events...)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_disk.log"); f != nil {
|
||||
result.Hardware.Storage = parseDisks(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_volume.log"); f != nil {
|
||||
result.Hardware.Volumes = parseVolumes(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_card.log"); f != nil {
|
||||
result.Hardware.PCIeDevices = parseCards(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_psu.log"); f != nil {
|
||||
result.Hardware.PowerSupply = parsePSUs(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_ipmi_fru.log"); f != nil {
|
||||
result.FRU = parseFRU(f.Content)
|
||||
}
|
||||
if f := findByPath(files, "tmp/inventory_ipmi_sensor.log"); f != nil {
|
||||
result.Sensors = parseSensors(f.Content)
|
||||
}
|
||||
for _, f := range findEventFiles(files) {
|
||||
result.Events = append(result.Events, parseEvents(f.Content)...)
|
||||
}
|
||||
applyDIMMWarningsFromEvents(result)
|
||||
|
||||
result.Protocol = "ipmi"
|
||||
result.SourceType = models.SourceTypeArchive
|
||||
parser.ApplyManufacturedYearWeekFromFRU(result.FRU, result.Hardware)
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// findByPath returns the first file whose lowercased path ends with the given suffix.
|
||||
func findByPath(files []parser.ExtractedFile, suffix string) *parser.ExtractedFile {
|
||||
for i := range files {
|
||||
if strings.HasSuffix(strings.ToLower(files[i].Path), suffix) {
|
||||
return &files[i]
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// findEventFiles returns all xcc_plat_eventsN.log files.
|
||||
func findEventFiles(files []parser.ExtractedFile) []parser.ExtractedFile {
|
||||
var out []parser.ExtractedFile
|
||||
for _, f := range files {
|
||||
path := strings.ToLower(f.Path)
|
||||
if strings.Contains(path, "tmp/xcc_plat_events") && strings.HasSuffix(path, ".log") {
|
||||
out = append(out, f)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// --- JSON structures ---
|
||||
|
||||
type xccBasicSysInfoDoc struct {
|
||||
Items []xccBasicSysInfoItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccBasicSysInfoItem struct {
|
||||
MachineName string `json:"machine_name"`
|
||||
MachineTypeModel string `json:"machine_typemodel"`
|
||||
SerialNumber string `json:"serial_number"`
|
||||
UUID string `json:"uuid"`
|
||||
PowerState string `json:"power_state"`
|
||||
ServerState string `json:"server_state"`
|
||||
CurrentTime string `json:"current_time"`
|
||||
}
|
||||
|
||||
// xccFWEntry covers both basic_sys_info firmware (no type_str) and inventory_fw (has type_str).
|
||||
type xccFWEntry struct {
|
||||
Index int `json:"index"`
|
||||
TypeCode int `json:"type"`
|
||||
TypeStr string `json:"type_str"` // only in inventory_fw.log
|
||||
Version string `json:"version"`
|
||||
Build string `json:"build"`
|
||||
ReleaseDate string `json:"release_date"`
|
||||
}
|
||||
|
||||
type xccFirmwareDoc struct {
|
||||
Items []xccFWEntry `json:"items"`
|
||||
}
|
||||
|
||||
type xccCPUDoc struct {
|
||||
Items []xccCPUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccCPUItem struct {
|
||||
Processors []xccCPU `json:"processors"`
|
||||
}
|
||||
|
||||
type xccCPU struct {
|
||||
Name int `json:"processors_name"`
|
||||
Model string `json:"processors_cpu_model"`
|
||||
Cores json.RawMessage `json:"processors_cores"` // may be int or string
|
||||
Threads json.RawMessage `json:"processors_threads"` // may be int or string
|
||||
ClockSpeed string `json:"processors_clock_speed"`
|
||||
L1DataCache string `json:"processors_l1datacache"`
|
||||
L2Cache string `json:"processors_l2cache"`
|
||||
L3Cache string `json:"processors_l3cache"`
|
||||
Status string `json:"processors_status"`
|
||||
SerialNumber string `json:"processors_serial_number"`
|
||||
}
|
||||
|
||||
type xccDIMMDoc struct {
|
||||
Items []xccDIMMItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccDIMMItem struct {
|
||||
Memory []xccDIMM `json:"memory"`
|
||||
}
|
||||
|
||||
type xccDIMM struct {
|
||||
Index int `json:"memory_index"`
|
||||
Status string `json:"memory_status"`
|
||||
Name string `json:"memory_name"`
|
||||
Type string `json:"memory_type"`
|
||||
Capacity json.RawMessage `json:"memory_capacity"` // int (GB) or string
|
||||
PartNumber string `json:"memory_part_number"`
|
||||
SerialNumber string `json:"memory_serial_number"`
|
||||
Manufacturer string `json:"memory_manufacturer"`
|
||||
MemSpeed json.RawMessage `json:"memory_mem_speed"` // int or string
|
||||
ConfigSpeed json.RawMessage `json:"memory_config_speed"` // int or string
|
||||
}
|
||||
|
||||
type xccDiskDoc struct {
|
||||
Items []xccDiskItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccDiskItem struct {
|
||||
Disks []xccDisk `json:"disks"`
|
||||
}
|
||||
|
||||
type xccDisk struct {
|
||||
ID int `json:"id"`
|
||||
SlotNo int `json:"slotNo"`
|
||||
Type string `json:"type"`
|
||||
Interface string `json:"interface"`
|
||||
Media string `json:"media"`
|
||||
SerialNo string `json:"serialNo"`
|
||||
PartNo string `json:"partNo"`
|
||||
CapacityStr string `json:"capacityStr"` // e.g. "3.20 TB"
|
||||
Manufacture string `json:"manufacture"`
|
||||
ProductName string `json:"productName"`
|
||||
RemainLife int `json:"remainLife"` // 0-100
|
||||
FWVersion string `json:"fwVersion"`
|
||||
Temperature int `json:"temperature"`
|
||||
HealthStatus int `json:"healthStatus"` // int code: 2=Normal
|
||||
State int `json:"state"`
|
||||
StateStr string `json:"statestr"`
|
||||
}
|
||||
|
||||
type xccCardDoc struct {
|
||||
Items []xccCard `json:"items"`
|
||||
}
|
||||
|
||||
type xccCard struct {
|
||||
Key int `json:"key"`
|
||||
SlotNo int `json:"slotNo"`
|
||||
AdapterName string `json:"adapterName"`
|
||||
ConnectorLabel string `json:"connectorLabel"`
|
||||
OOBSupported int `json:"oobSupported"`
|
||||
Location int `json:"location"`
|
||||
Functions []xccCardFunc `json:"functions"`
|
||||
}
|
||||
|
||||
type xccCardFunc struct {
|
||||
FunType int `json:"funType"`
|
||||
BusNo int `json:"generic_busNo"`
|
||||
DevNo int `json:"generic_devNo"`
|
||||
FunNo int `json:"generic_funNo"`
|
||||
VendorID int `json:"generic_vendorId"` // direct int
|
||||
DeviceID int `json:"generic_devId"` // direct int
|
||||
SlotDesignation string `json:"generic_slotDesignation"`
|
||||
}
|
||||
|
||||
type xccPSUDoc struct {
|
||||
Items []xccPSUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccPSUItem struct {
|
||||
Power []xccPSU `json:"power"`
|
||||
}
|
||||
|
||||
type xccPSU struct {
|
||||
Name int `json:"name"`
|
||||
Status string `json:"status"`
|
||||
RatedPower int `json:"rated_power"`
|
||||
PartNumber string `json:"part_number"`
|
||||
FRUNumber string `json:"fru_number"`
|
||||
SerialNumber string `json:"serial_number"`
|
||||
ManufID string `json:"manuf_id"`
|
||||
}
|
||||
|
||||
type xccFRUDoc struct {
|
||||
Items []xccFRUItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccFRUItem struct {
|
||||
BuiltinFRU []map[string]string `json:"builtin_fru_device"`
|
||||
}
|
||||
|
||||
type xccSensorDoc struct {
|
||||
Items []xccSensor `json:"items"`
|
||||
}
|
||||
|
||||
type xccSensor struct {
|
||||
Name string `json:"Sensor Name"`
|
||||
Value string `json:"Value"`
|
||||
Status string `json:"status"`
|
||||
Unit string `json:"unit"`
|
||||
}
|
||||
|
||||
type xccEventDoc struct {
|
||||
Items []xccEvent `json:"items"`
|
||||
}
|
||||
|
||||
type xccVolumeDoc struct {
|
||||
Items []xccVolumeItem `json:"items"`
|
||||
}
|
||||
|
||||
type xccVolumeItem struct {
|
||||
Volumes []xccVolume `json:"volumes"`
|
||||
TotalCapacityStr string `json:"totalCapacityStr"`
|
||||
}
|
||||
|
||||
type xccVolume struct {
|
||||
ID int `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Drives string `json:"drives"` // e.g. "M.2 Drive 0, M.2 Drive 1"
|
||||
RDLvlStr string `json:"rdlvlstr"` // e.g. "RAID 1"
|
||||
CapacityStr string `json:"capacityStr"` // e.g. "893.750 GiB"
|
||||
Status int `json:"status"`
|
||||
StatusStr string `json:"statusStr"` // e.g. "Optimal"
|
||||
}
|
||||
|
||||
type xccEvent struct {
|
||||
Severity string `json:"severity"` // "I", "W", "E", "C"
|
||||
Source string `json:"source"`
|
||||
Date string `json:"date"` // "2025-12-22T13:24:02.070"
|
||||
Index int `json:"index"`
|
||||
EventID string `json:"eventid"`
|
||||
CmnID string `json:"cmnid"`
|
||||
Message string `json:"message"`
|
||||
}
|
||||
|
||||
// --- Parsers ---
|
||||
|
||||
func parseBasicSysInfo(content []byte, result *models.AnalysisResult) {
|
||||
var doc xccBasicSysInfoDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return
|
||||
}
|
||||
item := doc.Items[0]
|
||||
|
||||
result.Hardware.BoardInfo = models.BoardInfo{
|
||||
ProductName: strings.TrimSpace(item.MachineTypeModel),
|
||||
SerialNumber: strings.TrimSpace(item.SerialNumber),
|
||||
UUID: strings.TrimSpace(item.UUID),
|
||||
}
|
||||
|
||||
if t, err := parseXCCTime(item.CurrentTime); err == nil {
|
||||
result.CollectedAt = t.UTC()
|
||||
}
|
||||
}
|
||||
|
||||
func parseFirmware(content []byte) []models.FirmwareInfo {
|
||||
var doc xccFirmwareDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.FirmwareInfo
|
||||
for _, fw := range doc.Items {
|
||||
if fi := xccFWEntryToModel(fw); fi != nil {
|
||||
out = append(out, *fi)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func xccFWEntryToModel(fw xccFWEntry) *models.FirmwareInfo {
|
||||
name := strings.TrimSpace(fw.TypeStr)
|
||||
version := strings.TrimSpace(fw.Version)
|
||||
if name == "" && version == "" {
|
||||
return nil
|
||||
}
|
||||
build := strings.TrimSpace(fw.Build)
|
||||
v := version
|
||||
if build != "" {
|
||||
v = version + " (" + build + ")"
|
||||
}
|
||||
return &models.FirmwareInfo{
|
||||
DeviceName: name,
|
||||
Version: v,
|
||||
BuildTime: strings.TrimSpace(fw.ReleaseDate),
|
||||
}
|
||||
}
|
||||
|
||||
func parseCPUs(content []byte) []models.CPU {
|
||||
var doc xccCPUDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.CPU
|
||||
for _, item := range doc.Items {
|
||||
for _, c := range item.Processors {
|
||||
cpu := models.CPU{
|
||||
Socket: c.Name,
|
||||
Model: strings.TrimSpace(c.Model),
|
||||
Cores: rawJSONToInt(c.Cores),
|
||||
Threads: rawJSONToInt(c.Threads),
|
||||
FrequencyMHz: parseMHz(c.ClockSpeed),
|
||||
L1CacheKB: parseKB(c.L1DataCache),
|
||||
L2CacheKB: parseKB(c.L2Cache),
|
||||
L3CacheKB: parseKB(c.L3Cache),
|
||||
Status: strings.TrimSpace(c.Status),
|
||||
SerialNumber: strings.TrimSpace(c.SerialNumber),
|
||||
}
|
||||
out = append(out, cpu)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseDIMMs(content []byte) ([]models.MemoryDIMM, []models.Event) {
|
||||
var doc xccDIMMDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
var out []models.MemoryDIMM
|
||||
var events []models.Event
|
||||
for _, item := range doc.Items {
|
||||
for _, m := range item.Memory {
|
||||
status := strings.TrimSpace(m.Status)
|
||||
present := !strings.EqualFold(status, "not present") &&
|
||||
!strings.EqualFold(status, "absent")
|
||||
// memory_capacity is in GB (int); convert to MB
|
||||
capacityGB := rawJSONToInt(m.Capacity)
|
||||
dimm := models.MemoryDIMM{
|
||||
Slot: strings.TrimSpace(m.Name),
|
||||
Location: strings.TrimSpace(m.Name),
|
||||
Present: present,
|
||||
SizeMB: capacityGB * 1024,
|
||||
Type: strings.TrimSpace(m.Type),
|
||||
MaxSpeedMHz: rawJSONToInt(m.MemSpeed),
|
||||
CurrentSpeedMHz: rawJSONToInt(m.ConfigSpeed),
|
||||
Manufacturer: strings.TrimSpace(m.Manufacturer),
|
||||
SerialNumber: strings.TrimSpace(m.SerialNumber),
|
||||
PartNumber: strings.TrimSpace(strings.TrimRight(m.PartNumber, " ")),
|
||||
Status: status,
|
||||
}
|
||||
out = append(out, dimm)
|
||||
if isUnqualifiedDIMM(status) {
|
||||
events = append(events, models.Event{
|
||||
Source: "Memory",
|
||||
SensorType: "Memory",
|
||||
SensorName: dimm.Slot,
|
||||
EventType: "DIMM Qualification",
|
||||
Severity: models.SeverityWarning,
|
||||
Description: status,
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
return out, events
|
||||
}
|
||||
|
||||
func parseDisks(content []byte) []models.Storage {
|
||||
var doc xccDiskDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.Storage
|
||||
for _, item := range doc.Items {
|
||||
for _, d := range item.Disks {
|
||||
sizeGB := parseCapacityToGB(d.CapacityStr)
|
||||
stateStr := strings.TrimSpace(d.StateStr)
|
||||
present := !strings.EqualFold(stateStr, "absent") &&
|
||||
!strings.EqualFold(stateStr, "not present")
|
||||
disk := models.Storage{
|
||||
Slot: fmt.Sprintf("%d", d.SlotNo),
|
||||
Type: strings.TrimSpace(d.Media),
|
||||
Model: strings.TrimSpace(d.ProductName),
|
||||
SizeGB: sizeGB,
|
||||
SerialNumber: strings.TrimSpace(d.SerialNo),
|
||||
Manufacturer: strings.TrimSpace(d.Manufacture),
|
||||
Firmware: strings.TrimSpace(d.FWVersion),
|
||||
Interface: strings.TrimSpace(d.Interface),
|
||||
Present: present,
|
||||
Status: stateStr,
|
||||
}
|
||||
if d.RemainLife >= 0 && d.RemainLife <= 100 {
|
||||
v := d.RemainLife
|
||||
disk.RemainingEndurancePct = &v
|
||||
}
|
||||
out = append(out, disk)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseVolumes(content []byte) []models.StorageVolume {
|
||||
var doc xccVolumeDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
|
||||
return nil
|
||||
}
|
||||
var out []models.StorageVolume
|
||||
for _, item := range doc.Items {
|
||||
for _, v := range item.Volumes {
|
||||
vol := models.StorageVolume{
|
||||
ID: fmt.Sprintf("%d", v.ID),
|
||||
Name: strings.TrimSpace(v.Name),
|
||||
RAIDLevel: strings.TrimSpace(v.RDLvlStr),
|
||||
SizeGB: parseCapacityToGB(v.CapacityStr),
|
||||
Status: strings.TrimSpace(v.StatusStr),
|
||||
}
|
||||
drives := strings.TrimSpace(v.Drives)
|
||||
if drives != "" {
|
||||
for _, d := range strings.Split(drives, ",") {
|
||||
vol.Drives = append(vol.Drives, strings.TrimSpace(d))
|
||||
}
|
||||
// M.2 NVMe volumes are managed by Intel VROC (VMD)
|
||||
if strings.Contains(strings.ToLower(drives), "m.2") {
|
||||
vol.Controller = "Intel VROC"
|
||||
}
|
||||
}
|
||||
out = append(out, vol)
|
||||
}
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parseCards(content []byte) []models.PCIeDevice {
|
||||
var doc xccCardDoc
|
||||
if err := json.Unmarshal(content, &doc); err != nil {
|
||||
return nil
|
||||
}
|
||||
var out []models.PCIeDevice
|
||||
for _, card := range doc.Items {
|
||||
slot := strings.TrimSpace(card.ConnectorLabel)
|
||||
if slot == "" {
|
||||
slot = fmt.Sprintf("%d", card.SlotNo)
|
||||
}
|
||||
dev := models.PCIeDevice{
|
||||
Slot: slot,
|
||||
Description: strings.TrimSpace(card.AdapterName),
|
||||
}
|
||||
if len(card.Functions) > 0 {
|
||||
fn := card.Functions[0]
|
||||
dev.BDF = fmt.Sprintf("%02x:%02x.%x", fn.BusNo, fn.DevNo, fn.FunNo)
|
||||
dev.VendorID = fn.VendorID
|
||||
dev.DeviceID = fn.DeviceID
|
||||
}
|
||||
out = append(out, dev)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
func parsePSUs(content []byte) []models.PSU {
    var doc xccPSUDoc
    if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
        return nil
    }
    var out []models.PSU
    for _, item := range doc.Items {
        for _, p := range item.Power {
            psu := models.PSU{
                Slot:         fmt.Sprintf("%d", p.Name),
                Present:      true,
                WattageW:     p.RatedPower,
                SerialNumber: strings.TrimSpace(p.SerialNumber),
                PartNumber:   strings.TrimSpace(p.PartNumber),
                Vendor:       strings.TrimSpace(p.ManufID),
                Status:       strings.TrimSpace(p.Status),
            }
            out = append(out, psu)
        }
    }
    return out
}

func parseFRU(content []byte) []models.FRUInfo {
    var doc xccFRUDoc
    if err := json.Unmarshal(content, &doc); err != nil || len(doc.Items) == 0 {
        return nil
    }
    var out []models.FRUInfo
    for _, item := range doc.Items {
        for _, entry := range item.BuiltinFRU {
            fru := models.FRUInfo{
                Description:  entry["FRU Device Description"],
                Manufacturer: entry["Board Mfg"],
                ProductName:  entry["Board Product"],
                SerialNumber: entry["Board Serial"],
                PartNumber:   entry["Board Part Number"],
                MfgDate:      entry["Board Mfg Date"],
            }
            if fru.ProductName == "" {
                fru.ProductName = entry["Product Name"]
            }
            if fru.SerialNumber == "" {
                fru.SerialNumber = entry["Product Serial"]
            }
            if fru.PartNumber == "" {
                fru.PartNumber = entry["Product Part Number"]
            }
            if fru.Description == "" && fru.ProductName == "" && fru.SerialNumber == "" {
                continue
            }
            out = append(out, fru)
        }
    }
    return out
}

func parseSensors(content []byte) []models.SensorReading {
    var doc xccSensorDoc
    if err := json.Unmarshal(content, &doc); err != nil {
        return nil
    }
    var out []models.SensorReading
    for _, s := range doc.Items {
        name := strings.TrimSpace(s.Name)
        if name == "" {
            continue
        }
        sr := models.SensorReading{
            Name:     name,
            RawValue: strings.TrimSpace(s.Value),
            Unit:     strings.TrimSpace(s.Unit),
            Status:   strings.TrimSpace(s.Status),
        }
        if v, err := strconv.ParseFloat(sr.RawValue, 64); err == nil {
            sr.Value = v
        }
        out = append(out, sr)
    }
    return out
}

func parseEvents(content []byte) []models.Event {
    var doc xccEventDoc
    if err := json.Unmarshal(content, &doc); err != nil {
        return nil
    }
    var out []models.Event
    for _, e := range doc.Items {
        ev := models.Event{
            ID:          e.EventID,
            Source:      strings.TrimSpace(e.Source),
            Description: strings.TrimSpace(e.Message),
            Severity:    xccSeverity(e.Severity, e.Message),
        }
        if t, err := parseXCCTime(e.Date); err == nil {
            ev.Timestamp = t.UTC()
        }
        out = append(out, ev)
    }
    return out
}

// --- Helpers ---

func xccSeverity(s, message string) models.Severity {
    if isUnqualifiedDIMM(message) {
        return models.SeverityWarning
    }
    switch strings.ToUpper(strings.TrimSpace(s)) {
    case "C":
        return models.SeverityCritical
    case "E":
        return models.SeverityCritical
    case "W":
        return models.SeverityWarning
    default:
        return models.SeverityInfo
    }
}

func isUnqualifiedDIMM(value string) bool {
    return strings.Contains(strings.ToLower(strings.TrimSpace(value)), "unqualified dimm")
}

var (
    unqualifiedDIMMSlotRE   = regexp.MustCompile(`(?i)\bunqualified dimm\s+(\d+)\b`)
    unqualifiedDIMMSerialRE = regexp.MustCompile(`(?i)\bserial number is\s+([A-Z0-9-]+)`)
)

func applyDIMMWarningsFromEvents(result *models.AnalysisResult) {
    if result == nil || result.Hardware == nil || len(result.Hardware.Memory) == 0 || len(result.Events) == 0 {
        return
    }

    for _, ev := range result.Events {
        if !isUnqualifiedDIMM(ev.Description) {
            continue
        }
        idx := findDIMMIndexForUnqualifiedEvent(result.Hardware.Memory, ev.Description)
        if idx < 0 {
            continue
        }

        dimm := &result.Hardware.Memory[idx]
        dimm.Status = "Warning"
        dimm.ErrorDescription = ev.Description
        if !ev.Timestamp.IsZero() {
            ts := ev.Timestamp.UTC()
            dimm.StatusChangedAt = &ts
            dimm.StatusCheckedAt = &ts
        }
        appendDIMMStatusHistory(dimm, ev)
    }
}

func findDIMMIndexForUnqualifiedEvent(memory []models.MemoryDIMM, description string) int {
    slot := extractUnqualifiedDIMMSlot(description)
    serial := normalizeUnqualifiedDIMMSerial(extractUnqualifiedDIMMSerial(description))

    for i := range memory {
        if slot != "" && strings.EqualFold(strings.TrimSpace(memory[i].Slot), slot) {
            return i
        }
    }
    for i := range memory {
        if serial != "" && normalizeUnqualifiedDIMMSerial(memory[i].SerialNumber) == serial {
            return i
        }
    }
    return -1
}

func extractUnqualifiedDIMMSlot(description string) string {
    m := unqualifiedDIMMSlotRE.FindStringSubmatch(description)
    if len(m) < 2 {
        return ""
    }
    return "DIMM " + strings.TrimSpace(m[1])
}

func extractUnqualifiedDIMMSerial(description string) string {
    m := unqualifiedDIMMSerialRE.FindStringSubmatch(description)
    if len(m) < 2 {
        return ""
    }
    return strings.TrimSpace(m[1])
}

func normalizeUnqualifiedDIMMSerial(serial string) string {
    serial = strings.ToUpper(strings.TrimSpace(serial))
    if idx := strings.Index(serial, "-"); idx >= 0 {
        serial = serial[:idx]
    }
    return serial
}
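The two regular expressions above carry the whole event-to-DIMM correlation. As a quick illustration, a minimal sketch of a unit test for the extraction helpers, assuming it lives next to the other tests in this package; the message text is the one exercised by the export test later in this diff:

```go
package lenovo_xcc

import "testing"

// Sketch only: checks the slot/serial extraction helpers against the event
// message used by TestApplyDIMMWarningsFromEvents_UpdatesDIMMStatusForExport.
func TestExtractUnqualifiedDIMMFields(t *testing.T) {
    const msg = "Unqualified DIMM 3 has been detected, the DIMM serial number is 80CE042328460C5D88-V20."

    // The slot regex captures the number and the helper prefixes it with "DIMM ".
    if got := extractUnqualifiedDIMMSlot(msg); got != "DIMM 3" {
        t.Errorf("slot = %q, want %q", got, "DIMM 3")
    }
    // The serial regex captures the raw token; normalization drops the "-V20" suffix.
    serial := normalizeUnqualifiedDIMMSerial(extractUnqualifiedDIMMSerial(msg))
    if serial != "80CE042328460C5D88" {
        t.Errorf("serial = %q, want %q", serial, "80CE042328460C5D88")
    }
}
```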

func appendDIMMStatusHistory(dimm *models.MemoryDIMM, ev models.Event) {
    if dimm == nil || ev.Timestamp.IsZero() {
        return
    }
    for _, item := range dimm.StatusHistory {
        if strings.EqualFold(strings.TrimSpace(item.Status), "Warning") &&
            item.ChangedAt.Equal(ev.Timestamp.UTC()) &&
            strings.TrimSpace(item.Details) == strings.TrimSpace(ev.Description) {
            return
        }
    }
    dimm.StatusHistory = append(dimm.StatusHistory, models.StatusHistoryEntry{
        Status:    "Warning",
        ChangedAt: ev.Timestamp.UTC(),
        Details:   ev.Description,
    })
}

func parseXCCTime(s string) (time.Time, error) {
    s = strings.TrimSpace(s)
    formats := []string{
        "2006-01-02T15:04:05.000",
        "2006-01-02T15:04:05",
        "2006-01-02 15:04:05",
    }
    for _, f := range formats {
        if t, err := time.Parse(f, s); err == nil {
            return t, nil
        }
    }
    return time.Time{}, fmt.Errorf("unparseable time: %q", s)
}

// parseMHz parses "4100 MHz" → 4100
func parseMHz(s string) int {
    s = strings.TrimSpace(s)
    parts := strings.Fields(s)
    if len(parts) == 0 {
        return 0
    }
    v, _ := strconv.Atoi(parts[0])
    return v
}

// parseKB parses "384 KB" → 384
func parseKB(s string) int {
    s = strings.TrimSpace(s)
    parts := strings.Fields(s)
    if len(parts) == 0 {
        return 0
    }
    v, _ := strconv.Atoi(parts[0])
    return v
}

// parseMB parses "32768 MB" → 32768
func parseMB(s string) int {
    return parseKB(s)
}

// parseMTs parses "4800 MT/s" → 4800 (treated as MHz equivalent)
func parseMTs(s string) int {
    return parseKB(s)
}

// parseCapacityToGB parses "3.20 TB" or "480 GB" → GB integer
func parseCapacityToGB(s string) int {
    s = strings.TrimSpace(s)
    parts := strings.Fields(s)
    if len(parts) < 2 {
        return 0
    }
    v, err := strconv.ParseFloat(parts[0], 64)
    if err != nil {
        return 0
    }
    switch strings.ToUpper(parts[1]) {
    case "TB":
        return int(v * 1000)
    case "TIB":
        return int(v * 1099.511627776) // 1 TiB = 1099.511... GB
    case "GB":
        return int(v)
    case "GIB":
        return int(v * 1.073741824) // 1 GiB = 1.073741824 GB
    case "MB":
        return int(v / 1024)
    }
    return int(v)
}
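parseCapacityToGB mixes decimal units (TB, GB) with binary ones (TiB, GiB) and truncates to whole GB. A minimal table-driven sketch of the expected conversions, assuming it sits alongside the other tests in this package; the GiB case is consistent with the VROC volume size asserted in parser_test.go below:

```go
package lenovo_xcc

import "testing"

// Sketch only: expected behaviour of parseCapacityToGB for the suffixes it handles.
func TestParseCapacityToGBUnits(t *testing.T) {
    cases := map[string]int{
        "3.20 TB":     3200, // decimal TB -> GB
        "480 GB":      480,
        "1.00 TiB":    1099, // binary TiB -> decimal GB, truncated
        "893.750 GiB": 959,  // same value the Intel VROC test checks against
        "garbage":     0,    // fewer than two fields -> 0
    }
    for in, want := range cases {
        if got := parseCapacityToGB(in); got != want {
            t.Errorf("parseCapacityToGB(%q) = %d, want %d", in, got, want)
        }
    }
}
```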

// rawJSONToInt parses a json.RawMessage that may be an int or a quoted string → int
func rawJSONToInt(raw json.RawMessage) int {
    if len(raw) == 0 {
        return 0
    }
    // try direct int
    var n int
    if err := json.Unmarshal(raw, &n); err == nil {
        return n
    }
    // try string
    var s string
    if err := json.Unmarshal(raw, &s); err == nil {
        v, _ := strconv.Atoi(strings.TrimSpace(s))
        return v
    }
    return 0
}

// parseHexID parses "0x15b3" → 5555
func parseHexID(s string) int {
    s = strings.TrimSpace(strings.ToLower(s))
    s = strings.TrimPrefix(s, "0x")
    v, _ := strconv.ParseInt(s, 16, 32)
    return int(v)
}
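These two lenient helpers absorb the XCC JSON quirks: numeric fields may arrive bare or quoted, and PCI IDs arrive as hex strings (0x15b3 is 5555 decimal, the Mellanox PCI vendor ID seen on ConnectX adapters). A small sketch, assuming it lives in this package's test file:

```go
package lenovo_xcc

import (
    "encoding/json"
    "testing"
)

// Sketch only: the lenient numeric helpers accept both JSON forms and hex strings.
func TestNumericHelpers(t *testing.T) {
    if got := parseHexID("0x15b3"); got != 5555 {
        t.Errorf("parseHexID(0x15b3) = %d, want 5555", got)
    }
    if got := rawJSONToInt(json.RawMessage(`7`)); got != 7 {
        t.Errorf("rawJSONToInt(7) = %d, want 7", got)
    }
    if got := rawJSONToInt(json.RawMessage(`"12"`)); got != 12 {
        t.Errorf(`rawJSONToInt("12") = %d, want 12`, got)
    }
}
```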
internal/parser/vendors/lenovo_xcc/parser_test.go (vendored; new file, 366 lines)
@@ -0,0 +1,366 @@
package lenovo_xcc

import (
    "testing"
    "time"

    "git.mchus.pro/mchus/logpile/internal/models"
    "git.mchus.pro/mchus/logpile/internal/parser"
)

const exampleArchive = "/Users/mchusavitin/Documents/git/logpile/example/7D76CTO1WW_JF0002KT_xcc_mini-log_20260413-122150.zip"

func TestDetect_LenovoXCCMiniLog(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    score := p.Detect(files)
    if score < 80 {
        t.Errorf("expected Detect score >= 80 for XCC mini-log archive, got %d", score)
    }
}

func TestParse_LenovoXCCMiniLog_BasicSysInfo(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, err := p.Parse(files)
    if err != nil {
        t.Fatalf("Parse returned error: %v", err)
    }
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil result or hardware")
    }

    hw := result.Hardware
    if hw.BoardInfo.SerialNumber == "" {
        t.Error("BoardInfo.SerialNumber is empty")
    }
    if hw.BoardInfo.ProductName == "" {
        t.Error("BoardInfo.ProductName is empty")
    }
    t.Logf("BoardInfo: serial=%s model=%s uuid=%s", hw.BoardInfo.SerialNumber, hw.BoardInfo.ProductName, hw.BoardInfo.UUID)
}

func TestParse_LenovoXCCMiniLog_CPUs(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Hardware.CPUs) == 0 {
        t.Error("expected at least one CPU, got none")
    }
    for i, cpu := range result.Hardware.CPUs {
        t.Logf("CPU[%d]: socket=%d model=%q cores=%d threads=%d freq=%dMHz", i, cpu.Socket, cpu.Model, cpu.Cores, cpu.Threads, cpu.FrequencyMHz)
    }
}

func TestParse_LenovoXCCMiniLog_Memory(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Hardware.Memory) == 0 {
        t.Error("expected memory DIMMs, got none")
    }
    t.Logf("Memory: %d DIMMs", len(result.Hardware.Memory))
    for i, m := range result.Hardware.Memory {
        t.Logf("DIMM[%d]: slot=%s present=%v size=%dMB sn=%s", i, m.Slot, m.Present, m.SizeMB, m.SerialNumber)
    }
}

func TestParse_LenovoXCCMiniLog_Storage(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    t.Logf("Storage: %d disks", len(result.Hardware.Storage))
    for i, s := range result.Hardware.Storage {
        t.Logf("Disk[%d]: slot=%s model=%q size=%dGB sn=%s", i, s.Slot, s.Model, s.SizeGB, s.SerialNumber)
    }
}

func TestParse_LenovoXCCMiniLog_PCIeCards(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    t.Logf("PCIe cards: %d", len(result.Hardware.PCIeDevices))
    for i, c := range result.Hardware.PCIeDevices {
        t.Logf("Card[%d]: slot=%s desc=%q bdf=%s", i, c.Slot, c.Description, c.BDF)
    }
}

func TestParse_LenovoXCCMiniLog_PSUs(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Hardware.PowerSupply) == 0 {
        t.Error("expected PSUs, got none")
    }
    for i, p := range result.Hardware.PowerSupply {
        t.Logf("PSU[%d]: slot=%s wattage=%dW status=%s sn=%s", i, p.Slot, p.WattageW, p.Status, p.SerialNumber)
    }
}

func TestParse_LenovoXCCMiniLog_Sensors(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Sensors) == 0 {
        t.Error("expected sensors, got none")
    }
    t.Logf("Sensors: %d", len(result.Sensors))
}

func TestParse_LenovoXCCMiniLog_Events(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Events) == 0 {
        t.Error("expected events, got none")
    }
    t.Logf("Events: %d", len(result.Events))
    for i, e := range result.Events {
        if i >= 5 {
            break
        }
        t.Logf("Event[%d]: severity=%s ts=%s desc=%q", i, e.Severity, e.Timestamp.Format("2006-01-02T15:04:05"), e.Description)
    }
}

func TestParse_LenovoXCCMiniLog_FRU(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil {
        t.Fatal("Parse returned nil")
    }

    t.Logf("FRU: %d entries", len(result.FRU))
    for i, f := range result.FRU {
        t.Logf("FRU[%d]: desc=%q product=%q serial=%q", i, f.Description, f.ProductName, f.SerialNumber)
    }
}

func TestParse_LenovoXCCMiniLog_Firmware(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Hardware.Firmware) == 0 {
        t.Error("expected firmware entries, got none")
    }
    for i, f := range result.Hardware.Firmware {
        t.Logf("FW[%d]: name=%q version=%q buildtime=%q", i, f.DeviceName, f.Version, f.BuildTime)
    }
}

func TestParse_LenovoXCCMiniLog_VROCVolumes(t *testing.T) {
    files, err := parser.ExtractArchive(exampleArchive)
    if err != nil {
        t.Skipf("example archive not available: %v", err)
    }

    p := &Parser{}
    result, _ := p.Parse(files)
    if result == nil || result.Hardware == nil {
        t.Fatal("Parse returned nil")
    }

    if len(result.Hardware.Volumes) == 0 {
        t.Error("expected at least one VROC volume, got none")
    }
    for i, v := range result.Hardware.Volumes {
        t.Logf("Volume[%d]: id=%s controller=%q raid=%s size=%dGB status=%s drives=%v",
            i, v.ID, v.Controller, v.RAIDLevel, v.SizeGB, v.Status, v.Drives)
        if v.RAIDLevel == "" {
            t.Errorf("Volume[%d]: RAIDLevel is empty", i)
        }
        if v.Status == "" {
            t.Errorf("Volume[%d]: Status is empty", i)
        }
    }
}

func TestParseVolumes_IntelVROC(t *testing.T) {
    content := []byte(`{
        "identifier": "storage.id",
        "items": [{
            "volumes": [{
                "id": 1,
                "name": "",
                "drives": "M.2 Drive 0, M.2 Drive 1",
                "rdlvlstr": "RAID 1",
                "capacityStr": "893.750 GiB",
                "status": 3,
                "statusStr": "Optimal"
            }],
            "totalCapacityStr": "893.750 GiB"
        }]
    }`)

    vols := parseVolumes(content)
    if len(vols) != 1 {
        t.Fatalf("expected 1 volume, got %d", len(vols))
    }
    v := vols[0]
    if v.ID != "1" {
        t.Errorf("expected ID=1, got %q", v.ID)
    }
    if v.RAIDLevel != "RAID 1" {
        t.Errorf("expected RAIDLevel=RAID 1, got %q", v.RAIDLevel)
    }
    if v.Status != "Optimal" {
        t.Errorf("expected Status=Optimal, got %q", v.Status)
    }
    if v.Controller != "Intel VROC" {
        t.Errorf("expected Controller=Intel VROC, got %q", v.Controller)
    }
    if len(v.Drives) != 2 {
        t.Errorf("expected 2 drives, got %d: %v", len(v.Drives), v.Drives)
    }
    if v.SizeGB < 900 || v.SizeGB > 1000 {
        t.Errorf("expected SizeGB ~960, got %d", v.SizeGB)
    }
}

func TestParseDIMMs_UnqualifiedDIMMAddsWarningEvent(t *testing.T) {
    content := []byte(`{
        "items": [{
            "memory": [{
                "memory_name": "DIMM A1",
                "memory_status": "Unqualified DIMM",
                "memory_type": "DDR5",
                "memory_capacity": 32
            }]
        }]
    }`)

    memory, events := parseDIMMs(content)
    if len(memory) != 1 {
        t.Fatalf("expected 1 DIMM, got %d", len(memory))
    }
    if len(events) != 1 {
        t.Fatalf("expected 1 warning event, got %d", len(events))
    }
    if events[0].Severity != models.SeverityWarning {
        t.Fatalf("expected warning severity, got %q", events[0].Severity)
    }
    if events[0].SensorName != "DIMM A1" {
        t.Fatalf("unexpected sensor name: %q", events[0].SensorName)
    }
}

func TestSeverity_UnqualifiedDIMMMessageBecomesWarning(t *testing.T) {
    if got := xccSeverity("I", "System found Unqualified DIMM in slot DIMM A1"); got != models.SeverityWarning {
        t.Fatalf("expected warning severity, got %q", got)
    }
}

func TestApplyDIMMWarningsFromEvents_UpdatesDIMMStatusForExport(t *testing.T) {
    result := &models.AnalysisResult{
        Events: []models.Event{
            {
                Timestamp:   time.Date(2026, 4, 13, 11, 37, 38, 0, time.UTC),
                Severity:    models.SeverityWarning,
                Description: "Unqualified DIMM 3 has been detected, the DIMM serial number is 80CE042328460C5D88-V20.",
            },
        },
        Hardware: &models.HardwareConfig{
            Memory: []models.MemoryDIMM{
                {
                    Slot:         "DIMM 3",
                    Present:      true,
                    SerialNumber: "80CE042328460C5D88",
                    Status:       "Normal",
                },
            },
        },
    }

    applyDIMMWarningsFromEvents(result)

    dimm := result.Hardware.Memory[0]
    if dimm.Status != "Warning" {
        t.Fatalf("expected DIMM status Warning, got %q", dimm.Status)
    }
    if dimm.ErrorDescription == "" || dimm.ErrorDescription != result.Events[0].Description {
        t.Fatalf("expected DIMM error description to be populated, got %q", dimm.ErrorDescription)
    }
    if dimm.StatusChangedAt == nil || !dimm.StatusChangedAt.Equal(result.Events[0].Timestamp) {
        t.Fatalf("expected status_changed_at from event timestamp, got %#v", dimm.StatusChangedAt)
    }
    if len(dimm.StatusHistory) != 1 || dimm.StatusHistory[0].Status != "Warning" {
        t.Fatalf("expected warning status history entry, got %#v", dimm.StatusHistory)
    }
}
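Note that the archive-driven tests above skip themselves when the hard-coded `exampleArchive` path (an absolute path on the developer's machine) is not present, so on other machines only the last four tests, which build their input inline, actually run. A typical local invocation would be something like `go test ./internal/parser/vendors/lenovo_xcc/ -v` from the repository root.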
internal/parser/vendors/vendors.go (vendored; 1 line changed)
@@ -14,6 +14,7 @@ import (
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/unraid"
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xfusion"
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/xigmanas"
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/lenovo_xcc"

    // Generic fallback parser (must be last for lowest priority)
    _ "git.mchus.pro/mchus/logpile/internal/parser/vendors/generic"
@@ -50,11 +50,20 @@ func (s *Server) handleIndex(w http.ResponseWriter, r *http.Request) {

    w.Header().Set("Content-Type", "text/html; charset=utf-8")
    tmpl.Execute(w, map[string]string{
        "AppVersion": s.config.AppVersion,
        "AppCommit":  s.config.AppCommit,
        "AppVersion":   normalizeDisplayVersion(s.config.AppVersion),
        "AppCommit":    s.config.AppCommit,
        "ChartVersion": normalizeDisplayVersion(s.config.ChartVersion),
    })
}

func normalizeDisplayVersion(v string) string {
    v = strings.TrimSpace(v)
    if v == "" {
        return ""
    }
    return strings.TrimPrefix(v, "v")
}

func (s *Server) handleChartCurrent(w http.ResponseWriter, r *http.Request) {
    result := s.GetResult()
    title := chartTitle(result)
@@ -2045,14 +2054,14 @@ func applyCollectSourceMetadata(result *models.AnalysisResult, req CollectRequest

func toCollectorRequest(req CollectRequest) collector.Request {
    return collector.Request{
        Host:     req.Host,
        Protocol: req.Protocol,
        Port:     req.Port,
        Username: req.Username,
        AuthType: req.AuthType,
        Password: req.Password,
        Token:    req.Token,
        TLSMode:  req.TLSMode,
        Host:          req.Host,
        Protocol:      req.Protocol,
        Port:          req.Port,
        Username:      req.Username,
        AuthType:      req.AuthType,
        Password:      req.Password,
        Token:         req.Token,
        TLSMode:       req.TLSMode,
        DebugPayloads: req.DebugPayloads,
    }
}

@@ -19,10 +19,11 @@ import (
var WebFS embed.FS

type Config struct {
    Port        int
    PreloadFile string
    AppVersion  string
    AppCommit   string
    Port         int
    PreloadFile  string
    AppVersion   string
    AppCommit    string
    ChartVersion string
}

type Server struct {

@@ -128,6 +128,7 @@ echo ""
# Show next steps
echo -e "${YELLOW}Next steps:${NC}"
echo " 1. Create git tag:"
echo " # LOGPile release tags use vN.M, for example: v1.12"
echo " git tag -a ${VERSION} -m \"Release ${VERSION}\""
echo ""
echo " 2. Push tag to remote:"
File diff suppressed because it is too large
web/static/js/app.js (1108 lines changed; diff suppressed because it is too large)
@@ -1,5 +1,5 @@
<!DOCTYPE html>
<html lang="ru">
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
@@ -7,57 +7,63 @@
<link rel="stylesheet" href="/static/css/style.css">
</head>
<body>
<header>
<div class="app-header-row">
<div class="app-header-brand">
<h1>LOGPile <span class="header-domain">mchus.pro</span></h1>
<p>Анализатор диагностических данных BMC/IPMI</p>
</div>
<div id="header-log-meta" class="header-log-meta hidden">
<div class="header-actions">
<button id="clear-btn" class="hidden" onclick="clearData()">Очистить данные</button>
<button id="header-raw-btn" class="hidden" onclick="exportData('json')">Export Raw Data</button>
<button id="header-reanimator-btn" class="hidden" onclick="exportData('reanimator')">Экспорт Reanimator</button>
<button id="restart-btn" onclick="restartApp()">Перезапуск</button>
<button id="exit-btn" onclick="exitApp()">Выход</button>
</div>
<header class="page-header">
<div class="page-header-brand">
<p class="page-eyebrow">Diagnostic Workbench</p>
<h1>LOGPile</h1>
<p class="page-subtitle">BMC diagnostic data analyzer</p>
</div>
<div id="header-log-meta" class="header-log-meta hidden">
<div class="header-actions">
<button id="clear-btn" class="header-action hidden" onclick="clearData()">Clear Data</button>
<button id="header-raw-btn" class="header-action hidden" onclick="exportData('json')">Raw Data</button>
<button id="header-reanimator-btn" class="header-action hidden" onclick="exportData('reanimator')">Reanimator</button>
<button id="restart-btn" class="header-action" onclick="restartApp()">Restart</button>
<button id="exit-btn" class="header-action" onclick="exitApp()">Exit</button>
</div>
</div>
</header>

<main>
<section id="upload-section">
<div class="source-switch" role="tablist" aria-label="Источник данных">
<button type="button" class="source-switch-btn active" data-source-type="archive">Архив</button>
<main class="page-main">
<section id="upload-section" class="control-deck">
<div class="source-switch" role="tablist" aria-label="Data source">
<button type="button" class="source-switch-btn active" data-source-type="archive">Archive</button>
<button type="button" class="source-switch-btn" data-source-type="api">API</button>
<button type="button" class="source-switch-btn" data-source-type="convert">Convert</button>
</div>

<div id="archive-source-content">
<div class="upload-area" id="drop-zone">
<p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
<div id="archive-source-content" class="surface-panel upload-panel">
<h2>Open Archive</h2>
<p>Upload a support archive, plain log, or raw JSON snapshot to open the hardware report.</p>
<div class="upload-area upload-dropzone" id="drop-zone">
<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
<p class="hint">Поддерживаемые форматы: ahs, tar.gz, tar, tgz, sds, zip, json, txt, log</p>
<span class="upload-kicker">Archive Import</span>
<strong>Drop a file here</strong>
<span class="upload-copy">LOGPile will parse it and open the report immediately.</span>
<div class="upload-actions">
<button type="button" onclick="document.getElementById('file-input').click()">Select File</button>
</div>
<p class="hint">Supported formats: `.ahs`, `.tar.gz`, `.tar`, `.tgz`, `.sds`, `.zip`, `.json`, `.txt`, `.log`</p>
</div>
<div id="upload-status"></div>
<div id="parsers-info" class="parsers-info"></div>
</div>

<div id="api-source-content" class="api-placeholder hidden">
<div id="api-source-content" class="surface-panel upload-panel hidden">
<h2>BMC API</h2>
<p>Validate access and start live collection through the production Redfish pipeline.</p>
<form id="api-connect-form" novalidate>
<h3>Подключение к BMC API</h3>
<div id="api-form-errors" class="form-errors hidden"></div>

<div class="api-form-grid">
<label class="api-form-field" for="api-host">
<span>Host</span>
<input id="api-host" name="host" type="text" placeholder="10.0.0.10 или bmc.example.local">
<input id="api-host" name="host" type="text" placeholder="10.0.0.10 or bmc.example.local">
<span class="field-error" data-error-for="host"></span>
</label>

<label class="api-form-field" for="api-port">
<span>Порт</span>
<span>Port</span>
<input id="api-port" name="port" type="number" min="1" max="65535" value="443" placeholder="443">
<span class="field-error" data-error-for="port"></span>
</label>
@@ -69,52 +75,52 @@
</label>

<label class="api-form-field" id="api-password-field" for="api-password">
<span>Пароль</span>
<span>Password</span>
<input id="api-password" name="password" type="password" autocomplete="current-password">
<span class="field-error" data-error-for="password"></span>
</label>
</div>

<div class="api-form-actions">
<button id="api-connect-btn" type="button">Подключиться</button>
<button id="api-connect-btn" type="button">Connect</button>
</div>
<div id="api-connect-status" class="api-connect-status"></div>
<div id="api-probe-options" class="api-probe-options hidden">
<div id="api-host-off-warning" class="api-host-off-warning hidden">
⚠ Host выключен — данные инвентаря могут быть неполными
⚠ Host is powered off. Inventory data may be incomplete.
</div>
<label class="api-form-checkbox" for="api-debug-payloads">
<input id="api-debug-payloads" name="debug_payloads" type="checkbox">
<span>Сбор расширенных метрик для отладки</span>
<span>Collect extended diagnostics</span>
</label>
<div class="api-form-actions">
<button id="api-collect-btn" type="submit">Собрать</button>
<button id="api-collect-btn" type="submit">Collect</button>
</div>
</div>
</form>

<section id="api-job-status" class="job-status hidden" aria-live="polite">
<div class="job-status-header">
<h4>Статус задачи сбора</h4>
<h4>Collection Job Status</h4>
<div class="job-status-actions">
<button id="skip-hung-btn" type="button" class="hidden" title="Прервать зависшие запросы и перейти к анализу собранных данных">Пропустить зависшие</button>
<button id="cancel-job-btn" type="button">Отменить</button>
<button id="skip-hung-btn" type="button" class="hidden" title="Abort hung requests and continue with analysis of collected data">Skip Hung Requests</button>
<button id="cancel-job-btn" type="button">Cancel</button>
</div>
</div>
<div class="job-status-meta">
<div><span class="meta-label">jobId:</span> <code id="job-id-value">-</code></div>
<div>
<span class="meta-label">Статус:</span>
<span class="meta-label">Status:</span>
<span id="job-status-value" class="job-status-badge">Queued</span>
</div>
<div><span class="meta-label">Этап:</span> <span id="job-progress-value">Сбор данных...</span></div>
<div><span class="meta-label">Stage:</span> <span id="job-progress-value">Collecting data...</span></div>
<div><span class="meta-label">ETA:</span> <span id="job-eta-value">-</span></div>
</div>
<div class="job-progress" aria-label="Прогресс задачи">
<div class="job-progress" aria-label="Job progress">
<div id="job-progress-bar" class="job-progress-bar" style="width: 0%">0%</div>
</div>
<div id="job-active-modules" class="job-active-modules hidden">
<p class="meta-label">Активные модули:</p>
<p class="meta-label">Active modules:</p>
<div id="job-active-modules-list" class="job-module-chips"></div>
</div>
<div id="job-debug-info" class="job-debug-info hidden">
@@ -123,23 +129,23 @@
<div id="job-phase-telemetry" class="job-phase-telemetry"></div>
</div>
<div class="job-status-logs">
<p class="meta-label">Журнал шагов:</p>
<p class="meta-label">Step log:</p>
<ul id="job-logs-list"></ul>
</div>
</section>
</div>

<div id="convert-source-content" class="api-placeholder hidden">
<h3>Пакетная выгрузка Reanimator</h3>
<p>Выберите папку с файлами поддерживаемого типа. Для каждого файла будет создан отдельный экспорт Reanimator.</p>
<div id="convert-source-content" class="surface-panel upload-panel hidden">
<h2>Batch Convert</h2>
<p>Select a folder with supported files. A separate Reanimator export will be produced for each file.</p>
<div class="api-form-actions">
<input type="file" id="convert-folder-input" webkitdirectory directory multiple hidden>
<button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Выбрать папку</button>
<button id="convert-run-btn" type="button">Конвертировать в Reanimator</button>
<button id="convert-folder-btn" type="button" onclick="document.getElementById('convert-folder-input').click()">Choose Folder</button>
<button id="convert-run-btn" type="button">Convert to Reanimator</button>
</div>
<div id="convert-progress" class="convert-progress hidden" aria-live="polite">
<div class="convert-progress-meta">
<span id="convert-progress-label">Подготовка...</span>
<span id="convert-progress-label">Preparing...</span>
<span id="convert-progress-value">0%</span>
</div>
<div class="convert-progress-track">
@@ -152,12 +158,12 @@
</section>

<section id="data-section" class="hidden">
<section class="result-panel">
<section class="viewer-panel">
<div class="audit-viewer-shell">
<iframe
id="audit-viewer-frame"
class="audit-viewer-frame"
title="Reanimator chart viewer"
title="Hardware report"
loading="eager"
scrolling="no"
referrerpolicy="same-origin">
@@ -167,11 +173,9 @@
</section>
</main>

<footer>
<div class="footer-buttons">
</div>
<footer class="page-footer">
<div class="footer-info">
<p>Автор: <a href="https://mchus.pro" target="_blank">mchus.pro</a> | <a href="https://git.mchus.pro/mchus/logpile" target="_blank">Git Repository</a>{{if .AppVersion}} | v{{.AppVersion}}{{end}}</p>
<p>{{if .AppVersion}}LOGPile {{.AppVersion}}{{end}}{{if and .AppVersion .ChartVersion}} · {{end}}{{if .ChartVersion}}Chart {{.ChartVersion}}{{end}}</p>
</div>
</footer>