feat: HPE iLO support — profile, post-probe hang fix, replay parser fixes, AHS parser
- Add HPE iLO Redfish profile (priority 20): matches on manufacturer/OEM/iLO signals, adds SmartStorage/SmartStorageConfig to critical paths, sets a realistic ETA baseline and rate policy for iLO's known slowness
- Fix post-probe hang on HPE iLO: skip numeric probing of collections where Members@odata.count == len(Members); add 4s postProbeClient timeout as a safety net
- Exclude /WorkloadPerformanceAdvisor from crawl paths
- Fix replay parser: skip absent CPU sockets, absent DIMM slots, absent drive bays
- Filter N/A version entries from firmware inventory
- Remove drive firmware from the general firmware list (already in Storage[].Firmware)
- Add HPE AHS (.ahs) archive parser with hybrid SMBIOS/Redfish extraction

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@@ -30,6 +30,7 @@ All modes converge on the same normalized hardware model and exporter pipeline.
- Reanimator Easy Bee support bundles
- H3C SDS G5/G6
- Inspur / Kaytus
- HPE iLO AHS
- NVIDIA HGX Field Diagnostics
- NVIDIA Bug Report
- Unraid
@@ -53,6 +53,7 @@ When `vendor_id` and `device_id` are known but the model name is missing or gene
| `easy_bee` | `bee-support-*.tar.gz` | Imports embedded `export/bee-audit.json` snapshot from reanimator-easy-bee bundles |
| `h3c_g5` | H3C SDS G5 bundles | INI/XML/CSV-driven hardware and event parsing |
| `h3c_g6` | H3C SDS G6 bundles | Similar flow with G6-specific files |
| `hpe_ilo_ahs` | HPE iLO Active Health System (`.ahs`) | Proprietary `ABJR` container with gzip-compressed `zbb` members; parser combines SMBIOS-style inventory strings and embedded Redfish storage JSON |
| `inspur` | onekeylog archives | FRU/SDR plus optional Redis enrichment |
| `nvidia` | HGX Field Diagnostics | GPU- and fabric-heavy diagnostic input |
| `nvidia_bug_report` | `nvidia-bug-report-*.log.gz` | dmidecode, lspci, NVIDIA driver sections |
@@ -121,6 +122,32 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

---
### HPE iLO AHS (`hpe_ilo_ahs`)

**Status:** Ready (v1.0.0). Tested on an HPE ProLiant Gen11 `.ahs` export from iLO 6.

**Archive format:** `.ahs` single-file Active Health System export.

**Detection:** Single-file input with an `ABJR` container header and HPE AHS member names such as `CUST_INFO.DAT`, `*.zbb`, `ilo_boot_support.zbb`.
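For illustration, a detection check along these lines can be sketched as follows. This is a simplified stand-in, not the production `Detect` implementation; the function name `looksLikeAHS` and the scoring values are hypothetical:

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// looksLikeAHS sketches the detection idea: require the ABJR magic at
// offset 0 of a single .ahs file, and award extra confidence when a known
// member name such as CUST_INFO.DAT appears inside the container.
func looksLikeAHS(name string, content []byte) int {
	if !strings.HasSuffix(strings.ToLower(name), ".ahs") {
		return 0
	}
	if !bytes.HasPrefix(content, []byte("ABJR")) {
		return 0
	}
	score := 60
	if bytes.Contains(content, []byte("CUST_INFO.DAT")) {
		score += 30
	}
	return score
}

func main() {
	fmt.Println(looksLikeAHS("HPE_CZ2D1X0GS3_20260330.ahs", []byte("ABJRxxxxCUST_INFO.DAT")))
}
```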

**Extracted data (current):**

- System board identity (manufacturer, model, serial, part number)
- iLO / System ROM / SPS top-level firmware
- CPU inventory (model-level)
- Memory DIMM inventory for populated slots
- PSU inventory
- PCIe / OCP NIC inventory from SMBIOS-style slot records
- Storage controller and physical drives from embedded Redfish JSON inside `zbb` members
- Basic iLO event log entries with timestamps when present

**Implementation note:** The format is proprietary. Parser support is intentionally hybrid: container parsing (`ABJR` + gzip) plus structured extraction from embedded Redfish objects and printable SMBIOS/FRU payloads. This is sufficient for inventory-grade parsing without decoding the entire internal `zbb` schema.

---

### Generic text fallback (`generic`)

**Status:** Ready (v1.0.0).

@@ -141,6 +168,7 @@ with content markers (e.g. `Unraid kernel build`, parity data markers).

|--------|----|--------|-----------|
| Dell TSR | `dell` | Ready | TSR nested zip archives |
| Reanimator Easy Bee | `easy_bee` | Ready | `bee-support-*.tar.gz` support bundles |
| HPE iLO AHS | `hpe_ilo_ahs` | Ready | iLO 6 `.ahs` exports |
| Inspur / Kaytus | `inspur` | Ready | KR4268X2 onekeylog |
| NVIDIA HGX Field Diag | `nvidia` | Ready | Various HGX servers |
| NVIDIA Bug Report | `nvidia_bug_report` | Ready | H100 systems |
@@ -258,6 +258,9 @@ at parse time before storing in any model struct. Use the regex

**Date:** 2026-03-12

**Context:** `shouldAdaptiveNVMeProbe` was introduced in `2fa4a12` to recover NVMe drives on Supermicro BMCs that expose empty `Drives` collections but serve disks at direct `Disk.Bay.N` paths. The function returns `true` for any chassis with an empty `Members` array. On Supermicro HGX systems (SYS-A21GE-NBRT and similar) ~35 sub-chassis (GPU, NVSwitch, PCIeRetimer, ERoT, IRoT, BMC, FPGA) all carry `ChassisType=Module/Component/Zone` and
@@ -976,3 +979,24 @@ the producer utility and archive importer.

- Adding support required only a thin archive adapter instead of a full hardware parser.
- If the upstream utility changes the embedded snapshot schema, the `easy_bee` adapter is the only place that must be updated.

---

## ADL-038 — HPE AHS parser uses hybrid extraction instead of full `zbb` schema decoding

**Date:** 2026-03-30

**Context:** HPE iLO Active Health System exports (`.ahs`) are proprietary `ABJR` containers with gzip-compressed `zbb` payloads. The sample inventory data contains two practical signal families: printable SMBIOS/FRU-style strings and embedded Redfish JSON subtrees, especially for storage controllers and drives. Full `zbb` binary schema decoding is not documented and would add significant complexity before proving user value.

**Decision:** Support HPE AHS with a hybrid parser:

- decode the outer `ABJR` container
- gunzip embedded members when applicable
- extract inventory from printable SMBIOS/FRU payloads
- extract storage/controller details from embedded Redfish JSON objects
- do not attempt complete semantic decoding of the internal `zbb` record format

**Consequences:**

- Parser reaches inventory-grade usefulness quickly for HPE `.ahs` uploads.
- Storage inventory is stronger than text-only parsing because it reuses structured Redfish data when present.
- Future deeper `zbb` decoding can be added incrementally without replacing the current parser contract.
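One way to realize the Redfish half of this decision is to scan decompressed member payloads for decodable JSON objects. A self-contained sketch (not the production parser; quadratic in the worst case, and the `@odata.type` filter is an illustrative heuristic):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// extractRedfishObjects scans a decompressed zbb blob for '{' and tries to
// decode a JSON object at each candidate offset, keeping documents that
// carry an @odata.type. json.Decoder stops at the end of the first value,
// so surrounding binary and NUL-separated string noise is tolerated.
func extractRedfishObjects(blob string) []map[string]any {
	var out []map[string]any
	for i := 0; i < len(blob); i++ {
		if blob[i] != '{' {
			continue
		}
		dec := json.NewDecoder(strings.NewReader(blob[i:]))
		var doc map[string]any
		if err := dec.Decode(&doc); err == nil {
			if t, ok := doc["@odata.type"].(string); ok && t != "" {
				out = append(out, doc)
				i += int(dec.InputOffset()) - 1 // skip past the decoded object
			}
		}
	}
	return out
}

func main() {
	blob := "noise\x00PROC 1\x00" +
		`{"@odata.type":"#Drive.v1_17_0.Drive","Model":"SAMSUNGMZ7L3480HCHQ-00A07"}` +
		"\x00more noise"
	for _, doc := range extractRedfishObjects(blob) {
		fmt.Println(doc["@odata.type"], doc["Model"])
	}
}
```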
@@ -1606,6 +1606,7 @@ func (c *RedfishConnector) collectRawRedfishTree(ctx context.Context, client *ht
	crawlStart := time.Now()
	memoryClient := c.httpClientWithTimeout(req, redfishSnapshotMemoryRequestTimeout())
	memoryGate := make(chan struct{}, redfishSnapshotMemoryConcurrency())
+	postProbeClient := c.httpClientWithTimeout(req, redfishSnapshotPostProbeRequestTimeout())
	branchLimiter := newRedfishSnapshotBranchLimiter(redfishSnapshotBranchConcurrency())
	branchRetryPause := redfishSnapshotBranchRequeueBackoff()
	timings := newRedfishPathTimingCollector(4)
@@ -1908,7 +1909,7 @@ func (c *RedfishConnector) collectRawRedfishTree(ctx context.Context, client *ht
			ETASeconds: int(estimateProgressETA(postProbeStart, i, len(postProbeCollections), 3*time.Second).Seconds()),
		})
	}
-	for childPath, doc := range c.probeDirectRedfishCollectionChildren(ctx, client, req, baseURL, path) {
+	for childPath, doc := range c.probeDirectRedfishCollectionChildren(ctx, postProbeClient, req, baseURL, path) {
		if _, exists := out[childPath]; exists {
			continue
		}
@@ -2158,6 +2159,12 @@ func shouldAdaptivePostProbeCollectionPath(path string, collectionDoc map[string
	if len(memberRefs) == 0 {
		return true
	}
+	// If the collection reports an explicit non-zero member count that already
+	// matches the number of discovered member refs, every member is accounted
+	// for and numeric probing cannot find anything new.
+	if odataCount := asInt(collectionDoc["Members@odata.count"]); odataCount > 0 && odataCount == len(memberRefs) {
+		return false
+	}
	return redfishCollectionHasNumericMemberRefs(memberRefs)
}
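To illustrate the skip condition on a complete two-member collection, here is a standalone evaluation. `asInt` is re-implemented as a stand-in; the real helper lives in the connector, and the float64 coercion is an assumption about how Redfish JSON numbers arrive after unmarshalling:

```go
package main

import "fmt"

// asInt mimics lenient numeric coercion: JSON numbers decode to float64.
func asInt(v any) int {
	switch n := v.(type) {
	case float64:
		return int(n)
	case int:
		return n
	}
	return 0
}

func main() {
	// A collection whose Members@odata.count matches its member refs is
	// complete, so numeric probing of Disk.Bay.N style paths is skipped.
	doc := map[string]any{"Members@odata.count": float64(2)}
	memberRefs := []string{"/redfish/v1/Chassis/1/Drives/0", "/redfish/v1/Chassis/1/Drives/1"}
	count := asInt(doc["Members@odata.count"])
	fmt.Println(count > 0 && count == len(memberRefs))
}
```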
@@ -2350,6 +2357,18 @@ func redfishSnapshotRequestTimeout() time.Duration {
	return 12 * time.Second
}

+func redfishSnapshotPostProbeRequestTimeout() time.Duration {
+	if v := strings.TrimSpace(os.Getenv("LOGPILE_REDFISH_POSTPROBE_TIMEOUT")); v != "" {
+		if d, err := time.ParseDuration(v); err == nil && d > 0 {
+			return d
+		}
+	}
+	// Post-probe probes non-existent numeric paths expecting fast 404s.
+	// A short timeout prevents BMCs that hang on unknown paths from stalling
+	// the entire collection for minutes (e.g. HPE iLO on NetworkAdapters Ports).
+	return 4 * time.Second
+}
+
func redfishSnapshotWorkers(tuning redfishprofile.AcquisitionTuning) int {
	if tuning.SnapshotWorkers >= 1 && tuning.SnapshotWorkers <= 16 {
		return tuning.SnapshotWorkers
@@ -2852,6 +2871,8 @@ func shouldCrawlPath(path string) bool {
		"/GetServerAllUSBStatus",
		"/Oem/Public/KVM",
		"/SecureBoot/SecureBootDatabases",
+		// HPE iLO WorkloadPerformanceAdvisor — operational/advisory data, not inventory.
+		"/WorkloadPerformanceAdvisor",
	} {
		if strings.Contains(normalized, part) {
			return false
@@ -5047,6 +5068,16 @@ func isVirtualStorageDrive(doc map[string]interface{}) bool {
	return false
}

+// isAbsentDriveDoc returns true when the drive document represents an empty bay
+// with no installed media (Status.State == "Absent"). These should be excluded
+// from the storage inventory.
+func isAbsentDriveDoc(doc map[string]interface{}) bool {
+	if status, ok := doc["Status"].(map[string]interface{}); ok {
+		return strings.EqualFold(asString(status["State"]), "Absent")
+	}
+	return strings.EqualFold(asString(doc["Status"]), "Absent")
+}
+
func looksLikeDrive(doc map[string]interface{}) bool {
	if asString(doc["MediaType"]) != "" {
		return true
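The helper handles both the standard Redfish `Status` object and a flattened string form. A runnable mirror of the logic with a stand-in `asString` (the real helper is part of the connector package):

```go
package main

import (
	"fmt"
	"strings"
)

// asString is a stand-in for the snapshot reader's helper.
func asString(v any) string {
	s, _ := v.(string)
	return s
}

// isAbsentDriveDoc mirrors the new helper: check Status.State in the
// standard Redfish object form, then fall back to a flattened string.
func isAbsentDriveDoc(doc map[string]any) bool {
	if status, ok := doc["Status"].(map[string]any); ok {
		return strings.EqualFold(asString(status["State"]), "Absent")
	}
	return strings.EqualFold(asString(doc["Status"]), "Absent")
}

func main() {
	emptyBay := map[string]any{"Status": map[string]any{"State": "Absent"}}
	flattened := map[string]any{"Status": "absent"}
	installed := map[string]any{"Status": map[string]any{"State": "Enabled"}}
	fmt.Println(isAbsentDriveDoc(emptyBay), isAbsentDriveDoc(flattened), isAbsentDriveDoc(installed))
}
```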
@@ -96,6 +96,7 @@ func ReplayRedfishFromRawPayloads(rawPayloads map[string]any, emit ProgressFn) (
	networkProtocolDoc, _ := r.getJSON(joinPath(primaryManager, "/NetworkProtocol"))
	firmware := parseFirmware(systemDoc, biosDoc, managerDoc, networkProtocolDoc)
	firmware = dedupeFirmwareInfo(append(firmware, r.collectFirmwareInventory()...))
+	firmware = filterStorageDriveFirmware(firmware, storageDevices)
	bmcManagementSummary := r.collectBMCManagementSummary(managerPaths)
	boardInfo.BMCMACAddress = strings.TrimSpace(firstNonEmpty(
		asString(bmcManagementSummary["mac_address"]),
@@ -498,6 +499,10 @@ func (r redfishSnapshotReader) collectFirmwareInventory() []models.FirmwareInfo
		if strings.TrimSpace(version) == "" {
			continue
		}
+		// Skip placeholder version strings that carry no useful information.
+		if strings.EqualFold(strings.TrimSpace(version), "N/A") {
+			continue
+		}
		name := firmwareInventoryDeviceName(doc)
		name = strings.TrimSpace(name)
		if name == "" {
@@ -550,6 +555,32 @@ func dedupeFirmwareInfo(items []models.FirmwareInfo) []models.FirmwareInfo {
	return out
}

+// filterStorageDriveFirmware removes from fw any entries whose DeviceName+Version
+// already appear as a storage drive's Model+Firmware. Drive firmware is already
+// represented in the Storage section and should not be duplicated in the general
+// firmware list.
+func filterStorageDriveFirmware(fw []models.FirmwareInfo, storage []models.Storage) []models.FirmwareInfo {
+	if len(storage) == 0 {
+		return fw
+	}
+	driveFW := make(map[string]struct{}, len(storage))
+	for _, d := range storage {
+		model := strings.ToLower(strings.TrimSpace(d.Model))
+		rev := strings.ToLower(strings.TrimSpace(d.Firmware))
+		if model != "" && rev != "" {
+			driveFW[model+"|"+rev] = struct{}{}
+		}
+	}
+	out := fw[:0:0]
+	for _, f := range fw {
+		key := strings.ToLower(strings.TrimSpace(f.DeviceName)) + "|" + strings.ToLower(strings.TrimSpace(f.Version))
+		if _, skip := driveFW[key]; !skip {
+			out = append(out, f)
+		}
+	}
+	return out
+}
+
func (r redfishSnapshotReader) collectThresholdSensors(chassisPaths []string) []models.SensorReading {
	out := make([]models.SensorReading, 0)
	seen := make(map[string]struct{})
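The dedup above can be exercised standalone with simplified struct stand-ins for `models.FirmwareInfo` and `models.Storage` (field subsets only; the matching is case-insensitive on a `model|revision` key):

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins for models.FirmwareInfo / models.Storage.
type firmwareInfo struct{ DeviceName, Version string }
type storageDev struct{ Model, Firmware string }

// filterStorageDriveFirmware mirrors the replay-parser helper: drop
// firmware entries whose normalized DeviceName|Version already appears
// as a drive's Model|Firmware in the storage inventory.
func filterStorageDriveFirmware(fw []firmwareInfo, storage []storageDev) []firmwareInfo {
	if len(storage) == 0 {
		return fw
	}
	driveFW := make(map[string]struct{}, len(storage))
	for _, d := range storage {
		model := strings.ToLower(strings.TrimSpace(d.Model))
		rev := strings.ToLower(strings.TrimSpace(d.Firmware))
		if model != "" && rev != "" {
			driveFW[model+"|"+rev] = struct{}{}
		}
	}
	out := fw[:0:0]
	for _, f := range fw {
		key := strings.ToLower(strings.TrimSpace(f.DeviceName)) + "|" + strings.ToLower(strings.TrimSpace(f.Version))
		if _, skip := driveFW[key]; !skip {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	fw := []firmwareInfo{
		{"iLO 6", "v1.63p20"},
		{"SAMSUNGMZ7L3480HCHQ-00A07", "JXTC604Q"}, // duplicated drive firmware
	}
	drives := []storageDev{{"SAMSUNGMZ7L3480HCHQ-00A07", "JXTC604Q"}}
	for _, f := range filterStorageDriveFirmware(fw, drives) {
		fmt.Println(f.DeviceName, f.Version)
	}
}
```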
@@ -1261,6 +1292,12 @@ func (r redfishSnapshotReader) collectProcessors(systemPath string) []models.CPU
			!strings.EqualFold(pt, "CPU") && !strings.EqualFold(pt, "General") {
			continue
		}
+		// Skip absent processor sockets — empty slots with no CPU installed.
+		if status, ok := doc["Status"].(map[string]interface{}); ok {
+			if strings.EqualFold(asString(status["State"]), "Absent") {
+				continue
+			}
+		}
		cpu := parseCPUs([]map[string]interface{}{doc})[0]
		if cpu.Socket == 0 && socketIdx > 0 && strings.TrimSpace(asString(doc["Socket"])) == "" {
			cpu.Socket = socketIdx
@@ -1287,6 +1324,10 @@ func (r redfishSnapshotReader) collectMemory(systemPath string) []models.MemoryD
	out := make([]models.MemoryDIMM, 0, len(memberDocs))
	for _, doc := range memberDocs {
		dimm := parseMemory([]map[string]interface{}{doc})[0]
+		// Skip empty DIMM slots — no installed memory.
+		if !dimm.Present {
+			continue
+		}
		supplementalDocs := r.getLinkedSupplementalDocs(doc, "MemoryMetrics", "EnvironmentMetrics", "Metrics")
		if len(supplementalDocs) > 0 {
			dimm.Details = mergeGenericDetails(dimm.Details, redfishMemoryDetailsAcrossDocs(doc, supplementalDocs...))
@@ -14,13 +14,16 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
	driveDocs, err := r.getCollectionMembers(driveCollectionPath)
	if err == nil {
		for _, driveDoc := range driveDocs {
-			if !isVirtualStorageDrive(driveDoc) {
+			if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
				supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
				out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
			}
		}
		if len(driveDocs) == 0 {
			for _, driveDoc := range r.probeDirectDiskBayChildren(driveCollectionPath) {
+				if isAbsentDriveDoc(driveDoc) {
+					continue
+				}
				supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
				out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
			}

@@ -43,7 +46,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
			if err != nil {
				continue
			}
-			if !isVirtualStorageDrive(driveDoc) {
+			if !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
				supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
				out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
			}

@@ -51,7 +54,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
				continue
			}
			if looksLikeDrive(member) {
-				if isVirtualStorageDrive(member) {
+				if isAbsentDriveDoc(member) || isVirtualStorageDrive(member) {
					continue
				}
				supplementalDocs := r.getLinkedSupplementalDocs(member, "DriveMetrics", "EnvironmentMetrics", "Metrics")

@@ -63,14 +66,14 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
		driveDocs, err := r.getCollectionMembers(joinPath(enclosurePath, "/Drives"))
		if err == nil {
			for _, driveDoc := range driveDocs {
-				if looksLikeDrive(driveDoc) && !isVirtualStorageDrive(driveDoc) {
+				if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
					supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
					out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
				}
			}
			if len(driveDocs) == 0 {
				for _, driveDoc := range r.probeDirectDiskBayChildren(joinPath(enclosurePath, "/Drives")) {
-					if isVirtualStorageDrive(driveDoc) {
+					if isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
						continue
					}
					out = append(out, parseDrive(driveDoc))

@@ -83,7 +86,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro

	if len(plan.KnownStorageDriveCollections) > 0 {
		for _, driveDoc := range r.collectKnownStorageMembers(systemPath, plan.KnownStorageDriveCollections) {
-			if looksLikeDrive(driveDoc) && !isVirtualStorageDrive(driveDoc) {
+			if looksLikeDrive(driveDoc) && !isAbsentDriveDoc(driveDoc) && !isVirtualStorageDrive(driveDoc) {
				supplementalDocs := r.getLinkedSupplementalDocs(driveDoc, "DriveMetrics", "EnvironmentMetrics", "Metrics")
				out = append(out, parseDriveWithSupplementalDocs(driveDoc, supplementalDocs...))
			}

@@ -98,7 +101,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
		}
		for _, devAny := range devices {
			devDoc, ok := devAny.(map[string]interface{})
-			if !ok || !looksLikeDrive(devDoc) || isVirtualStorageDrive(devDoc) {
+			if !ok || !looksLikeDrive(devDoc) || isAbsentDriveDoc(devDoc) || isVirtualStorageDrive(devDoc) {
				continue
			}
			out = append(out, parseDrive(devDoc))

@@ -112,7 +115,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
			continue
		}
		for _, driveDoc := range driveDocs {
-			if !looksLikeDrive(driveDoc) || isVirtualStorageDrive(driveDoc) {
+			if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
				continue
			}
			out = append(out, parseDrive(driveDoc))

@@ -124,7 +127,7 @@ func (r redfishSnapshotReader) collectStorage(systemPath string, plan redfishpro
			continue
		}
		for _, driveDoc := range r.probeSupermicroNVMeDiskBays(chassisPath) {
-			if !looksLikeDrive(driveDoc) || isVirtualStorageDrive(driveDoc) {
+			if !looksLikeDrive(driveDoc) || isAbsentDriveDoc(driveDoc) || isVirtualStorageDrive(driveDoc) {
				continue
			}
			out = append(out, parseDrive(driveDoc))
internal/collector/redfishprofile/profile_hpe.go (new file, 67 lines)
@@ -0,0 +1,67 @@
package redfishprofile

func hpeProfile() Profile {
	return staticProfile{
		name:            "hpe",
		priority:        20,
		safeForFallback: true,
		matchFn: func(s MatchSignals) int {
			score := 0
			if containsFold(s.SystemManufacturer, "hpe") ||
				containsFold(s.SystemManufacturer, "hewlett packard") ||
				containsFold(s.ChassisManufacturer, "hpe") ||
				containsFold(s.ChassisManufacturer, "hewlett packard") {
				score += 80
			}
			for _, ns := range s.OEMNamespaces {
				if containsFold(ns, "hpe") {
					score += 30
					break
				}
			}
			if containsFold(s.ServiceRootProduct, "ilo") {
				score += 30
			}
			if containsFold(s.ManagerManufacturer, "hpe") || containsFold(s.ManagerManufacturer, "ilo") {
				score += 20
			}
			return min(score, 100)
		},
		extendAcquisition: func(plan *AcquisitionPlan, _ MatchSignals) {
			// HPE ProLiant SmartStorage RAID controller inventory is not reachable
			// via standard Redfish Storage paths — it requires the HPE OEM SmartStorage tree.
			ensureScopedPathPolicy(plan, AcquisitionScopedPathPolicy{
				SystemCriticalSuffixes: []string{
					"/SmartStorage",
					"/SmartStorageConfig",
				},
				ManagerCriticalSuffixes: []string{
					"/LicenseService",
				},
			})
			// HPE iLO responds more slowly than average BMCs under load; give the
			// ETA estimator a realistic baseline so progress reports are accurate.
			ensureETABaseline(plan, AcquisitionETABaseline{
				DiscoverySeconds:     12,
				SnapshotSeconds:      180,
				PrefetchSeconds:      30,
				CriticalPlanBSeconds: 40,
				ProfilePlanBSeconds:  25,
			})
			ensureRecoveryPolicy(plan, AcquisitionRecoveryPolicy{
				EnableProfilePlanB: true,
			})
			// HPE iLO starts throttling under high request rates. Setting a higher
			// latency tolerance prevents the adaptive throttler from treating normal
			// iLO slowness as a reason to stall the collection.
			ensureRatePolicy(plan, AcquisitionRatePolicy{
				TargetP95LatencyMS:      1200,
				ThrottleP95LatencyMS:    2500,
				MinSnapshotWorkers:      2,
				MinPrefetchWorkers:      1,
				DisablePrefetchOnErrors: true,
			})
			addPlanNote(plan, "hpe ilo acquisition extensions enabled")
		},
	}
}
@@ -55,6 +55,7 @@ func BuiltinProfiles() []Profile {
		msiProfile(),
		supermicroProfile(),
		dellProfile(),
+		hpeProfile(),
		inspurGroupOEMPlatformsProfile(),
		hgxProfile(),
		xfusionProfile(),
@@ -19,6 +19,7 @@ const maxZipArchiveSize = 50 * 1024 * 1024
const maxGzipDecompressedSize = 50 * 1024 * 1024

var supportedArchiveExt = map[string]struct{}{
+	".ahs": {},
	".gz":  {},
	".tgz": {},
	".tar": {},
@@ -45,6 +46,8 @@ func ExtractArchive(archivePath string) ([]ExtractedFile, error) {
	ext := strings.ToLower(filepath.Ext(archivePath))

	switch ext {
+	case ".ahs":
+		return extractSingleFile(archivePath)
	case ".gz", ".tgz":
		return extractTarGz(archivePath)
	case ".tar", ".sds":
@@ -66,6 +69,8 @@ func ExtractArchiveFromReader(r io.Reader, filename string) ([]ExtractedFile, er
	ext := strings.ToLower(filepath.Ext(filename))

	switch ext {
+	case ".ahs":
+		return extractSingleFileFromReader(r, filename)
	case ".gz", ".tgz":
		return extractTarGzFromReader(r, filename)
	case ".tar", ".sds":
@@ -76,6 +76,7 @@ func TestIsSupportedArchiveFilename(t *testing.T) {
		name string
		want bool
	}{
+		{name: "HPE_CZ2D1X0GS3_20260330.ahs", want: true},
		{name: "dump.tar.gz", want: true},
		{name: "nvidia-bug-report-1651124000923.log.gz", want: true},
		{name: "snapshot.zip", want: true},
@@ -124,3 +125,20 @@ func TestExtractArchiveFromReaderSDS(t *testing.T) {
		t.Fatalf("expected bmc/pack.info, got %q", files[0].Path)
	}
}
+
+func TestExtractArchiveFromReaderAHS(t *testing.T) {
+	payload := []byte("ABJRtest")
+	files, err := ExtractArchiveFromReader(bytes.NewReader(payload), "sample.ahs")
+	if err != nil {
+		t.Fatalf("extract ahs from reader: %v", err)
+	}
+	if len(files) != 1 {
+		t.Fatalf("expected 1 extracted file, got %d", len(files))
+	}
+	if files[0].Path != "sample.ahs" {
+		t.Fatalf("expected sample.ahs, got %q", files[0].Path)
+	}
+	if string(files[0].Content) != string(payload) {
+		t.Fatalf("content mismatch")
+	}
+}
internal/parser/vendors/hpe_ilo_ahs/parser.go (new file, 1287 lines, vendored): diff suppressed because it is too large
internal/parser/vendors/hpe_ilo_ahs/parser_test.go (new file, 210 lines, vendored)
@@ -0,0 +1,210 @@
package hpe_ilo_ahs

import (
	"bytes"
	"compress/gzip"
	"encoding/binary"
	"testing"

	"git.mchus.pro/mchus/logpile/internal/parser"
)

func TestDetectAHS(t *testing.T) {
	p := &Parser{}
	score := p.Detect([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: makeAHSArchive(t, []ahsTestEntry{{Name: "CUST_INFO.DAT", Payload: []byte("x")}}),
	}})
	if score < 80 {
		t.Fatalf("expected high confidence detect, got %d", score)
	}
}

func TestParseAHSInventory(t *testing.T) {
	p := &Parser{}
	content := makeAHSArchive(t, []ahsTestEntry{
		{Name: "CUST_INFO.DAT", Payload: make([]byte, 16)},
		{Name: "0000088-2026-03-30.zbb", Payload: gzipBytes(t, []byte(sampleInventoryBlob()))},
	})

	result, err := p.Parse([]parser.ExtractedFile{{
		Path:    "HPE_CZ2D1X0GS3_20260330.ahs",
		Content: content,
	}})
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}
	if result.Hardware == nil {
		t.Fatalf("expected hardware section")
	}

	board := result.Hardware.BoardInfo
	if board.Manufacturer != "HPE" {
		t.Fatalf("unexpected board manufacturer: %q", board.Manufacturer)
	}
	if board.ProductName != "ProLiant DL380 Gen11" {
		t.Fatalf("unexpected board product: %q", board.ProductName)
	}
	if board.SerialNumber != "CZ2D1X0GS3" {
		t.Fatalf("unexpected board serial: %q", board.SerialNumber)
	}
	if board.PartNumber != "P52560-421" {
		t.Fatalf("unexpected board part number: %q", board.PartNumber)
	}

	if len(result.Hardware.CPUs) != 1 || result.Hardware.CPUs[0].Model != "Intel(R) Xeon(R) Gold 6444Y" {
		t.Fatalf("unexpected CPUs: %+v", result.Hardware.CPUs)
	}
	if len(result.Hardware.Memory) != 1 {
		t.Fatalf("expected one DIMM, got %d", len(result.Hardware.Memory))
	}
	if result.Hardware.Memory[0].PartNumber != "HMCG88AEBRA115N" {
		t.Fatalf("unexpected DIMM part number: %q", result.Hardware.Memory[0].PartNumber)
	}

	if len(result.Hardware.NetworkAdapters) != 2 {
		t.Fatalf("expected two network adapters, got %d", len(result.Hardware.NetworkAdapters))
	}
	if len(result.Hardware.PowerSupply) != 1 {
		t.Fatalf("expected one PSU, got %d", len(result.Hardware.PowerSupply))
	}
	if result.Hardware.PowerSupply[0].SerialNumber != "5XUWB0C4DJG4BV" {
		t.Fatalf("unexpected PSU serial: %q", result.Hardware.PowerSupply[0].SerialNumber)
	}

	if len(result.Hardware.Storage) != 1 {
		t.Fatalf("expected one physical drive, got %d", len(result.Hardware.Storage))
	}
	drive := result.Hardware.Storage[0]
	if drive.Model != "SAMSUNGMZ7L3480HCHQ-00A07" {
		t.Fatalf("unexpected drive model: %q", drive.Model)
	}
	if drive.SerialNumber != "S664NC0Y502720" {
		t.Fatalf("unexpected drive serial: %q", drive.SerialNumber)
	}
	if drive.SizeGB != 480 {
		t.Fatalf("unexpected drive size: %d", drive.SizeGB)
	}

	if len(result.Hardware.Firmware) == 0 {
		t.Fatalf("expected firmware inventory")
	}
	foundILO := false
	foundControllerFW := false
	for _, item := range result.Hardware.Firmware {
		if item.DeviceName == "iLO 6" && item.Version == "v1.63p20" {
			foundILO = true
		}
		if item.DeviceName == "HPE MR408i-o Gen11" && item.Version == "52.26.3-5379" {
			foundControllerFW = true
		}
	}
	if !foundILO {
		t.Fatalf("expected iLO firmware entry")
	}
	if !foundControllerFW {
		t.Fatalf("expected controller firmware entry")
	}

	if len(result.Hardware.Devices) < 6 {
		t.Fatalf("expected canonical devices, got %d", len(result.Hardware.Devices))
	}
	if len(result.Events) == 0 {
		t.Fatalf("expected parsed events")
	}
}

type ahsTestEntry struct {
	Name    string
	Payload []byte
	Flag    uint32
}

func makeAHSArchive(t *testing.T, entries []ahsTestEntry) []byte {
	t.Helper()

	var buf bytes.Buffer
	for _, entry := range entries {
		header := make([]byte, ahsHeaderSize)
		copy(header[:4], []byte("ABJR"))
		binary.LittleEndian.PutUint16(header[4:6], 0x0300)
		binary.LittleEndian.PutUint16(header[6:8], 0x0002)
		binary.LittleEndian.PutUint32(header[8:12], uint32(len(entry.Payload)))
		flag := entry.Flag
		if flag == 0 {
			flag = 0x80000002
			if len(entry.Payload) >= 2 && entry.Payload[0] == 0x1f && entry.Payload[1] == 0x8b {
				flag = 0x80000001
			}
		}
		binary.LittleEndian.PutUint32(header[16:20], flag)
		copy(header[20:52], []byte(entry.Name))
		buf.Write(header)
		buf.Write(entry.Payload)
	}
	return buf.Bytes()
}

func gzipBytes(t *testing.T, payload []byte) []byte {
	t.Helper()

	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(payload); err != nil {
		t.Fatalf("gzip payload: %v", err)
	}
	if err := zw.Close(); err != nil {
		t.Fatalf("close gzip writer: %v", err)
	}
	return buf.Bytes()
}

func sampleInventoryBlob() string {
	return stringsJoin(
		"iLO 6 v1.63p20 built on Sep 13 2024",
		"HPE",
		"ProLiant DL380 Gen11",
		"CZ2D1X0GS3",
		"P52560-421",
		"Proc 1",
		"Intel(R) Corporation",
		"Intel(R) Xeon(R) Gold 6444Y",
		"PROC 1 DIMM 3",
		"Hynix",
		"HMCG88AEBRA115N",
		"2B5F92C6",
		"Power Supply 1",
		"5XUWB0C4DJG4BV",
		"P03178-B21",
		"PciRoot(0x1)/Pci(0x5,0x0)/Pci(0x0,0x0)",
		"NIC.Slot.1.1",
		"Network Controller",
		"Slot 1",
		"MCX512A-ACAT",
		"MT2230478382",
		"PciRoot(0x3)/Pci(0x1,0x0)/Pci(0x0,0x0)",
		"OCP.Slot.15.1",
		"Broadcom NetXtreme Gigabit Ethernet - NIC",
		"OCP Slot 15",
		"P51183-001",
		"1CH0150001",
		"20.28.41",
		"System ROM",
		"v2.22 (06/19/2024)",
		"03/30/2026 09:47:33",
		"iLO network link down.",
		`{"@odata.id":"/redfish/v1/Systems/1/Storage/DE00A000/Controllers/0","@odata.type":"#StorageController.v1_7_0.StorageController","Id":"0","Name":"HPE MR408i-o Gen11","FirmwareVersion":"52.26.3-5379","Manufacturer":"HPE","Model":"HPE MR408i-o Gen11","PartNumber":"P58543-001","SKU":"P58335-B21","SerialNumber":"PXSFQ0BBIJY3B3","Status":{"State":"Enabled","Health":"OK"},"Location":{"PartLocation":{"ServiceLabel":"Slot=14","LocationType":"Slot","LocationOrdinalValue":14}},"PCIeInterface":{"PCIeType":"Gen4","LanesInUse":8}}`,
		`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/0","@odata.type":"#Drive.v1_17_0.Drive","Id":"0","Name":"480GB 6G SATA SSD","Status":{"State":"StandbyOffline","Health":"OK"},"PhysicalLocation":{"PartLocation":{"ServiceLabel":"Slot=14:Port=1:Box=3:Bay=1","LocationType":"Bay","LocationOrdinalValue":1}},"CapacityBytes":480103981056,"MediaType":"SSD","Model":"SAMSUNGMZ7L3480HCHQ-00A07","Protocol":"SATA","Revision":"JXTC604Q","SerialNumber":"S664NC0Y502720","PredictedMediaLifeLeftPercent":100}`,
		`{"@odata.id":"/redfish/v1/Chassis/DE00A000/Drives/64515","@odata.type":"#Drive.v1_17_0.Drive","Id":"64515","Name":"Empty Bay","Status":{"State":"Absent","Health":"OK"}}`,
	)
}

func stringsJoin(parts ...string) string {
	return string(bytes.Join(func() [][]byte {
		out := make([][]byte, 0, len(parts))
		for _, part := range parts {
			out = append(out, []byte(part))
		}
		return out
	}(), []byte{0}))
}
internal/parser/vendors/vendors.go (vendored, 1 line changed)
@@ -7,6 +7,7 @@ import (
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/dell"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/easy_bee"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/h3c"
+	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/hpe_ilo_ahs"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/inspur"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia"
	_ "git.mchus.pro/mchus/logpile/internal/parser/vendors/nvidia_bug_report"
@@ -36,9 +36,9 @@
<div id="archive-source-content">
	<div class="upload-area" id="drop-zone">
		<p>Перетащите архив, TXT/LOG или JSON snapshot сюда</p>
-		<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
+		<input type="file" id="file-input" accept="application/gzip,application/x-gzip,application/x-tar,application/zip,application/json,text/plain,.ahs,.json,.tar,.tar.gz,.tgz,.sds,.zip,.txt,.log" hidden>
		<button type="button" onclick="document.getElementById('file-input').click()">Выберите файл</button>
-		<p class="hint">Поддерживаемые форматы: tar.gz, tar, tgz, sds, zip, json, txt, log</p>
+		<p class="hint">Поддерживаемые форматы: ahs, tar.gz, tar, tgz, sds, zip, json, txt, log</p>
	</div>
	<div id="upload-status"></div>
	<div id="parsers-info" class="parsers-info"></div>