# 05 — Collectors
Collectors live in internal/collector/.
Core files:
- `internal/collector/registry.go` — connector registry (`redfish`, `ipmi`)
- `internal/collector/redfish.go` — real Redfish connector
- `internal/collector/ipmi_mock.go` — IPMI mock connector scaffold
- `internal/collector/types.go` — request/progress contracts
## Redfish Collector (`redfish`)
Status: Production-ready.
### Request contract (from server)
Passed through from /api/collect after validation:
- `host`, `port`, `username`
- `auth_type=password|token` (+ matching credential field)
- `tls_mode=strict|insecure`
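As a sketch, the request contract and its matching-credential rule can be modeled as a Go struct with a validation method; the type and field names below are illustrative, not the actual definitions in `internal/collector/types.go`:

```go
package main

import (
	"errors"
	"fmt"
)

// CollectRequest mirrors the fields passed through from /api/collect.
// Illustrative only; the real contract lives in internal/collector/types.go.
type CollectRequest struct {
	Host     string
	Port     int
	Username string
	AuthType string // "password" | "token"
	Password string // required when AuthType == "password"
	Token    string // required when AuthType == "token"
	TLSMode  string // "strict" | "insecure"
}

// Validate enforces that auth_type has its matching credential field set.
func (r CollectRequest) Validate() error {
	switch r.AuthType {
	case "password":
		if r.Password == "" {
			return errors.New("auth_type=password requires a password")
		}
	case "token":
		if r.Token == "" {
			return errors.New("auth_type=token requires a token")
		}
	default:
		return fmt.Errorf("unknown auth_type %q", r.AuthType)
	}
	if r.TLSMode != "strict" && r.TLSMode != "insecure" {
		return fmt.Errorf("unknown tls_mode %q", r.TLSMode)
	}
	return nil
}

func main() {
	req := CollectRequest{Host: "10.0.0.5", Port: 443, Username: "admin",
		AuthType: "token", Token: "abc", TLSMode: "strict"}
	fmt.Println(req.Validate()) // <nil>
}
```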
### Discovery
Dynamic — does not assume fixed paths. Discovers:
- `Systems` collection → per-system resources
- `Chassis` collection → enclosure/board data
- `Managers` collection → BMC/firmware info
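Discovery works by following the `Members` links in each collection document rather than assuming fixed paths. A minimal sketch of that link extraction (the `resource` type and `memberLinks` helper are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resource is a minimal view of a Redfish collection document:
// its own link plus member links.
type resource struct {
	ODataID string `json:"@odata.id"`
	Members []struct {
		ODataID string `json:"@odata.id"`
	} `json:"Members"`
}

// memberLinks extracts per-member @odata.id links from a collection
// document instead of assuming fixed paths like /redfish/v1/Systems/1.
func memberLinks(doc []byte) ([]string, error) {
	var col resource
	if err := json.Unmarshal(doc, &col); err != nil {
		return nil, err
	}
	links := make([]string, 0, len(col.Members))
	for _, m := range col.Members {
		links = append(links, m.ODataID)
	}
	return links, nil
}

func main() {
	doc := []byte(`{"@odata.id":"/redfish/v1/Systems",
		"Members":[{"@odata.id":"/redfish/v1/Systems/System.Embedded.1"}]}`)
	links, _ := memberLinks(doc)
	fmt.Println(links) // [/redfish/v1/Systems/System.Embedded.1]
}
```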
### Collected data
| Category | Notes |
|---|---|
| CPU | Model, cores, threads, socket, status |
| Memory | DIMM slot, size, type, speed, serial, manufacturer |
| Storage | Slot, type, model, serial, firmware, interface, status |
| GPU | Detected via PCIe class + NVIDIA vendor ID |
| PSU | Model, serial, wattage, firmware, telemetry (input/output power, voltage) |
| NIC | Model, serial, port count, BDF |
| PCIe | Slot, vendor_id, device_id, BDF, link width/speed |
| Firmware | BIOS, BMC versions |
### Raw snapshot
The full Redfish response tree is stored in `result.RawPayloads["redfish_tree"]`.
This allows future offline re-analysis without re-collecting from a live BMC.
### Unified Redfish analysis pipeline (live == replay)
LOGPile uses a single Redfish analyzer path:
- Live collector crawls the Redfish API and builds `raw_payloads.redfish_tree`
- Parsed result is produced by replaying that tree through the same analyzer used by raw import
This guarantees that live collection and re-analysis of an exported raw file (Export Raw Data → re-open) produce the same normalized output for the same `redfish_tree`.
### Snapshot crawler behavior (important)
The Redfish snapshot crawler is intentionally:
- bounded (`LOGPILE_REDFISH_SNAPSHOT_MAX_DOCS`)
- prioritized (PCIe, Fabrics, FirmwareInventory, Storage, PowerSubsystem, ThermalSubsystem)
- tolerant (skips noisy expected failures, strips `#fragment` from `@odata.id`)
Design notes:
- Queue capacity is sized to snapshot cap to avoid worker deadlocks on large trees.
- UI progress is coarse and human-readable; detailed per-request diagnostics are available via debug logs.
- `LOGPILE_REDFISH_DEBUG=1` and `LOGPILE_REDFISH_SNAPSHOT_DEBUG=1` enable console diagnostics.
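Two of these behaviors can be sketched directly: fragment stripping, and cap-aware enqueueing into a channel whose capacity equals the snapshot cap so workers never block on a full queue. Function names here are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// stripFragment removes a trailing "#fragment" from an @odata.id so the
// crawler fetches the containing document, not a JSON-pointer path.
func stripFragment(id string) string {
	if i := strings.IndexByte(id, '#'); i >= 0 {
		return id[:i]
	}
	return id
}

// boundedEnqueue adds a link only while the snapshot cap allows it.
// The queue channel is sized to the cap up front, so this send can
// never block a worker even on very large trees.
func boundedEnqueue(queue chan string, seen map[string]bool, maxDocs int, id string) {
	id = stripFragment(id)
	if id == "" || seen[id] || len(seen) >= maxDocs {
		return // duplicate, empty, or over the document cap
	}
	seen[id] = true
	queue <- id
}

func main() {
	queue := make(chan string, 4) // capacity == snapshot cap
	seen := map[string]bool{}
	boundedEnqueue(queue, seen, 4, "/redfish/v1/Chassis/1#/Oem")
	fmt.Println(<-queue) // /redfish/v1/Chassis/1
}
```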
### Parsing guidelines
When adding Redfish mappings, follow these principles:
- Support alternate collection paths (resources may appear at different odata URLs).
- Follow `@odata.id` references and handle embedded `Members` arrays.
- Prefer raw-tree replay compatibility: if the live collector adds a fallback/probe, the replay analyzer must mirror it.
- Deduplicate by serial / BDF / slot+model (in that priority order).
- Prefer tolerant/fallback parsing — missing fields should be silently skipped, not cause the whole collection to fail.
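The deduplication priority can be sketched as a key function (illustrative, not the actual helper):

```go
package main

import "fmt"

// dedupKey returns the identity used to deduplicate inventory entries,
// in the documented priority order: serial, then BDF, then slot+model.
func dedupKey(serial, bdf, slot, model string) string {
	switch {
	case serial != "":
		return "serial:" + serial
	case bdf != "":
		return "bdf:" + bdf
	default:
		return "slot+model:" + slot + "|" + model
	}
}

func main() {
	// A NIC with no serial reported falls back to its PCIe BDF.
	fmt.Println(dedupKey("", "0000:3b:00.0", "Slot2", "X710")) // bdf:0000:3b:00.0
}
```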
### Vendor-specific storage fallbacks (Supermicro and similar)
When the standard `Storage/.../Drives` collections are empty, the collector/replay may recover drives via:
- `Storage.Links.Enclosures[*]` → `.../Drives`
- direct probing of finite `Disk.Bay` candidates (`Disk.Bay.0`, `Disk.Bay0`, `.../0`)
This is required for some BMCs that publish drive inventory in vendor-specific paths while leaving standard collections empty.
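A sketch of the finite candidate generation behind the `Disk.Bay` probe; the spelling patterns and the bay bound are illustrative of the fallback, not the exact probe list:

```go
package main

import "fmt"

// driveIDCandidates lists vendor-style member names to probe when the
// standard Drives collection is empty. The patterns cover Disk.Bay
// spellings seen on some BMCs plus a bare numeric member; maxBays
// bounds the probing so it stays finite.
func driveIDCandidates(maxBays int) []string {
	var out []string
	for i := 0; i < maxBays; i++ {
		out = append(out,
			fmt.Sprintf("Disk.Bay.%d", i), // e.g. Disk.Bay.0
			fmt.Sprintf("Disk.Bay%d", i),  // e.g. Disk.Bay0
			fmt.Sprintf("%d", i),          // bare numeric member
		)
	}
	return out
}

func main() {
	fmt.Println(driveIDCandidates(1)) // [Disk.Bay.0 Disk.Bay0 0]
}
```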
### PSU source preference (newer Redfish)
PSU inventory source order:
1. `Chassis/*/PowerSubsystem/PowerSupplies` (preferred on X14+/newer Redfish)
2. `Chassis/*/Power` (legacy fallback)
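The source order amounts to a simple preference check; a hedged sketch (the `PSU` type and function name are illustrative):

```go
package main

import "fmt"

// PSU is a placeholder for one parsed power-supply entry.
type PSU struct{ Model string }

// choosePSUs applies the documented source order: use the modern
// PowerSubsystem/PowerSupplies data when it yielded anything,
// otherwise fall back to the legacy Chassis Power resource.
func choosePSUs(powerSubsystem, legacyPower []PSU) ([]PSU, string) {
	if len(powerSubsystem) > 0 {
		return powerSubsystem, "PowerSubsystem/PowerSupplies"
	}
	return legacyPower, "Power"
}

func main() {
	// Older BMC: only the legacy Power resource had supplies.
	_, src := choosePSUs(nil, []PSU{{Model: "PWS-1K23A"}})
	fmt.Println(src) // Power
}
```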
### Progress reporting
The collector emits progress log entries at each stage (connecting, enumerating systems, collecting CPUs, etc.) so the UI can display meaningful status. Current progress message strings are user-facing and may be localized.
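A sketch of non-blocking progress emission, so a slow or absent UI consumer never stalls collection; the `Progress` type and `emit` helper are illustrative, not the actual contracts in `internal/collector/types.go`:

```go
package main

import "fmt"

// Progress is one stage update; Message is the user-facing string the
// UI displays (and may localize).
type Progress struct {
	Stage   string // e.g. "connecting", "enumerating_systems"
	Message string
}

// emit sends a progress update without ever blocking the collector:
// if the buffer is full, the update is dropped rather than stalling.
func emit(ch chan<- Progress, stage, msg string) {
	select {
	case ch <- Progress{Stage: stage, Message: msg}:
	default: // drop rather than stall collection
	}
}

func main() {
	ch := make(chan Progress, 8)
	emit(ch, "connecting", "Connecting to BMC")
	emit(ch, "enumerating_systems", "Enumerating systems")
	fmt.Println(len(ch)) // 2
}
```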
## IPMI Collector (`ipmi`)
Status: Mock scaffold only — not implemented.
Registered in the collector registry but returns placeholder data. Real IPMI support is a future work item.