v1.11.15
Lenovo ThinkSystem SR650 V3 (and similar XCC-based servers) caused
collection runs of 23+ minutes because the BMC exposes two large,
high-error-rate subtrees in the snapshot BFS:
- Chassis/1/Sensors: 315 individual sensor members, 282/315 failing,
~3.7s per request → ~19 minutes wasted. These documents are never
read by any LOGPile parser (thermal/power data comes from aggregate
Chassis/*/Thermal and Chassis/*/Power endpoints).
- Chassis/1/Oem/Lenovo: 75 requests (LEDs×47, Slots×26, etc.),
68/75 failing → 8+ minutes wasted on non-inventory data.
Add a Lenovo profile (matched on SystemManufacturer/OEMNamespace "Lenovo")
that sets SnapshotExcludeContains to keep individual sensor documents and
non-inventory Lenovo OEM subtrees out of the snapshot BFS queue. The profile
also sets rate-policy thresholds appropriate for XCC BMC latency (p95 often 3-5s).
Add SnapshotExcludeContains []string to AcquisitionTuning and check it
in the snapshot enqueue closure in redfish.go.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
# LOGPile
Standalone Go application for BMC diagnostics analysis with an embedded web UI.
## What it does
- Parses vendor diagnostic archives into a normalized hardware inventory
- Collects live BMC data via Redfish
- Exports normalized data as CSV, raw re-analysis bundles, and Reanimator JSON
- Runs as a single Go binary with embedded UI assets
## Documentation
- Shared engineering rules: bible/README.md
- Project architecture and API contracts: bible-local/README.md
- Agent entrypoints: AGENTS.md, CLAUDE.md
## Run

    make build
    ./bin/logpile

Default port: 8082
## License
MIT (see LICENSE)