
02 — Architecture

Runtime stack

| Layer    | Implementation                                       |
|----------|------------------------------------------------------|
| Language | Go 1.22+                                             |
| HTTP     | `net/http` + `http.ServeMux`                         |
| UI       | Embedded templates and static assets via `go:embed`  |
| State    | In-memory only                                       |
| Build    | `CGO_ENABLED=0`, single binary                       |

Default port: 8082

Audit result rendering is delegated to the embedded `reanimator/chart` viewer, vendored as a git submodule at `internal/chart`. LOGPile remains responsible for upload, collection, parsing, normalization, and Reanimator export generation.

Code map

```
cmd/logpile/main.go          entrypoint and CLI flags
internal/server/             HTTP handlers, jobs, upload/export flows
internal/ingest/             source-family orchestration for upload and raw replay
internal/collector/          live collection and Redfish replay
internal/analyzer/           shared analysis helpers
internal/parser/             archive extraction and parser dispatch
internal/exporter/           CSV and Reanimator conversion
internal/chart/              vendored `reanimator/chart` viewer submodule
internal/models/             stable data contracts
web/                         embedded UI assets
```

Server state

internal/server.Server stores:

| Field            | Purpose                                                           |
|------------------|-------------------------------------------------------------------|
| `result`         | Current `AnalysisResult` shown in the UI and used by exports      |
| `detectedVendor` | Parser/collector identity for the current dataset                 |
| `rawExport`      | Reopenable raw-export package associated with the current result  |
| `jobManager`     | Shared async job state for the collect and convert flows          |
| `collectors`     | Registered live collectors (`redfish`, `ipmi`)                    |
| `convertOutput`  | Temporary ZIP artifacts for batch convert downloads               |

State is replaced only on successful upload or successful live collection. Failed or canceled jobs do not overwrite the previous dataset.

Main flows

Upload

  1. POST /api/upload receives a multipart form field named `archive`
  2. internal/ingest.Service resolves the source family
  3. JSON inputs are checked for raw-export package or AnalysisResult snapshot
  4. Non-JSON archives go through the archive parser family
  5. Archive metadata is normalized onto AnalysisResult
  6. Result becomes the current in-memory dataset
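Steps 2–4 amount to a source-family dispatch: probe the payload as JSON, and fall back to the archive parsers when it is not. The family names and the probed `raw_payloads` field below are assumptions sketched from this flow, not LOGPile's real contract:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type sourceFamily string

const (
	familyRawExport sourceFamily = "raw-export"        // raw-export package
	familySnapshot  sourceFamily = "analysis-snapshot" // AnalysisResult snapshot
	familyArchive   sourceFamily = "archive"           // anything non-JSON
)

// resolveFamily sniffs the uploaded payload and picks a handling family.
func resolveFamily(payload []byte) sourceFamily {
	var probe struct {
		RawPayloads map[string]json.RawMessage `json:"raw_payloads"`
	}
	if err := json.Unmarshal(payload, &probe); err != nil {
		return familyArchive // not JSON: hand off to the archive parser family
	}
	if probe.RawPayloads != nil {
		return familyRawExport
	}
	return familySnapshot
}

func main() {
	fmt.Println(resolveFamily([]byte(`{"raw_payloads":{"redfish_tree":{}}}`))) // raw-export
	fmt.Println(resolveFamily([]byte("\x50\x4b\x03\x04")))                     // archive
}
```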

Live collect

  1. POST /api/collect validates request fields
  2. Server creates an async job and returns 202 Accepted
  3. Selected collector gathers raw data
  4. For Redfish, collector runs minimal discovery, matches Redfish profiles, and builds an acquisition plan
  5. Collector applies profile tuning hints (for example crawl breadth, prefetch, bounded plan-B passes)
  6. Collector saves raw_payloads.redfish_tree plus acquisition diagnostics
  7. Result is normalized, source metadata applied, and state replaced on success

Batch convert

  1. POST /api/convert accepts multiple files
  2. Each supported file is analyzed independently
  3. Successful results are converted to Reanimator JSON
  4. Outputs are packaged into a temporary ZIP artifact
  5. Client polls job status and downloads the artifact when ready

Redfish design rule

Live Redfish collection and offline Redfish re-analysis must use the same replay path. The collector first captures raw_payloads.redfish_tree, then the replay logic builds the normalized result.

Redfish handling is split into two coordinated phases:

  • acquisition: profile-driven snapshot collection strategy
  • analysis: replay over the saved snapshot with the same profile framework
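The profile framework's score-based matching with a generic fallback can be sketched as follows; the `facts`/`profile` types and scoring rules here are assumptions drawn from the description above, not the real `redfishprofile` API:

```go
package main

import "fmt"

// Discovery facts gathered during minimal Redfish discovery (illustrative).
type facts struct{ Manufacturer, Model string }

type profile struct {
	name  string
	score func(facts) int // higher = better match; <=0 = no match
}

// matchProfile picks the best-scoring profile, falling back to "generic"
// when no vendor profile matches.
func matchProfile(f facts, profiles []profile) string {
	best, bestScore := "generic", 0
	for _, p := range profiles {
		if s := p.score(f); s > bestScore {
			best, bestScore = p.name, s
		}
	}
	return best
}

func main() {
	profiles := []profile{
		{"dell", func(f facts) int {
			if f.Manufacturer == "Dell Inc." {
				return 10
			}
			return 0
		}},
	}
	fmt.Println(matchProfile(facts{Manufacturer: "Dell Inc."}, profiles)) // dell
	fmt.Println(matchProfile(facts{Manufacturer: "Unknown"}, profiles))   // generic
}
```

Because the matched profile drives both the acquisition plan and the replay analysis, live collection and offline re-analysis stay on the same code path.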

PCI IDs lookup

Lookup order:

  1. Embedded internal/parser/vendors/pciids/pci.ids
  2. ./pci.ids
  3. /usr/share/hwdata/pci.ids
  4. /usr/share/misc/pci.ids
  5. /opt/homebrew/share/pciids/pci.ids
  6. Extra paths from LOGPILE_PCI_IDS_PATH

Later sources override earlier ones for the same IDs.
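A minimal sketch of this order: the on-disk candidates (steps 2–5) in ascending priority, extra paths from the environment appended last, and a merge where a later file's entry wins. The embedded copy (step 1) is omitted here since it ships inside the binary; treating `LOGPILE_PCI_IDS_PATH` as a list-separated value is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// pciIDsCandidates returns on-disk pci.ids locations in ascending
// priority; the embedded copy would be consulted before all of these.
func pciIDsCandidates() []string {
	paths := []string{
		filepath.Join(".", "pci.ids"),
		"/usr/share/hwdata/pci.ids",
		"/usr/share/misc/pci.ids",
		"/opt/homebrew/share/pciids/pci.ids",
	}
	if extra := os.Getenv("LOGPILE_PCI_IDS_PATH"); extra != "" {
		paths = append(paths, strings.Split(extra, string(os.PathListSeparator))...)
	}
	return paths
}

// merge applies sources in order, so a later source's entry wins
// for the same ID.
func merge(sources []map[string]string) map[string]string {
	out := map[string]string{}
	for _, src := range sources {
		for id, name := range src {
			out[id] = name
		}
	}
	return out
}

func main() {
	m := merge([]map[string]string{
		{"10de": "NVIDIA (embedded copy)"},
		{"10de": "NVIDIA Corporation"},
	})
	fmt.Println(m["10de"]) // NVIDIA Corporation
}
```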