logpile/bible-local/02-architecture.md
Michael Chus 21ea129933 misc: sds format support, convert limits, dell dedup, supermicro removal, bible updates
Parser / archive:
- Add .sds extension as tar-format alias (archive.go)
- Add tests for multipart upload size limits (multipart_limits_test.go)
- Remove supermicro crashdump parser (ADL-015)

Dell parser:
- Remove GPU duplicates from PCIeDevices (DCIM_VideoView vs DCIM_PCIDeviceView
  both list the same GPU; VideoView record is authoritative)

Server:
- Add LOGPILE_CONVERT_MAX_MB env var for independent convert batch size limit
- Improve "file too large" error message with current limit value

Web:
- Add CONVERT_MAX_FILES_PER_BATCH = 1000 cap
- Minor UI copy and CSS fixes

Bible:
- bible-local/06-parsers.md: add pci.ids enrichment rule (enrich model from
  pciids when name is empty but vendor_id+device_id are present)
- Sync bible submodule and local overview/architecture docs

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-01 22:23:44 +03:00


# 02 — Architecture

## Runtime stack

| Layer    | Technology |
|----------|------------|
| Language | Go 1.22+ |
| HTTP     | `net/http`, `http.ServeMux` |
| UI       | Embedded via `//go:embed` in `web/embed.go` (templates + static assets) |
| State    | In-memory only — no database |
| Build    | `CGO_ENABLED=0`, single static binary |

Default port: `8082`

## Directory structure

```
cmd/logpile/main.go          # Binary entry point, CLI flag parsing
internal/
  collector/                 # Live data collectors
    registry.go              # Collector registration
    redfish.go               # Redfish connector (real implementation)
    ipmi_mock.go             # IPMI mock connector (scaffold)
    types.go                 # Connector request/progress contracts
  parser/                    # Archive parsers
    parser.go                # BMCParser (dispatcher) + parse orchestration
    archive.go               # Archive extraction helpers
    registry.go              # Parser registry + detect/selection
    interface.go             # VendorParser interface
    vendors/                 # Vendor-specific parser modules
      vendors.go             # Import-side-effect registrations
      dell/
      inspur/
      nvidia/
      nvidia_bug_report/
      unraid/
      xigmanas/
      generic/
      pciids/                # PCI IDs lookup (embedded pci.ids)
  server/                    # HTTP layer
    server.go                # Server struct, route registration
    handlers.go              # All HTTP handler functions
  exporter/                  # Export formatters
    exporter.go              # CSV + JSON exporters
    reanimator_models.go
    reanimator_converter.go
  models/                    # Shared data contracts
web/
  embed.go                   # go:embed directive
  templates/                 # HTML templates
  static/                    # JS / CSS
    js/app.js                # Frontend — API contract consumer
```
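
The registry + detection flow implied by `registry.go` and `interface.go` might be sketched as follows. This is a hedged sketch: the `VendorParser` method set, the confidence scale, and the Dell marker filename are illustrative assumptions, not the project's actual code.

```go
package main

import "fmt"

// VendorParser is a hypothetical stand-in for the interface in
// internal/parser/interface.go; only the detect/select mechanics matter here.
type VendorParser interface {
	Name() string
	// Detect returns a confidence score in [0, 1] for the given file listing.
	Detect(files []string) float64
}

var registry []VendorParser

// Register is typically called from an init() in each vendor package,
// which is why vendors.go exists purely for import side effects.
func Register(p VendorParser) { registry = append(registry, p) }

// Select tries all registered parsers and picks the highest confidence.
func Select(files []string) (VendorParser, float64) {
	var best VendorParser
	bestScore := 0.0
	for _, p := range registry {
		if s := p.Detect(files); s > bestScore {
			best, bestScore = p, s
		}
	}
	return best, bestScore
}

type dellParser struct{}

func (dellParser) Name() string { return "dell" }
func (dellParser) Detect(files []string) float64 {
	for _, f := range files {
		if f == "tsr_collection.xml" { // illustrative marker file, not confirmed
			return 0.9
		}
	}
	return 0
}

func main() {
	Register(dellParser{})
	p, score := Select([]string{"tsr_collection.xml"})
	fmt.Println(p.Name(), score)
}
```

The import-side-effect registration keeps `parser.go` free of per-vendor knowledge: adding a vendor means adding a package and one blank import.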

## In-memory state

The `Server` struct in `internal/server/server.go` holds:

| Field | Type | Description |
|-------|------|-------------|
| `result` | `*models.AnalysisResult` | Current parsed/collected dataset |
| `detectedVendor` | `string` | Vendor identifier from last parse |
| `jobManager` | `*JobManager` | Tracks live collect job status/logs |
| `collectors` | `*collector.Registry` | Registered live collection connectors |

State is replaced atomically on successful upload or collect. On a failed/canceled collect, the previous result is preserved unchanged.

## Upload flow (`POST /api/upload`)

```
multipart form field: "archive"
  │
  ├─ file looks like JSON?
  │     └─ parse as models.AnalysisResult snapshot → store in Server.result
  │
  └─ otherwise
        └─ parser.NewBMCParser().ParseFromReader(...)
              │
              ├─ try all registered vendor parsers (highest confidence wins)
              └─ result → store in Server.result
```

## Live collect flow (`POST /api/collect`)

```
validate request (host / protocol / port / username / auth_type / tls_mode)
  │
  └─ launch async job
        │
        ├─ progress callback → job log (queryable via GET /api/collect/{id})
        │
        ├─ success:
        │     set source metadata (source_type=api, protocol, host, date)
        │     store result in Server.result
        │
        └─ failure / cancel:
              previous Server.result unchanged
```

Job lifecycle states: `queued → running → success | failed | canceled`
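
The lifecycle above, written as an explicit transition table. This is a sketch of the stated state diagram only; the real `JobManager` may enforce transitions differently:

```go
package main

import "fmt"

type JobState string

const (
	Queued   JobState = "queued"
	Running  JobState = "running"
	Success  JobState = "success"
	Failed   JobState = "failed"
	Canceled JobState = "canceled"
)

// transitions encodes queued → running → success | failed | canceled.
// Terminal states have no outgoing edges.
var transitions = map[JobState][]JobState{
	Queued:  {Running},
	Running: {Success, Failed, Canceled},
}

func canTransition(from, to JobState) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition(Queued, Running))
	fmt.Println(canTransition(Success, Running))
}
```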

## PCI IDs lookup

Load/override order (`LOGPILE_PCI_IDS_PATH` has highest priority because it is loaded last):

1. Embedded `internal/parser/vendors/pciids/pci.ids` (base dataset compiled into the binary)
2. `./pci.ids`
3. `/usr/share/hwdata/pci.ids`
4. `/usr/share/misc/pci.ids`
5. `/opt/homebrew/share/pciids/pci.ids`
6. Paths from `LOGPILE_PCI_IDS_PATH` (colon-separated on Unix; loaded last, so their entries override earlier sources for the same IDs)

This means unknown GPU/NIC model strings can be fixed by refreshing `pci.ids`, without any code change.
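
The override semantics can be illustrated by merging sources in load order, assuming the parsed data is a map keyed by vendor:device ID (the real in-memory representation is not specified here, and the sample IDs and names are illustrative):

```go
package main

import "fmt"

// merge applies sources in load order: a later source's entry for the
// same vendor:device ID replaces the earlier one, which is why
// LOGPILE_PCI_IDS_PATH (loaded last) has the highest priority.
func merge(sources ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, src := range sources {
		for id, name := range src {
			out[id] = name
		}
	}
	return out
}

func main() {
	embedded := map[string]string{"10de:2330": "GPU model from embedded dataset"}
	userFile := map[string]string{"10de:2330": "GPU model from user pci.ids"}
	db := merge(embedded, userFile)
	fmt.Println(db["10de:2330"])
}
```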