docs: refresh project documentation
## Runtime stack
| Layer | Implementation |
|-------|----------------|
| Language | Go 1.22+ |
| HTTP | `net/http` + `http.ServeMux` |
| UI | Templates and static assets embedded via `//go:embed` in `web/embed.go` |
| State | In-memory only (no database) |
| Build | `CGO_ENABLED=0`, single static binary |

Default port: `8082`
## Code map

```text
cmd/logpile/main.go   entry point and CLI flags
internal/server/      HTTP layer: Server struct, routes, handlers, jobs, upload/export flows
internal/collector/   live collection (registry, Redfish connector, IPMI mock) and Redfish replay
internal/analyzer/    shared analysis helpers
internal/parser/      archive extraction, vendor parser registry/dispatch (incl. vendors/pciids)
internal/exporter/    CSV and Reanimator conversion
internal/models/      stable data contracts
web/                  embedded UI assets (embed.go, templates/, static/)
```
## Server state

The `Server` struct in `internal/server/server.go` stores:

| Field | Purpose |
|-------|---------|
| `result` | Current `AnalysisResult` shown in the UI and used by exports |
| `detectedVendor` | Parser/collector identity for the current dataset |
| `rawExport` | Reopenable raw-export package associated with the current result |
| `jobManager` | Shared async job state for collect and convert flows |
| `collectors` | Registered live collectors (`redfish`, `ipmi`) |
| `convertOutput` | Temporary ZIP artifacts for batch convert downloads |

State is replaced atomically, and only on a successful upload or live collection; failed or canceled jobs do not overwrite the previous dataset.
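The replace-on-success rule can be sketched as follows. This is a minimal illustration, not the project's actual `Server` implementation; the method names and the stand-in `AnalysisResult` type are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// AnalysisResult stands in for models.AnalysisResult.
type AnalysisResult struct{ Vendor string }

// Server holds the single in-memory dataset behind a mutex.
type Server struct {
	mu             sync.RWMutex
	result         *AnalysisResult
	detectedVendor string
}

// setResult atomically replaces the current dataset. Callers invoke
// it only after a successful upload or collect, so a failed job never
// reaches this point and the previous result survives unchanged.
func (s *Server) setResult(r *AnalysisResult, vendor string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.result = r
	s.detectedVendor = vendor
}

// currentVendor reads the vendor identity under the read lock.
func (s *Server) currentVendor() string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.detectedVendor
}

func main() {
	s := &Server{}
	s.setResult(&AnalysisResult{Vendor: "dell"}, "dell")
	fmt.Println(s.currentVendor())
}
```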
## Main flows

### Upload

1. `POST /api/upload` receives multipart field `archive`
2. JSON inputs are checked for a raw-export package or an `AnalysisResult` snapshot
3. Non-JSON inputs go through `parser.BMCParser`, which tries all registered vendor parsers (highest confidence wins)
4. Archive metadata is normalized onto the `AnalysisResult`
5. The result becomes the current in-memory dataset
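The JSON-versus-archive branch in the upload steps above can be sketched like this. The detection heuristic shown (first non-whitespace byte) is an assumption for illustration, not the project's actual logic.

```go
package main

import (
	"bytes"
	"fmt"
)

// classifyUpload decides how an uploaded payload is handled:
// JSON-looking input is treated as a snapshot/raw-export package,
// everything else is handed to the archive parser.
func classifyUpload(payload []byte) string {
	trimmed := bytes.TrimLeft(payload, " \t\r\n")
	if len(trimmed) > 0 && (trimmed[0] == '{' || trimmed[0] == '[') {
		return "json-snapshot"
	}
	return "archive-parser"
}

func main() {
	fmt.Println(classifyUpload([]byte(`{"components":[]}`))) // json-snapshot
	fmt.Println(classifyUpload([]byte("PK\x03\x04")))        // archive-parser
}
```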
### Live collect

Job lifecycle states: `queued → running → success | failed | canceled`

1. `POST /api/collect` validates the request (host, protocol, port, username, auth type, TLS mode)
2. The server launches an async job and returns `202 Accepted`; progress logs are queryable via `GET /api/collect/{id}`
3. The selected collector gathers raw data
4. For Redfish, the collector saves `raw_payloads.redfish_tree`
5. On success the result is normalized, source metadata is applied (`source_type=api`, protocol, host, date), and the state is replaced; on failure or cancel the previous result is left unchanged
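The job lifecycle above can be encoded as a small state machine. This is a sketch only; the real `JobManager` API and its internal representation are not shown here.

```go
package main

import "fmt"

// JobState follows the lifecycle: queued → running → success | failed | canceled.
type JobState string

const (
	Queued   JobState = "queued"
	Running  JobState = "running"
	Success  JobState = "success"
	Failed   JobState = "failed"
	Canceled JobState = "canceled"
)

// validTransitions encodes the allowed state changes; success,
// failed, and canceled are terminal and have no outgoing edges.
var validTransitions = map[JobState][]JobState{
	Queued:  {Running, Canceled},
	Running: {Success, Failed, Canceled},
}

// canTransition reports whether a job may move from one state to another.
func canTransition(from, to JobState) bool {
	for _, next := range validTransitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition(Queued, Running))  // true
	fmt.Println(canTransition(Success, Running)) // false
}
```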
### Batch convert

1. `POST /api/convert` accepts multiple files
2. Each supported file is analyzed independently
3. Successful results are converted to Reanimator JSON
4. Outputs are packaged into a temporary ZIP artifact
5. The client polls job status and downloads the artifact when ready
## Redfish design rule

Live Redfish collection and offline Redfish re-analysis must use the same replay path: the collector first captures `raw_payloads.redfish_tree`, then the replay logic builds the normalized result.
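The design rule can be illustrated as a single `normalize` function fed by both paths, so live collection and offline re-analysis cannot drift apart. All names here are illustrative; the real replay code is not shown.

```go
package main

import "fmt"

// RedfishTree stands in for the captured raw_payloads.redfish_tree.
type RedfishTree map[string]any

// normalize is the shared replay path: both live collection and
// offline re-analysis call it with a captured tree.
func normalize(tree RedfishTree) string {
	if v, ok := tree["Vendor"].(string); ok {
		return v
	}
	return "unknown"
}

// liveCollect captures the raw tree first, then replays it.
func liveCollect(fetch func() RedfishTree) string {
	tree := fetch() // capture raw_payloads.redfish_tree
	return normalize(tree)
}

// reanalyze replays a previously saved tree through the same path.
func reanalyze(saved RedfishTree) string {
	return normalize(saved)
}

func main() {
	tree := RedfishTree{"Vendor": "Dell"}
	fmt.Println(liveCollect(func() RedfishTree { return tree }))
	fmt.Println(reanalyze(tree))
}
```

Because both entry points funnel through `normalize`, a fix to the normalization logic automatically applies to live and offline analysis alike.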
## PCI IDs lookup

Lookup order:

1. Embedded `internal/parser/vendors/pciids/pci.ids` (base dataset compiled into the binary)
2. `./pci.ids`
3. `/usr/share/hwdata/pci.ids`
4. `/usr/share/misc/pci.ids`
5. `/opt/homebrew/share/pciids/pci.ids`
6. Extra paths from `LOGPILE_PCI_IDS_PATH` (colon-separated on Unix)

Later sources override earlier ones for the same IDs, so `LOGPILE_PCI_IDS_PATH` has the highest priority. This means unknown GPU/NIC model strings can be updated by refreshing `pci.ids` without any code change.
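The override order can be sketched as a last-writer-wins merge over the candidate sources. The helper names below are illustrative, not the project's actual loader; the real code also starts from the embedded copy.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// candidatePaths lists pci.ids sources in load order (after the
// embedded copy); entries from LOGPILE_PCI_IDS_PATH come last and
// therefore override everything else for the same IDs.
func candidatePaths() []string {
	paths := []string{
		"./pci.ids",
		"/usr/share/hwdata/pci.ids",
		"/usr/share/misc/pci.ids",
		"/opt/homebrew/share/pciids/pci.ids",
	}
	if extra := os.Getenv("LOGPILE_PCI_IDS_PATH"); extra != "" {
		paths = append(paths, strings.Split(extra, ":")...) // colon-separated on Unix
	}
	return paths
}

// merge folds ID→name tables together; later sources win for the same ID.
func merge(sources ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, src := range sources {
		for id, name := range src {
			out[id] = name
		}
	}
	return out
}

func main() {
	embedded := map[string]string{"10de:2204": "GA102"}
	local := map[string]string{"10de:2204": "GA102 [GeForce RTX 3090]"}
	fmt.Println(merge(embedded, local)["10de:2204"]) // the local entry wins
}
```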