Reanimator
Reanimator is an event-driven hardware lifecycle platform for tracking servers and components from warehouse intake to production operation, failure analysis, and retirement.
It treats infrastructure like a medical record: every asset and component has a complete, queryable history of location, firmware, incidents, and reliability over time.
Project Tracking
Development milestones and execution checklist are in TODO.md.
What Reanimator Solves
- Inventory and shipment tracking
- Full component hierarchy (asset -> subcomponents)
- Time-based installation history (components can move between assets)
- Firmware and lifecycle timeline visibility
- Failure analytics (AFR, MTBF, reliability by part class)
- Spare-part planning
- Service ticket correlation
- Offline log ingestion with later synchronization
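As one concrete example of the failure analytics listed above, AFR (annualized failure rate) can be computed from failure counts and accumulated device time. This is a minimal sketch; the function name and inputs are illustrative and not part of the project yet:

```go
package main

import "fmt"

// AnnualizedFailureRate sketches the AFR metric mentioned above:
// observed failures divided by accumulated device-years, as a percentage.
func AnnualizedFailureRate(failures int, deviceDays float64) float64 {
	deviceYears := deviceDays / 365.0
	return float64(failures) / deviceYears * 100.0
}

func main() {
	// 100 drives observed for a full year = 36500 device-days.
	fmt.Println(AnnualizedFailureRate(12, 36500)) // prints "12"
}
```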
Core Principles
- Events are the source of truth: current state is derived from event history.
- Hardware relationships are time-based: a component is linked to an asset through installation intervals, not permanent foreign keys.
- Observations are not facts: logs produce observations; observations produce derived events; events update state.
- Data integrity first: keep writes idempotent and preserve raw ingested data.
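The "events are the source of truth" principle means current state is a fold over the event history rather than a mutable column. A minimal sketch, with hypothetical field names that simplify the real event model:

```go
package main

import "fmt"

// TimelineEvent is a deliberately simplified, hypothetical form of the
// lifecycle events described in this README; the real schema has more fields.
type TimelineEvent struct {
	Type   string // "installed", "removed", "moved", ...
	Detail string // for "moved": the destination location
}

// CurrentLocation derives an asset's current location purely by replaying
// its event history, never by reading a mutable "location" column.
func CurrentLocation(events []TimelineEvent) string {
	loc := "unknown"
	for _, e := range events {
		if e.Type == "moved" {
			loc = e.Detail // last move wins: state is a fold over history
		}
	}
	return loc
}

func main() {
	history := []TimelineEvent{
		{Type: "installed", Detail: "warehouse-01"},
		{Type: "moved", Detail: "dc-fra-rack-12"},
	}
	fmt.Println(CurrentLocation(history)) // prints "dc-fra-rack-12"
}
```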
High-Level Architecture
LogBundle (offline)
  -> Ingestion API
  -> Parser / Observation Layer
  -> Event Generator
  -> Domain Model
  -> MariaDB
  -> API / UI / Connectors
Domain Model (MVP)
Organizational
- Customer: owner organization
- Project: infrastructure grouping within a customer
- Location (recommended): warehouse, datacenter, rack, repair center
Hardware
- Asset: deployable hardware unit (usually a server)
- Component: hardware part installed in an asset (SSD, DIMM, CPU, NIC, PSU)
- LOT: internal part classification for reliability analytics (for example, SSD_NVME_03.84TB_GEN4)
- Installation: time-bounded relationship between component and asset; installations.removed_at IS NULL means the installation is currently active
Ingestion and Lifecycle
- LogBundle: immutable raw upload package
- Observation: parsed snapshot at collection time
- TimelineEvent: normalized lifecycle event (installed, removed, moved, firmware changed, ticket linked, etc.)
- FailureEvent: explicit failure record with source and confidence
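The observation-to-event step can be pictured as a diff between consecutive snapshots of the same asset: serials that appear yield "installed" events, serials that vanish yield "removed" events. An illustrative sketch, not the project's actual derivation logic:

```go
package main

import "fmt"

// DeriveEvents compares the component serials seen in two consecutive
// observations of one asset and emits installed/removed event descriptions.
func DeriveEvents(prev, curr []string) []string {
	seen := map[string]bool{}
	for _, s := range prev {
		seen[s] = true
	}
	now := map[string]bool{}
	var events []string
	for _, s := range curr {
		now[s] = true
		if !seen[s] {
			events = append(events, "installed:"+s) // new serial appeared
		}
	}
	for _, s := range prev {
		if !now[s] {
			events = append(events, "removed:"+s) // serial no longer present
		}
	}
	return events
}

func main() {
	fmt.Println(DeriveEvents([]string{"S1", "S2"}, []string{"S2", "S3"}))
	// prints "[installed:S3 removed:S1]"
}
```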
Service Correlation
- Ticket: imported external service case
- TicketLink: relation to project, asset, or component
Suggested Database Tables (MVP)
customers
projects
locations
assets
components
lots
installations
log_bundles
observations
timeline_events
tickets
ticket_links
failure_events
Important Indexes
components(vendor_serial)
assets(vendor_serial)
installations(component_id, removed_at)
timeline_events(subject_type, subject_id, timestamp)
observations(component_id)
API (MVP)
Ingestion
POST /ingest/logbundle (must be idempotent)
POST /ingest/failures
Assets and Components
GET /assets
GET /assets/{id}
GET /assets/{id}/components
GET /assets/{id}/tickets
GET /assets/{id}/timeline
POST /machines/dispatch
POST /machines/return-to-stock
GET /components
GET /components/{id}
GET /components/{id}/timeline
Organization Registry
GET /customers
POST /customers
GET /locations
POST /locations
Tickets
POST /connectors/tickets/sync
GET /tickets
Failures
GET /failures
Analytics
GET /analytics/lot-metrics
GET /analytics/firmware-risk
GET /analytics/spare-forecast
UI
GET /ui (dashboard)
GET /ui/assets
GET /ui/assets/{id}
GET /ui/components
GET /ui/components/{id}
GET /ui/tickets
GET /ui/failures
GET /ui/ingest
GET /ui/analytics
Recommended Go Project Layout
cmd/reanimator-api
internal/domain
internal/repository
internal/ingest
internal/events
internal/connectors
internal/jobs
internal/api
Initial Delivery Plan
- Registry: customers, projects, assets, components, LOT
- Ingestion: logbundle upload, observation persistence, installation detection
- Timeline: event derivation and lifecycle views
- Service Correlation: ticket sync and linking
Future Extensions
- Predictive failure modeling
- Spare-part forecasting
- Firmware risk scoring
- Automated anomaly detection
- Rack-level topology
- Warranty and RMA workflows
- Multi-tenant support
Immediate Next Steps
- Initialize Go module
- Create base schema and indexes
- Implement ingestion endpoint
- Add observation-to-event derivation
- Expose asset/component timeline APIs
Start with data model correctness, not UI.
Local Configuration
The API loads settings from environment variables and an optional config.json.
If CONFIG_FILE is set, it will load that file. Otherwise it looks for config.json
in the working directory if present. Environment variables override file values.
Database configuration can be provided as a full database_dsn or as discrete
fields under database (user/password/host/port/name/params). If database_dsn
is empty and no DATABASE_DSN env var is set, the discrete fields are used
to build the DSN.
An example file is available as config.example.json.
Database Bootstrap
At this stage the project uses a clean bootstrap instead of in-place schema migrations for ownership changes.
Use:
make db-reset
This command drops and recreates the configured database, then applies the current schema from migrations/.