- Add BenchmarkEstimated* constants to benchmark_types.go from _v8 logs
(Standard Perf ~16 min, Standard Power Fit ~43 min, Stability Perf ~92 min)
- Update benchmark profile dropdown to show Perf / Power Fit timing per profile
- Add timing columns to Method Split table (Standard vs Stability per run type)
- Update burn preset labels to show "N min/GPU (sequential) or N min (parallel)"
- Clarify burn "one by one" description with sequential vs parallel scaling
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-18 10:54:50 +03:00
2 changed files with 36 additions and 11 deletions
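The new `BenchmarkEstimated*` constants themselves are not shown in this excerpt. A minimal sketch of how they might look in `benchmark_types.go`, assuming seconds-based values matching the three figures quoted in the commit message (the Overnight and Power Stability values are not given in this diff, so they are omitted here):

```go
package main

// Estimated run times in seconds, derived from the _v8 benchmark logs
// quoted in the commit message. Other profiles' constants (Overnight,
// Power Stability) exist in the real file but their values are not
// shown in this diff, so they are left out of this sketch.
const (
	BenchmarkEstimatedPerfStandardSec  = 16 * 60 // Standard Perf ~16 min
	BenchmarkEstimatedPowerStandardSec = 43 * 60 // Standard Power Fit ~43 min
	BenchmarkEstimatedPerfStabilitySec = 92 * 60 // Stability Perf ~92 min
)
```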
+<option value="standard" selected>Standard — Perf `+validateFmtDur(platform.BenchmarkEstimatedPerfStandardSec)+` / Power Fit `+validateFmtDur(platform.BenchmarkEstimatedPowerStandardSec)+`</option>
+<option value="stability">Stability — Perf `+validateFmtDur(platform.BenchmarkEstimatedPerfStabilitySec)+` / Power Fit `+validateFmtDur(platform.BenchmarkEstimatedPowerStabilitySec)+`</option>
+<option value="overnight">Overnight — Perf `+validateFmtDur(platform.BenchmarkEstimatedPerfOvernightSec)+` / Power Fit `+validateFmtDur(platform.BenchmarkEstimatedPowerOvernightSec)+`</option>
<p style="font-size:13px;color:var(--muted);margin-bottom:10px">The benchmark page now exposes two fundamentally different test families so compute score and server power-fit are not mixed into one number.</p>
-<tr><td>Performance Benchmark</td><td><code>bee-gpu-burn</code></td><td>How much isolated compute performance does the GPU realize in this server?</td></tr>
-<tr><td>Power / Thermal Fit</td><td><code>dcgmi targeted_power</code></td><td>How much power per GPU can this server sustain as GPU count ramps up?</td></tr>
+<tr><td>Performance Benchmark</td><td><code>bee-gpu-burn</code></td><td>How much isolated compute performance does the GPU realize in this server?</td><td>`+validateFmtDur(platform.BenchmarkEstimatedPerfStandardSec)+`</td><td>`+validateFmtDur(platform.BenchmarkEstimatedPerfStabilitySec)+`</td></tr>
+<tr><td>Power / Thermal Fit</td><td><code>dcgmi targeted_power</code></td><td>How much power per GPU can this server sustain as GPU count ramps up?</td><td>`+validateFmtDur(platform.BenchmarkEstimatedPowerStandardSec)+`</td><td>`+validateFmtDur(platform.BenchmarkEstimatedPowerStabilitySec)+`</td></tr>
</table>
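The `validateFmtDur` helper that renders these cells is not part of this diff, so its exact behavior is unknown. A plausible sketch, assuming it takes an estimate in seconds and formats a compact minutes/hours label for the dropdown and table (the name `fmtDur` and the output format here are illustrative):

```go
package main

import "fmt"

// fmtDur sketches what validateFmtDur might do: turn an estimated
// duration in seconds into a compact "~N min" / "~Nh Nm" label.
// The real helper is not shown in this diff; format is an assumption.
func fmtDur(sec int) string {
	min := (sec + 30) / 60 // round to the nearest minute
	if min < 60 {
		return fmt.Sprintf("~%d min", min)
	}
	return fmt.Sprintf("~%dh %02dm", min/60, min%60)
}
```

With the commit-message figures, this would render the Standard Perf run (960 s) as "~16 min" and the Stability Perf run (5520 s) as "~1h 32m".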
-<p style="font-size:12px;color:var(--muted);margin-top:10px">Use ramp-up mode for capacity work: it creates 1 GPU → 2 GPU → … → all selected steps so analysis software can derive server total score and watts-per-GPU curves.</p>
+<p style="font-size:12px;color:var(--muted);margin-top:10px">Timings are per full ramp-up run (1 GPU → all selected), measured on 4–8 GPU servers. Use ramp-up mode for capacity work: it creates 1 GPU → 2 GPU → … → all selected steps so analysis software can derive server total score and watts-per-GPU curves.</p>
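The ramp-up behavior described above is simple to state in code. A sketch under the stated assumptions (the function name is illustrative, not from the diff): selecting N GPUs yields one run per GPU count, 1 → 2 → … → N, so analysis software can fit watts-per-GPU curves across the steps.

```go
package main

// rampUpSteps returns the GPU counts a ramp-up run steps through:
// one run at 1 GPU, then 2, and so on up to all selected GPUs.
// Illustrative sketch; the real step-generation code is not in this diff.
func rampUpSteps(selected int) []int {
	steps := make([]int, 0, selected)
	for n := 1; n <= selected; n++ {
		steps = append(steps, n)
	}
	return steps
}
```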
<button type="button" class="btn btn-primary" onclick="runAllBurnTasks()">Burn one by one</button>
-<p>Run checked tests one by one. Tests run without cooldown. Each test duration is determined by the Burn Profile. Total test duration is the sum of all selected tests multiplied by the Burn Profile duration.</p>
+<p>Runs checked tests as separate sequential tasks. In sequential GPU mode, total time = profile duration × N GPU. In parallel mode, all selected GPUs burn simultaneously for one profile duration.</p>
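The total-duration rule in the new description reduces to one branch. A minimal sketch (names are illustrative, not from the diff): sequential mode burns each selected GPU in turn for the full profile duration, so total time scales with GPU count, while parallel mode burns all GPUs at once for a single profile duration.

```go
package main

// totalBurnSeconds applies the rule from the updated burn description:
// sequential mode = profile duration × GPU count; parallel mode = one
// profile duration regardless of GPU count. Illustrative sketch only.
func totalBurnSeconds(profileSec, numGPUs int, parallel bool) int {
	if parallel {
		return profileSec
	}
	return profileSec * numGPUs
}
```

For example, an 8-GPU server with a 10-minute profile burns for 80 minutes sequentially but only 10 minutes in parallel, which is what the "N min/GPU (sequential) or N min (parallel)" preset labels convey.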