The results of an nvbench benchmark are often used to compare two implementations of a kernel and decide which is "faster".
In general, this is a complex question to answer: performance data is rarely normally distributed, so approaches that simply compare means are not statistically robust.
Ideally, one could take two collections of performance samples collected from nvbench benchmarks and determine which is faster according to more statistically rigorous comparison criteria, as sketched below.
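As an illustration only (not nvbench's actual comparison method), the sketch below shows one such rigorous criterion: a one-sided Mann-Whitney U test, which makes no normality assumption. The function name `is_faster` and the sample data are hypothetical; how the per-sample runtimes are extracted from nvbench output is left out.

```python
# Minimal sketch: compare two collections of per-sample runtimes (in seconds)
# with a nonparametric test instead of comparing means.
import numpy as np
from scipy.stats import mannwhitneyu


def is_faster(samples_a, samples_b, alpha=0.05):
    """Return True if implementation A is faster than B at significance level alpha.

    Uses a one-sided Mann-Whitney U test with the alternative hypothesis that
    A's runtimes are stochastically smaller than B's.
    """
    a = np.asarray(samples_a, dtype=float)
    b = np.asarray(samples_b, dtype=float)
    _, p_value = mannwhitneyu(a, b, alternative="less")
    return p_value < alpha


# Example usage with made-up sample data:
baseline = [1.02e-3, 1.05e-3, 1.01e-3, 1.30e-3, 1.04e-3]   # kernel A runtimes (s)
candidate = [0.98e-3, 0.97e-3, 1.00e-3, 0.99e-3, 1.25e-3]  # kernel B runtimes (s)
print(is_faster(candidate, baseline))
```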