Problem
The `span` implementation creates ranges with a step size of 1.0 for floating-point data. This leads to scenarios where, for data with minimum -2.73 and maximum 22.75, `span(x)` returns `-2.73:1.0:22.27`, which includes the minimum but not the correct maximum value.
I know this is the expected result of `range(-2.73, 22.75)`, which `span` uses, but I'm wondering whether it is appropriate for most (statistical) use cases. Before happening upon this and having a look at `span(x)`'s implementation, I assumed span referred to the statistical range $T$, as in $T=\max(x_1,x_2,\ldots,x_n)-\min(x_1,x_2,\ldots,x_n)$.
Even if that is not the intended meaning, I think `span` not returning the correct maximum value is a potential source of confusion and/or trouble for users.
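To make the mismatch concrete, here is a minimal sketch, assuming `span(x)` effectively computes `range(minimum(x), maximum(x))` with the default step of 1 (as described above); the vector `x` is hypothetical example data chosen to match the extrema mentioned earlier:

```julia
# Hypothetical data whose extrema match the example above.
x = [-2.73, 4.1, 22.75]

# What span(x) effectively computes: a range with step 1.0.
r = range(minimum(x), maximum(x))   # -2.73:1.0:22.27
last(r)                             # 22.27, which falls short of maximum(x) = 22.75

# The statistical range T = max(x) - min(x) one might expect instead:
T = maximum(x) - minimum(x)         # 25.48
```

The gap arises because a step-1 range can only end at `start + k*step` for integer `k`, so the stop value is reached exactly only when `stop - start` happens to be an integer.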