Correct, there are no data files available, nor any comparison between units on the same runs.
The challenge with dividing up the data is that the course itself appears to be only 2-3K, so when you subdivide it into dozens (or more) of smaller sections, the per-section sample sizes start getting into questionable territory. Then, the charts imply really high counts, but not actual miles run (or even total runs). So how many ‘values’ does a single 2-3K run actually produce? 100? 200? 15?
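For a rough sense of scale, here’s a back-of-envelope sketch. The recording rate, pace, and section count are entirely my own assumptions for illustration, not anything documented in his write-ups:

```python
# Back-of-envelope estimate (assumed numbers, not the tester's actual
# methodology): how many GPS track points does one short run yield,
# and how thin does the data get once the course is subdivided?

def points_per_run(course_km, pace_min_per_km, record_interval_s=1.0):
    """GPS points recorded on one run, assuming fixed-interval recording."""
    duration_s = course_km * pace_min_per_km * 60
    return int(duration_s / record_interval_s)

def points_per_section(course_km, pace_min_per_km, num_sections):
    """Points left in each section after subdividing the course."""
    return points_per_run(course_km, pace_min_per_km) // num_sections

# Hypothetical: a 3 km course at 6 min/km pace, 1-second recording,
# split into 36 sections.
total = points_per_run(3, 6)                 # points for the whole run
per_section = points_per_section(3, 6, 36)   # points per section
print(total, per_section)
```

Under those assumptions a whole run is only about a thousand points, and each subdivided section gets a few dozen, which is why the headline ‘value’ counts can look far more impressive than the underlying number of runs.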
It’s one of the reasons that in many of my reviews I include data from multiple units on a single run/ride and let folks go to town. If you’re presenting the data - then actually present the data itself, not just an interpretation of it.
Finally, as for footpods: why add a footpod for the 910XT but not the 620 and Fenix2? I guess that’s sorta my point. One ends up comparing it against a value that isn’t like-for-like. I have no problem showing the strength of a footpod, but since the tests are “GPS accuracy” and not “footpod accuracy”, it’s a bit misleading.
Don’t get me wrong - I think he does interesting work, but I just don’t like that so much of it is a bit of a black box, and many people mistakenly believe he’s looking at thousands of runs.