Rather than a tricky loop that should be avoided lest we get stuck, I prefer to think of it as a process of iterative refinement. Revising ideas, methods, and thoughts is a useful process and should not have a hard endpoint. Models are about something. When they’re about real-world phenomena, they need to be informed by real-world observations. This does not mean you chuck the model the first time the tiniest fleck of recalcitrant data lands in your lap.
Math with no connection to anything in the world of experience is a purely intellectual exercise; a valuable and diverting exercise, to be sure. Real-world observations devoid of theory and math are a purely practical exercise; also valuable and diverting. What’s wrong with liking both? And letting each inform and revise the other?
I have to disagree that real-world tests are not repeatable. They are, in fact, repeatable, just with varying degrees of error. What we want is testability, and we need repeatability to get it, with results showing effects in excess of the margin of error to be valid, right? Here’s an example:
I’ve got two “aero” forks. One is the Acme Forkotronic, the other is the Yoyodyne Motherforker. My hypothesis is that there is NO performance difference between the two. To test this, I take several highly trained riders, build several pairs of bikes identical except for the forks, take them to a velodrome over the course of several days, and have them each ride several trials over a given distance, alternating bikes. You’re absolutely right that conditions are far from identical for these tests. However, suppose I find that in every single trial the Forkotronic times are lower? Can this be explained entirely by margin of error (or random factors, or whatever you want to call it)? Yes, it can. However, I crunch the numbers anyway and end up with, say, a 99%+ confidence level that my hypothesis (that there is no performance difference) is false. So… have I proven the Forkotronic is better performing? Nope. But I’ve shown that it probably is better. Furthermore, and more germane to the original discussion, if you go waving each fork around by itself in a wind tunnel and announce the Motherforker is better, I’ve got some credible alternate “real-world” observations (using a repeatable methodology) with which I can reasonably disagree.
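Just to make the “crunch the numbers” step concrete, here’s a minimal sketch in Python of one way to do it, as a simple sign test. To be clear, this is my own illustration, not anything from the original discussion: the fork names come from my example above, and the trial data and counts are entirely made up.

    # A rough sketch of the "crunch the numbers" step as a simple
    # sign test. All times here are hypothetical, invented purely
    # for illustration.
    from scipy.stats import binomtest

    # Hypothetical paired times in seconds over the same distance,
    # same rider, alternating bikes: (Forkotronic, Motherforker).
    trials = [
        (212.4, 213.1), (211.8, 212.9), (213.0, 213.6),
        (210.9, 212.2), (212.7, 213.4), (211.5, 212.0),
        (212.1, 213.3), (213.2, 214.0), (211.0, 211.9),
        (212.6, 213.1), (211.7, 212.8), (212.3, 213.5),
    ]

    # Count the trials where the Forkotronic time was lower.
    wins = sum(1 for forkotronic, motherforker in trials
               if forkotronic < motherforker)

    # Under the "no performance difference" hypothesis, each fork
    # should post the lower time with probability 0.5. The sign test
    # asks how surprising this many wins would be if that were true.
    result = binomtest(wins, n=len(trials), p=0.5)

    print(f"Forkotronic faster in {wins} of {len(trials)} trials")
    print(f"p-value under the no-difference hypothesis: {result.pvalue:.5f}")

With the Forkotronic winning all 12 of those invented trials, the p-value comes out around 0.0005, i.e. you’d reject the “no difference” hypothesis with better than 99% confidence. A real analysis would probably use the actual time differences (a paired t-test, say) rather than just wins and losses, but the sign test keeps the logic bare.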
Anyway, I think velodrome tests would be useful (a) as a counterpoint to wind-tunnel tests and (b) as a way to help extend and continue, not end, the discussion on the theory side. Both good things in my opinion.
And finally, I think your points were well-taken until this:
Meanwhile, the other guy has been on his bike and getting stronger despite his complete disregard for aero-ness. True, information is your friend, but if its a debate you’re after, there are some pretty good ones at roadbikereview
This notion that anyone involved in a discussion is somehow doing so at the expense of training (instead of riding or whatever) seems to pop up in almost every theory debate around here. It’s completely fatuous. Frustratingly so. If this mythical “other guy” has no intellectual life at all, and scampers back onto the saddle anytime life presents him with a stimulating conversation, I don’t want to be that guy. I’m a well-rounded renaissance man after all (a little too round, maybe, but still). Sheesh!
And no, I don’t like debate for the sake of debate, so I’m not going to read archived Shimano vs. Campy debates just to see people arguing. I like debate on topics I find interesting, ideally debate that challenges my assumptions and teaches me something, and, sometimes, lets me get the gratification of contributing in a positive manner as well.
Great googly moogly, I think I need a coffee break or something.