ericMPro wrote:
Back on topic…
I’ve been doing some A/B testing and A/B/C testing of long sleeve cycling jerseys and I’ve found one interesting result.
Caveat: I test with a Notio. The raw CdA is slightly inconsistent, but the delta is always the same. I’ve averaged, thrown out the highs and lows, thrown out only the outliers, etc., and they all produce the same delta. I’ve got dozens of days of testing over two or three months.
I usually do six 5-minute runs of each jersey, 12 runs total. Within each run I snip off the few seconds at the start where I’m getting up to speed and the few seconds at the end when I’m slowing down.
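To make the averaging concrete, here’s a quick sketch (made-up CdA values for illustration, not Eric’s actual data) of comparing per-run CdA between two jerseys with a plain mean, a trimmed mean that drops the high and low runs, and a median. If the delta survives all three summaries, it’s probably not being driven by one bad run:

```python
# Sketch: outlier-robust jersey deltas. The run values below are
# invented for illustration; each list has one noisy outlier.
import statistics

jersey_a = [0.310, 0.308, 0.312, 0.309, 0.331, 0.307]
jersey_b = [0.358, 0.361, 0.357, 0.360, 0.359, 0.342]

def trimmed_mean(runs):
    """Drop the single highest and lowest run, average the rest."""
    s = sorted(runs)
    return statistics.mean(s[1:-1])

delta_mean = statistics.mean(jersey_b) - statistics.mean(jersey_a)
delta_trim = trimmed_mean(jersey_b) - trimmed_mean(jersey_a)
delta_med = statistics.median(jersey_b) - statistics.median(jersey_a)

print(f"plain mean delta:   {delta_mean:.4f}")
print(f"trimmed mean delta: {delta_trim:.4f}")
print(f"median delta:       {delta_med:.4f}")
```

The point is just that if a single noisy run were driving the delta, the trimmed and median versions would disagree with the plain mean.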
I have found a random “aero” jersey that is consistently .050 CdA faster than a totally “smooth” jersey, the kind clubs use because the fabric is good for graphics and sublimation.
Anybody ever heard of fabric numbers like these, or see holes in my method?
E
Eric shared his files with me. Lots of interesting stuff in there.
I’m using a generic version of GC that I compiled from the open-source code base to tinker with. Eric’s data is structured in a way that the generic version of GC can read: CdA, wind… it’s all there. You don’t have their formulas, but you do have the results of the formulas.
I saw what Eric was seeing: a big difference in the calculated CdA between the three jerseys. Even for a single jersey there was a very significant swing in CdA, but one jersey was clearly faster than the other two. Eric did 4x each jersey, 12 tests.
At first the data seemed wrong. Power was really low, like 75 watts. I was sure this was a bug, since I’d always imagined Eric as a 400-watt guy. He confirmed he was riding super easy, in a super relaxed position. When I looked at the data again, I was impressed by how consistent things were across the 12 tests.
The first problem was pretty easy to spot. Eric is riding on a flat 400 m track, but the calculated altitude was undulating slightly. It’s pretty easy to trace back to the source of the error, i.e. the barometer. This is pretty normal: barometers are much more subject to atmospheric disturbances. We’ll come back to this.
You see that each test ends with either a net gain or a net loss in altitude, and this gain or loss is inconsistent from test to test: one test would end with a net 1 m gain, another with a 1 m loss. It should be 0, since he’s on a flat velodrome (you will see it in the pictures attached). 1 m on a test like that is about 3.5-4 watts, so -1 m to +1 m is an 8-watt difference. When the “air component” is in the 50-watt range, 8 watts is a BIG source of error.
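To put numbers on that, here’s a minimal sketch of the watts implied by a spurious net elevation change. A net gain of h metres over a run of t seconds means the model thinks m·g·h joules of extra work was done. The mass and duration below are assumptions I picked, not Eric’s actual figures:

```python
# Sketch: power error implied by a spurious net elevation change.
G = 9.80665          # gravitational acceleration, m/s^2

def elevation_power_error(mass_kg, net_gain_m, duration_s):
    """Watts wrongly attributed to climbing (negative for a net loss)."""
    return mass_kg * G * net_gain_m / duration_s

mass = 100.0         # rider + bike, kg (assumed)
t = 300.0            # 5-minute run, seconds

for h in (-1.0, +1.0):
    print(f"net {h:+.0f} m -> {elevation_power_error(mass, h, t):+.2f} W")
```

With a ~100 kg system over 5 minutes, ±1 m comes out around ±3.3 W, in the same ballpark as the 3.5-4 W per metre above; the exact value scales with system mass and run length.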
Had he been putting out more power, the error would have been diluted. Then again, had atmospheric conditions been more difficult, he would have gotten more error; the wind was pretty calm compared to other real-world situations.
This is an example where a plain barometer will not do as well as some more elegant solutions. It’s also an example where, if you want real-time/instantaneous CdA on terrain other than a velodrome, you need more than a barometer, or you’d better have a way to correct the barometer. This is not a phenomenon specific to any one device. The need for accurate altitude is badly underappreciated.
Yes, protocols such as out-and-backs with 0 net elevation change help, if the software accounts for this. But with such a protocol you would still need accurate elevation to get accurate CdA at intermediate points. Protocols can compensate for poor data, but the better the data, the less need for protocols and the more you can pinpoint changes.
At one point there was a feature in their GC to zero the altitude. If it’s still there, that is one way to eliminate the error if you are on a velodrome. Too bad we don’t all have velodromes :-) Even then, when only 50-60-ish watts are allocated to the aero component, you will get big error bars. The bigger the aero component, the better. But the devices need to provide accuracy at lower speeds as well, especially when climbing, or when you are using varying speeds to try to separate CdA and Crr.
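Rough arithmetic on why a small aero component means big error bars: at a fixed speed and air density, CdA scales linearly with aero power, so a watt error maps straight into a relative CdA error. The ±4 W figure below is just an example value:

```python
# Sketch: how a fixed power error translates to CdA error at
# different aero-power levels. The 4 W error is an assumed example.
def cda_relative_error(aero_power_w, power_error_w):
    """At fixed speed and air density, CdA is proportional to aero
    power, so a power error becomes the same relative CdA error."""
    return power_error_w / aero_power_w

for aero in (55.0, 150.0, 300.0):
    err = cda_relative_error(aero, 4.0)
    print(f"aero {aero:>5.0f} W, +/-4 W error -> +/-{err:.1%} CdA")
```

At 55 W of aero power a 4 W artifact is roughly a 7% swing in CdA; at 300 W the same artifact is close to 1%, which is the dilution effect mentioned above.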
In the end, the ordering of the jerseys (fastest to slowest) remains the same, but with a lot less noise and a lot more certainty in the results.
What is really interesting is that when you put the data into Aerolab’s virtual elevation, i.e. the Chung method, you get really good results. You can also see blips in the data; they were mostly due to uneven wind, which you can see in the data.
Attached are two tests: green is the device elevation, white is the virtual elevation/Chung method. You see a +1 m error in one case and a -1 m error in the other. The green line should follow the white line.
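For anyone who hasn’t played with Aerolab, here’s a stripped-down sketch of the virtual-elevation idea (simplified: still air, constant density, no drivetrain losses, and every constant below is an assumption, not Eric’s data). You guess CdA and Crr, back out the slope each power/speed sample implies, and integrate it. On a flat velodrome the right CdA makes the virtual elevation come out flat; a wrong guess drifts:

```python
# Minimal virtual-elevation (Chung method) sketch. Simplified model:
# still air, constant density, no drivetrain losses. Constants assumed.
G = 9.80665     # m/s^2
RHO = 1.2       # air density, kg/m^3 (assumed)
MASS = 85.0     # rider + bike, kg (assumed)
CRR = 0.004     # rolling resistance coefficient (assumed)

def virtual_elevation(power_w, speed_ms, dt_s, cda):
    """Integrate the slope implied by each power/speed sample."""
    h = [0.0]
    for i in range(1, len(speed_ms)):
        v = speed_ms[i]
        a = (speed_ms[i] - speed_ms[i - 1]) / dt_s   # acceleration
        aero = 0.5 * RHO * cda * v * v                # drag force, N
        # slope = leftover force after rolling, aero, and acceleration
        slope = (power_w[i] / (v * MASS * G)
                 - CRR - a / G - aero / (MASS * G))
        h.append(h[-1] + slope * v * dt_s)
    return h

# Synthetic flat-track data: steady 11.1 m/s (40 km/h) at exactly the
# power that balances drag + rolling for CdA = 0.30.
v = [11.1] * 60
p = [(0.5 * RHO * 0.30 * 11.1 ** 2 + CRR * MASS * G) * 11.1] * 60
flat = virtual_elevation(p, v, 1.0, cda=0.30)
drift = virtual_elevation(p, v, 1.0, cda=0.25)
print(f"correct CdA net: {flat[-1]:+.2f} m, wrong CdA net: {drift[-1]:+.2f} m")
```

With the correct CdA the virtual elevation stays flat; guessing too low makes it “climb” by a few metres over a minute, which is exactly the kind of drift you hunt down when fitting CdA in Aerolab.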
We have to get Eric to repeat at his regular 400w level (with tufts and a drone)