The challenge of aero testing bike vs bike
Working a bit off the Ventum thread, but not wanting this to be about Ventum specifically, I wanted to address just how difficult it is to compare one frame vs another, and how you should be highly suspicious of any numbers put out by manufacturers and/or publications. Not because they're lying to you; to the contrary, I think most manufacturers do the best they can and attempt to put out accurate data, though obviously skewed in favor of their particular product. It's just there are so many variables that need to be controlled which can all have a significant impact on results, and it's virtually impossible, both physically and fiscally, to cover them all. Let me explain (this is going to be a little long...hang in there and follow if you can):

First, if all you want to do is test frame vs frame, with nothing else attached, fine. Have at it, test away. The numbers don't mean much to me, but post them anyway for the world to see. But let's make it more realistic, shall we?

Let's add aero bars into the equation. This will be your first big problem because you've just introduced a boat-load of variables:
  1. What aero bar are you adding? Proprietary to your bike? That's okay, but what do you put on the other frames? If they have proprietary bars as well, that solves one problem but adds others. If they don't have proprietary bars, who chooses which set of bars you put on the other frames? Let's just stipulate to OEM because it's easiest. Just make sure it's an OEM bar at an equivalent price point for every bike tested so we're being fair. After all, the aero bars that come on the P2 vs the P3 are different; same frame dimensions, different aero bars with likely different drag numbers.
  2. How will you ensure the aero bars are in the exact same position on all bikes? Good luck with this one, because in order for the results to be meaningful, these numbers need to be replicated. Is the pad stack and reach exactly the same? To the millimeter? What about the extensions? Will you replicate those, or are we going OEM? I guess we can go OEM, but I'd prefer the same extensions, and we have to have the pads, extensions, and base bar all at the same angle to be fair (you could argue base bar angle here...I wouldn't give you a lot of push back on that). Good luck to anyone taking on that challenge.
  3. Let's say you can match the above. How are you going to do it? For pad stack, will you put spacers under the stem, or do the aero bars allow you to stack spacers under the extensions/pads? Most agree that the latter is better than the former. For pad reach, how do you achieve this? Are you simply moving the pads fore/aft? Are you using a longer or shorter stem? A combination of both, perhaps? Are you using a similarly angled stem to your bike? This makes a difference.
There are many other variables I didn't address with aero bars, but you get the point, so let's move on to saddle...
  1. I think it's fair to stipulate to OEM saddles. Unless, of course, your bike doesn't come with an OEM saddle, in which case you're free to choose whatever you want and, trust me, without a rider aboard, some saddles will test faster than others.
  2. Who chooses saddle height? Better make sure it's exact to the millimeter.
  3. Who chooses saddle angle? It absolutely needs to be exact across the board, but can be highly deceptive based on the make/model.
  4. Fore/aft position of the saddle? Another variable which must be matched.
We can all come up with more variables for the saddle, but let's move on to drivetrain...
  1. Are we matching drivetrains? Again, let's just stipulate to OEM spec at matching price points to make it easier.
  2. Who does the cabling? This is where it gets sticky. The simplest solution is to go with no cabling; however, many bikes are fast partly because they handle cabling so well, so this should be part of the equation. We need this to be fair, though, and trying to be fair when it comes to cable/housing length and routing is something easily overlooked/manipulated. Again, when it matters, everything must be exact bike to bike.
  3. When you test, are all bikes set in the same gear? Are the cranks set at the exact same angle? To the degree? Let's not even deal with pedals.
No, I haven't forgotten wheels. Can we all just agree to put Zipp 808's on board? And let's use the same wheels on every bike making sure the tires are pumped to the exact same pressures. I don't mean the same make and model of wheels. I mean let's use one pair of wheels and move them from bike to bike. Okay? Good, that's a huge headache out of the way. I know, I know, some frames will test faster with one wheel vs another, but we have to make something in this process easy.


Now then, let's just for the hell of it say we can do everything listed above, and all the other things we can all think of I don't have the patience to list. Let's just live in our little happy world for a moment and say we can do it to everyone's satisfaction. Ah, yes, life is good! And then we put a rider aboard. Well, shit.


This is where it all falls apart, boys and girls. Placing an athlete on a bike for aero testing just added a virtually uncontrollable variable. But wait, you say, everything is the same, so the athlete is in the same position, right? Negative, Ghostrider...it ain't gonna happen. Trust me when I tell you this, because I'm pretty sure there aren't many people who've aero tested athletes on bikes more than me...there is no frickin' way you're going to make sure that athlete is in the same position from test to test. No way you can ensure the clothing on that athlete is set exactly the same - every wrinkle in the same place, every seam rotated on the body exactly as in every other test. There's no way you can ensure the helmet the athlete is wearing doesn't move even one degree from test to test, or that the athlete's head angle is exactly the same test to test. No way.


Let me give you an example. I've been conducting a lot of independent aero testing of late for a new web site we're about to launch. I, myself, was a test rider working on arm angles, and in the middle of multiple tests purposely scooted approx 1cm forward on the arm pads creating a bit more reach. Nothing on the bike changed...this was mid-test. Most people wouldn't even be able to tell I moved. Still, the difference in drag was substantial every time - about 5.5 to 8 watts. A small movement made a big difference EVERY TIME I tested. Want to really skew a test? Try shrugging in one test just a bit more than the others. You just changed the results substantially. Look up just that little bit more. Create a little wrinkle on the shoulders of your skin suit. Bam...different results.
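For a sense of scale, a watts delta like that can be translated into CdA terms using the standard aero power relation. The speed and air density below are my own assumptions for illustration, not ERO's test conditions:

```python
# Rough check (assumed conditions, not ERO's protocol): convert a
# drag-power delta into a CdA delta.
# P_aero = 0.5 * rho * CdA * v**3, so dCdA = dP / (0.5 * rho * v**3)

RHO = 1.2   # kg/m^3, assumed air density
V = 11.1    # m/s (~40 km/h), assumed test speed

def watts_to_cda(delta_watts, rho=RHO, v=V):
    """CdA change implied by a given aero-power change at speed v."""
    return delta_watts / (0.5 * rho * v**3)

for dw in (5.5, 8.0):
    print(f"{dw:>4} W  ->  dCdA ~ {watts_to_cda(dw):.4f} m^2")
```

At these assumed conditions, a 5.5 to 8 watt swing corresponds to roughly 0.007 to 0.010 m² of CdA, which is larger than most published frame-vs-frame differences.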


Oh, but you point out, we have a solution for this! Cervelo's famous "Dave" mannequin will save the day for us! Will it? Not likely. From the tests I saw with Dave, it appeared to me pretty much impossible to put him on each bike identically from test to test. Specifically, arm and hand position one bike to another. Now a lot of this was because all of the above could not be controlled. That's fair, but then why test? But, just for the sake of argument, let's say we can absolutely, without doubt, ensure Dave is in the same position and we can, therefore, test to our satisfaction. Will that do it? Maybe.


Here's the part that might fry you. If you're going to test and show me the results, I want a few things from you.
  1. Multiple tests/sweeps. The more I test, the more I'm convinced that one test on the velodrome or one sweep in the tunnel just doesn't do it for me. I want multiple tests; three will do as long as the results are consistent, but what the layperson might not understand about aero testing is that the results aren't always consistent from test to test or sweep to sweep. You need to test multiple times to really get your answer. The differences aren't necessarily significant (though they certainly can be), but they rarely match. In fact, you want to see us geek out at ERO? Watch us when a rider completes two laps matching to the fourth decimal place! That's exciting stuff for us.
  2. Show me the results! Not just the ones you want me to see. As a consumer, do you know what results you're seeing? How do you know that a particular manufacturer doesn't take their lowest result and compare it with everyone else's highest results? Every test will yield a slightly different number, so how do you know which one you're being provided? If a manufacturer chooses to give you an average of all runs, I'm okay with that. In fact, I applaud it, but I still want all the data to see how those averages came to be.

Aero testing isn't easy. The more variables you introduce, the more you open yourself up to inaccurate results. What I'm trying to point out is, especially when comparing bike-to-bike, while the numbers are important, they can be unintentionally deceiving, when the most important thing is whether or not a particular bike will allow you to attain your optimal position. Do you know your optimal position? I bet most don't. I'm lucky enough to work with some of the best age group and pro athletes in this sport, and I can tell you the majority are nowhere near optimal when they come here. Position is, BY FAR, the most important piece of the aero puzzle. After position, do you know if your wheels, helmet, and clothing are all optimal? We see larger gains from all of these than the differences between most of these "super bikes." Even your hydration/nutrition setup will often trump the differences between all these frames. Keep it in perspective: think of your aerodynamics as the whole of your setup, not just several individual pieces.


Lastly, as a consumer, if you're going to put weight on the numbers manufacturers put out there, demand to see the data. All the data! Long, long post. I apologize for the inevitable grammar and spelling mistakes. I'll try to answer as many questions as possible, but I can't always track this forum as I actually work for a living! :-)

Jim Manton / ERO Sports
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Exec Summary:- "F*8cking variables.." and believe no one...

Good post cheers ;)
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Awesome post. This hits almost all the points I think about whenever I research a particular bike, or aerobar, or wheelset.
Which is why it surprises me that whenever you say the Speed Concept and Shiv TT consistently test faster in your aero testing, other people invariably make comments about other bikes being faster. The fact that EROsports and similar nonaligned testers are conducting multiple tests (hundreds? maybe more?) on multiple individuals and can distill the general findings into things the average triathlete can absorb is a huge deal.
Questions:
1. How much of your data can you show to us triathlete researchers? I presume individual tests are obviously private, but are you allowed to and do you actually compile data regarding frames/aerobars/wheels/helmets etc and could you possibly make even a very rough graph/spreadsheet/estimate available? It would make life so much easier for many.
  2. How many data points do you need in your personal testing before you are comfortable forming and voicing an opinion? I'm looking at Dimond, Falco, Ventum (probably no data yet!), Tririg Alpha, single chainring, and other less represented but hopefully frontline things that we lust for but have no objective data on other than manufacturer tests. An opinion from a nonaligned tester would be significant.
3. And lastly, as a tangent off the "Why aren't you aero tested" thread, is there a possibility of a test site having representative frames (S,M and L) of perhaps 4 top manufacturers and a selection of aerobars, wheelsets, helmets etc so that all one needs is to go there, get tested and come out with a clear view of best frame/aerobar/position etc for him- or herself? This would simplify so many things!
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Jim, thanks for your (detailed) explanations. I mostly agree -- though I'd add one more point. You've come to realize these things because the tool you're using to measure turns out to be pretty sensitive, so you can see the effect of tiny differences in testing situations. If you'd been using a blunt tool you may never have achieved the same level of understanding.
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
That is a fantastic articulation of why I dislike any cycling product media that claims Xs faster for Y frame/wheels/bars etc. For me, testing standalone parts or bikes without a rider (who is actually riding the bike) seems pointless. I think the UKSI bikes highlight a need for frames and forks to be matched to the intended wheel of choice and the intended riding velocity, and I would like to see bike companies placing less weight on selling frames and more on selling integrated systems where everything has been designed together.

Iain

Training Full Time in 2015: http://www.triopensource.com
http://www.facebook.com/iaingillamracing http://www.twitter.com/iaingillam
https://www.youtube.com/...9JYCrOLP34Qtgp5w1WsA

Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Love this post. I was getting pretty frustrated reading a thread the other week comparing the old vs new P3, where people were advising on the purchase choice talking purely about frame aerodynamics. The bikes don't have anything like the same geometry! If you can't get the lump of flesh sitting on the thing into the best position, the frame aerodynamics might not matter a damn!

Thank you for posting.
Re: The challenge of aero testing bike vs bike [ In reply to ]
I have trouble believing that such a thing as objectivity exists. The seller of any kind of product, or even just an opinion, is inherently biased, me included.


Aero testing in a velodrome also suffers from many variables that need to be controlled; some can be controlled more easily than others, some have to be estimated, and one hopes they don't vary much over time or across different setups.
From my experience with velodrome testing, the biggest and most difficult variable to control is the cyclist himself! My best test subjects, who are able to hold the black line and visibly hold their positions for an entire run, reach a one-sigma round-to-round variation in CdA of ~0.002 m², and in really rare cases 0.001 m² (on those rare days when everything is perfect), i.e. roughly 0.5-1% relative variation in watts for a given speed and environment. But other riders show much bigger variation of up to about 0.01 m² (by the way, this was a top age-group triathlete already on the podium in Kona; the level of an athlete is generally not well correlated with the quality of the test results!).

Finally, a provocative comment for the ordinary ST reader. If you have fun exploring "aero field testing," do it, but if you want to become faster in a predictable way, don't waste time with it! Time is money! Aero field testing has variables with a big impact on drag that even the experts have major problems controlling. Or do you want to wait for that windless night in June next year?
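The 0.5-1% figure above is easy to sanity-check against the aero power relation P = 0.5 * rho * CdA * v^3. The speed, density, and CdA below are assumed round numbers for illustration, not this poster's data:

```python
# Sanity check (assumed numbers): how big is a 0.002 m^2 CdA wobble
# relative to total aero power at a typical velodrome test speed?

RHO = 1.2    # kg/m^3, assumed air density
V = 12.5     # m/s (45 km/h), assumed speed
CDA = 0.22   # m^2, assumed rider CdA

aero_power = 0.5 * RHO * CDA * V**3    # watts needed to overcome drag
delta = 0.5 * RHO * 0.002 * V**3       # watts for a dCdA of 0.002 m^2
print(f"{delta:.1f} W of {aero_power:.0f} W = {100 * delta / aero_power:.1f}%")
```

At these assumptions the 0.002 m² wobble is a bit over 2 watts, about 0.9% of the aero power, which lands inside the quoted 0.5-1% range.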
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
For me, where I work, multiple runs collecting the data are always required in order for the data to be statistically significant.

Runs also have to not be done consecutively with the same set up. You have to tear it down and put it back together for each run.

Typically that means doing the exact same tests on different days with no test days between the tests.

We've also found it's a good idea to bring in an "equally" qualified crew of humans to do the set up and take the data.

If you can get repeatable data over several runs, with a clean test setup each time and a clean crew each time, then you "probably" have some data you can trust. Atmospheric conditions on different days play havoc with the data sometimes. I've learned that dang sun of ours, although it doesn't look it, is hugely variable.
Re: The challenge of aero testing bike vs bike [7401southwick] [ In reply to ]
7401southwick wrote:
For me, where I work, multiple runs collecting the data are always required in order for the data to be statistically significant.

Runs also have to not be done consecutively with the same set up. You have to tear it down and put it back together for each run.

Typically that means doing the exact same tests on different days with no test days between the tests.

We've also found it's a good idea to bring in an "equally" qualified crew of humans to do the set up and take the data.



I agree with you, but things are not always perfect, and more often than not one has to live with the uncertainty of a limited number of tests.



7401southwick wrote:
Atmospheric conditions on different days play havoc with the data sometimes. I've learned that dang sun of ours, although it doesn't look it, is hugely variable.

I've had the same experience. A sunny day is not good for low-variance testing. The indoor track I can use has no air conditioning. When it's sunny it heats up considerably during the day, and I suppose that results in added "chaotic" airflow inside the velodrome. How is the airflow inside an air-conditioned velodrome?
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Great post, Jim. You give excellent examples of how hard it is to do comparative aero testing. And how easy it is to do seemingly fair testing but skew the results in the direction that you want. I hope everyone at ST will read your post and stop believing it when ABC bike, or aero-bar, or wheel, or disc-cover publishes a test where their product is always the fastest, and not by a small margin. Really guys, every time, the fastest overall by a huge margin. And the ST readers fall for it every time.

Good luck with your new website. I look forward to seeing it.

---------------------------
''Sweeney - you can both crush your AG *and* cruise in dead last!! 😂 '' Murphy's Law
Re: The challenge of aero testing bike vs bike [aahydraa] [ In reply to ]
I'm going to address some of aahydraa's questions, which will take care of many comments, and then answer more under one post here. After that, it's off to a day of fitting and aero testing!
  1. I was surprised my post about the Speed Concept and Shiv TT didn't create more controversy. It wasn't completely fair to state the lowest CdA's we see come from these two bikes, perhaps, but it's just an accumulation of all the testing we've done. People who come to test with these bikes typically come away with low CdA's. I believe there are a few reasons for this. Real quick on the Shiv, it just allows you to get really low. I wish there were more adjustment on the front end, but it's a well designed frame that allows you to get low and produce low CdA's. Not necessarily optimal for Tri, but for TT, if it fits you, it's a great bike. For the Speed Concept...well, it checks off all the boxes. You can get pretty darn low, pretty darn long, and you can angle the extensions and the pads together. It's just a very aero bike that has a great deal of adjustment for positioning, and also has well thought-out integration. Don't get me wrong, there are some things I don't like, but I have fewer items on my wish list for it than any other bike.
  2. Within the next couple of weeks, we're going to launch ERO Insight, which is a web site where we will begin posting all our aero data for anyone to see. We'll conduct weekly independent tests, and discount a client's aero tests if they allow us to add their data to the collective (not under their name). While it won't be ready in time for launch, we're building a very robust searchable database breaking variables down as much as possible. It's a huge undertaking, but it's going well, and should be fun. The manufacturers we've spoken to are very excited about it and believe it will help them with their own numbers and lower their aero testing costs. Sort of leveling the playing field for everyone. Besides the main testing utilizing the Alphamantis system, we'll be making quarterly trips to the wind tunnel (a well-respected wind tunnel). Mainly, of course, we want this to be for the consumer, to help them make good decisions and get faster.
  3. There's really no way we can have every product from every manufacturer, and bikes would be absolutely impossible for many, many reasons.
RChung:


The Alphamantis stuff is really amazing and getting better all the time. I learn something every. single. day. This has been an education for all of us. It's not just a catch phrase to say the more we test, the more questions we ask; it's true. We're getting better and better at it, but we'll always be improving our protocols, and the tech will continue to move forward, as well.


Here's the deal. Anyone who gets on a forum like this and proclaims they know what they're talking about just because they've conducted x number of aero tests is not someone I necessarily take seriously. I've now conducted a lot of testing and I can, without a shadow of a doubt, proclaim to you from all my "vast" experience that I'm pretty clueless. But I'm learning!


General comments:


We're lucky that the VELO Sports Center has a climate control system that is world-renowned. Engineering students from all over the world come to study it. The SoCal climate doesn't change much either, which helps. We don't have huge swings in temperature or pressure. Rho (air density) is typically pretty consistent and within a very small range. We check it before every run and adjust to .001.
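For anyone curious what checking rho involves: air density can be estimated from temperature, pressure, and humidity with the ideal gas law plus a vapor-pressure correction. This is a generic textbook approximation (Tetens formula for saturation pressure), not necessarily the exact formula ERO uses:

```python
import math

R_D = 287.05   # J/(kg*K), specific gas constant of dry air
R_V = 461.50   # J/(kg*K), specific gas constant of water vapor

def air_density(temp_c, pressure_pa, rel_humidity):
    """Ideal-gas air density with a simple humidity correction.

    Uses the Tetens saturation-pressure approximation; adequate
    near room temperature, not a metrology-grade formula.
    """
    p_sat = 610.78 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    p_v = rel_humidity * p_sat          # vapor partial pressure
    p_d = pressure_pa - p_v             # dry-air partial pressure
    t_k = temp_c + 273.15
    return p_d / (R_D * t_k) + p_v / (R_V * t_k)

# e.g. 20 C, standard sea-level pressure, 50% relative humidity
print(f"rho = {air_density(20.0, 101325.0, 0.50):.4f} kg/m^3")
```

Note how small the humidity term is: resolving rho to the third decimal place, as described above, requires reasonably accurate temperature and pressure readings but only a rough humidity value.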


Independent testing is going to be key, but even I, a non-retailer, find myself rooting for some products. It's difficult to remain neutral.


Manufacturers are not trying to be deceptive but, again, I hope I've articulated a little about how difficult it is to get good, consistent, meaningful results. If it were easy, everyone would do it.

Jim Manton / ERO Sports
Re: The challenge of aero testing bike vs bike [Sweeney] [ In reply to ]
Sweeney wrote:
I hope everyone at ST will read your post and stop believing it when ABC bike, or aero-bar, or wheel, or disc-cover publishes a test where their product is always the fastest, and not by a small margin. Really guys, every time, the fastest overall by a huge margin. And the ST readers fall for it every time.

But part of this is totally reasonable. Every manufacturer is going to have some protocol or some configuration they want to optimize, so they design their product to optimize that condition. They test all their competitors with their protocol, which gives a baseline of what their product needs to beat. Then they go through the entire design and development phase of their product trying to beat those numbers. So when the final product is done and tested with that protocol, it's no surprise the product beats all the competitors. I don't think it's some deceptive practice, just a fallout of the fact that you need to pick a protocol and stick with it, and to pick the conditions you optimize for at the beginning. What you need as a consumer is a good idea of what the manufacturer's protocol is and what conditions they are optimizing for. Then you need to determine whether that is realistic. Are Trek's yaw assumptions more realistic to you than Cervelo's? That's why things like Trek's white papers are very valuable.
Re: The challenge of aero testing bike vs bike [BergHügi] [ In reply to ]
BergHügi wrote:
My best test subjects, who are able to hold the black line and visibly hold their positions for an entire run, reach a one-sigma round-to-round variation in CdA of ~0.002 m²
And many people fail to understand the implications of that. Suppose you test each setup for 6 laps with an SD of 0.002, what 95% confidence interval does that give you? +/-0.0021. If you want to get your confidence interval down below +/-0.001, you need to do 18 laps per setup. And that is for a level of consistency that you regard as only being exhibited by the best test subjects.
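The arithmetic here is a standard t-interval for the mean of n laps. A quick sketch reproducing the figures above (the t critical values are taken from a standard t-table):

```python
import math

SD = 0.002  # m^2, per-lap CdA standard deviation (best test subjects)

# two-sided 95% t critical values from a standard t-table (df -> t)
T_95 = {5: 2.571, 17: 2.110}

def ci_halfwidth(sd, n_laps):
    """95% confidence half-width for the mean of n_laps independent laps."""
    return T_95[n_laps - 1] * sd / math.sqrt(n_laps)

print(f"6 laps : +/-{ci_halfwidth(SD, 6):.4f} m^2")   # ~0.0021
print(f"18 laps: +/-{ci_halfwidth(SD, 18):.4f} m^2")  # just under 0.001
```

This confirms the numbers quoted: six laps give a half-width of about 0.0021 m², and eighteen laps are needed to get under 0.001 m², assuming each lap is an independent draw.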
Re: The challenge of aero testing bike vs bike [RChung] [ In reply to ]
RChung wrote:
Jim, thanks for your (detailed) explanations. I mostly agree -- though I'd add one more point. You've come to realize these things because the tool you're using to measure turns out to be pretty sensitive, so you can see the effect of tiny differences in testing situations. If you'd been using a blunt tool you may never have achieved the same level of understanding.

What level of understanding is he achieving that is above and beyond someone else with less fancy equipment? It doesn't seem like he is claiming accuracy above what anyone else is claiming. The specific story here about arms was 5 watts and the point of the OP seems to be that the differences between superbikes is small while the difference between rider related variables is large.

~5 watts of difference between x and y is thrown out around here all the time, speedsuits, difference between the P2 vs the Px, torpedo vs. x. If manufacturers can claim to see 5 watts difference there, they should be able to see the difference in hand position the OP is seeing.

This isn't a blunt/sharp tool story; it's a methodology issue.
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Thanks so much for your responses, Jim! Looking forward to EROInsight and will definitely try to have myself tested the next time I'm in the area!
Re: The challenge of aero testing bike vs bike [Steve Irwin] [ In reply to ]
Steve Irwin wrote:
BergHügi wrote:
My best test subjects, who are able to hold the black line and visibly hold their positions for an entire run, reach a one-sigma round-to-round variation in CdA of ~0.002 m²

And many people fail to understand the implications of that. Suppose you test each setup for 6 laps with an SD of 0.002, what 95% confidence interval does that give you? +/-0.0021. If you want to get your confidence interval down below +/-0.001, you need to do 18 laps per setup. And that is for a level of consistency that you regard as only being exhibited by the best test subjects.

I'm not sure that's quite right. There are systematic and random components of variability. If there are systematic things happening it might not matter if you did 18 laps, or 36 laps, or 72 laps -- you won't be able to get the CI down below .001.

Jim@EROsports wrote:
The Alphamantis stuff is really amazing and getting better all the time. I learn something every. single. day. This has been an education for all of us. It's not just a catch phrase to say the more we test, the more questions we ask; it's true. We're getting better and better at it, but we'll always be improving our protocols, and the tech will continue to move forward, as well.

"The purpose of models is not to fit the data but to sharpen the questions." That's particularly true about this model. I'm glad you're learning, but I'm even happier you're sharing what you're learning.

chris948 wrote:

What level of understanding is he achieving that is above and beyond someone else with less fancy equipment? It doesn't seem like he is claiming accuracy above what anyone else is claiming. [..] This isn't a blunt/sharp tool story, it's a methodology issue

As I mentioned above, there's a difference between variability due to systematic and random sources. Sometimes even with good measurements you can't tell from the data how to allocate variability to those sources, but you need sensitive measurement to even have a chance of doing that. If you have a blunt measuring tool you have no chance at all.

The method Jim is using is sensitive enough (when performed carefully) to distinguish between variations in the estimates due to errors in speed or power vs. errors due to changes in, say, air density or the ability to follow a line. You can spot a transient puff of air, or when someone opens a door to the outside air. You've then identified a non-random source of error, so you can either junk that session or, if the disturbance is transitory, snip it out.

This is different from tossing out laps because they don't fit your model or because the variance is too large. You're observing that disturbances that violate the model have exactly the predicted effects. That is, you don't toss data because they don't fit your model; you toss data because they fit your model but describe a systematic effect that you're not interested in measuring. This is much, much harder to do with a low-precision measuring tool. If you're looking for nits, you have to use a fine-tooth comb; you can't use a coarse comb.
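The systematic-vs-random distinction above can be illustrated with a toy simulation (all numbers here are invented for illustration): random lap-to-lap noise averages away as laps accumulate, but a fixed setup bias survives no matter how many laps you ride.

```python
import random
import statistics

random.seed(1)

TRUE_CDA = 0.250    # m^2, the "true" value in this toy world
RANDOM_SD = 0.002   # per-lap random noise
BIAS = 0.003        # systematic offset (e.g. helmet seated differently)

def session(n_laps, bias):
    """Simulate one test session: true CdA + fixed bias + random noise."""
    return [TRUE_CDA + bias + random.gauss(0, RANDOM_SD) for _ in range(n_laps)]

for n in (18, 72, 288):
    laps = session(n, BIAS)
    err = statistics.mean(laps) - TRUE_CDA
    print(f"{n:>4} laps: mean error {err:+.4f} m^2 (the bias never averages out)")
```

However many laps you add, the mean error converges to the bias, not to zero, which is why a narrow confidence interval alone doesn't guarantee the answer is right.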
Last edited by: RChung: Jun 3, 15 9:12
Re: The challenge of aero testing bike vs bike [ In reply to ]
I'll just echo Jim a bit here, having done numerous sessions at his facility, other tunnels, and a ton of field testing. The more you do the less you know. A few things can be penciled in without testing but not much. And IMHO if you really want to be sure a frame is fastest for you, go test some frames with identical setups with you riding the bike. Change the set up? Start over.

Magazines waste a lot of money at times doing aero testing. Stick OEM bikes A,B,C, and D in the tunnel. B wins and we will, I guess, assume no one ever changes wheels or tires.

Some stuff works consistently well on everyone. Maybe not the best, but pretty well. Some stuff works great with some people and is horrible on others. Helmets are the worst for this.

Field testing will get you reliable data to maybe 0.05 +/- CdA differentiation, under the best conditions with a good protocol. Start there, and when you whittle the pile down go race it. If you are winning and losing by small margins take the smaller pile to Jim, or one of the other facilities, and test some more.

3 Nats titles, 1 Nat record, 17 State champs. 4 wind tunnels. 100's of hours of testing. Still figuring this stuff out.
Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
"Aero testing isn't easy. The more variables you introduce, the more you open yourself up to inaccurate results. What I'm trying to point out is, especially when comparing bike-to-bike, while the numbers are important, they can be unintentionally deceiving, when the most important thing is whether or not a particular bike will allow you to attain your optimal position. Do you know your optimal position? I bet most don't. I'm lucky enough to work with some of the best age group and pro athletes in this sport, and I can tell you the majority are nowhere near optimal when they come here. Position is, BY FAR, the most important piece of the aero puzzle. After position, do you know if your wheels, helmet, and clothing are all optimal? We see larger gains from all of these than the differences between most of these "super bikes." Even your hydration/nutrition setup will often trump the differences between all these frames. Keep it in perspective: think of your aerodynamics as the whole of your setup, not just several individual pieces."


Can you bring this concept to market to guide frame purchases?

Re: The challenge of aero testing bike vs bike [Jim@EROsports] [ In reply to ]
Terrific post, Jim!

In general, I would say that the first role of the frame is to allow the rider to get into the right position. Low and long, if that's what's best. The second role of a frame is to be aero.

AndyF
bike geek
Re: The challenge of aero testing bike vs bike [KR Bickel] [ In reply to ]
KR Bickel wrote:
Field testing will get you reliable data to maybe 0.05 +/- CdA differentiation, under the best conditions with a good protocol.
Woah. That's not very good.
Re: The challenge of aero testing bike vs bike [RChung] [ In reply to ]
I am almost positive that you have commented on this matter in the 'platypus' thread, but what order of magnitude of repeatability are you able to get with field testing?
Re: The challenge of aero testing bike vs bike [RChung] [ In reply to ]
RChung wrote:
I'm not sure that's quite right. There are systematic and random components of variability. If there are systematic things happening it might not matter if you did 18 laps, or 36 laps, or 72 laps -- you won't be able to get the CI down below .001.
I was trying to think of an example of what you might mean, here. I suppose an example would be if each time you put a helmet on, it becomes set in a particular position on your head to some extent, that persists across all the laps. So you could get exactly the same CdA every lap, but it might still not be the same as the mean CdA if you were to repeat the whole exercise several times, getting off the bike, removing the helmet, and starting again each time. For this reason, when I test helmets I always do ABABAB etc, so I'm taking a fresh sample of how the helmet sits on my head every rep. So I agree, ideally you want every rep to be an independent sample from the distribution. But that issue just makes it even worse than my figures, i.e. they represent the best case for the confidence interval you'd achieve with that SD and number of reps if each rep were an independent sample from the same distribution. If you have any problems with dependencies between reps, I agree, you'll be even less certain of where the mean is, and simply doing more reps in one go won't solve that problem.
Re: The challenge of aero testing bike vs bike [RChung] [ In reply to ]
RChung wrote:
KR Bickel wrote:

Field testing will get you reliable data to maybe 0.05 +/- CdA differentiation, under the best conditions with a good protocol.

Woah. That's not very good.
I suspect he meant +/-0.005, but it all depends on how long each rep is. Lots of short reps will get you to roughly the same place as fewer longer reps for the same total duration, i.e. you can achieve a similar confidence interval with larger SD and more reps of shorter duration, or smaller SD and fewer reps of longer duration.
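The trade-off described here can be checked numerically: if the per-rep SD scales roughly as 1/sqrt(rep duration), the confidence half-width ends up depending only on total time on track. This sketch uses a normal approximation (ignoring the small-sample t correction discussed upthread) and made-up SD numbers:

```python
import math

Z = 1.96  # normal approximation to the 95% two-sided critical value

def ci_halfwidth(sd_per_rep, n_reps):
    """95% half-width for the mean of n_reps independent reps."""
    return Z * sd_per_rep / math.sqrt(n_reps)

# Assumption: per-rep SD scales like 1/sqrt(rep duration).
# Compare 24 one-minute reps with 6 four-minute reps (same 24 minutes):
sd_1min = 0.004                        # m^2, made-up per-rep SD
sd_4min = sd_1min / math.sqrt(4)       # quadrupling duration halves the SD
print(f"24 x 1 min: +/-{ci_halfwidth(sd_1min, 24):.4f} m^2")
print(f" 6 x 4 min: +/-{ci_halfwidth(sd_4min, 6):.4f} m^2")
```

Both schedules give the same half-width under these assumptions, which is the point being made: larger SD with more short reps lands in roughly the same place as smaller SD with fewer long reps.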
Re: The challenge of aero testing bike vs bike [Steve Irwin] [ In reply to ]
Steve Irwin wrote:
RChung wrote:
KR Bickel wrote:

Field testing will get you reliable data to maybe 0.05 +/- CdA differentiation, under the best conditions with a good protocol.

Woah. That's not very good.
I suspect he meant +/-0.005

Even 0.005 m^2 isn't that good:

http://www.trainingandracingwithapowermeter.com/...aerodynamicists.html
Last edited by: Andrew Coggan: Jun 3, 15 18:09
Re: The challenge of aero testing bike vs bike [pyrahna] [ In reply to ]
I am almost positive that you have commented on this matter in the 'platypus' thread, but what order of magnitude of repeatability are you able to get with field testing?

I'm pretty sure +/-0.001 is possible if you are meticulous about accounting for everything you can, and take the time. You'll need a very cooperative day, a steady position, and many repeats and configuration swaps.
