liversedge wrote:
It is not about error, it is about quality. A mean-maximal effort may not be a TTE effort, resulting in estimates that are understated. You called it 'sandbagging'. I call it reality. Maximal efforts might occur on favourite climbs, or in race situations, but are very very rarely to exhaustion. But I've had this conversation with you, others have pointed to the issues with DQ in your approach, just get the software out so we can show you how easy it is to break.
You're still missing the point. If you trust the quality of any data well enough to fit a model to it, then that same level of trust allows you to use standard statistical techniques to estimate (note the word) the (im)precision of the resulting parameter estimates. If you don't trust the data enough to do the latter, you shouldn't be doing the former in the first place. The only reason I can think of that someone might choose to do so is if they were trying to hide something/hoodwink people.
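To make that concrete, here is a minimal sketch of what "standard statistical techniques" buys you. It fits the linear work-time form of the 2-parameter critical power model (total work = CP*t + W') by ordinary least squares and reports standard errors for both parameters from the usual OLS covariance formula. The durations and powers are made-up illustrative numbers, not anyone's real data:

```python
import numpy as np

# Hypothetical maximal-effort data: duration (s) and mean power (W).
# The 2-parameter CP model says P = W'/t + CP, or equivalently
# total work = CP*t + W', which is linear in t, so plain OLS applies.
t = np.array([180.0, 300.0, 600.0, 1200.0])   # test durations (s)
p = np.array([420.0, 390.0, 360.0, 345.0])    # mean maximal power (W)
work = p * t                                  # total work (J)

# Design matrix for work = CP*t + W' (slope = CP, intercept = W')
X = np.column_stack([t, np.ones_like(t)])
beta, *_ = np.linalg.lstsq(X, work, rcond=None)
cp, w_prime = beta

# Standard errors from the usual OLS formula: cov = s^2 * (X'X)^-1
dof = len(t) - 2
resid = work - X @ beta
s2 = resid @ resid / dof
cov = s2 * np.linalg.inv(X.T @ X)
se_cp, se_w = np.sqrt(np.diag(cov))

print(f"CP = {cp:.1f} +/- {se_cp:.1f} W, "
      f"W' = {w_prime / 1000:.1f} +/- {se_w / 1000:.1f} kJ")
```

The same few lines that produce the point estimates produce their standard errors; that is exactly what is forfeited once you start deleting or deweighting points relative to the fitted curve.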
As for the reasons why I think it is a bad idea to cherry-pick data as you prefer to do, I laid them out last spring in a series of posts starting here:
https://www.facebook.com/...osts/786096791412621 To reiterate them (so as to not be accused of just making Mark Zuckerberg richer):
Top 8 reasons why it isn't a good idea to try to fit the extremes of a mean maximal power curve (in no particular order):
1) by definition, the values are already outliers - deleting/discounting/deweighting those that fall below vs. above the modeled curve significantly increases the odds that you're just chasing noise/artifacts (esp. at short durations);
2) use of anything other than OLS to fit a non-linear model makes it difficult/impossible to quantify the uncertainty of the resulting parameter estimates (something that is quite important to know);
3) the model fit often becomes highly unstable, i.e., a reproducible solution often can't be found;
4) deleting/discounting/deweighting points below the curve increases the odds that the model will be overparameterized (contributing to #3 above);
5) "the map is not the territory", i.e., just because someone performs better (relative to themselves/a model) at one duration vs. another doesn't necessarily mean that the lesser performance wasn't also truly maximal - it very well may have been, but occurred at a duration that is a relative weakness for that athlete;
6) fitting to the extremes can establish an impossibly-high standard for the athlete, i.e., only new personal bests provide full "credit";
7) statistically-valid approaches for fitting to extremes are computationally expensive, significantly slowing the response of a program; and
8) over reasonable periods of time, any 'bumps' in the power-duration curve tend to be reduced to the point that the difference between using OLS and attempting to fit the extremes isn't large enough to really worry about.
(Follow the link above to find various examples of how cherry-picking data can lead you astray.)
liversedge wrote:
I will stick with the same advice; use a test to establish CP.
That's nice - how do you advise people to determine their W', if not by using a model?