JasoninHalifax wrote:
Alan Couzens wrote:
JasoninHalifax wrote:
This gets into some ethical issues. They've explicitly stated that they don't share users' private data with anyone, unless you specifically opt in to push your data to third-party platforms (e.g. Strava or TrainingPeaks). IMO, they really should not be sharing their database unless a user specifically allows their data to be shared.
That’s on top of the IP concerns with sharing their algorithms.
They did talk, in broad terms, about how the model works, what kinds of things they did to validate it, and some of its current limitations and their plans to address them. But specifics, no. And as a private company, they don't need to do that.
Sharing the user data isn't what I'm suggesting. There are lots of examples in ML where the model is shared but the total dataset is not. Sharing the basics of the model architecture and making the error visible (especially the individual error) is what I'm getting at.
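To be concrete about what "making the individual error visible" could look like: a minimal sketch of reporting per-user error alongside the aggregate error, with no raw training data exposed. The field names and numbers here are purely illustrative, not TrainerRoad's actual schema.

```python
# Hypothetical sketch: report aggregate AND per-user model error
# without exposing anyone's raw training data. User IDs, values,
# and structure are made up for illustration.

def mean_absolute_error(pairs):
    """MAE over (predicted, actual) pairs, e.g. predicted vs. actual FTP."""
    return sum(abs(p - a) for p, a in pairs) / len(pairs)

# predictions[user_id] = list of (predicted, actual) outcomes
predictions = {
    "user_a": [(250, 245), (255, 252)],
    "user_b": [(300, 280), (305, 290)],
}

# Individual error: how well the model fits each athlete
per_user_mae = {uid: mean_absolute_error(p) for uid, p in predictions.items()}

# Aggregate error: the single headline number a vendor might quote
overall_mae = mean_absolute_error(
    [pair for pairs in predictions.values() for pair in pairs]
)
```

The point of the sketch: `user_b` has roughly four times the error of `user_a`, but the headline aggregate hides that. Publishing both would let users judge whether the model actually works for athletes like them.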
As far as IP goes, that's not the issue here. If OpenAI can publish the specifics of their natural language model in detail (https://github.com/openai/gpt-3) and still get Microsoft to pay $1 billion for it, then I don't think TrainerRoad needs to be concerned about protecting their remarkably complex model! Let's be real here. They don't want to expose specifics because they know there are people like me out there, with a deep understanding of the field, who will call them out on how/why their approach doesn't work, or maybe doesn't even qualify as machine learning at all!
I’m not sure if you intended it that way, but the manner in which you wrote this post suggests that you’ve already made up your mind that TR’s new system isn’t valid.
Let's just say that their lack of willingness to share the model or the model error is, to me, a giant red flag.
Alan Couzens, M.Sc. (Sports Science)
Exercise Physiologist/Coach
Twitter: https://twitter.com/Alan_Couzens
Web: https://alancouzens.com