But that point is not the same as LLMs degrading when trained on their own data.
Again, it may be the same as the problem of “how do you separate AI-generated data from human-generated data”, i.e. a filtering issue.
But it’s not the same as the problem of degradation due to self-training. Which I’m fairly sure you’re also misrepresenting, but I REALLY don’t want to get into that.
But hey, if you don’t want to keep talking about this, that’s your prerogative. I just want to make it very clear that the reasons why that’s… just not a thing have nothing to do with training on AI-generated data. Your depiction is a wild extrapolation even if you were right about how poisonous AI-generated data is.