What Grok’s recent OpenAI snafu teaches us about LLM model collapse

Instead, it was model collapse—though Babuschkin didn’t use those exact words. “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data,” he wrote. “This was a huge surprise to us when we first noticed it.” Grok was notably set up to pull from livestreams of internet content, including X’s feed of posts—a setup that experts who spoke to Fast Company a month ago had already flagged as a potential issue.

“It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline,” says Catherine Flick, a professor of ethics and games technology at Staffordshire University.

The reason for that decline is the recursive nature of the LLM loop—and exactly what could have caused the snafu with Grok. “What appears to have happened here is that Elon Musk has taken a less capable model,” says Ross Anderson, one of the coauthors of the original paper that coined the term model collapse, “and he’s then fine-tuned it, it seems, by getting lots of ChatGPT-produced content from various places.” Such a scenario would be precisely the outcome Anderson and his colleagues warned about, brought to life. (xAI did not respond to Fast Company’s request for comment.)
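That recursive loop can be sketched in miniature. The toy below is purely illustrative—it is not the experiment from Anderson and colleagues’ paper, and the corpus, cutoff value, and function names are all invented for the example. It “trains” a unigram word-frequency model on a corpus, generates a new corpus from that model using truncated sampling (which, like common decoding heuristics, never emits very rare words), then retrains on the generated text. Rare words vanish in one generation and can never return, because no later model ever sees them again.

```python
from collections import Counter

def train(corpus):
    # "Train" a unigram model: map each token to its empirical probability.
    counts = Counter(corpus)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(model, n, cutoff=0.05):
    # Truncated sampling: tokens below the probability cutoff are never
    # emitted, mimicking decoding heuristics that favor likely outputs.
    kept = {t: p for t, p in model.items() if p >= cutoff}
    total = sum(kept.values())
    out = []
    for t, p in kept.items():
        out.extend([t] * round(n * p / total))
    return out

# Generation 0: "human" text with a long tail of rare words.
corpus = ["the"] * 400 + ["cat"] * 300 + ["sat"] * 200 + \
         ["on"] * 60 + ["mat"] * 30 + ["quietly"] * 10

vocab_sizes = []
for gen in range(4):
    model = train(corpus)
    vocab_sizes.append(len(model))
    # The next generation trains only on the previous model's outputs.
    corpus = generate(model, 1000)

# Vocabulary shrinks and never recovers: "mat" and "quietly" are gone
# for good after the first machine-generated generation.
print(vocab_sizes)
```

The loss here is one-shot because the toy is deterministic; in the real setting, finite-sample estimation error compounds the same tail-trimming effect generation after generation, which is why the quality decline is progressive rather than a single step.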
