Engineers are supposed to target efficiency and quality, but LLMs/"AI" are the opposite of that. They're exceedingly energy-hungry and produce mediocre results that still require human intervention after the fact. Seems like a dead end until we've figured out better ways to implement them. 🤷‍♂️
What Grok’s recent OpenAI snafu teaches us about LLM model collapse
Instead, it was model collapse—though Babuschkin didn’t use those exact words. “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data,” he wrote. “This was a huge surprise to us when we first noticed it.” Grok was notably set up to pull from livestreams of internet content, including X’s feed of posts, which was identified as a potential issue by experts who spoke to Fast Company a month ago.

“It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline,” says Catherine Flick, a professor of ethics and games technology at Staffordshire University.
The reason for that decline is the recursive nature of the LLM loop—and exactly what could have caused the snafu with Grok. “What appears to have happened here is that Elon Musk has taken a less capable model,” says Ross Anderson, one of the coauthors of the original paper that coined the term model collapse, “and he’s then fine-tuned it, it seems, by getting lots of ChatGPT-produced content from various places.” Such a scenario would be precisely what Anderson and his colleagues warned could happen, come to life. (xAI did not respond to Fast Company’s request for comment.)
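The recursive loop the article describes can be made concrete with a toy simulation (this is an illustrative sketch, not the method from Anderson's paper): each "generation" fits a simple Gaussian model to the previous generation's output and then samples its own training data from that fit. Because each fit only approximates the data it saw, the estimated spread tends to shrink over generations, and the distribution collapses toward a narrow point—the statistical analogue of models losing diversity by training on machine-generated data.

```python
import random
import statistics

def next_generation(data, n_samples):
    """Fit a toy 'model' (a Gaussian) to the data, then generate
    the next generation's training set by sampling from that fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

random.seed(0)
n = 20  # a small sample per generation exaggerates the effect

# Generation 0: "human" data drawn from the true distribution N(0, 1).
data = [random.gauss(0.0, 1.0) for _ in range(n)]
initial_spread = statistics.stdev(data)

# Each later model learns only from the previous model's output.
for generation in range(300):
    data = next_generation(data, n)

final_spread = statistics.stdev(data)
print(f"spread of generation 0:   {initial_spread:.3f}")
print(f"spread of generation 300: {final_spread:.3f}")
```

Run repeatedly with different seeds, the spread almost always decays toward zero: estimation error compounds in one direction, so later generations see an ever-narrower slice of the original distribution. Detecting and filtering machine-generated data, as Flick suggests, is exactly what would break this loop.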
And the results are a mix of what the #KI (AI) had seen and what everybody already knew. It's just stunningly, perfectly done.
Now, for an artist not to seem like a mass-produced AI product, he has to be as imperfect as an AI never would be. It is the idea, #Konzeptkunst, that makes #conceptual #art.
What AI ended up being: Bullshit generators.
natewaddoups
in reply to Neil E. Hodges: Consider trying Perplexity. My main use-case is to just type in an unfamiliar word and press send. It comes back with a paragraph explaining what the word means, with linked citations to the web pages the information came from. It has almost replaced Google/Bing on my phone, since "what does that word mean" covers 90% of my phone searches anyway.
I used it last night to learn that the unfamiliar word on a menu was a type of goat cheese that I had not heard of before.