You know how companies are "reshaping" LLMs in response to outputs they don't like (branded "hallucinations" for marketing purposes), even though the data they trained the LLMs on included at least chunks of whatever produced those "bad" results?
Imagine getting a lobotomy every time you said something people didn't like. 0_0
AI Vision Explained (and How to Avoid Paying for It) — FortNine
Motorcycle AI Vision is coming. But what is it really worth?
#motorcycle #LLM #AI
Doesn't using an #LLM to create large portions of your codebase mean that you end up with a lot of code that nobody understands unless they put in the time to go through it and learn what it's doing? If so, it's not much different from writing the code from scratch in the long run. :/
(Yes, I know about boilerplate and do think that's a valid use case as long as you know what the boilerplate is doing.)
What if part of the push for #LLMs ('AI') is sort of a "gateway drug" for getting people into programming? It teaches how to format queries, albeit using human language instead of special syntaxes. 🤔
Consider trying Perplexity - my main use-case is to just type in an unfamiliar word and press send. It comes back with a paragraph to explain what the word means, with linked citations to the web pages that the information came from. It has almost replaced Google/Bing on my phone, since "what does that word mean" covers 90% of my phone searches anyway.
I used it last night to learn that the unfamiliar word on a menu was a type of goat cheese that I had not heard of before.
Engineers are supposed to target efficiency and quality, but LLMs/"AI" are the opposite of that. They're exceedingly energy hungry and produce mediocre results that still require human intervention after the fact. Seems like a dead end until we've figured out better ways to implement them. 🤷‍♂️
Neil E. Hodges reshared this.
What Grok’s recent OpenAI snafu teaches us about LLM model collapse
Instead, it was model collapse—though Babuschkin didn't use those exact words. "The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data," he wrote. "This was a huge surprise to us when we first noticed it." Grok was notably set up to pull from livestreams of internet content, including X's feed of posts, which was identified as a potential issue by experts who spoke to Fast Company a month ago.
"It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline," says Catherine Flick, a professor of ethics and games technology at Staffordshire University.
The reason for that decline is the recursive nature of the LLM loop—and exactly what could have caused the snafu with Grok. "What appears to have happened here is that Elon Musk has taken a less capable model," says Ross Anderson, one of the coauthors of the original paper that coined the term model collapse, "and he's then fine-tuned it, it seems, by getting lots of ChatGPT-produced content from various places." Such a scenario would be precisely what Anderson and his colleagues warned about, come to life. (xAI did not respond to Fast Company's request for comment.)
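The recursive decline Flick and Anderson describe can be sketched with a toy simulation (my own illustration, not from the article): treat a "model" as a Gaussian fit to data, and have each generation train only on output sampled from the previous generation. If sampling favors likely outputs and discards the tails, as likelihood-favoring decoding does, the fitted distribution's spread collapses over generations.

```python
import random
import statistics

# Toy model-collapse sketch (hypothetical illustration, not xAI's setup):
# each "model" is just a Gaussian fitted to the previous model's outputs.
random.seed(0)

mean, stdev = 0.0, 1.0  # generation 0: the "real" data distribution
spreads = [stdev]
for generation in range(10):
    # Sample from the current model, then drop the tails — a stand-in for
    # decoding strategies that favor high-likelihood (typical) outputs.
    samples = sorted(random.gauss(mean, stdev) for _ in range(200))
    kept = samples[20:180]  # keep the central 80%
    # The next generation is fitted only to machine-generated data.
    mean = statistics.fmean(kept)
    stdev = statistics.stdev(kept)
    spreads.append(stdev)

# Diversity shrinks each round: later generations cover far less of the
# original distribution than generation 0 did.
print(spreads)
```

The mechanism, not the Gaussian, is the point: every round of training on a previous model's filtered output loses tail information, so rare-but-real content gets progressively forgotten.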
M Dell reshared this.
And the results are a mix of what the #KI has seen and what everybody knew before. It's just stunningly, perfectly done.
Now, for an artist not to seem like an AI mass product, he has to be as imperfect as an AI never would be. It is the idea, #Konzeptkunst, that makes #conceptual #art.
What AI ended up being: Bullshit generators.