You know how companies keep "reshaping" LLMs whenever they produce results the companies don't like (branded "hallucinations" for marketing purposes), even though the training data included at least chunks of whatever led to those "bad" results?
Imagine getting a lobotomy every time you said something people didn't like. 0_0
AI Vision Explained (and How to Avoid Paying for It) — FortNine
Motorcycle AI Vision is coming. But what is it really worth?
#motorcycle #LLM #AI
Doesn't using an #LLM to generate large portions of your codebase mean that you end up with a lot of code that nobody understands unless they put in the time to go through it and learn what it's doing? If so, it's not much different from writing the code from scratch in the long run. :/
(Yes, I know about boilerplate and do think that's a valid use case as long as you know what the boilerplate is doing.)
What if part of the push for #LLMs ('AI') is sort of a 'gateway drug' for getting people into programming? It teaches how to format queries, albeit using human language instead of special syntaxes. 🤔
Consider trying Perplexity - my main use-case is to just type in an unfamiliar word and press send. It comes back with a paragraph to explain what the word means, with linked citations to the web pages that the information came from. It has almost replaced Google/Bing on my phone, since "what does that word mean" covers 90% of my phone searches anyway.
I used it last night to learn that the unfamiliar word on a menu was a type of goat cheese that I had not heard of before.
Engineers are supposed to target efficiency and quality, but LLMs/"AI" are the opposite of that: they're exceedingly energy-hungry and produce mediocre results that still require human intervention after the fact. Seems like a dead end until we figure out better ways to implement them. 🤷‍♂️
What Grok’s recent OpenAI snafu teaches us about LLM model collapse
Instead, it was model collapse—though Babuschkin didn't use those exact words. “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data,” he wrote. “This was a huge surprise to us when we first noticed it.” Grok was notably set up to pull from livestreams of internet content, including X's feed of posts, which was identified as a potential issue by experts who spoke to Fast Company a month ago.

“It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline,” says Catherine Flick, a professor of ethics and games technology at Staffordshire University.
The reason for that decline is the recursive nature of the LLM loop—and exactly what could have caused the snafu with Grok. “What appears to have happened here is that Elon Musk has taken a less capable model,” says Ross Anderson, one of the coauthors of the original paper that coined the term model collapse, “and he's then fine-tuned it, it seems, by getting lots of ChatGPT-produced content from various places.” Such a scenario would be precisely what Anderson and his colleagues warned could happen, come to life. (xAI did not respond to Fast Company's request for comment.)
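The recursive loop the article describes can be sketched with a toy simulation. This is my own illustrative example, not anything from the article or the model-collapse paper: fit a Gaussian "model" to data, then train each new generation only on samples drawn from the previous generation's fit. The fitted spread steadily collapses, which is the same qualitative failure mode as an LLM losing diversity when trained on LLM-generated text.

```python
import numpy as np

# Toy model-collapse demo (illustrative parameters chosen by me, not from
# the article): each "generation" is a Gaussian fitted only to synthetic
# samples drawn from the previous generation's Gaussian.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0          # generation 0: the "human data" distribution
samples_per_generation = 20   # a small training set exaggerates the effect
generations = 2000

initial_sigma = sigma
for _ in range(generations):
    # "Scrape" only machine-generated data from the current model...
    data = rng.normal(mu, sigma, samples_per_generation)
    # ...and refit the next generation on it.
    mu, sigma = data.mean(), data.std(ddof=1)

print(f"sigma after {generations} generations: {sigma:.3e}")
# Each individual refit looks harmless, but the estimated spread drifts
# downward generation after generation until the tails are gone.
```

The collapse happens even though every single fitting step is statistically reasonable; the loss of tail information compounds across generations, which is why the experts quoted above stress being able to tell machine-generated data apart from human data.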
And the results are a mix of what the #KI has seen and what everybody already knew. It's just stunningly, perfectly done.
Now, for an artist not to seem like just another AI mass product, he has to be as imperfect as an AI never would be. It is the idea, #Konzeptkunst, that makes #conceptual #art.
What AI ended up being: Bullshit generators.
Sick Sun, in reply to Neil E. Hodges:
Reading is way faster than writing, and the total time to working code is lower.
I'm not exactly trying to defend it, just telling you my experience.
It's sometimes frustrating because it will work really well all day, then you give it a particular problem and you struggle with it over and over to get the right output and eventually give up. I'd say that happens about one time in ten.
The real advantage is that sometimes I just can't figure out the right or best way to do something, so I ask it, and with some back and forth I get the answer. I had a problem I'd left alone for over a year; when I came back to it, I fought with the AI for a while but eventually found a working solution that I never could have reached just by reading documentation or blogs.
Neil E. Hodges, in reply to Sick Sun:
Also, if it's trained on code from bad programmers, the code will end up bad as well.
Kind of like searching Stack Overflow, getting an answer from 20 years ago that hasn't been the right way to do the thing for a long time, and then basing the code you're writing on it. Too much garbage data out there. :/
Sick Sun, in reply to Sick Sun:
Recently I have been having it explain W3C specifications to me and come up with answers that incorporate multiple specs into one working solution, along with implementing code. It has worked REALLY well for that, probably because W3C docs are very verbose (they are also really long and numerous, hence needing help).
I've been following up and verifying in the docs that it's interpreting them correctly. It does make mistakes, but overall it has been a massive time saver.
Steven Sandoval, in reply to Neil E. Hodges:
Reminds me of the stove analogy in Anathem (2008) by Neal Stephenson (“Part 7: Feral”):
“We, the theors, who had retreated (or, depending on how you liked your history, been herded) into the maths at the Reconstitution, had the power to change the physical world through praxis. Up to a point, ordinary people liked the changes we made. But the more clever the praxis became, the less people understood it and the more dependent they became on us—and they didn’t like that at all.”