You know how companies are "reshaping" LLMs whenever they produce results they don't like (branded "hallucinations" for marketing purposes)? And this despite the fact that the data they trained the LLMs on included at least chunks of whatever led to said "bad" results?

Imagine getting a lobotomy every time you said something people didn't like. 0_0

#AI #LLM

#ai #LLM
