You know how companies keep "reshaping" LLMs whenever they produce results the companies don't like (branded "hallucinations" for marketing purposes), even though the data those LLMs were trained on included at least chunks of whatever led to said "bad" results?
Imagine getting a lobotomy every time you said something people didn't like. 0_0