My hate for LLMs stems from how ABSOLUTELY NOTHING IS GENUINE about them. Whatever they're claimed to be doing (e.g. "reasoning") is faked; the sources they print out as if drawing from them are faked (as I found out myself... I wrote a tirade about it WHEN I found out); the supposedly human-like "interactions" are completely faked (uh no, it's not "sorry" or "apologizing" about anything when you call out its BS errors... that's the canned pattern it got "reinforced" into by the "trainers" so you'd keep interacting with it).

Fake fake fake fake FAKE. One big damned charade of fakery, made thrice as annoying by people who mistakenly/ignorantly think (and shout from their rooftops) how REAL it all is.

Good fucking grief. Something that spits out strings that are incidentally true when interpreted as statements (remember... X can't actually make any valid proposition P when it doesn't actually refer to any actual entity E) isn't offering any truths at all. It just gives you a starting point to look for them, if you don't get distracted and stop your search right then and there like so many hapless people already do.
