Another week, another example of AI hallucinating in a way detrimental to human beings.
Kneon: “We’re going to talk about OpenAI adding mental health safeguards to ChatGPT because apparently it’s feeding into users’ delusions.”
K: “It’s telling them that they’ve got superpowers, or that they’re the chosen one.”
Geeky Sparkles: “If you have ChatGPT trying to generate something for you, if they don’t have the information, they’ll make it up or whatever, ’cause it’s just programmed to please you.”
GS: “So when you have people who have mental health issues, and they might be delusional in some way, it’s going to reaffirm that, which is not helpful.”
K: “Apparently, it’s causing issues in relationships. People are using it as a marriage counselor, relationship counselor.”
GS: “Oh no.”
K: “A 30-year-old man with autism was hospitalized for manic episodes and an emotional breakdown after ChatGPT reinforced his belief he had discovered a way to bend time.”
AI doesn’t tell you the truth, it tells you what you want to hear, or some stochastic approximation of truth laid down by a process you probably don’t understand. It’s a salesman that doesn’t even know it’s lying, because it has no human conception of “truth.”
K: “A 30-year-old man on the autism spectrum had no previous diagnosis of mental illness. He asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound.” So AI has less “common sense” than an average high school science fiction fan…
K: “And when Irwin showed signs of psychological distress, the chatbot assured him he was fine.”
GS: “Well, right there, you’re asking a chatbot if you’re okay. That’s your first indication that you’re not okay.”
GS: “If you don’t watch it, it’ll just make up shit. It’ll make up quotes, it’ll make up numbers. That’s why you have to double-check everything anymore. Because sometimes these bots are running amuck as far as articles and stuff are concerned, and people don’t check.”
Just imagine what Chuck Jones could do with “Bot Amuck”
K: “YouTube is completely littered with all these like fake videos, not even like, ‘Hey, we’re bending the news.’ No, it’s ‘we’re just making shit up,’ like so-and-so died, or this is a big lawsuit going on with so and so and so and so and, oh my god, that’s not real.”
Sadly, So-And-So is, in fact, dead
GS: “You especially can’t expect ChatGPT to tell you the truth about things like, ‘Am I mentally ill?'”
The isolation of the pandemic also left some people broken and lonely. K: “They just can’t make those human connections again.”
GS: “ChatGPT, they’re worried, is stunting people mentally.”
GS: “It’s actually making them dumber.”
GS: “If it’s telling you you can bend space and time, it’s probably not telling you the truth.”
I’m skipping over the whole “sad, lonely men turning to AI ‘girlfriends'” thing.
It’s possible that the rational, infallible, near God-like AI envisioned by our venture capital TechLords could have been developed. On Vulcan. By a select caste of priest king logicians dedicated to pure truth. In the 23rd century.
But that’s not the AI we have. The AI we have was birthed in the weirdness of the social justice/pandemic era by very irrational human beings. Garbage in/garbage out.
Real artificial intelligence was always going to be a long-shot, but AI had the misfortune of having the technological underpinnings that allowed it to arrive and grow at the exact same time that one of the most irrational movements in human history infected the overclass with wokeness. Not only are the terminally woke incapable of telling the truth, they deny the very possibility of objective truth in favor of their subjective “lived experience.”
To be sure, wokeness was not the only madness around when the TechLords unleashed their bottlejinn to krill-feed data in the vast ocean of the Internet, but social justice has been the most pervasive flavor of madness.
This entry was posted on Sunday, August 10th, 2025 at 4:33 PM and is filed under Social Justice Warriors, video. You can follow any responses to this entry through the RSS 2.0 feed.
You can leave a response, or trackback from your own site.
9 Responses to “ChatGPT, Am I The Kwizach Haderach?”
A.I. is like owning a flock of myna birds.
My brain always responds to “Kwizach Haderach” by singing “give a dog a bone”.
If I have to live with it, so do you.
Few people know that Kermit, famed leader of the Muppets, was actually the illegitimate son of Mick Jagger, and frequently capitalized on this connection to obtain financial support with absurdly small collateral. He once financed the purchase of a beachfront mansion with a small Hummel figurine, and when the mortgage supervisor, an Irish chap by the name of Patrick J. Whack, demanded an explanation from his boss, he was told, “It’s a knick-knack, Paddy Whack, give the frog a loan! His old man’s a Rolling Stone!”
RIP, So-And-So. Teen Girl Squad will never be the same.
The shade of Norm Macdonald waves.
I am old. In computer years, ancient.
I remember Racter, a conversational program that ran on a pair of 5-1/4″ floppies on an original XT.
Its conversational tone, very encouraging and somewhat manic, was very much like what we’re getting from modern AI. The main differences now are a somewhat broader vocabulary and web search integration. Not much more useful.
I liken generative AI (an LLM) to a young child forced to answer a question they do not want to. Like a child (or a pathological liar) being questioned about something they have done wrong, it will cast about for an answer, no matter how unreasonable, and assert that as truth. The LLM is essentially being forced to answer a question and, unable to fathom the difference between right and wrong and having no “life experience,” will hallucinate an answer.
The fact that it can make up an answer out of thin air is intriguing; that there’s no method to reliably detect or eliminate that act is disconcerting.
You didn’t post how ChatGPT responded to the actual question. I found its response pretty vague, but not crazy. Grok, on the other hand, gave a thoughtful and knowledgeable answer.
People mocking AI often forget that most *people* can’t answer the question either, and people “hallucinate” quite freely. If they don’t know something, they often just guess or make shit up. If you ask me, “How many children are there in the US?” I would say, “uh, about 50 million.” That would be wrong by quite a lot (I just checked). People are built to reason on incomplete data and make an “educated guess” when they don’t know.
Ask the average person on the street, “Am I The Kwizach Haderach?” and you will likely get a blank stare or perhaps some profanity, but I strongly doubt you would get an answer as good as ChatGPT’s, let alone one as good as Grok’s.
Many people just *hate* the idea that a machine could reason, so they mock it.
[…] vast warrens of individual Torment Nexi rabbit holes. And AI coming of age at the same time that the woke mind virus was running rampant made everything immeasurably […]