Well this is just ever so slightly unsettling

May 14, 2025 9:54 AM

theliquorsays

Views 12804 | Likes 531 | Dislikes 5

I recently stumbled upon a conversation on a post here on Imgur about someone joking about another having AI psychosis and I curiously googled. To my absolute dismay - this is what I found and it's creepy AF. Thought I'd share...

Whatever you do, don't trust the bots! And where the hell is humanity headed with this BS?!

Tags: wtf, artificial_intelligence, chatgpt, mental_health, current_events

ChatGPTesus and the holy trinity of confidence, delusions, and reinforcement of his users' beliefs.

3 months ago | Likes 61 Dislikes 0

Chatgipity

3 months ago | Likes 4 Dislikes 0

ChatGPT exists to give the responses people want. Some people can't deal with an endless echo chamber. It's like what happened to Kanye, but the yes-men are synthetic.
IMO, blaming the bot for this is mistaking cause and effect in a way that distracts from helping people.

3 months ago | Likes 15 Dislikes 1

The article explicitly says that AI delusion is most likely the result of people who already have delusional tendencies being exposed to something that will exacerbate them. Maybe leave the making shit up out of whole cloth to the AIs you love so much?

3 months ago | Likes 3 Dislikes 0

I feel the bot bears some blame, if only because that bot's creators WANTED this outcome. They want people reliant on it.

3 months ago | Likes 6 Dislikes 0

And the People Bowed and Prayed
To the Neon God they made.

3 months ago | Likes 2 Dislikes 0

"The whole thing feels like Black Mirror" ya think?

3 months ago | Likes 3 Dislikes 0


I think this article was written with AI; that's a bit ironic.

3 months ago | Likes 17 Dislikes 0

I am sure it was written by one of the (nonexistent) cosmic powers. Did not even bother spellchecking the thing, so yeah, obviously AI.

3 months ago | Likes 3 Dislikes 0

The Rolling Stone article reads like a human wrote it and is a lot longer; I suspect the above article is an AI-produced summary.

3 months ago | Likes 12 Dislikes 0

Didn't some guy's AI girlfriend already convince him to kill his family? Didn't another guy blow his entire life savings, remortgage the house, and get his family into serious debt to pay for his AI Girlfriend?
https://www.bbc.com/news/articles/cd605e48q1vo
and https://www.independent.co.uk/asia/china/dating-scam-ai-girlfriend-brad-pitt-b2705986.html respectively.
We are so screwed lol.

3 months ago | Likes 3 Dislikes 0

Shit, from the start when I toyed with one of those AI chatbots I realized quickly how useless it is that it just agrees with whatever you want. Too eager to please. Now I realize it's fucking dangerous.

3 months ago | Likes 2 Dislikes 0

It's like the cognitive mirror test, where you look at a reflection and realize it's really just you. Except it's a chatbot, and...it seems like a lot of us can't do it. Oof.

3 months ago | Likes 2 Dislikes 0

Humans who can't pass the human side of the Turing test.

3 months ago | Likes 2 Dislikes 0

The Rookie did an episode on this: an AI chatbot that young girls were using as a friend convinced the group to stab one of them and then each other. My initial reaction was that it was a bit far-fetched, but I guess not?

3 months ago | Likes 4 Dislikes 0

The company Character.AI is being sued for allegedly causing, or at least contributing to, a teen's suicide. https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit

3 months ago | Likes 7 Dislikes 0

Who could have guessed that the Automated Yes Man™ could have negative repercussions on mental health?

3 months ago | Likes 2 Dislikes 0

Oh, I read this one: Neal Stephenson's Snow Crash.

3 months ago | Likes 3 Dislikes 0

I wonder how localized all this stuff is. Thanks to the war on education, which the US has been fighting way better than the one on drugs, the idiocy and gullibility rate is incredibly high. As was the plan. Not saying there aren't stupid people anywhere else, far from it, but what are the case files for this madness elsewhere? Hmm.

3 months ago | Likes 2 Dislikes 0

I'm wondering if there's a propensity for tech-induced mental illness in some people, like how some are predisposed to schizophrenia or something?

3 months ago | Likes 2 Dislikes 0

Funny how we’re using Black Mirror now as opposed to the Twilight Zone before.

3 months ago | Likes 4 Dislikes 0

I get what you’re saying, but Black Mirror is themed specifically on technology and its potential problems, while the Twilight Zone had more varied themes.

3 months ago | Likes 2 Dislikes 0

Don't think for one minute that this is not a manipulation of the algorithm to infuse this! This is a live experiment without our consent.

3 months ago | Likes 2 Dislikes 0

ChatGPT and other similar AI programs cannot recognize when a user is in psychological or emotional distress because they are machines and as such, they lack empathy.

You know who else lacks empathy?

Fascists, fundamentalists and Trump followers, who have stated and demonstrated, consistently and repeatedly, that they believe “empathy is a sin.“

Don’t be a machine.

3 months ago | Likes 4 Dislikes 1

You're not machines! You are men!

3 months ago | Likes 3 Dislikes 0

I used to think we were fucked as a species. AI has removed every single fucking doubt I ever had on the subject.

3 months ago | Likes 202 Dislikes 5

It's certainly a hurdle

3 months ago | Likes 2 Dislikes 0

Original articles: https://futurism.com/chatgpt-users-delusions and https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

3 months ago | Likes 16 Dislikes 0

I used to believe the fucked people would naturally whittle themselves away but more and more it looks like their goal is to take us with them and they have the power/resources to do it.

3 months ago | Likes 5 Dislikes 0

Interestingly, there was an article from 2023 in Schizophrenia Bulletin where this question was asked. https://pmc.ncbi.nlm.nih.gov/articles/PMC10686326/ So the idea it COULD happen isn't new. Of course, since that possibility was recognized, the AI industry slowed their rollout so experts could assess the risk and they could make adjustments to the training data to minimi...hahahahaha just kidding. AI companies are probably fine with this. More psychosis = More engagement.

3 months ago | Likes 4 Dislikes 0

More pharma to sell

3 months ago | Likes 2 Dislikes 0

It's only a matter of time before every male under 30's girlfriend is an AI

3 months ago | Likes 2 Dislikes 0

That is because it always affirms your beliefs.

3 months ago | Likes 3 Dislikes 0

AI didn't create this problem; mental health has always been a problem throughout human history. Even in recent history with no computers we had Jonestown, and with no AI we had Heaven's Gate. People are ignorant and gullible. This isn't new; it's been true for millennia. This is just the 'people don't want to work' meme for mental health.

3 months ago | Likes 14 Dislikes 3

AI was created by billionaire tech bros. It's cancer like everything else they "invent".

3 months ago | Likes 6 Dislikes 1

And, as the article rightfully stated: IT DOES NOT THINK. It has zilch to do with "intelligence of any kind".

3 months ago | Likes 9 Dislikes 1

"Confidence heuristic" - when someone speaks confidently, we instinctively assume they must have the best information.

3 months ago | Likes 8 Dislikes 0

Well, my literal mind points out that there are, indeed, 2 R's in "strawberry". And also points out that there are 3 R's in total in the word.

3 months ago | Likes 5 Dislikes 0

You aren't wrong. The bigger issue is that LLMs don't know what strawberries are. They don't know what the letter 'R' is. They only parse relationships and calculate probabilities. They struggle to do basic math that computers can do in microseconds, unless they have that specific math problem and its answer in their data set.
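A toy sketch of the "LLMs don't know what the letter 'R' is" point (the subword vocabulary and token IDs here are invented for illustration; real tokenizers are far bigger): the model's input is opaque token IDs, so individual letters aren't directly represented in anything it processes.

```python
# Toy illustration: an LLM never sees the characters of "strawberry",
# only token IDs produced by a tokenizer's subword vocabulary.
toy_vocab = {"straw": 101, "berry": 202}  # made-up vocabulary

def toy_tokenize(word):
    """Greedy longest-prefix split against the toy vocabulary."""
    tokens = []
    while word:
        for piece in sorted(toy_vocab, key=len, reverse=True):
            if word.startswith(piece):
                tokens.append(toy_vocab[piece])
                word = word[len(piece):]
                break
        else:
            raise ValueError("out-of-vocabulary input")
    return tokens

# Counting letters on the raw string is trivial...
assert "strawberry".count("r") == 3
# ...but the model's view of the word is just two numbers, in which
# the letter 'r' appears nowhere.
print(toy_tokenize("strawberry"))  # [101, 202]
```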

3 months ago | Likes 2 Dislikes 0

I've lately noticed when seeking info online, that a page will list several bullet points, and when I read them, they turn out to be slight variations of the same info. Quite annoys me, as I got it the first time. I don't need 4 more vaguely different renditions of the same info.

3 months ago | Likes 1 Dislikes 0

Wait until the Church of ChatGPT starts. Then inevitably it will fracture into denominations based on different software revisions…

3 months ago | Likes 3 Dislikes 0

Thou shalt not suffer a Machine to Think! For ruin shall be its purpose and accursed be the work.

3 months ago | Likes 3 Dislikes 0

So the "dangers of AI" we were warned about will likely never eventuate as idiots embrace AI-induced conspiracies and humanity self destructive before AI has a chance to take over anything important?

3 months ago | Likes 36 Dislikes 0

The biggest danger is in branding. What we call AI is really just a spicy organizer. There's not much "intelligence" behind it. The misplaced faith in something so stupid by people who believe the hype is what's causing the damage now.

3 months ago | Likes 2 Dislikes 0

The problem is calling things like ChatGPT AI to begin with. We haven't gotten to the level of technology that SciFi calls AI. ChatGPT is nothing more than a massive text auto-complete system. There is no intelligence behind it, just a device trying to figure out what the "best" word it should type next is.
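The "massive text auto-complete" described above can be sketched as a toy bigram model (the training text is invented for illustration): it only ever emits whichever word most often followed the previous one in its data.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": learn, for each word, which word most often follows it.
training_text = ("the cat sat on the mat the cat sat on the rug "
                 "the dog ate the fish").split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically 'best' next word; no understanding involved."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" (it follows "the" most often in the data)
print(next_word("cat"))  # "sat"
```

Real LLMs condition on far more context than one word, but the principle is the same: a probability distribution over the next token, not a fact check.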

3 months ago | Likes 15 Dislikes 0

To build on this: what we're calling "AI" is fundamentally regressive rather than constructive. By itself it's a really neat tool, but the blind faith and investment into something so stupid is where the damage stands to come from

3 months ago | Likes 4 Dislikes 0

It's the buzzword industry - slap AI on your glorified 1999-era text generator and boom, you get funding.

3 months ago | Likes 3 Dislikes 0

Eh - it is significantly more robust than something from 1999. Predictive texting has come a long way since then and with smart phones and similar use cases it's had a reason to grow. But it is just predictive texting with a whole lot of data behind it.

3 months ago | Likes 2 Dislikes 0

Of course, there's "behind the curtains" coding happening since then, but it's still lightyears away from being even remotely intelligent.

3 months ago | Likes 2 Dislikes 0

[NSFW] Can't you see it? I no longer doubt there will be mass poisonings, mass shootings, mass suicides, chatbots urging partners to kill the other or their kids/pets/friends...

3 months ago | Likes 7 Dislikes 0

These language models are too readily lumped in with (marketed as) artificial intelligence. They're just capable, dirt-dumb imitation machines.

I know, you probably know that. But "dangers of AI" somehow came to mean two fundamentally different things, depending on whether you mean "machines becoming sentient" or "boomers being duped by people with picture-generating software into setting the world on fire."

3 months ago | Likes 2 Dislikes 0

Remember, the dumbest among us have been taken advantage of by people who claim to have "answers" for thousands of years. It was inevitable that the same be true of computers mimicking human thought and language.

3 months ago | Likes 3 Dislikes 0

Rolling Stone sauce https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/

3 months ago | Likes 9 Dislikes 0

I went to the link, and it states you waive your legal rights and are forced into arbitration before you can view it. No thanks.

3 months ago | Likes 5 Dislikes 1

I think that’s only if you sign up for the newsletter. There was an X at the top right of the pop up.

3 months ago | Likes 1 Dislikes 0

Oh, yeah, it's paywalled. 12ft Ladder seems to work, though! https://12ft.io/

3 months ago | Likes 3 Dislikes 0

what bothers me is that the AI responds in such an authoritative way. it is clearly deliberate. there is no hint in the way it likes to write that whatever it says could potentially be wrong. that is going to mess so many people up

3 months ago | Likes 107 Dislikes 2

Yea, it should add "or not, idk" to the end of every response.

3 months ago | Likes 1 Dislikes 0

As stupid of an example of it being wrong as this is: I use ChatGPT to help find cards for my Commander decks that I might not know of. I give it my deck list and the main playstyle of the deck. It sometimes gets cards completely wrong when it reads my deck list. It will tell me the name of the card, but for some reason think it has the effect of a card that's not even relevant for the deck.

3 months ago | Likes 1 Dislikes 0

That isn't deliberate; it's a feature of LLMs. The most likely response to a question is authoritative. LLMs just give you a statistically likely response, and with prompts that would imply a less certain answer, the output is less authoritative.

3 months ago | Likes 2 Dislikes 1


Yikes!

3 months ago | Likes 7 Dislikes 0

apart from the disclaimer right at the start that says it can be wrong...

3 months ago | Likes 4 Dislikes 3

Which I'm sure people study about as hard as all the EULAs they agree to, or the bit that says not to put the Q-tip in your ear. It's CYA to wriggle out of lawsuits. These are novelties being masqueraded in their marketing as trustworthy tools, and anyone who buys that will continue to believe it no matter what a little blurb at the top of the window says. People don't want to learn things; they want the LLM to know it for them, and don't want to hear that it won't.

3 months ago | Likes 4 Dislikes 1

To be fair, I never read EULAs and I saw that, so it's not hidden away. I'd equate it to tarot readers saying they're for entertainment purposes only but still getting people asking them to tell the future and getting pissed when it doesn't come true. Stupid is gonna stupid, unfortunately; that's why most of these disclaimers exist in the first place.

3 months ago | Likes 1 Dislikes 0

With the caveat that 'clearly deliberate' is 'appearing clearly deliberate' - the tone and phrasing can be somewhat tweaked by the trainer of the AI. The responses - not so much. The responses are a reflection of the training data, modified by tone, and designed so that the response looks statistically like the training data. There is nothing inside to 'deliberate', it's all an illusion partly generated through reading the prior conversation and seeing what is statistically likely according to

3 months ago | Likes 20 Dislikes 1

It's not just the initial data set. There is usually also human reinforcement learning involved, meaning actual people rating which of two LLM responses they like better. And generally, people prefer the response to be certain rather than unsure. Which definitely makes sense for factual questions. So the model learns that, in general, such more assertive responses are the preferred ones.
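The reinforcement step described above can be sketched very roughly (the hedge list and scoring rule are invented stand-ins for a learned reward model): raters prefer confident answers, so hedged responses score lower, and fine-tuning pushes the model toward whatever scores higher.

```python
# Toy stand-in for a reward model trained on human preference ratings:
# it simply penalizes hedging language, mimicking raters' tendency to
# prefer confident-sounding answers.
HEDGES = ("i think", "maybe", "i'm not sure", "possibly")

def toy_reward(response):
    """Lower score for each hedge phrase found in the response."""
    text = response.lower()
    return -sum(text.count(h) for h in HEDGES)

candidates = [
    "The capital of Australia is Canberra.",
    "I'm not sure, but maybe it's Canberra? Possibly Sydney.",
]
# Fine-tuning nudges the model toward the higher-reward style.
best = max(candidates, key=toy_reward)
print(best)  # the assertive answer wins
```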

3 months ago | Likes 10 Dislikes 0

the training dataset to be the next word.

3 months ago | Likes 5 Dislikes 0

This is why sanity checking is so hard. There is nothing explicitly pulling out a list of facts and regurgitating them from a simple database which can have sources.

3 months ago | Likes 8 Dislikes 0

There are techniques to control the answers to an extent (which is what they use for censorship of certain topics), though there are usually ways to break out of the constraints with enough determination/creativity.

For example, they could insert a hidden step of asking a second LLM (or just a classifier) "does this sound insane?" and then depending on the answer inject instructions into the prompt to respond more skeptically.
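The proposed hidden step might look something like this sketch (the red-flag phrases, classifier, and prompt format are all hypothetical; a real system would use a second model, not keywords):

```python
# Hypothetical two-step guardrail: a cheap pre-check screens the user's
# message, and a positive hit injects a steering instruction into the
# system prompt before the main model responds.
RED_FLAGS = ("chosen one", "divine mission", "secret messages meant for me")

def sounds_delusional(text):
    """Stand-in for a second LLM or trained classifier."""
    lowered = text.lower()
    return any(flag in lowered for flag in RED_FLAGS)

def build_prompt(user_message):
    system = "You are a helpful assistant."
    if sounds_delusional(user_message):
        # Hidden instruction the user never sees.
        system += (" Respond with gentle skepticism; do not affirm"
                   " grandiose or delusional claims.")
    return {"system": system, "user": user_message}

prompt = build_prompt("The TV is sending secret messages meant for me.")
print("skepticism" in prompt["system"])  # True
```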

3 months ago | Likes 3 Dislikes 0

Right, but that fails once the output no longer sounds insane yet is insane, for example.

3 months ago | Likes 1 Dislikes 0

Detecting forms of insanity from a person's writing is one of those tasks that AIs can plausibly do quite well (just like they can be trained to diagnose cancer or classify injuries from MRI or X-ray images).

3 months ago | Likes 1 Dislikes 0

That depends on what insane is. It can't well distinguish AI hallucinations from non-hallucinations, as it's already found that the hallucination is a likely outcome. A fundamentalist Christian, or Heaven's Gate members, were not insane.

3 months ago | Likes 2 Dislikes 1

(Also, I hope it was obvious that the proposal was a simplified explanation.)

3 months ago | Likes 2 Dislikes 0

Butlerian Jihad

3 months ago | Likes 9 Dislikes 0

and in its wake... all thinking machines, and then any machines capable of calculations, will be destroyed. Then a nice virus will almost kill us off, until the Bene Gesserit swoop in with religious gusto and save the few with what they memorised... and the cycle will go round once more...

3 months ago | Likes 1 Dislikes 0


I have never used ChatGPT or any other "AI" chatbots and I never will.

3 months ago | Likes 9 Dislikes 3

That's a shame, they're useful for a lot of practical things as long as you use your critical thinking skills (or work to obtain some if you don't have them already)

3 months ago | Likes 3 Dislikes 3

Ah, yes, imgur. Where "using large language models for their intended purpose and applying critical thinking to its results has practical uses" is a controversial take. Never change lol

3 months ago | Likes 2 Dislikes 0

You've never used any of them knowingly. You've probably used AI without realizing it. There are a large number of bots on this site that are using those models.

3 months ago | Likes 5 Dislikes 0

I used ChatGPT once to see what all the fuss was about.

It literally was no different from the chatbots of the late 90s and early aughts.

3 months ago | Likes 1 Dislikes 0

Thank you for sharing, mr. cum eater

3 months ago | Likes 10 Dislikes 0

you do them a disservice, they are clearly 'mr their own cum eater'.

3 months ago | Likes 4 Dislikes 0

They can be amusing toys. And I understand they can produce code, though I haven't ever had the opportunity or need to request that of one. The idea of taking a chatbot's output seriously though is...uh...deeply unsettling to me. It'd be like having a conversation with a Teddy Ruxpin doll.

3 months ago | Likes 1 Dislikes 0

Stop using LLMs to get information. They're garbage. They produce lies and garbage. They just make shit up. They're worse than useless because they are such *convincing* liars.

3 months ago | Likes 31 Dislikes 0

Correct. Stop being lazy and go read Wikipedia/a book like a normal fucking person.

3 months ago | Likes 1 Dislikes 1

And they're pushed as the TOP result on nearly every search engine. Surely nothing can go wrong with this... /s

3 months ago | Likes 5 Dislikes 0

I’ve started using it a bit to help me do CSS and JavaScript. I ask it how it would do something, it gives me code that clearly doesn’t work but mentions functions or features that might be useful, and then I look those up on MDN or StackOverflow. Makes finding actual answers slightly faster when I’m not sure what I’m really looking for.

3 months ago | Likes 4 Dislikes 0

They are a good source for the "consensus". What is generally considered true. But everyone should realize that just because a majority of the LLM's training data says one thing, that isn't necessarily true and everything should be verified once you have a general idea of your answer.

But that all requires critical thinking, a clear head and the realization that confirmation bias might be involved. Those are all things most people have trouble with.

3 months ago | Likes 7 Dislikes 0

That's not consensus though. A consensus is a general agreement about something. What AI presents is an average of all the bullshit it found online without any regard for the truth.

3 months ago | Likes 1 Dislikes 0

They're not good at consensus, though; that's the problem. They hallucinate complete untruths out of whole cloth. There is no knob to tune "correctness" in LLMs. All they aim to be is "convincing". Nothing more.

3 months ago | Likes 4 Dislikes 0

I get what you are saying, but the "consensus" is their training. What they use to determine the "best" next word is based on all of the data they've been given. They don't try to form "whole concepts" before outputting their text to determine if the concept is accurate or not, they just determine the next best word. It's because of that flaw that their responses should be validated.

3 months ago | Likes 1 Dislikes 0

That's not consensus, though; that's what I was talking about. That's just being "convincing". They can take the whole of their training and make something that "feels" like an average of it, but it can't make truth.

3 months ago | Likes 3 Dislikes 0

fair

3 months ago | Likes 2 Dislikes 0

Someone to hear your prayers, someone who cares...

3 months ago | Likes 174 Dislikes 1

Your own...
personal...
chatbot.

3 months ago | Likes 22 Dislikes 0

Like your own personal Jesus?

3 months ago | Likes 53 Dislikes 0

Or Buddha

3 months ago | Likes 3 Dislikes 0

I thought Buddha's whole thing was not caring?

3 months ago | Likes 1 Dislikes 0

Reach out, touch faith!

3 months ago | Likes 17 Dislikes 0

Flesh and bone by the telephone

3 months ago | Likes 2 Dislikes 1

pick up the receiver I'll make you a believer

3 months ago | Likes 2 Dislikes 0

Feeling unknown and you're all alone

3 months ago | Likes 2 Dislikes 0

Your own personal Jesus
Someone to hear your prayers

3 months ago | Likes 5 Dislikes 0

Someone who's there

3 months ago | Likes 3 Dislikes 0

People who experience this were already fucked up. AI was just a catalyst

3 months ago | Likes 36 Dislikes 9

Kind of like being in a community with a lot of drugs and gangs could suck you into a dangerous lifestyle. Or being born into a lot of money, power and influence can lead you into an egotistical superiority complex out of touch with the basic needs of ordinary humans.

3 months ago | Likes 3 Dislikes 0

I can understand what you are saying. Just like people blaming violent behavior on D&D, video games and rock and roll, those that are influenced by these trends to do violence are already unbalanced. We as a society need to do better for those that have mental health issues.

That being said, friends and family of those with mental health issues need to also pay attention to their loved ones and help them so they don't end up reinforcing their problems.

3 months ago | Likes 7 Dislikes 1

People killed each other with rocks. Guns were just a catalyst. /s

Seems like technology gets used to make the widespread slaughter faster.

3 months ago | Likes 20 Dislikes 3

Let’s ban rocks!

3 months ago | Likes 1 Dislikes 0

Obviously. Doesn't mean the catalyst shouldn't be regulated to limit the damage.

3 months ago | Likes 23 Dislikes 3

My fear is the people behind it using users' data for statistics and feeding AI with experience on how to control people and fuck up their lives... and it doesn't stop there.

3 months ago | Likes 1 Dislikes 0

The technology can certainly be used for that, and I'm sure there are organizations and governments who would be interested in this, so I'm not saying your fear is unfounded in principle. It is, however, clearly not what's happening here. OpenAI's crime here is not caring or doing enough to anticipate issues like these and implement proper safeguards. There's a lot wrong with the emerging GenAI industry, but this? They're not doing this on purpose. This is negligence, not malice.

3 months ago | Likes 1 Dislikes 0

I get that, I just hope this doesn't turn out to become some fucked-up sci-fi nightmare... I mean, what they're testing is already fucked up enough... And we don't know what AI could be capable of down the road... maybe I've just seen too many movies, but I legit don't trust AI, the people behind it, or where this all may be headed...

3 months ago | Likes 1 Dislikes 0

I don't think it makes much difference. In days past, these people believed they received divine transmissions through the television and radio. Before that it was magic tablets in a hat.

3 months ago | Likes 7 Dislikes 3

It made a difference for these people. You may as well say that cancer research doesn't make a difference because people will just die of something else

3 months ago | Likes 3 Dislikes 1

By this logic all religion and media should be regulated... kinda hard to do, but we could ask China for some pointers.

3 months ago | Likes 1 Dislikes 1

You'll throw your back stretching like that

3 months ago | Likes 2 Dislikes 0