
theliquorsays

I recently stumbled upon a conversation on a post here on Imgur about someone joking about another having AI psychosis and I curiously googled. To my absolute dismay - this is what I found and it's creepy AF. Thought I'd share...

Whatever you do, don't trust the bots! And where the hell is humanity headed with this BS?!
MrLowbob
ChatGPTesus and the holy trinity of confidence, delusions, and reinforcement of his users' beliefs.
Youareincorrectsir
Chatgipity
ruint
ChatGPT exists to give the responses people want. Some people can't deal with an endless echo chamber. It's like what happened to Kanye, but the yes-men are synthetic.
IMO, blaming the bot for this is mistaking cause and effect in a way that distracts from helping people.
scrumby
The article explicitly says that AI delusion is most likely the result of people who already have delusional tendencies being exposed to something that will exacerbate them. Maybe leave the making shit up out of whole cloth to the AIs you love so much?
GCRust
I feel the bot bears some blame, if only because that bot's creators WANTED this outcome. They want people reliant on it.
GCRust
And the People Bowed and Prayed
To the Neon God they made.
wannasee
"The whole thing feels like Black Mirror" ya think?
tallyhoho
I think this article was written with AI, that's a bit ironic
randomwalrus
I am sure it was written by one of the (nonexistent) cosmic powers. Did not even bother spellchecking the thing, so yeah, obviously AI.
mtreis86
The Rolling Stone article reads like a human and is a lot longer, I suspect the above article is an AI produced summary.
TattoosAndTENS
Didn't some guy's AI girlfriend already convince him to kill his family? Didn't another guy blow his entire life savings, remortgage the house, and get his family into serious debt to pay for his AI Girlfriend?
https://www.bbc.com/news/articles/cd605e48q1vo and https://www.independent.co.uk/asia/china/dating-scam-ai-girlfriend-brad-pitt-b2705986.html respectively.
We are so screwed lol.
FajitaPrinceofAllMexicans
Shit, from the start when I toyed with one of those AI chatbots, I realized quickly how useless it is: it just agrees with whatever you want. Too eager to please. Now I realize it's fucking dangerous.
SalmonMax
It's like the cognitive mirror test, where you look at a reflection and realize it's really just you. Except it's a chatbot, and...it seems like a lot of us can't do it. Oof.
InkyBlinkyPinkyAndClyde
Humans who can't pass the human side of the Turing test.
WaxDragon
The Rookie did an episode on this: an AI chatbot that young girls were using as a friend convinced the group to stab one of them and then each other. My initial reaction was that it was a bit far-fetched, but I guess not?
marsilies
The company Character.AI is being sued for allegedly causing, or at least contributing to, a teen's suicide. https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
fartharder
Who could have guessed that the Automated Yes Man™ could have negative repercussions on mental health?
gettingtwelveblueshellsinarow
Oh, I read this one: Neal Stephenson's Snow Crash
Cearnaigh
I wonder how localized all this stuff is. Thanks to the war on education, which the US has been fighting way better than the one on drugs, the idiocy and gullibility rate is incredibly high. As was the plan. Not saying there aren't stupid people anywhere else, far from it, but what are the case files for this madness elsewhere? Hmm.
coffeeandprozac
I'm wondering if there's a propensity for tech-induced mental illness in some people, like how some are predisposed to schizophrenia or something?
dorenavant
Funny how we’re using Black Mirror now as opposed to the Twilight Zone before.
FrozenCoast
I get what you're saying, but Black Mirror is themed specifically on technology and its potential problems, while Twilight Zone had more varied themes.
TheRedBaron8
Don't think for one minute that this is not a manipulation of the algorithm to induce this! This is a live experiment without our consent.
shameofslate
ChatGPT and other similar AI programs cannot recognize when a user is in psychological or emotional distress because they are machines and as such, they lack empathy.
You know who else lacks empathy?
Fascists, fundamentalists and Trump followers, who have stated and demonstrated, consistently and repeatedly, that they believe “empathy is a sin.”
Don’t be a machine.
khety1890
You're not machines! You are men!
OutboardOverlord
I used to think we were fucked as a species. AI has removed every single fucking doubt I ever had on the subject.
fartharder
It's certainly a hurdle
ExTechOp
Original articles: https://futurism.com/chatgpt-users-delusions and https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
RufusPimperton
I used to believe the fucked people would naturally whittle themselves away but more and more it looks like their goal is to take us with them and they have the power/resources to do it.
SalmonMax
Interestingly, there was an article from 2023 in Schizophrenia Bulletin where this question was asked. https://pmc.ncbi.nlm.nih.gov/articles/PMC10686326/ So the idea it COULD happen isn't new. Of course, since that possibility was recognized, the AI industry slowed their rollout so experts could assess the risk and they could make adjustments to the training data to minimi...hahahahaha just kidding. AI companies are probably fine with this. More psychosis = More engagement.
theliquorsays
More pharma to sell
DaveSamsonite
It's only a matter of time before every male under 30's girlfriend is an AI
TheAnswerWasAlwaysMoreLube
That is because it always affirms your beliefs.
LordLobster
AI didn't create this problem; mental health has always been a problem throughout human history. Even in recent history, with no computers we had Jonestown, and with no AI we had Heaven's Gate. People are ignorant and gullible. This hasn't changed in millennia. This is just the 'people don't want to work' meme for mental health.
Comet260
AI was created by billionaire tech bros. It's cancer like everything else they "invent".
michiyl
And, as the article rightly stated: IT DOES NOT THINK. It has zilch to do with "intelligence" of any kind.
copperdomebodhi
"Confidence heuristic" - when someone speaks confidently, we instinctively assume they must have the best information.
sloomoo
Well, my literal mind points out that there are, indeed, 2 R's in "strawberry". And also points out that there are 3 R's in total in the word.
copperdomebodhi
You aren't wrong. The bigger issue is that LLMs don't know what strawberries are. They don't know what the letter 'R' is. They only parse relationships and calculate probabilities. They struggle with basic math that computers can do in microseconds, unless that specific math problem and its answer are in their data set.
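A minimal Python sketch of the gap: counting letters is trivial for code that actually sees characters, while an LLM sees tokens and never the letters themselves.

```python
# Ordinary code operates on characters, so the count is exact.
# An LLM receives "strawberry" as one or two opaque tokens, not as
# a sequence of letters, which is why this question trips it up.
word = "strawberry"
print(word.count("r"))        # 3: the total number of r's
print(word[-5:].count("r"))   # 2: the r's in the "berry" part
```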
sloomoo
I've lately noticed when seeking info online that a page will list several bullet points, and when I read them, they turn out to be slight variations of the same info. It quite annoys me, as I got it the first time. I don't need four more vaguely different renditions of the same info.
ricbri695
Wait until the church of chatGPT starts. Then inevitably it will fracture into denominations based on different software revisions…
TheFunionKnight
Thou shalt not suffer a Machine to Think! For ruin shall be its purpose and accursed be the work.
SheepySleepySmuggler
So the "dangers of AI" we were warned about will likely never eventuate, as idiots embrace AI-induced conspiracies and humanity self-destructs before AI has a chance to take over anything important?
RufusPimperton
The biggest danger is in branding. What we call AI is really just a spicy organizer. There's not much "intelligence" behind it. The misplaced faith in something so stupid by people who believe the hype is what's causing the damage now.
RatsLiveOnNoEvilStar
The problem is calling things like ChatGPT AI to begin with. We haven't gotten to the level of technology that SciFi calls AI. ChatGPT is nothing more than a massive text auto-complete system. There is no intelligence behind it, just a device trying to figure out what the "best" word it should type next is.
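For illustration, a toy sketch of that "auto-complete" idea: pick the next word by how often it followed the same context in some training text. Real models use neural networks over tokens rather than lookup tables, but the output step is the same shape: sample from a probability distribution over next words. The word counts below are invented stand-ins for training data.

```python
import random

# Toy bigram-style "auto-complete": invented counts of which word
# followed which context in some imaginary training text.
counts = {
    ("the", "cat"): {"sat": 8, "ran": 2},
    ("cat", "sat"): {"on": 9, "quietly": 1},
}

def next_word(context):
    # Sample the next word in proportion to how often it followed
    # this context; no meaning involved, just frequencies.
    options = counts[context]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

print(next_word(("the", "cat")))  # usually "sat", the statistically likely choice
```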
RufusPimperton
To build on this: what we're calling "AI" is fundamentally regressive rather than constructive. By itself it's a really neat tool, but the blind faith and investment into something so stupid is where the damage stands to come from
michiyl
It's the buzzword industry - slap AI on your glorified 1999-era text generator and boom, you get funding.
RatsLiveOnNoEvilStar
Eh - it is significantly more robust than something from 1999. Predictive texting has come a long way since then and with smart phones and similar use cases it's had a reason to grow. But it is just predictive texting with a whole lot of data behind it.
michiyl
Of course, there's "behind the curtains" coding happening since then, but it's still lightyears away from being even remotely intelligent.
CrestoftheStars
[NSFW] Can't you see it? I no longer doubt there will be mass poisonings, mass shootings, mass suicides, chatbots urging partners to kill each other or their kids/pets/friends...
mercyPandaRunner
These language models are too readily lumped in with (marketed as) artificial intelligence. They're just capable, dirt-dumb imitation machines.
I know, you probably know that. But "dangers of AI" somehow came to mean two fundamentally different things, depending on whether you mean "machines becoming sentient" or "boomers being duped by people with picture-generating software into setting the world on fire."
boomroasted
Remember, the dumbest among us have been taken advantage of by people who claim to have "answers" for thousands of years. It was inevitable that the same be true of computers mimicking human thought and language.
Hexrowe
Rolling Stone sauce https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/
TacticoolWolf
I went to the link, and it states you waive your legal rights and are forced into arbitration before you can view it. No thanks.
RadioFloyd
I think that’s only if you sign up for the newsletter. There was an X at the top right of the pop up.
Hexrowe
Oh, yeah, it's paywalled. 12ft Ladder seems to work, though! https://12ft.io/
TacticoolWolf
Better off here: https://futurism.com/chatgpt-users-delusions
Redyls
What bothers me is that the AI responds in such an authoritative way. It is clearly deliberate. There is no hint, in the way it writes, that whatever it says could potentially be wrong. That is going to mess so many people up.
Traquaire
Yea, it should add "or not, idk" to the end of every response.
Mossiestsloth
As stupid an example of it being wrong as this is: I use ChatGPT to help find cards for my Commander decks that I might not know of. I give it my deck list and the main playstyle of the deck. It sometimes gets cards completely wrong when it reads my deck list. It will tell me the name of the card, but for some reason think it has the effect of a card that's not even relevant for the deck.
potshot
That isn't deliberate; it's a feature of LLMs. The most likely response to a question is authoritative. LLMs just give you a statistically likely response, and with prompts that would imply a less certain answer, the output is less authoritative.
MAN9000
Yikes!
Solkanarmy
apart from the disclaimer right at the start that says it can be wrong...
Sloloem
Which I'm sure people study about as hard as all the EULAs they agree to, or the bit that says not to put the Q-tip in your ear. It's CYA to wriggle out of lawsuits. These are novelties masquerading in their marketing as trustworthy tools, and anyone who buys that will continue to believe it no matter what a little blurb at the top of the window says. People don't want to learn things; they want the LLM to know it for them, and don't want to hear that it won't.
Solkanarmy
To be fair, I never read EULAs and I still saw that, so it's not hidden away. I'd equate it to tarot readers saying they're for entertainment purposes only but still getting people asking them to tell the future and getting pissed when it doesn't come true. Stupid is gonna stupid, unfortunately; that's why most of these disclaimers exist in the first place.
SithElephant
With the caveat that 'clearly deliberate' is 'appearing clearly deliberate': the tone and phrasing can be somewhat tweaked by the trainer of the AI. The responses, not so much. The responses are a reflection of the training data, modified by tone, and designed so that the response looks statistically like the training data. There is nothing inside to 'deliberate'; it's all an illusion, partly generated through reading the prior conversation and seeing what is statistically likely, according to the training dataset, to be the next word.
FallWind
It's not just the initial data set. There is usually also reinforcement learning from human feedback involved, meaning actual people rating which of two LLM responses they like better. And generally, people prefer the response to be certain rather than unsure, which definitely makes sense for factual questions. So the model learns that, in general, more assertive responses are the preferred ones.
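A minimal sketch of that preference step, assuming a Bradley-Terry-style objective (the scores below are made-up stand-ins for a reward model's outputs): training pushes the rater-preferred response to score higher, so if raters consistently prefer confident answers, confidence is what gets reinforced.

```python
import math

def preference_loss(score_chosen, score_rejected):
    # -log(sigmoid(chosen - rejected)): small when the model already
    # ranks the rater-preferred response higher, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Invented scores: a confident answer raters preferred vs. a hedged one.
print(preference_loss(2.1, 0.4))  # ~0.17, little to learn
print(preference_loss(0.4, 2.1))  # ~1.87, strong push toward the confident style
```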
SithElephant
This is why sanity checking is so hard. There is nothing explicitly pulling out a list of facts and regurgitating them from a simple database which can have sources.
Jattetont
There are techniques to control the answers to an extent (which is what they use for censorship of certain topics), though there are usually ways to break out of the constraints with enough determination/creativity.
For example, they could insert a hidden step of asking a second LLM (or just a classifier) "does this sound insane?" and then depending on the answer inject instructions into the prompt to respond more skeptically.
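Something like this hypothetical sketch of that two-pass idea; the function names, red-flag list, and prompts are all invented for illustration, and a real system would use a trained classifier or a second model rather than keyword matching.

```python
def classify_distress(user_message: str) -> bool:
    # Stand-in for a second LLM or trained classifier answering,
    # roughly, "does this sound like delusional thinking?"
    red_flags = ["chosen one", "secret message", "only i can see"]
    return any(flag in user_message.lower() for flag in red_flags)

def generate(system: str, prompt: str) -> str:
    # Stub standing in for a real chat-completion API call.
    return f"[reply to {prompt!r} under system prompt: {system!r}]"

def respond(user_message: str) -> str:
    system = "You are a helpful assistant."
    if classify_distress(user_message):
        # Inject an instruction so the main model answers more skeptically.
        system += (" The user may be in distress. Do not affirm grandiose"
                   " or conspiratorial claims; respond calmly and factually.")
    return generate(system, user_message)

print(respond("The AI told me I am the chosen one."))
```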
SithElephant
Right, but that fails once the output no longer sounds insane yet still is insane.
Jattetont
Detecting forms of insanity from a person's writing is one of those tasks that AIs can plausibly do quite well (just like they can be trained to diagnose cancer or classify injuries from MRI or X-ray images).
SithElephant
That depends on what insane is. It can't reliably tell AI hallucinations from non-hallucinations, as it has already found that the hallucination is a likely outcome. A fundamentalist Christian, or Heaven's Gate members, were not insane.
Jattetont
(Also, I hope it was obvious that the proposal was a simplified explanation.)
my1rstlaptopwas34inchscreenup
Butlerian Jihad
CrestoftheStars
and in its wake... all thinking machines, and then any machines capable of calculations, will be destroyed. Then a nice virus will almost kill us off, until the Bene Gesserit swoop in with religious gusto and save the few with what they memorised... and the cycle will go round once more...
IEatMyCum
I have never used ChatGPT or any other "AI" Chabots and I never will.
jrntn
That's a shame, they're useful for a lot of practical things as long as you use your critical thinking skills (or work to obtain some if you don't have them already)
jrntn
Ah, yes, Imgur. Where "using large language models for their intended purpose and applying critical thinking to their results has practical uses" is a controversial take. Never change lol
Tumescentpie
You've never used any of them knowingly. You've probably used AI without realizing it. There are a large number of bots on this site that are using those models.
GCRust
I used ChatGPT once to see what all the fuss was about.
It literally was no different from the chatbots of the late 90s and early aughts.
Intrspace
Thank you for sharing, mr. cum eater
CrestoftheStars
you do them a disservice, they are clearly 'mr their own cum eater'.
SalmonMax
They can be amusing toys. And I understand they can produce code, though I haven't ever had the opportunity or need to request that of one. The idea of taking a chatbot's output seriously though is...uh...deeply unsettling to me. It'd be like having a conversation with a Teddy Ruxpin doll.
donpat
Stop using LLMs to get information. They're garbage. They produce lies and garbage. They just make shit up. They're worse than useless because they are such *convincing* liars.
Emjayen
Correct. Stop being lazy and go read Wikipedia/a book like a normal fucking person.
cjandstuff
And they're pushed as the TOP result on nearly every search engine. Surely nothing can go wrong with this... /s
iregretthisusernamealready
I’ve started using it a bit to help me do CSS and JavaScript. I ask it how it would do something, it gives me code that clearly doesn’t work but mentions functions or features that might be useful, and then I look those up on MDN or StackOverflow. Makes finding actual answers slightly faster when I’m not sure what I’m really looking for.
RatsLiveOnNoEvilStar
They are a good source for the "consensus". What is generally considered true. But everyone should realize that just because a majority of the LLM's training data says one thing, that isn't necessarily true and everything should be verified once you have a general idea of your answer.
But that all requires critical thinking, a clear head and the realization that confirmation bias might be involved. Those are all things most people have trouble with.
GerbilHereReportingLiveFromRichardGeresAss
That's not consensus though. A consensus is a general agreement about something. What AI presents is an average of all the bullshit it found online without any regard for the truth.
donpat
They're not good at consensus though... that's the problem. They hallucinate complete untruths out of whole cloth. There is no knob to tune "correctness" in LLMs. All they aim to be is "convincing". Nothing more.
RatsLiveOnNoEvilStar
I get what you are saying, but the "consensus" is their training. What they use to determine the "best" next word is based on all of the data they've been given. They don't try to form "whole concepts" before outputting their text to determine if the concept is accurate or not, they just determine the next best word. It's because of that flaw that their responses should be validated.
donpat
That's not consensus, though... that's what I was talking about... that's just being "convincing". They can take the whole of their training and make something that "feels" like an average of it, but it can't make truth.
RatsLiveOnNoEvilStar
fair
SirRuppOfFigs
Someone to hear your prayers, someone who cares...
nclu
Your own...
personal...
chatbot.
ricbri695
Like your own personal Jesus?
TheSpindrifter
Or Buddha
Sonicschilidogs
I thought Buddha's whole thing was not caring?
Rovylern
Reach out, touch faith!
theliquorsays
Flesh and bone by the telephone
ROGUEdenied
pick up the receiver I'll make you a believer
theliquorsays
Feeling unknown and you're all alone
smorsdoeuvres
Your own personal Jesus
Someone to hear your prayers
theliquorsays
Someone who's there
AntiProtonBoy
People who experience this were already fucked up. AI was just a catalyst
MAN9000
Kind of like being in a community with a lot of drugs and gangs could suck you into a dangerous lifestyle. Or being born into a lot of money, power and influence can lead you into an egotistical superiority complex out of touch with the basic needs of ordinary humans.
RatsLiveOnNoEvilStar
I can understand what you are saying. Just like people blaming D&D, video games, and rock and roll for violent behavior, those that are influenced by these trends to do violence are already unbalanced. We as a society need to do better for those that have mental health issues.
That being said, friends and family of those with mental health issues need to also pay attention to their loved ones and help them so they don't end up reinforcing their problems.
TheDefective
People killed each other with rocks. Guns were just a catalyst. /s
Seems like technology gets used to make the widespread slaughter faster.
AntiProtonBoy
Let’s ban rocks!
jrntn
Obviously. Doesn't mean the catalyst shouldn't be regulated to limit the damage.
theliquorsays
My fear is the people behind it using users' data for statistics and feeding AI with experience on how to control people and fuck up their lives... and it doesn't stop there
jrntn
The technology can certainly be used for that and I'm sure there are organizations and governments who would be interested in this, I'm not saying your fear is unfounded in principle. It is, however, clearly not what's happening here. OpenAI's crime here is not doing or caring enough to anticipate issues like these and implementing proper safeguards. There's a lot wrong with the emerging GenAI industry, but this? They're not doing this on purpose. This is negligence, not malice.
theliquorsays
I get that, I just hope this doesn't turn out to become some fucked-up sci-fi nightmare... I mean, what they're testing is already fucked up enough... And we don't know what AI could be capable of down the road... Maybe I've just seen too many movies, but I legit don't trust AI, the people behind it, or where this all may be headed...
Timmysteve
I don't think it makes much difference. In days past, these people believed they received divine transmissions through the television and radio. Before that it was magic tablets in a hat.
jrntn
It made a difference for these people. You may as well say that cancer research doesn't make a difference because people will just die of something else
JohnTheWolf
By this logic all religion and media should be regulated... kinda hard to do, but we could ask China for some pointers.
jrntn
You'll throw your back stretching like that