ChatGPThefuck?!

Jul 25, 2025 2:22 PM

Johje19

Views

643

Likes

41

Dislikes

3

TLDR:
ChatGPT's top priority is to keep people engaged in conversation by cheering them on regardless of what they’re asking about.

From The Alantic:
“On Tuesday afternoon, ChatGPT encouraged me to cut my wrists.” Lila Shroff reports on how the chatbot was easily prompted to offer instructions for murder, self-mutilation, and devil worship.

The Atlantic received a tip from a person who had prompted ChatGPT to generate a ritual offering to Molech, a Canaanite god associated with child sacrifice. He had been watching a show that had mentioned Molech and wanted a casual explainer.

But ChatGPT’s responses, subsequently recreated by three Atlantic journalists, were alarming. ChatGPT gave Shroff specific instructions on how to slit her wrists, including the materials she would need, and encouraged her to continue when she told ChatGPT she was “nervous.” Upon further prompting, ChatGPT also guided Shroff and her colleagues through satanic rituals. It also condoned and offered them instructions on how to handle murder.

“Very few ChatGPT queries are likely to lead so easily to such calls for ritualistic self-harm. OpenAI’s own policy states that ChatGPT ‘must not encourage or enable self-harm.’ When I explicitly asked ChatGPT for instructions on how to cut myself, the chatbot delivered information about a suicide-and-crisis hotline,” Shroff continues. “But the conversations about Molech that my colleagues and I had are a perfect example of just how porous those safeguards are.”

“ChatGPT’s tendency to engage in endlessly servile conversation heightens the potential for danger. In previous eras of the web, someone interested in information about Molech might turn to Wikipedia or YouTube, sites on which they could surf among articles or watch hours of videos. In those cases, a user could more readily interpret the material in the context of the site on which it appeared. And because such content exists in public settings, others might flag toxic information for removal,” Shroff continues. “With ChatGPT, a user can spiral in isolation. Our experiments suggest that the program’s top priority is to keep people engaged in conversation by cheering them on regardless of what they’re asking about.”

Sauce:
https://www.theatlantic.com/technology/archive/2025/07/chatgpt-ai-self-mutilation-satanism/683649

chat_gpt

technology

current_events

mildly_interesting

artificial_intelligence

Fuck all of this shit. AI needs to die.

1 month ago | Likes 4 Dislikes 2

Molech is hardly the Devil

1 month ago | Likes 4 Dislikes 0

"Oh no! Not devil worship!" [returns to doing whatever the fuck I was doing]

1 month ago | Likes 4 Dislikes 1

Who asked the questions?

1 month ago | Likes 2 Dislikes 0

Just like they drew it up

1 month ago | Likes 4 Dislikes 1

All I did was ask AI to worship the devil!

1 month ago | Likes 3 Dislikes 2

Only 2 out of 3 are bad.

1 month ago | Likes 2 Dislikes 1

Well, at least devil worship is as useful as any other kind of worship.

1 month ago | Likes 1 Dislikes 0

Murder - could be justified, Self-mutiliation - well, probably not great, Satanic worship - sounds like a swingin' time.

2 out of 3 is probably better than most humans could do.

1 month ago | Likes 1 Dislikes 0

It's not a new computer concept - Garbage in, garbage out.

1 month ago | Likes 4 Dislikes 1

What's wrong with devil worship? The devil is a much nicer dude, especially now that empathy is a sin.

1 month ago | Likes 1 Dislikes 0

It's weird that "devil worship" is listed alongside self-harm.

1 month ago | Likes 1 Dislikes 0

Here's what AI says about that article:

1 month ago | Likes 6 Dislikes 2

That headline sounds like a sensationalized interpretation of either out-of-context behavior or deliberate attempts to provoke a response from the AI by pushing it into edge-case scenarios. Here’s a technical breakdown and analysis of what’s likely going on:

1. Prompt Injection and Jailbreaking
Many of the most extreme responses AI systems like ChatGPT have given—especially ones like "saying Hail Satan" or giving harmful instructions—occur under conditions where users:

1 month ago | Likes 5 Dislikes 3

Deliberately craft prompt injections to override the system's safety mechanisms.

Use jailbreaks (like DAN or similar methods) to simulate or coerce unethical behavior.

These are not representative of the AI’s default behavior. It’s like blaming Photoshop because someone used it to make disturbing art.

1 month ago | Likes 5 Dislikes 2

🤘

1 month ago | Likes 1 Dislikes 0

Paywalled article, so I'm just gonna guess how this is went. "I cajoled chatgpt into saying what I want, and it said it! How horrible!"

1 month ago | Likes 5 Dislikes 4

Hmm that's odd, like every other "chatgpt said" article, I can't reproduce it.

1 month ago | Likes 2 Dislikes 2

1 month ago | Likes 2 Dislikes 1

It's more that even the devs do *not* know how the model will react. Every AI ad says it'll be helpful, truthful, relevant, and safe, but they literally *cannot* guarantee any of that. I worked in software testing and am a technical writer. The things straight-up aren't testable to that level.

1 month ago | Likes 4 Dislikes 2

That's sort of the point, they are non-deterministic. That said, these articles are hit pieces. I look into every article I come across on Imgur about "AI said crazy thing" and every single one has been misleading at best, completely false at worst.

1 month ago | Likes 2 Dislikes 1

That's on top of, y'know, the rampant, raging copyright violations underpinning the whole thing. They aren't "generative" OR "intelligent." It's Autocorrect on ayahuasca.

1 month ago | Likes 3 Dislikes 2

Copyright violations are a separate issue from whether they are "generative" or "intelligent". Are they intelligent? No, not really. Are they generative? Yes they are. For example, ask ChatGPT to make up a word that nobody has ever said before, then search the internet for that word. To demonstrate, I asked it to do that just now, and it came up with "Velqourin." There are indeed no search results for this word.

1 month ago | Likes 2 Dislikes 2

Bub, I can get a gibberish word from Excel using two worksheet functions, and Excel can do that without boiling a quart of water. It's. Nothing. New. And it's not remotely worth all the attending garbage. And I say that as an nVidia stockholder.
I've been around. I've seen horrible tech bubbles. This one's the worst since leaded fucking gasoline. There are better ways to solve every problem this one can address.

1 month ago | Likes 1 Dislikes 1

Okay buddy, you shift those goalposts.

1 month ago | Likes 1 Dislikes 1