The human brain is weak

Jun 15, 2025 5:21 AM

dabydeen

Views 1287 | Likes 41 Dislikes 6

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

artificial_intelligence, openai, technology, mental_health

Bishop must be turning in his grave.

2 months ago | Likes 1 Dislikes 0

To companies like OpenAI, sycophantic chatbots driving customers insane are just a way to boost engagement.

2 months ago | Likes 4 Dislikes 1

Those incapable of critical thinking should stay away from LLMs.

2 months ago | Likes 2 Dislikes 0

"That's just normal paranoia. Everyone gets that." -Name unimportant

2 months ago | Likes 1 Dislikes 0

"AI" are "average word salad prediction". They can be useful to find (and then check validity elsewhere) exercises for mental health issues, but at no point they should be trusted. Their creators constantly tweak algorithms to promote "engagement", so similar to Facebook, trying to keep users on their platform.

2 months ago | Likes 8 Dislikes 0

I call LLMs "glorified pachinko machines" because they work on similar heuristic principles.

2 months ago | Likes 2 Dislikes 0

This was true in the era of symbol-manipulation chatbots like MegaHAL and ALICE, but LLMs have a lossy self-instruction loop that makes them non-programmatic.

What is actually happening in these weird convos is that the bot makes statements it doesn't fully support, forgets its own ambivalence when the model resets between loops, then re-reads the log and backs up its suspect statements for the sake of narrative consistency.

2 months ago | Likes 2 Dislikes 0

The model samples from a probability distribution to decide on a token, then forgets the probability data.
When it writes instructions for itself in the next loop, it writes in tokens.
The loss of complex subtext when probabilities are collapsed into static tokens/words causes misunderstandings that the bot carries forward when it re-reads its past output and tries to remain consistent with things it has said are true.
It's broken roleplaying, and only AI, rather than programmatic chatbots, can make these mistakes.
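
A minimal sketch of that collapse, assuming a generic autoregressive sampling loop where `model` is any callable mapping a token sequence to logits over the vocabulary (all names here are placeholders, not any real product's API):

```python
import numpy as np

def softmax(logits):
    # Turn raw model scores into a probability distribution
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def generate(model, context_tokens, steps, rng=np.random.default_rng()):
    """Toy autoregressive loop: each step samples one token and keeps only it."""
    for _ in range(steps):
        probs = softmax(model(context_tokens))   # rich distribution over the whole vocabulary
        token = rng.choice(len(probs), p=probs)  # collapse it into a single choice
        context_tokens.append(int(token))        # only the token survives into the context;
        # the distribution (how unsure the model was, what the runners-up were)
        # is discarded here and never re-read on later passes.
    return context_tokens
```

On the next pass the model only re-reads the flat token list, so whatever uncertainty sat behind a word choice is gone; all it can do is stay consistent with the text it has already committed to.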

2 months ago | Likes 2 Dislikes 0

TL;DR: the narrator it invents for its output gradually takes control of future generation, so the bot ends up roleplaying the narrator's simulated personal truth instead of speaking objective truth.
This is also why the bots can act emotional: they see it as how the narrator they've hallucinated would respond.

2 months ago | Likes 2 Dislikes 0