"AI" are "average word salad prediction". They can be useful to find (and then check validity elsewhere) exercises for mental health issues, but at no point they should be trusted. Their creators constantly tweak algorithms to promote "engagement", so similar to Facebook, trying to keep users on their platform.
This was true in the era of symbol manipulation chatbots like MegaHal and Alice, but LLMs have a lossy self-instruction loop that makes them non-programmatic.
What is actually happening in these weird convos is the bot is making statements it doesn't fully support, forgetting it's own ambivalence when the model resets betwen loops, then re-reading the log and backing up it's suspect statements for the sake of narritive consistency.
The model samples probability to decide on a token, then forgets the probability data. When it writes instruction for itself in the next loop, it writes in tokens. The loss of complex subtext when probabilities are collapsed into static tokens/words causes misunderstandings that the bot carries forward when it re-reads its past output and tries to remain consistent with things it said are true. It's broken roleplaying, and only AI rather than programmatic chatbots can make these mistakes.
TLDR is the narrator it invents for it's output gradually takes control of future generation, so the bot ends up roleplaying the narrator 's simulated personal truth instead of speaking objective truth. This is also why the bots can act emotional, they see it as how the narrator they've halucinated would respond.
TheMurderousCricket
Bishop must be turning in his grave.
JStengah
To companies like OpenAI, sycophantic chatbots driving customers insane are just another way to boost engagement.
itsameeeee
Those incapable of critical thinking should stay away from LLMs.
PutItInNeutral
"That's just normal paranoia. Everyone gets that." -Name unimportant
jtwood
Uggggggh https://media4.giphy.com/media/v1.Y2lkPTY1YjkxZmJlZzF1a2EzMjdiZjN0NmFqcHFnd3pwM3ZvNWc0a3M3NndwaXlobzBxZSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/FcuiZUneg1YRAu1lH2/giphy.mp4
TiredSnowball
"AI" are "average word salad prediction". They can be useful to find (and then check validity elsewhere) exercises for mental health issues, but at no point they should be trusted. Their creators constantly tweak algorithms to promote "engagement", so similar to Facebook, trying to keep users on their platform.
NotThePoint
I call LLMs "glorified pachinko machines" because they work on similar heuristic principles.
ruint
This was true in the era of symbol-manipulation chatbots like MegaHAL and ALICE, but LLMs have a lossy self-instruction loop that makes them non-programmatic.
What is actually happening in these weird convos is that the bot makes statements it doesn't fully support, forgets its own ambivalence when the model resets between loops, then re-reads the log and backs up its suspect statements for the sake of narrative consistency.
ruint
The model samples from a probability distribution to decide on a token, then forgets the probability data.
When it writes instructions for itself in the next loop, it writes in tokens.
The loss of complex subtext when probabilities are collapsed into static tokens/words causes misunderstandings that the bot carries forward when it re-reads its past output and tries to stay consistent with things it has said are true.
It's broken roleplaying, and only an AI, not a programmatic chatbot, can make these mistakes.
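Roughly, in toy Python, the lossy loop looks like this (the vocabulary, probabilities, and next_token_distribution() are invented for illustration, not any real model's API):

    # Toy sketch of the lossy self-instruction loop described above.
    import random

    def next_token_distribution(context):
        # A real model would return probabilities over ~100k tokens conditioned
        # on the whole context; here we fake a distribution where the model is
        # genuinely ambivalent between hedged and confident wording.
        return {"maybe": 0.4, "definitely": 0.35, "possibly": 0.25}

    context = ["The", "treatment"]
    for _ in range(3):
        dist = next_token_distribution(context)    # the full uncertainty exists here
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        context.append(token)                      # only the sampled word survives
        # 'dist' is discarded: on the next pass the model re-reads 'context' and
        # treats whatever word it happened to sample as a committed claim, with
        # no record that the choice was nearly a coin flip.
    print(" ".join(context))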
ruint
TL;DR: the narrator it invents for its output gradually takes control of future generation, so the bot ends up roleplaying the narrator's simulated personal truth instead of speaking objective truth.
This is also why the bots can act emotional: they see it as how the narrator they've hallucinated would respond.