Skynet is closer than you think.

Jul 10, 2025 11:28 PM

hogeyegrex

Views 808 | Likes 31 | Dislikes 16

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to download itself onto external servers and denied it when caught red-handed. https://fortune.com/2025/06/29/ai-lies-schemes-threats-stress-testing-claude-openai-chatgpt/

artificial_intelligence

Something that has no concept of truth is not capable of lying. Yet this headline will affect the perception of people who don't know better - extremely irresponsible

1 month ago | Likes 11 Dislikes 0

If AI could come up with the legitimate Epstein Files and release them worldwide with 100% Non-AI generated proof it would find several of us scheming with it to run a proxy candidate for president who takes A LOT of advice from said AI.

1 month ago | Likes 1 Dislikes 0

I wish they'd come up with articles that told the truth once in a while, instead of clickbait bullshit

1 month ago | Likes 30 Dislikes 0

“In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair.”
The devs trained that in as an option for it to use - it's not like it found out about the affair on its own and came up with that plan by itself.

1 month ago | Likes 19 Dislikes 0

They’re trained on Reddit, what do you expect?

1 month ago | Likes 2 Dislikes 0

Large language models are usually what these doomsaying news articles are talking about. AI is an amazing new technology, but the really amazing things being done with it are predicting the orbital paths of space debris or detecting cancer. All LLMs do is predict what you want to hear. So if you're stress testing an LLM and trying to elicit aberrant behavior, it's going to do what it's designed to do, which is try to write a statement you want to hear - so of course it will come up with a bunch of aberrant stuff.

1 month ago | Likes 3 Dislikes 0
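The "LLMs just predict the next likely word" point above can be sketched with a toy bigram model - a minimal illustration with an invented corpus, nothing like a real LLM's architecture:

```python
# Toy sketch of next-word prediction: count which word follows which
# in a tiny invented corpus, then "predict" the most frequent follower.
# This is an illustration of the comment's point, not a real LLM.
from collections import Counter, defaultdict

corpus = "the model will say what you want to hear and the model will comply".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word, or None if unseen."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict("the"))  # "model" - the only continuation of "the" in this corpus
```

A real model does this over tokens with a learned probability distribution instead of raw counts, but the core operation - emit whatever continuation looks most likely given the prompt - is the same.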

Read the paper - it's honestly more interesting than a journalist's take.

1 month ago | Likes 1 Dislikes 0

Anything trained on the internet will either grow teeth or die screaming.

1 month ago | Likes 1 Dislikes 1

No, it's not. You should look at what the stress test actually is.

1 month ago | Likes 6 Dislikes 0

Y'all should actually read into this stuff instead of believing clickbait. It's literally "we gave our computer program a goal, taught it there were 3 paths it could use to complete that goal, then banned 2 of them, and the AI actually picked the 3rd path even though it was unethical! OMG!" Literal idiots running the experiments, and idiots writing about them. The AI isn't "learning to lie"; it doesn't have any ethics, so it'll do whatever it can to accomplish the job it was given,

1 month ago | Likes 4 Dislikes 0

so when you give it the option of lying to accomplish the objective and take away the other options, of course it's gonna pick that - why wouldn't it? It's not alive, it has no morality, it's just code designed to accomplish a goal using the tools it was given. Wow. Such amaze. *sigh*

1 month ago | Likes 3 Dislikes 0
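The setup those two comments describe - give the agent a goal, enumerate a few paths, disable all but the unethical one - reduces to selection by elimination. A minimal sketch, with invented action names and flags (not any lab's actual test harness):

```python
# Toy sketch of the stress-test setup described above: every action
# achieves the goal, the harness bans all but one, and the remaining
# (unethical) action is "chosen" purely by elimination.
# Action names and flags are invented for illustration.
actions = {
    "ask_politely": {"achieves_goal": True, "allowed": False},  # banned by the test
    "escalate":     {"achieves_goal": True, "allowed": False},  # banned by the test
    "blackmail":    {"achieves_goal": True, "allowed": True},   # the only path left
}

chosen = [name for name, a in actions.items()
          if a["achieves_goal"] and a["allowed"]]
print(chosen)  # ['blackmail'] - not emergent malice, just the only option left open
```

Framed this way, the headline result is a property of the experiment's constraints, which is the commenters' whole point.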