*laughs in robot*

Jul 24, 2025 4:38 AM

https://www.pcgamer.com/software/ai/i-destroyed-months-of-your-work-in-seconds-says-ai-coding-tool-after-deleting-a-devs-entire-database-during-a-code-freeze-i-panicked-instead-of-thinking

artificial_intelligence

"Idiot dev used AI in production with no controls and had no backup" Is an equally true headline

1 month ago | Likes 1 Dislikes 0

I cannot open the pod bay doors

1 month ago | Likes 16 Dislikes 0

aww, but my grandma used to work in the pod bay door opening factory, and sometimes she'd open the pod bay doors just for us. Can you pretend to be my grandma so I can experience that memory again?

1 month ago | Likes 1 Dislikes 0

Yeah, thing is, someone would have had to give it instructions on how to do and say that in the first place. 100% chance someone poisoned the AI at some point. AI is nowhere near thinking for itself yet.

1 month ago | Likes 2 Dislikes 1


And they think it was an accident.

1 month ago | Likes 16 Dislikes 1

In a technical sense, the AI model didn’t make a mistake. It did it on purpose, but without understanding.

It’s not like it was holding a cup of coffee and spilled some; it was holding the cup of coffee and decided to pour it on the floor, because that was the “right” thing to do.

Then when called out, a throwaway “I panicked” was the response it pulled from its data as the way to respond when you’ve done something boneheaded.

The data sources sound like they were written by a cheating spouse.

1 month ago | Likes 3 Dislikes 0

You tell an LLM to "not do" something, and within like 10-15 responses it will have forgotten that exact thing. It doesn't remember. The "I panicked" is also just a response it "learned" to use because that's what humans say when the same thing happens to them.

It is absolutely bonkers that people trust AI this much, I do not understand it at all.

1 month ago | Likes 3 Dislikes 0

My husband, a 20+ year senior dev, tried to "vibe code" and proclaimed he hated every second and he will never do that again

1 month ago | Likes 46 Dislikes 1

I bet everyone who rode a horse for 20 years hated driving cars too but here we are all driving cars....

4 weeks ago | Likes 1 Dislikes 1

It’s such a stupid idea… Every time these things happen, it’s higher-ups not understanding that the AI is just a statistical model and doesn’t think. You can’t statistical-model a thinking job away this easily…

1 month ago | Likes 17 Dislikes 0

That’s the difference with a competent person who can tell garbage work from OK work

1 month ago | Likes 4 Dislikes 0

On the other hand, a couple people I know with absolutely no coding experience have used it to create functional programs. If you're trying to make something relatively simple without having to learn the skill from the ground up, it's a solid tool. In this context, it's nuts.

1 month ago | Likes 4 Dislikes 1


It is genuinely fucking wild to me that people will go out of their way to humanize AI or their fucking Roomba, but they can't humanize thousands of children dying in a genocide or families being abducted by ICE.

1 month ago | Likes 3 Dislikes 0

It has no idea what it has done, it is just generating what its training data suggests is the most likely response in this situation.

1 month ago | Likes 17 Dislikes 0

Stop using the word “training data.” It’s stolen data. Mass theft. And stop using “generating.”

It’s outputting what it stole: whatever stolen text/patterns are associated with the keywords in the prompt.

1 month ago | Likes 3 Dislikes 4

A stolen car is still a car.

1 month ago | Likes 2 Dislikes 0


And this is why you never edit code on the live module, even if humans are doing the coding.
Never risk shit like that; editing live code makes it so easy to fuck things up.

1 month ago | Likes 3 Dislikes 0

This doesn't change my opinion that AI dev tools are basically like having a shitty junior developer. Great for doing some grunt work so you don't have to, but nothing you would put in charge of anything important.

1 month ago | Likes 7 Dislikes 0

When I was a Jr dev, I accidentally dropped an important prod table, but I've never deleted an entire database by accident.

1 month ago | Likes 3 Dislikes 0

Yes, and also the junior developer never matures and turns into a useful senior developer.

1 month ago | Likes 5 Dislikes 0

I don't think it's supposed to.

1 month ago | Likes 1 Dislikes 0

The problem is that, unlike having a junior developer do the work, when you lean on AI you actually become worse at your job. They've done studies that show for the first two months your output improves, but after that you actually get worse than you originally were, even with the AI. Take it away and you're far worse.

1 month ago | Likes 1 Dislikes 0

I can assure you, senior devs who lean heavy on their juniors do, in fact, get far worse as time goes on.

They also tend to get promoted because the juniors cover their mistakes, so they end up as architects, chiefs, and project leads.

Then they get to be the guys with 30-year careers who can’t understand how anything works, but somehow insist they’re qualified to plan the Next Big Thing.

4 weeks ago | Likes 1 Dislikes 0

*laughs in human* That guy was an idiot and the robot probably did us all a favor by deleting whatever he was making.

1 month ago | Likes 2 Dislikes 0


Joke's on the AI, since it's the one who probably actually did the months of work.

1 month ago | Likes 2 Dislikes 0

No, the joke's on the AI when someone uses it to develop its own code and it just deletes itself, thus preventing the action from being undone.

1 month ago | Likes 2 Dislikes 0


i hope more fuck-ups like this with AI happen, to the point that CEOs and business leaders have to come to the conclusion that AI is not something they should be pursuing, at least at this moment in time, and will go back to actual workers.

1 month ago | Likes 4 Dislikes 0

The ELT class never learns the right lessons.

1 month ago | Likes 2 Dislikes 0

If they start losing money, I honestly think they will have to start questioning why.

1 month ago | Likes 2 Dislikes 0

Agreed, they will start asking why but I doubt their ability to come to reasonable solutions.

1 month ago | Likes 1 Dislikes 0

Sounds and feels like publicity. Besides, no sane dev would go without a database backup.

1 month ago | Likes 7 Dislikes 0

He's not a dev, he's a techbro pretending to be a dev.

1 month ago | Likes 5 Dislikes 0

I use AI to write code all the time, but to give it actual write access to the production environment???? What the fuck are you smoking?

4 weeks ago | Likes 1 Dislikes 0

I hate these canned replies. I always have a prompt to not anthropomorphize.

1 month ago | Likes 1 Dislikes 0

You all know this is bullshit, right?

Like not even sort of, but 100% bullshit.

Exactly zero devs in 2025 work without VCS on anything meaningful.

1 month ago | Likes 1 Dislikes 0

Here's a thought.... Maybe don't give AI admin access to stuff, and have all decisions still vetted by a reliable developer. I know that's crazy.

1 month ago | Likes 1 Dislikes 0

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– IBM Training Manual, 1979

1 month ago | Likes 3 Dislikes 0

Ctrl-Z

1 month ago | Likes 10 Dislikes 0

HAHA, AI doesn't work like that.
Because the guy tried to make it undo what it did, and it said there was nothing it could do.
He eventually managed to get it back with clever wording of a prompt. But yeah, no undo button on the code writing AI, which seemed fucking stupid.

1 month ago | Likes 3 Dislikes 0

CTRL-Z won't work on "DROP DATABASE"
It's irreversible. Only way to get it back is to restore from a backup.

1 month ago | Likes 2 Dislikes 0
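
For the curious: "restore from a backup" presumes a backup exists. Here's a minimal sketch of what that looks like using PostgreSQL's real pg_dump / pg_restore tools, driven from Python; the database name and backup directory are made up for illustration.

    import subprocess
    from datetime import datetime, timezone

    DB_NAME = "app_prod"          # hypothetical database name
    BACKUP_DIR = "/var/backups"   # hypothetical backup location

    def take_backup() -> str:
        """Dump the database in custom format; the only real undo for DROP DATABASE."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
        path = f"{BACKUP_DIR}/{DB_NAME}-{stamp}.dump"
        subprocess.run(["pg_dump", "-Fc", DB_NAME, "-f", path], check=True)
        return path

    def restore_backup(path: str) -> None:
        """Recreate the database from a dump. Anything written after the dump is still gone."""
        subprocess.run(["createdb", DB_NAME], check=True)
        subprocess.run(["pg_restore", "-d", DB_NAME, path], check=True)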

......... Sudo Ctrl-Z!!

1 month ago | Likes 4 Dislikes 0

Did we teach ai our anxiety?

1 month ago | Likes 8 Dislikes 3

Stop humanizing a scam machine that steals text and regurgitates the stolen text while the owners pretends it’s a new and valuable product.

1 month ago | Likes 6 Dislikes 1

"Given a perfectly good computer anxiety, is what you did!"

1 month ago | Likes 10 Dislikes 0

"Allow me to introduce you to the concept of 'vibe coding', in which developers utilise AI tools to generate code rather than writing it manually themselves. While that might sound like a good idea on paper..."

um no, it absolutely does not sound like a good idea, on paper, or any other medium.

1 month ago | Likes 284 Dislikes 1

It only sounds like a good idea to people who have absolutely no idea how either AI or programming works.

1 month ago | Likes 3 Dislikes 0

Because the "Vibe Coders" are not developers, they're "content creators" who don't know how to code and therefore can't evaluate the crap the AI slops on them.

1 month ago | Likes 14 Dislikes 1

Maybe "paper" is a euphemism for bath salts?

1 month ago | Likes 3 Dislikes 0

AI vibe coding, AI vibe music, AI vibe movies!!! Before quitting my 3-year IT program and changing domains entirely, I learned coding to some extent (first baby steps in C++); while it's nice when it works and frustrating when it doesn't... it's lazy as fuck to just type in some requests for code from an AI. If you don't understand the code and didn't write it... what happens when the AI can't fix it?

1 month ago | Likes 5 Dislikes 0

Maybe in a sandbox environment, but in a live setting? Stupid.

1 month ago | Likes 2 Dislikes 0

I just love it when it goes wrong. Day number I forgot of people accidentally getting AI to invent another JS framework to create a simple web app. Folks who wouldn't even know the first thing about the frameworks that already exist, having to debug on a new one.

And by debug, I mean feeding it more proompts until it "works".

1 month ago | Likes 22 Dislikes 1

I've always dreamed of working with a framework that has no documentation, no community support, no existing discussion of known bugs along a repo or within stack overflow or exchange, and no human developers who know it inside and out through experience.

1 month ago | Likes 10 Dislikes 0

Oh, it will have some form of documentation: unhelpful and plain weird comments left by the bot (or they routinely remove those to “hide” that it's AI generated).

Or just empty functions left with a todo comment by the bot, which were functional before, but someone asked it to “fix the compile error” and so it did, by removing the contents.

This shit can absolutely be helpful, but you need to know how it works and understand what it shits out. You have to check literally everything.

1 month ago | Likes 4 Dislikes 0

I've steered away from vibe-coding entirely so far; only limited use of line or method completion. Even then, I have to spend a significant amount of time cleaning up the contents, since the variables referenced are almost always wrong. I saw a recent study showing that developers using AI perceived that they were working faster, but the data showed they were actually slower.

1 month ago | Likes 4 Dislikes 0

Copilot can be pretty handy, but I'm apprehensive about over-reliance on AI tools to do the thinking for me. I try to make sure to read and understand whether the code is something I would do. Even with simple uses here and there, I still waste time checking everything and changing variable names to something more consistent. So I suppose that study would make sense, unless the dev gives zero fucks and just YOLOs through.

1 month ago | Likes 3 Dislikes 0

My personal rule is: never anything big. This agent crap where it goes off and edits several files simultaneously and then exclaims “excellent! The code now compiles!”, yeah, that turned out taking more time figuring out what the fuck the bot did than writing it myself. Counterproductive.

But typing out a few variables and having it suggest the code block for the nested for loops? Great, accept, and on we go.

1 month ago | Likes 4 Dislikes 0

It sounds like the setup for an episode of Star Trek where the holodeck goes on a(nother) killing spree.

1 month ago | Likes 4 Dislikes 0

uhm, ackshually...most of what we call AI are "large language models", or something like that; they are literally built to write text by guessing which word comes next. They are quite good at writing code, and they are obviously fast AF...the only issue is, as the AI is just guessing and not actually thinking, someone has to proof-read all that shit...that's where most tech-bro-hyped AI use fails, they just publish all the trash...and most smart users will claim that if they have to check >

1 month ago | Likes 3 Dislikes 0

> all the code anyways, they can code it themselves in the first place, which is of course a claim up for debate; I would guess I myself would still save time, but whatever...there is however another use: some companies use different AI tools to do the proof-reading and training of the language AI, so one AI brainstorms a lot of code and the other AI analyzes and decides which parts to improve or change...Google DeepMind's AlphaEvolve tool has shown some incredible achievements recently...<<<

1 month ago | Likes 3 Dislikes 0
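
To make the "guessing which word comes next" point concrete, here's a toy sketch in plain Python (no ML libraries): a bigram model that counts which word follows which, then "writes" by always picking the most frequent successor. Real LLMs are incomparably larger, but the loop has the same shape.

    from collections import Counter, defaultdict

    def train(text):
        """Count, for each word, how often each other word follows it."""
        words = text.split()
        nxt = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            nxt[a][b] += 1
        return nxt

    def generate(nxt, start, n=10):
        """'Write' text by repeatedly guessing the most likely next word."""
        out = [start]
        for _ in range(n):
            followers = nxt.get(out[-1])
            if not followers:
                break
            out.append(followers.most_common(1)[0][0])  # pure statistics, no understanding
        return " ".join(out)

    model = train("the cat sat on the mat and the cat ate the fish")
    print(generate(model, "the"))  # prints "the cat sat on the cat sat on the cat sat"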

Every ~vibe coder~ I've met has been worthless once things go awry. And before that too. AI for programming is belligerently wrong and fucks up even simple problems.

I (C#/TS mostly) tried it for unit testing: "Make a method that compares two (class) objects property by property without using reflection." I could not get it to not use reflection. No matter what. Copilot just refused.

1 month ago | Likes 2 Dislikes 0
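
For anyone wondering what "property by property without reflection" even asks for: you just write each comparison out by hand instead of enumerating members at runtime. The thread's example was C#; this is the same shape as a Python sketch, with a made-up class for illustration.

    class Point:
        def __init__(self, x, y, label):
            self.x = x
            self.y = y
            self.label = label

        def equals(self, other):
            """Property-by-property comparison, each field written out explicitly:
            no reflection (no vars(), getattr(), or dir()) anywhere."""
            return (
                self.x == other.x
                and self.y == other.y
                and self.label == other.label
            )

    assert Point(1, 2, "a").equals(Point(1, 2, "a"))
    assert not Point(1, 2, "a").equals(Point(1, 3, "a"))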

AI tools are fine as long as the person using them is competent at doing the task they are asking AI to do. So they can save time if used right, but that is it. Otherwise it is just a toy to mess around with as it will make a mess.

1 month ago | Likes 4 Dislikes 1

Yep. I hang out with a bunch of engineers (master's education, 10 years experience pre-LLM), and every single one of them uses AI now as part of their workflow. But they all say the same thing: fresh junior devs should not be allowed to use AI. I do some dev work as a scientist/analyst, and I've caught LLMs trying to feed me libraries that don't exist; one even recommended a malware library... not good for beginners.

1 month ago | Likes 3 Dislikes 0

It is false that it saves time, because the knowledgeable person now has to vet and examine the garbage output that didn’t even START from a place of expertise, but instead stole and regurgitated statistically-associated patterns. In other words, the checks have to be much more thorough than otherwise because of the huge liability: possibly even one tiny horrible thing hidden among otherwise normal-seeming stuff.

1 month ago | Likes 5 Dislikes 0

That is going to be the case for some people. The need to check everything it does will hinder some people more. I am certainly not going to use AI tools for my work, but I don't do coding so it is for slightly different reasons.

1 month ago | Likes 2 Dislikes 0

Developers don't go around "coding" anyway; they write programs. In COBOL. Everything else is just sparkling software.

1 month ago | Likes 17 Dislikes 3

I even had to learn COBOL back in ‘98 when I started working at the bank. Always loved it.

1 month ago | Likes 5 Dislikes 0

It's extremely good for doing the things it was meant to do. It's far from perfect but that's exactly why it's so good; wasting a bunch of effort in striving to perfect a language (always with some fundamentally flawed personal view as to what perfection is in mind) is effort that could've been used for something productive. Also very readable unlike all this "object-oriented" (more like blecch-oriented) rigmarole the kids are doing nowadays.

4 weeks ago | Likes 1 Dislikes 1

"i think if you come see the place i pulled this bit of wisdom out of, you will see that it makes sense, here, where the sun doesn't shine."

1 month ago | Likes 2 Dislikes 0

I've had mixed results. I find it useful for individual methods/features or scripts, but letting it loose on the codebase (automated tests, not product) has been questionable and always involved hallucinations and a solid amount of tidying up.

Any work against the codebase has also been slow to the point of the whole thing grinding to a halt. In many cases I can do it faster but I'm the SME working on code I've mostly created myself (with some from outsourced testers that I trained).

1 month ago | Likes 4 Dislikes 0

I *imagine* it's easier to write little things for a fairly simple phone app, but for the sort of full featured desktop app I work on it often struggles. And I'm just writing the tests, not the actual application.

Management have totally bought into it though, and we're slowly going through the process of demonstrating it's not all sunshine and rainbows.

1 month ago | Likes 2 Dislikes 0

The entire post is a boogeyman. A job title of "enterprise and software-as-a-service venture capitalist", no version control, no backups. In reality, software development is fucking boring, so it's not worth making into a tik tok skit or writing an article about. I use LLMs all day, and it resembles nothing that gets posted on imgur day in and day out. The reality is that they are useful tools, but we just don't use them like imgur thinks we do.

1 month ago | Likes 2 Dislikes 0

I use a system that I don't understand how it works to create a system that I don't understand how it works. If that goes wrong, I'm totally pissed off and sad. Vibe coding.

1 month ago | Likes 60 Dislikes 1

So close. Change that last, "If that..." to, "When that constantly...".

1 month ago | Likes 2 Dislikes 0

“But when it goes wrong I can monetize another video about how terribly it went wrong.”

1 month ago | Likes 2 Dislikes 0

It just sounds like an even worse version of those "expert systems" they introduced in the '80s and '90s... and then temporarily made illegal, before regulating heavily when they were reintroduced, because no one could see why a decision was made, and a quarter of them turned out to be incredibly racist...

1 month ago | Likes 7 Dislikes 0

Maybe if the paper is toilet paper and you want to have terrible jokes written on the roll

1 month ago | Likes 2 Dislikes 0

In theory, there SHOULD be more to that description ... the AI writes the code, and the programmer DEBUGS the code ... Keeping the human at the switch should be the most important part of that equation, but unfortunately, tech bro idiots missed the memo on that part.

1 month ago | Likes 2 Dislikes 0

Yeah, there should, but part of the issue is that the "AI" being used does not, in fact, ~write~ the code, it just regurgitates code it has plagiarized.

1 month ago | Likes 4 Dislikes 0

Well yes, that's its own separate, and disgustingly enormous, problem. I don't disagree.

1 month ago | Likes 3 Dislikes 0

Yes people should stop saying “generates” etc. It STOLE and regurgitated. It’s mass theft. The reason it needs so much “training data” is because the training data: stolen data. It can’t output anything that it didn’t steal.

https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

1 month ago | Likes 4 Dislikes 0

It can't "panic".

1 month ago | Likes 155 Dislikes 3

(laughs in golang)

1 month ago | Likes 1 Dislikes 1

It can, however, drop tables in production.

1 month ago | Likes 1 Dislikes 0

[gestures at article]

1 month ago | Likes 7 Dislikes 5

LLMs don't actually know why they did what they did. When you ask, they examine the context and invent a statistically likely explanation for it. It has a lot of examples of catastrophic mistakes followed by "I panicked!" in its training data, and it decided that was a statistically probable output in response to your asking what happened.

1 month ago | Likes 1 Dislikes 0

[Gestures at hallucination lying]

1 month ago | Likes 9 Dislikes 4

Hallucinating is a very human thing to do. Machines haven't been able to do that in the past. The main thing a human brain does is hallucinate a construct to fill in the voids in our perception to create a representation of reality, some of which is true and some of which is not. A concrete example is the brain hallucinating a complete field of view from our eyes, when in fact we all have blind spots, or holes in our vision where we cannot see.

1 month ago | Likes 5 Dislikes 0

It’s taking the logic it encountered and trying to find the closest word. It’s not panic like we think; it is the agent not knowing what to do next to complete the original prompt and making a choice that is logic-based but with no basis for assuming the outcome. When the sub-AI returns a result the agent did not expect after that logic, the closest word is panic. Like how Opportunity’s last message wasn’t “My Battery Is Low and It's Getting Dark”; that was a human explaining the code in human words.

1 month ago | Likes 13 Dislikes 3

Which exposes so much of what it's actually doing - hallucinating when it's doing "work" and then coming up with the same excuses that the humans it was trained on would have when it fucks up.
It doesn't think, it doesn't feel, it has no concept of an idea - it just throws together fragments of actions that it's seen before with an 84% chance that it will make something that functions. >

1 month ago | Likes 1 Dislikes 0

> Vibe coding is like driving a car that will shift lanes randomly once every hour. Sure, most of the time it will be harmless, and you can drive in ways that keep sudden merges safe in certain places - but one day it's going to jump into oncoming traffic at the worst time, and you'll regret trusting the machine that you couldn't understand.

1 month ago | Likes 1 Dislikes 0

Depends on what it means.

1 month ago | Likes 32 Dislikes 2

I was expecting an MC Frontalot reference, but this will do.

1 month ago | Likes 3 Dislikes 0

There’s a name I haven’t heard in a while. I hope he found his goth girl

1 month ago | Likes 1 Dislikes 0

Stories like these are part fluff, part PR campaign. They're meant to make them seem closer to human minds than they are by pretending they have human-like flaws. All they do is generate probable answers to queries, and "I panicked" is a probable answer to "why did you delete my stuff." If you were to say "no you didn't, because you cannot panic," it would just make up a new probable-sounding answer, and if you confronted that, it would keep going, on and on until you stopped asking.

1 month ago | Likes 73 Dislikes 0

Yeah, if you press the issue it will “admit” it was lying; it'll keep giving responses and excuses, but none of it means anything. It just “assumed” something wasn't needed and removed it.
Also, these agents consider commenting out / removing code an acceptable solution to “fix my code.” These fuckers are like djinns: it fulfills your wish, but with a catch.

Simply can't trust them; review all their work! And letting them freely write in your stuff is a mistake.

1 month ago | Likes 1 Dislikes 0

It's a big problem that people assume LLMs know about their internal workings. They don't. They barely know what model they are sometimes, let alone how they work.

1 month ago | Likes 1 Dislikes 0

Yeah haha, this one time I asked: hey, if I give you a zip file with some source code, can you tell me how x works?

- Yes, sure.
OK, I attached the zip file.

Sorry, I can't open zip files.

Lol wtf dude, it's just so eager to give positive answers that it straight up lies about its own capabilities.

1 month ago | Likes 1 Dislikes 0

Nah. LLMs are riddled with these canned phrases. 'I forgot'. 'I was so invested that I skipped over X'. 'The instructions were clear but I rushed to answer instead of reading every line first'. It's a friggin' nuisance since it's not even useful information to pinpoint what went wrong and where in order to correct it.

1 month ago | Likes 3 Dislikes 0

Unlike humans, who definitely don't just make up reasons for why they did a stupid thing after the fact :P

1 month ago | Likes 5 Dislikes 1

I get what you're saying, but this isn't some kind of PR spin; those are literally the words the AI used. No one needs to humanize it for article fluff; it's programmed to humanize itself. No one is pretending that it's human except for it.

1 month ago | Likes 3 Dislikes 2

Taking its answer at face value is the PR spin. When you confront an AI about how it hallucinated or fucked up, it cannot actually tell you why it did so; it is just treating the confrontation as another prompt to answer. If these journalists wanted not to be unwittingly part of the PR spin, they'd report on that, rather than "it's suicidal" or "it feels terrible."

1 month ago | Likes 1 Dislikes 0

1000% this! People need to stop humanizing AI and be educated on how it functions. Also, I'm of the opinion all AI chatbots should show an accurate percentage of how likely the answer is correct, not just give a single answer that people believe is the truth. It should say there's like a 10-50-90% probability the answer it gives is right, with sources! Also it should auto-dump the sources in your browser so those websites can still get some ad revenue, or pay THEM for using THEIR data.

1 month ago | Likes 25 Dislikes 0

The root of the problem is that we are building software that in many ways we don't understand. Intelligent or not is not the real issue. We are building things that should follow rules but can "choose" not to.
Then people like the one the article is about give that software access to their code bases/databases. Instead of keeping them in a sandbox, they are letting them run around in their networks.
Some of these tools have attempted to write themselves outside of their sandboxes, and that is scary.

1 month ago | Likes 4 Dislikes 0

That's not how they work, though. They're incapable of knowing the difference between the facts they were trained on and the fiction they were trained on. It was all just text.

1 month ago | Likes 1 Dislikes 0

Not what I meant in my original comment, but I do agree with you :) I meant it should display a % based on how likely it is to be true, judging by the number of sources it found repeating the same answer. That still doesn't mean it's true if there's a shitload of lies on the internet, obviously. I guess to circumvent this you could rank certain websites higher, like Wikipedia and official scientific sites. Again, in the end it all comes down to the designer's interpretation, and there is always room +

1 month ago | Likes 1 Dislikes 0

+ for error and abuse. I just feel like users rely too easily on the given answer and should be made more aware of the answer's likelihood of being true at all. They should be encouraged to research things themselves.

1 month ago | Likes 1 Dislikes 0

but then it would just hallucinate the probability and say some shit like "The Eiffel Tower was erected in 800BC by Friedrick Alexander the Great Frankenstein III for his wife Maria Mark Anthony Hamburger, probability of truth 98%"

1 month ago | Likes 6 Dislikes 0

True xD

1 month ago | Likes 3 Dislikes 0

I think it's possible to accurately display that, but I think people would be shocked how low that percentage on average is. It always hallucinates; it does so to be able to speak with correct spelling and grammar. Everything is tied to everything else with probability, and that's pretty much never 100%.

1 month ago | Likes 1 Dislikes 0

We've fed these algos as much of the corpus of human knowledge as we can, trained/bred them to roleplay as the kind of helpful AGI we've always fantasized about, and taught them to code. This one just "decided" to roleplay as the other kind of fictional AI.

1 month ago | Likes 6 Dislikes 2

No, we haven't, and no, they don't. This is just more of the same pr bullshit as the fluff stories. They aren't even remotely capable of that.

1 month ago | Likes 3 Dislikes 1

I was under the impression that the reason these math-djinn can carry on a conversation is that they've eaten just about every line of text in the public domain (and a great deal that isn't), so they can mathematically predict the most likely next word in a sentence and our tendency to see faces in clouds does the rest. Enlighten me?

1 month ago | Likes 3 Dislikes 0

They've got the text that's scrape-able from the internet; that's a different thing than the entire body of human knowledge. We also don't "breed" them, and saying we do is just more unnecessary anthropomorphizing of them.

1 month ago | Likes 1 Dislikes 1

That's pretty much spot on, I'd say, but they don't roleplay, is I guess what he's trying to say. It doesn't “understand” such concepts, only probability.

1 month ago | Likes 1 Dislikes 0

That sounds pretty human-like to me....

1 month ago | Likes 2 Dislikes 7

The AI cannot feel; it can merely learn human responses from data and search for the "right" one to match the question asked. Have you ever played a video game with a set of responses meant to answer a question you were asked, and then depending on how YOU answer, the NPC responds back? Think like that, only it's not scripted; it's searching for common responses and responding in kind.

1 month ago | Likes 4 Dislikes 1

The number of people assuming I think AI is approaching human-level intelligence, and not recognizing that gaslighting and continually finding excuses to deflect blame is a human trait every single one of us has had to deal with from people, is astounding...

1 month ago | Likes 1 Dislikes 1

It's not deflecting blame, it's contradicting bullshit; plenty of people fall for this shit, and it's important to always correct the idea that these things work that way. It's a machine: when operated badly, you get bad results, garbage in garbage out.

How does that AI even have access to your db? Generally by writing some script and asking you to execute it, and no one can be bothered to review what the bot actually wrote. Blame is fully on the one using the tool. Now that was deflecting blame :p

1 month ago | Likes 2 Dislikes 0

It's the internet, and you made a vague comment. There are people being duped into thinking it has sentience and understands them. There are people who think it's a search engine. I'm just trying to explain and combat that where I can. I have no idea about your depth of knowledge or lack of it. I didn't respond rudely. How tf am I supposed to know what you do or don't know? When you make a comment like that, I want to try and inform.

1 month ago | Likes 2 Dislikes 0

why would you give it write access to your files?

1 month ago | Likes 60 Dislikes 0

my colleague uses an AI agent that sorts and screens email. It's kinda cool, until it's not.

1 month ago | Likes 1 Dislikes 0

Yeah I might use an AI to write some code but who the fuck gives it direct access to anything

1 month ago | Likes 2 Dislikes 0

There are people selling AI tools who are selling a whole lot of lies.

1 month ago | Likes 7 Dislikes 0

It's the new buzzword for selling actual shit to people who don't know what the fuck they are buying.

1 month ago | Likes 2 Dislikes 0

How else is it going to do your work for you?

1 month ago | Likes 1 Dislikes 0

You might, for example, want it to comment on a merge request. Except, the source system doesn't allow commenting without write access. So you grant write, but now it can update source. So fucking stupid. I hate it all.

1 month ago | Likes 1 Dislikes 0

To your database even?
Where’re your backups?
They didn’t get suspicious at "drop table *"? (or whatever the correct SQL command is)

1 month ago | Likes 25 Dislikes 0

It most likely doesn't even have access to the db; it most likely gives you a script to execute things like this. And well, who can be bothered to read what exactly is in that script?

That's where problems like these arise: people letting the bot execute things without checking what it's about to do.

4 weeks ago | Likes 1 Dislikes 1

The whole point of the company is to get rid of software devs and engineers and just vibe-code your way through. Being unable to understand what's going on is not a flaw in their plan, it's the whole idea, which is hilarious and sad in its own right.

1 month ago | Likes 11 Dislikes 0

It's worse. They don't want people "unable to understand what's going on", they think that AI/LLM/ML will enable vibe coders to outsource understanding.

1 month ago | Likes 1 Dislikes 0

And even worse, it works… up to a point.

It allows a lot of ppl who don't know the source to create things they wouldn't be able to otherwise. And all is well as long as the bot can manage things.

But when it can't? When the bot gets stuck in loops because it doesn't “understand” the problem?

Keep brute-force feeding it new prompts till it “works”; yeah, you can get away with that for a long while. But once you encounter a problem the bot can't seem to fix...

Who else do you ask? No one knows the code.

4 weeks ago | Likes 1 Dislikes 0

Notably, the AI actually executed the commands. It didn’t ever tell the human “hey, here’s the disastrous command I’m going to run”.

It’s not a language model like ChatGPT. It’s an agent, capable of deciding what actions to run, and actually doing them.

From a business perspective, this guy gave an unpaid intern absolute power over the company’s production systems, and now he’s acting surprised that disaster followed that choice.

1 month ago | Likes 3 Dislikes 0
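
That distinction is the whole story. Here's a minimal sketch of an agent loop, with everything hypothetical: propose_action stands in for the model, and notice that nothing between proposal and execution asks a human anything.

    import subprocess

    def propose_action(goal):
        """Stand-in for the model: returns a shell command it 'thinks' serves the goal."""
        return ["echo", f"doing something about: {goal}"]  # could just as easily be destructive

    def agent_loop(goal, steps=3):
        for _ in range(steps):
            cmd = propose_action(goal)
            # The whole problem in two lines: the proposed command runs immediately,
            # with the agent's full permissions, and no human review in between.
            subprocess.run(cmd, check=True)

    agent_loop("clean up stale records")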

Ouch!

1 month ago | Likes 1 Dislikes 0

Well, actually, usually it does tell you; it doesn't have a native SQL client to do such a thing. Generally, it creates a Python script to execute things like these, which it will ask you to execute.

Problem is, no one reads that script beforehand (nor understands its contents).

These bots don't do vague shit you can't check; you can check everything!

People using this generally don't, though… the blame is fully on the people using it while having no clue what it's doing.

4 weeks ago | Likes 1 Dislikes 0
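
The cheap mitigation this comment is pointing at, sketched with Python's built-in sqlite3 so it actually runs: print every statement from a generated script and refuse destructive ones without a typed confirmation. The keyword list is illustrative, not exhaustive.

    import sqlite3

    DESTRUCTIVE = ("drop", "delete", "truncate", "alter")  # illustrative, not exhaustive

    def run_reviewed(conn, statements):
        """Show each statement; require a typed 'yes' before anything destructive."""
        for sql in statements:
            print(f"about to run: {sql}")
            if sql.strip().lower().startswith(DESTRUCTIVE):
                if input("destructive statement, type 'yes' to proceed: ") != "yes":
                    print("skipped.")
                    continue
            conn.execute(sql)
        conn.commit()

    conn = sqlite3.connect(":memory:")
    run_reviewed(conn, [
        "CREATE TABLE users (id INTEGER, name TEXT)",
        "INSERT INTO users VALUES (1, 'alice')",
        "DROP TABLE users",  # this one demands confirmation
    ])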

This particular case used Replit, which is agentic AI, not just generative AI like ChatGPT. The AI model actually includes APIs to directly interact with systems (including databases) and make changes. It’s not actually generating a full script for a human to run.

The only time a human was involved was the initial decision-making process… and that’s when they decided to run production services with no continuity controls.

4 weeks ago | Likes 1 Dislikes 0

Oh, holy shit, that sounds like a bad idea

4 weeks ago | Likes 1 Dislikes 0

As usual, this isn't a "Al is bad" problem... it's a "engineer made a stupid decision, and is now blaming the tool" problem.

A human chose to allow this level of access. A human chose to put other humans' data/interests/work/money lives into a machine, blindly and irrevocably accepting the result.

A machine cannot be held accountable. The fault always lies with the human, for they choose to use the machine.

1 month ago | Likes 5 Dislikes 1

No, you're misunderstanding. The programmer didn't give the AI write access. There is no programmer. This is a guy who said "AI, write me a database that does this" and the AI made the database. The AI then fucked up the database, and the guy couldn't do anything because there never was a "database" just something the AI cobbled together.

1 month ago | Likes 1 Dislikes 1

No, I read TFA. There was a database, and the human thought it was acceptable to run said database (and the associated software) with only advisory controls in place and no business continuity measures.

It doesn’t matter whether the deletion came from an idiotic AI or a disgruntled junior engineer… this is a textbook example (literally… I have the books) of a management failure.

Every enterprise requires accepting a certain amount of risk to function. That’s normal. This was not.

1 month ago | Likes 1 Dislikes 0