
Freyja33



https://research.aimultiple.com/ai-hallucination/
https://www.reddit.com/r/technology/comments/1kfg6xx/ai_hallucinations_are_getting_worse_even_as_new/
https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

link from "benchmark": https://research.aimultiple.com/large-language-model-evaluation/
link from "gpt-4": https://research.aimultiple.com/large-language-model-evaluation/




https://www.theguardian.com/world/2025/aug/06/microsoft-israeli-military-palestinian-phone-calls-cloud



LtRooney
The dipshit brigade has moved on to a new talking point, "Yeah, it sucks, but we're going to ram it down your throat anyway. It's here to stay, so get used to it, because we're sure as shit not going to stop."
TattoosAndTENS
AI will be the ones deciding who goes to the Camps.
Magnus4711
A big part of it is the self-confidence. Most humans will admit half-knowledge or missing information; AI does not. It will tell you an obvious impossibility and argue for it with all the might of a software trained on all human internet interaction. Its ability to manipulate is vastly superior to most humans'.
Xenarion
Text generation AIs are made to write good text first, not to be search engines. They will give a confident answer because that's the pattern that normally follows a question.
sfbiker
The problem is that it's very hard to avoid AI slop on the internet. Even if you intentionally avoid AI search tools, most of the websites that come up in a search are AI slop, and that 15% error rate compounds itself now that AI tools are using AI-generated content as references.
Cataleast
A client just added a genAI slop-generator to their website that "writes" and publishes 5-10 blog posts a week. My loathing of genAI has gotten to the point where I feel fucking filthy just for working on a site with such a plugin. Like, I don't even interact with the fucking thing, but just seeing it there sickens me.
redsmerf
The EMR program we use at work will occasionally pop up with a "do you want to use our AI?" message.
Not only no, but hell no. Aside from the questionable data sources and questionable legality of using AI in healthcare, it's wrong too damn often
redsmerf
"It can summarize the information from the intake" How about it summarize deez nuts? I want it in the patient's own words.
mixiekins
Those still using LLMs either were barely using their brains to begin with, or had already "delegated" their higher reasoning to their subordinates. The place my husband quit has had, since it was founded, one supervisor whom scores of people have quit over having to work with. As soon as this software was available to him, everyone working under him ended up having this supervisor simply be a middleman between them and the AI flavor of the week. He *can't* care about #1, as that requires thought.
Davejavu
Wow, I read the first slide and immediately thought they were talking about the Trump administration.
Xenarion
What's crazy to me is how Google used to dominate the AI race, somehow made it worse and worse over the past few years, and then thought adding LLMs would help.
Eldibs
Yeah, humans lie too, but human lies are things like "I didn't eat the last cookie," not shit like "It's safe to eat rocks and glue."
JFMiskatonic
Jedi Jesus and Darth Judas are my favorite Star Wars characters
anteyeclimbaxe
Googled some imported cookies a coworker brought in today. Included "marshmallow" in the search specifically because the package I was looking at said "chocolate and marshmallow". Gemini™ advised me it was a brand of cookies that had no marshmallow variety. Thanks Gemini™
Xenarion
Once again, LLM AIs are designed to write text based on patterns they detect, to make grammatically correct sentences. They are NOT search engines.
69thStPepper
"It only lies twice as often as a person" is a weird way to say it's half as reliable as a person
SisyphusRollin
Which is absolutely fine, because it isn't a person. It's a tool, a new tool that's being improved and worked on. It's not AI's fault everyone forgot the basic rules of the damned internet. I just googled something and it gave me a wrong response... should I panic?
RevengeIsIceCream
That depends. Did you bring your towel? ;)
SisyphusRollin
:)
abetteridiot
Half as reliable as the *average* person, in a general conversation context.
Not people in a professional context (comparable to someone you'd go to for data or to do… work) or in fact-based conversations that are less concerned with relationships, or with lying about taking that cookie or having done homework.
Basically, the data that was brought up was already skewing the comparison in AI's favour, because it was based on a worse standard than is expected in the applications we use AI for.
RevengeIsIceCream
At BEST. Up to 80% is... pretty bad.
69thStPepper
My assumption is the 15% is for simple inquiries with relatively direct inputs, while if you try to use it the way people want it to work it's closer to 80%
[deleted]
[deleted]
PuzzledCompletely
Still better. For one, people can theoretically be held accountable.
AgamemnonsMemes
AI is cool and useful in some situations but it does not outweigh the cost.
SisyphusRollin
Won't anyone THINK OF THE CHILDREN?!?! (This shit is hilarious. Outside of the anti-regulation BS that lets them pollute, this is like being scared of Google because a search result might not be 100% proper to your question.)
vericon151
And does AI post a source? lol. Never. It uses all the sources in the world!
DarkZalgo
I dunno which ones you're using but chatgpt and Gemini do post sources.
RadioFloyd
As does Google AI
myloveforyouislikeatruckberserker
Yes to all those things, but it's technological progress. You can't mute progress, just like you can't fake it. If a tech is no longer useful, it dies; if a tech provides a service, it will thrive. To expect humanity to just put a lid on such an extraordinary technology is unrealistic, and unfortunately the fact it's bad for the environment is not a deal breaker for people. So get comfortable with it, cos it ain't going nowhere. Best we can hope for, which I am seeing some evidence for, is that we 1/
myloveforyouislikeatruckberserker
We will develop more efficient models which use a fraction of the computing power, or we will achieve stable quantum computing, which will probably make it energy efficient to run models of unprecedented sizes. Now if on the side we can also achieve nuclear fusion, then we’ll be fine.
everythingzed
People still smoke cigarettes. Countless pieces of evidence of how bad it is for everyone and everything, direct and indirect. Even the most selfish and egocentric people won't stop. This here? No fucking way people are gonna stop. People are fucking stupid.
RickRollEditor
It infuriates me that people don't understand that LLMs have "hallucinations". THEY LITERALLY STATE A WARNING THAT THEY DO AND SHOULD NOT BE USED FOR REFERENCE.
But it's wrong that they're "killing the environment". This is due to a common misunderstanding of how LLMs (or generative models in general) work. It takes a fuckload of energy TO TRAIN the model, and that's done once, not every time you use it. You can literally run an LLM on your home computer. You don't even need good hardware.
Xenarion
Yeah, LLMs are designed to write good text, that's it. They aren't search engines, and they certainly aren't thinking like humans. Humans will have ideas and put them into words, so many people assume text gen. AIs are the same, but they're not. All they do is chain words based on patterns, probabilities, and a bit of randomness. An LLM doesn't put ideas into words, they don't even have ideas.
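Xenarion's "chain words based on patterns, probabilities, and a bit of randomness" is essentially next-token sampling. A toy bigram sketch makes the point (counts and vocabulary are made up for illustration; real LLMs use neural networks over subword tokens, but the sampling idea is the same):

```python
import random

# Toy bigram "model": for each word, the words seen to follow it in
# training text, weighted by frequency. (Made-up counts, illustrative only.)
bigrams = {
    "the": {"cat": 3, "dog": 2, "answer": 1},
    "cat": {"sat": 2, "ran": 1},
    "dog": {"ran": 2, "sat": 1},
    "answer": {"is": 3},
    "is": {"the": 1, "obvious": 2},
}

def next_word(word, rng):
    # Pattern + probability + a bit of randomness: sample a successor
    # in proportion to how often it followed `word` in "training".
    choices = bigrams.get(word)
    if not choices:
        return None
    words = list(choices)
    weights = [choices[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_words, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words):
        w = next_word(out[-1], rng)
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("the", 6))  # fluent-looking chains, with no notion of truth
```

Nothing in this loop checks facts; it only continues the statistically plausible pattern, which is why fluent output and correct output are different things.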
TouchMyInfection
DarkZalgo
Shit like this is such a bad argument against LLMs, especially when there are so many real arguments against them. They're language models, not logic models; why would you expect one to answer a logic question? Do you also wonder why a hammer is so bad at cutting steak?
Allnamesinworldtaken
So what questions are they meant to answer?
DarkZalgo
...language questions? It's literally in the name.
DrLOAC
alcamar
I tried, i can't, it's so stupid
reichtwinglunatic
to be fair, it is similar to the second G in gigantic. stupid fucking example tho.
DrLOAC
Should have asked how to pronounce GIF
RadioFloyd
We all know you pronounce it “gif”
kiomadoushi
Like the G in gigantic, obviously
oldguyexlurker
You may have to repeat that crucial point multiple times and in a bigger font. These people are LLMs. It takes loud repetition before they start spewing the observed contents without thought. They should agree with you eventually. LOL. Also, 7% of human communication??? Not any more. With as much repetitive press as Trump's lies get, THAT figure HAS to be wildly out of date.
Remmon1
Trump is an outlier and wasn't counted, obviously.
oldguyexlurker
Ah. A trimmed mean.
oldguyexlurker
I see what you did there. Took me a moment...
GlenL
A friend of mine went to see Weird Al. I had to confirm that she went out of town and I didn't miss him locally; the concert was out of town, but Google AI just flat-out lied and told me he played in my city the night before.
EmilyKla
Plot twist: she saw Weird AI and not Weird AL.
quietwalker
I saw a post a while back, "Maybe if someone who can't even speak the language, has no real education, and no local support can steal your job, maybe you shouldn't have had it in the first place."
How does this apply to AI and brilliant, skilled creators?
DirectorFury
It's legalized plagiarism, that's how
quietwalker
Not saying it isn't, but the complaint above is that AI is bad at what it does (agreed), that it's putting brilliant, skillful creatives out of work (I never doubted that people lost jobs), and also racist (odd thing to pull in here; it seems like an unrelated issue that makes me question their judgement).
If the brilliant, skillful creatives can be replaced with a steaming pile of crap, statistically proven, then either they're overqualified, underutilized, or neither brilliant nor skillful.
DirectorFury
The problem is that AI companies use the services these creatives provide without paying them. Just make AI companies pay for the art their programs are trained on and this entire issue disappears.
Natastrophe01
Just wait until enshittification hits AI. Then it will get really interesting.
Freyja33
That's kinda what LLMs and "generative AI" are; machine learning was doing actually useful things long before LLMs and AI gen came along.
PicassoCT
We have a ruling caste that desperately wants to be non-responsible for the shit they pull. "I was just following orders.. AI made me do it." - yeah right, here is your jumpsuit and your hammer.
ChelVanin
It doesn’t matter if it only lies 1% of the time. The fact that it lies at all means it has to be fact checked 100% of the time, because you don’t know where that 1% is.
Einbrecher
And? That's not the "gotcha" you seem to think it is. There is not a single human-involved system in the world that has a 0% error rate. Even in the absence of malfeasance, people make mistakes. That's why I have an assistant that reviews my court filings before I file them. That's why authors have editors. That's why scientific articles are peer reviewed. And so on. There's plenty of reasons to hate on AI, but this is probably the dumbest one to champion.
Freyja33
n0n53n53
It doesn't "lie". The biggest disservice the AI companies did was to try to make their assistants "act human". This makes people who use them feel as though their intentions are human-based. It can be wrong; it can't lie. It isn't trying to trick you. If you take whatever it comes up with as factual without verifying, YOU are the one who is lying.
SisyphusRollin
Modern day Satanic Panic is hilarious.
BenjyX55
Teens playing D&D didn't consume all the water in Lake Superior every time they rolled for initiative.
SisyphusRollin
The point is that, while certain arguments are legit (how to compensate those it takes from, the need to regulate the way they cool the server farms, etc.), the majority are arguments based on fear-mongering Satanic Panic nonsense. Someone here who claimed to work on CGI for video games said that AI would replace all of them soon. That's hilarious, man, hardcore panic. Like "Microsoft Word is going to end the written book!" shit.
SisyphusRollin
Also, AI isn't consuming all the water (and nothing is doing what you're saying; that hardcore need to sensationalize and lie so your point has any validity is sad). It's the greedy people violating and paying off politicians to get around regulations. Do you think the internet runs on... what? Server farms existed prior to AI, you know; the lake's still there as well.
SisyphusRollin
Context is king. Yes, that is true but unrelated to the topic at hand. The Panic is coming from stuff like "AI is going to take all your jobs" or "AI drove a man to suicide" or "AI is making marriages fall apart"; you see nonsense like that all the time. THIS POST RIGHT HERE has a bunch of it. It's like when Excel was going to kill accountants, etc. It's BS.
BenjyX55
New technology can create new dangers as well as reduce the prevalence of and demand for old skills. This is undeniable. How many died of auto accidents in the year 1800? How many are currently employed as telephone operators relative to the number of phone calls made? I'm not here to defend every claim about AI but it's disingenuous to assume everyone skeptical of AI is being alarmist.
SisyphusRollin
I assume nothing here; I'm referencing sensationalist journalism that reaches the front page here. What you described is the positive effect of tech; that undeniable fact is known as progress and is something to be celebrated. I've been in IT since IT fucking began and have dealt with this over and over, from engineers screaming that having to use CAD was going to ruin them to accountants/lawyers thinking Microsoft Office would end their jobs...
SisyphusRollin
It's ALWAYS fear mongering with this shit; it's just that AI has a couple faces behind it that everyone on this platform and many others hate. So the narrative is insanely bad fear mongering, people acting like a chatbot could replace them. It's Schrödinger's immigrant of the right, but for tech: it's both incapable of doing anything correctly, but also it's going to do all the jobs.
SisyphusRollin
But the reality with the current TYPE of AI that exists, not this General AI theory that doesn't exist in reality, is a TOOL that can't do anything without someone using it. CGI artists won't lose jobs; they'll just get a new tool to use. Programmers, etc. This is undeniable; the tech can't run on its own.
SisyphusRollin
We saw it with CGI and we saw it with the internet; we saw it with ELECTRICITY! People were so scared of that new danged tech they went crazy and put out ads (satanic panic style). It's generally the norm that new tech gets some variation of that. What kills me is it appears tech people are eating themselves: what I see mostly is people claiming to be tech and acting like AI can "be dishonest" or shit like that.
Szwejkowski
This is nothing like the satanic panic.
SisyphusRollin
100% it is. Articles like "AI caused a kid to kill himself" while the kid had massive mental disorders and was sleep-deprived, away from any AI for multiple days. Blaming a chatbot, etc. The BS studies about it making people dumber are all bad science. It's hardcore like that right now. The people I'm talking to here have NO IDEA what it is, yet claim it's ruining everything.
SisyphusRollin
If anything it's EXACTLY like when CGI came around and everyone said they're not real artists and it's fake, etc. In fact, the CGI guys online are all stoked about AI catching the flak because they're finally off the hook.
Szwejkowski
It will put them out of a job. While I'm sure there are many silly 'think of the children' articles to cherry pick, there is well sourced shit going down as well, which was not the case with D&D, heavy metal or even 'video nasties'. Anyone can see how much slop is spilling out and how it's being used by the worst of us to collect data on people. It's great for scientific/medical research - not so great anywhere else.
SisyphusRollin
This is a really dumb, IMHO, approach to basic tech. You're buying into the BS, like when CGI came out and, because it looked like shit, "it's not real art"; but now it's indistinguishable and people stopped crying about it being fake art. This post we're under is a "think of the children".
Szwejkowski
CGI works best when it's used in conjunction with practical effects, and that is still the case. I never called it fake art; I worked in video games long enough to have a real appreciation for computer-generated artwork. You're just making up stuff to argue against and avoiding all the valid concerns surrounding AI.
SisyphusRollin
If you think that it will put them out of a job soon, you don't know shit about either tech. Not an insult, a fact.
SisyphusRollin
The data collection sucks and we should regulate things; the pollution sucks, and corrupt officials allowing shit like that to happen suck. What's happening is AI is being blamed (like D&D) instead of the actual bad people/bad things. People are acting like AI should be 100% correct on everything, like the first CGI image ever should have been perfect.
SmellingMistake
How can anyone think "it's at best only twice as dishonest/incorrect as the average human" is a good defense?
Einbrecher
Because you're countering someone who is anthropomorphizing software, meaning it's not a logical argument to begin with. LLMs have no motivation and no sense or understanding of their output. They are no more intelligent or motivated to lie to you than the autocomplete on your phone. Their output is categorically unreliable, yes, but that is nowhere near the "gotcha" folks are convinced it is. There's plenty of other reasons to legitimately hate on AI that don't immediately signal ignorance.
Pervaroo
It's much worse than that. Most humans lie when they have a reason to lie. You can almost predict when a human will lie. The bots lie in random patterns. They might even lie to you when you ask them the same question five seconds later, and it'll sound just as confident for every lie.
TheZerax
That's the thing that drives me crazy: if the "Average 7%" thing is about all communication, not specifically data-based work, then it's an intentionally misleading point. Like, I imagine the "lie" rate in peer-reviewed science is pretty close to 0%.
Freyja33
You're right, though sadly, I could give you more than one example of people lying in peer reviewed research because it profited them to do so.
TheZerax
I mean I'm sure it happens, but the rate has gotta be way lower than 7%. Like, if LLMs had a 1% failure rate, I'd still consider that a lot, but probably an acceptable amount.
Z0op
It's like automated driving and the argument that it causes fewer accidents than humans do, but that's simply not good enough, especially given the expectation created by calling it intelligent. I would expect a machine to _never_ lie to me, beyond obvious problems and bugs. If it "lies" then it isn't functioning right.
But these are chatbots, not search engines or databases. They try to push LLMs as AI, which they simply aren't.
SubTrout
It's a lie and you don't even know it's a lie because it's mixed in with other data that may or may not be correct. So you shouldn't rely on ANY of it, because at least with a human if pressed they'd have to show their work/data/resources.
scrumby
It's not really dishonest, though; it's delusional. It's inaccurate to say AIs lie, because lying involves an intent to deceive, and the god-mode autocorrect machine has no intent. It just has no means by which to determine what is true or false, outside of echoing what it's been told in a way it thinks you want to hear.
Z0op
If you want to be pedantic about words, then you shouldn't be using the word "thinks" there either.
Just saying.
scrumby
What's it like to go through life thinking critical thought is an attack on you?
SmellingMistake
I only used the word "dishonest" because the defense for AI argument mentioned lying. I immediately followed up with "incorrect" because I know it isn't actually lying or being dishonest.
SisyphusRollin
How could anyone think it's dishonest? Like, learn what this thing is. It's not being dishonest; it's not a person. It's a tool, like Photoshop. Photoshop isn't dishonest, you know.
SmellingMistake
I only used the word "dishonest" because the defense for AI argument mentioned lying. I immediately followed up with "incorrect" because I know it isn't actually lying or being dishonest.
n0n53n53
People just don't understand what it is and expect it to do things it's not built for. If you're trying to get straight facts out of an LLM you're going to have a hard time. It won't be accurate (aka it "lies", to this dude). You can have it assist you in all sorts of tasks that do save a lot of time. You have to know what you're doing, though, so you don't just take it verbatim and push out BS. I use it all the time to help with coding; it does make mistakes, but I know what I'm doing, so I recognize them.
SisyphusRollin
Yep.
SisyphusRollin
Was going to have a garage sale; laid out all these tools and shit I got from my uncle after he passed. Way too much shit. Laid it all out in the garage and took a photo. Hate using anything but local, but decided to upload to Grok: "Identify these items, create a garage sale listing with these items and generate several different types of posts for different social media sites to advertise" and boom, done. Didn't have to be perfect, it's a Fn garage sale, and it saved me hours.
SisyphusRollin
According to some people I'm the devil now and that garage sale is wrong or something. I agree there needs to be regulation and we need a way to compensate people (I do not have the answers), but to full-throttle make up stories and pretend it's "evil" or something is surreal. We'd be in caves if these people had been the majority during our evolution.
ProppaGanda
Yeah, Photoshop isn't dishonest, because if I use a rectangular selection, it will select a rectangle 100% of the time. If I draw a line with a brush, it will render a brush stroke 100% of the time. It won't try to gaslight me that it did something when it obviously did the complete opposite.
SisyphusRollin
This is hilarious. Thanks for the laugh. "It won't try to gaslight me" Jesus fucking christ, it's a chat bot. Get your shit together.
RevengeIsIceCream
Just wait, I bet there'll be an "AI-powered" version soon that does! ;)
SisyphusRollin
You mean the amazing already-existing AI applications that allow people to create art right now? Google "What is ComfyUI" "What is Wan 2.2"
Khanamana324
You can’t create art with an llm sisyphus, only steal more talented people’s works.
dontrike
It telling you wrong information on purpose is literally what lying is.
SisyphusRollin
You are buying the tech bros' idea that this shit is sentient. You're just not capable of using a tool, and you're falsely believing it's making conscious decisions to lie and stuff. It can't do that; the tech bros are lying to you. If you google "Who won the 2020 election" and the result comes back "Trump", do you now believe that? Is Google a liar?
dontrike
Sentient? Of course fucking not, that's stupid. What I think happens is they create this AI/algorithm to absorb specific information, usually the wrong kind. Take a look at Grok as an example of this. It gets worse as Musk makes it work for him and push literal lies as much as possible, along with Nazi rhetoric.
How you got "you guys think it's alive" is fucking stupid.
SisyphusRollin
Dude, I was playing Fallout 4 yesterday and it crashed. It must hold a hatred for me. My TV reacts slowly when I use a certain remote; it must be jealous of the other remotes.
dontrike
Dude, your argument is shit, no wonder why F4 hates you.
SisyphusRollin
It's a love-hate relationship. I've got 1600 mods on Skyrim, like 800 on this latest Fallout edition, fuck the new upgrade. Dishonest implies intent, and there is no intent. It's not lying, it's giving bad data.
SisyphusRollin
It's like people wanna go back in time and arrest Ask Jeeves
DarkZalgo
Lying implies intent. Do you really think an algorithm is sentient enough to have intentions?
dontrike
I do believe that the ones who MADE THE ALGORITHM have intent, to which it then follows. Do you think it suddenly has a will of its own?
Freyja33
No, it fucking doesn't, this is a totally nonsensical argument based on meaningless semantics. It's very obvious what someone means when they call LLMs "dishonest". It doesn't matter if it has "intent", the result is exactly the same.
DarkZalgo
Calling it "dishonest" or "lying" just makes you sound like you have no idea what you're talking about. It's an algorithm. It may be a fancy and buzzwordy algorithm, but it's just an algorithm. The result of an algorithm is just that, a result. It can be correct or incorrect, but being incorrect doesn't make the algorithm "dishonest".
BlueSkinnedBeast
Maybe “dishonest” isn’t technically the right word, but it’s confidently telling you false information and doesn’t register any difference between factual and non factual information. “Dishonest” seems like a good shorthand for what’s going on to me
SisyphusRollin
There is this abdication of basic accountability, shit you learn as a child, that drives me nuts with this stuff. There's a satanic panic happening where people lie about AI to push narratives; it's surreal to me. If you google "Who won the 2020 election" and a response tells you "Trump did", you wouldn't believe that, would you? If you did, is it Google's fault?
Einbrecher
Dishonesty implies motivation. An LLM has no motivation, no confidence, and no sense or actual understanding of its output. It is, to put it simply, autocomplete on steroids. It doesn't "lie" to you any more than autocomplete on your phone keyboard does. "Unreliable" is a far more accurate way of describing them without anthropomorphizing them - it just doesn't make for as punchy of a headline.
SisyphusRollin
Again, we need to stop applying human notions to this shit. It's not "confidently" telling you anything. It's not dishonest, it's not confident, it's a chat bot. Dishonest is a good shorthand if you want to make it into something it isn't so the narrative that all AI is evil and bad works better; to be honest about it, you'd know there is no "mind" behind it to make this statement.
BlueSkinnedBeast
“Doesn’t register any difference between factual and nonfactual information” combined with “portrays itself as an entity rather than a tool” and yeah, “dishonest” seems accurate.
Mostly that’s its programmers, because you are correct in that it doesn’t have any awareness or volition itself, but it’s programmed to emulate being an entity with volition and awareness, which it fundamentally is not. I’m standing by “dishonest”
SisyphusRollin
Yeah, of course you are. The narrative needs you to make it into something it isn't, so that fits. Jesus, man, it's software.
ImGoingToGoFallAsleepOnABench
Aight, so "confidently" refers to the way that it words and phrases its outputs - it'll put out non-factual information in a way designed to appear factual by describing it as such in a way that the average person would assume is credible. No, an LLM isn't "confident", but its outputs are phrased "confidently", i.e., as if written or spoken by someone confident in the veracity of their words (as opposed to "dishonest", which requires willful intent to deceive, and thus, will and agency).
DarkZalgo
Why wouldn't it phrase things "confidently"? What do you think is the actual realistic alternative?
dogboybastard
I'm sure this will be downvoted, but it's not a positive reply, just a reality check. AI is here to stay. It's in nearly every industry, with growing capabilities & profitability. Businesses see this & push adoption everywhere. Jobs already require AI skills, and that will accelerate. Government use is emerging and will expand until AI influences policy. Reality is changing; we have to adapt to live and work alongside it, or become "eaters."
Xenarion
Yeah, it's a tool that will get refined and become prevalent in many industries, for sure.
The issue is that right now many are jumping on the LLM / GenAI bandwagon without understanding what they're doing.
dogboybastard
A repeat of the computer revolution; that's why many are comparing them.
alcamar
Businesses see AI and see $$$. Every AI company has bent the knee because they know it's going to tumble but that Orange Taint Stain is pushing to give them clearance and protect them from liability. It's all going to crash down eventually, just a matter of how much of us it takes with it. It's unsustainable and they are just betting on holding the line until it works, which it never will.
dogboybastard
We have 2 things happening at the same time: a revolution in how knowledge work is done, and an authoritarian push to change the US from what it was into a fascist state-capitalist oligarchy. The computer revolution hit the workforce as well and made giant changes in how everyone works; but the big difference here is that AI has far more potential and could lead to a 5th industrial revolution.
dogboybastard
Combine that with an antiquated power grid, and a push to keep using old methods of power production and backtrack on using renewables, and it's created the conditions for a perfect storm.
scrumby
Which is why it's unlikely AI is going to lead to any kind of industrial revolution. It's a massive resource hog being squandered on bullshit at a time when resources are going to start getting really scarce. They're probably gambling on a Big Brother scenario where the government subsidizes it, because otherwise shit like Grok isn't making any money, and power is only going to get more expensive.
dogboybastard
I understand your point, but I don't think you're correct. I think it will continue to push, and the right will make damned sure we all suffer a Depression-level event. It's coming, it's not stopping, and things are going to get a hell of a lot worse than they are now. Economists are already giving warnings of a looming Depression: not just a recession, not just stagflation, but a repeat of the first one, with new and improved fascism.
crojohnson
Xenarion
Did you just screenshot someone else's comment on this post?
sadurdaynight
The issue they're having now is that once they come out with a new LLM version, it's only good for a few months before it's obsolete. The tech is very complex and evolving quickly. I equate it to having a very eager fresher: someone who can crank out a lot of work, but you absolutely need to double-check them. And they're great at freestyle work, like writing a paper from a few notes, but awful at doing repetitious tasks. Basically a good cook, but an awful baker.
Remmon1
So you're saying that pointing out that AI is defective, destructive and worse than human workers won't stop adoption of AI and there is no point in protesting it? Right. Guess we'll just have to resort to violence then and start murdering our way through the people trying to cram something that's defective and helping to destroy the environment down our collective throats.
dogboybastard
RickRollEditor
AI isn't defective. LLMs are. AI has revolutionized cancer research. Many health research fields, in fact.
SmellingMistake
Eh, I can fake using AI by just doing a shitty job.
SisyphusRollin
You missed what Dogboybastard was saying, you're not going to "fake using AI" at all. You'll use it or you'll have to go off grid.
RickRollEditor
AI has revolutionized cancer research. Many health research fields, in fact.
NinerThreeFourTangoXraySierra
That technology has nothing to do with LLMs, and conflating them by calling both of them "AI" is misleading
RickRollEditor
No shit, Sherlock. That was exactly my point. Read the comment I replied to.
Ronelyn
Syphilis, COVID, and cancer are here to stay. Should we adapt and work alongside those? A popular ill is still an ill.
dogboybastard
Human history shows us massive changes in how people work and live; industries have vanished to be replaced by new ones. I think AI is different in that the "new" industry won't need as many people. It will need people to work with it, but those job numbers will be a lot less than in all the other industrial revolutionary changes we have historically seen.
MightyUrto
Did you mean "a lot less"?
RickRollEditor
AI has revolutionized cancer research. Many health research fields, in fact.
Ronelyn
It's. Not. The. Tech. It's the humans. Also, because these things are by and large black boxes, they've been caught taking "shortcuts" in diagnostics, like realizing that a patient's age relates to their disease risk and learning to just use that instead of actually diagnosing.
The tech *has* applications. But what we have now is like using penicillin for, I dunno, brake pads or something.
Xenarion
Exactly.
I'll also add that not all AIs are generative AI, which a lot of people seem to confuse. AI is about pattern recognition. Google's search engine is an AI. Youtube's algorithm is an AI. Your insurance provider likely uses AI to determine your premium based on your profile. Amiibos in Super Smash Bros Ultimate are AIs.
Ronelyn
YouTube, insurance, and fucking Smash characters are *algorithms,* not AI. They're code a person wrote, can explain, and can debug. They make repeatable decisions based on traceable inputs. I was a software tester. AI is *way* more complicated than that, and despite using algorithms within their systems, vastly less possible to debug or even understand. You can uncover and correct bias in algorithms. AI devs will tell you: they can't do that.
Xenarion
Youtube devs have said in interviews that even they can't explain why the "algorithm" promotes one video or another. It is explicitly an AI that suggests videos based on patterns it detects in your and other similar people's browsing habits.
Tokreal
Uhm, this is not entirely correct. Per the current scientific definition of AI by McCarthy, how you achieve an AI does not matter. You can absolutely write an algorithm that someone can explain and debug and still have an AI. An AI is defined by the problem it solves (a problem historically associated with human intelligence), not by the way it solves it. E.g. there are chess AIs that are also algorithms.
Ronelyn
Intel CPUs have 4.2 BILLION transistors, and yet Intel devs can vouch credibly for and predict their output. AI devs *can't.* If your CPU had a 15 percent chance of shitting the bed, you'd be out of business. Hell, Intel gets HAMMERED for tiny FPU errors.