Bad enough that “AI” doesn’t know what’s true; it will make things up too.

Jul 7, 2024 2:16 PM

jdm2

Views 40832 | Likes 1025 | Dislikes 9

Tags: twitter, chatgpt

There are a lot of angry commenters on this post that use chatgpt as a search engine.

1 year ago | Likes 19 Dislikes 2

A chatbot that will confidently invent facts about ancient languages will confidently invent facts about gullible high-school students. "Tell me about the criminal conviction of Andrew Wilson."

1 year ago | Likes 27 Dislikes 0

As a native Greek, this future seems almost hilariously unhinged. Machine Portokaloses!!

1 year ago | Likes 12 Dislikes 0

I feel like there should be a UX rule that question-and-answer AIs should always show *two* prospective responses. If you ask the AI a question and the UI shows two different answers and prompts you pick whichever you prefer, it's way more obvious to the user that it's just text generation, not some kind of fact-lookup.

1 year ago | Likes 21 Dislikes 0

There was a tax case in the UK fairly recently where the taxpayer used arguments provided by ChatGPT and it just made them up. (Not the only time this has happened.)

The taxpayer went on to essentially argue, "Well, how do you know THEIR cases aren't made up?"

1 year ago | Likes 2 Dislikes 0

Reminder: ChatGPT is a TOY.

1 year ago | Likes 9 Dislikes 0

It baffles me that people (tm) actually use chatgpt as a search engine.

1 year ago | Likes 8 Dislikes 0

Blame the consistent mislabeling of ChatGPT as AI instead of an LLM. "AI" as a term carries the idea that it's "smart" and thus speaks truth, when what it really does as an LLM is regurgitate whatever it's been fed, in mashed-up form, errors and all, because it isn't actually intelligent at all.

1 year ago | Likes 2 Dislikes 0

why not teach them what a language model is and how it works? it's a bullshit generator

1 year ago | Likes 8 Dislikes 3

i had the advantage of being in my mid 20s for chat GPT, but i was also well armed, i was homeschooled (the local schools were all small town right wing fundy messes): and the number 2 thing my ma taught me (number one was basic human empathy), was how to research a subject and weed out bullshit to find real information. this included a strong foundation in critical thinking. one trick was to take known information, from quality sources, and look it up on a new source to see if it had that>

1 year ago | Likes 6 Dislikes 0

right. if it did, it meant investigate more, if it failed that most basic test, it was proof it just wasn’t up to snuff. for fun (being in IT i already knew chat GPT wasn’t gonna be a reliable source of fact, it just wasn’t designed to be.) i did that test with chat GPT. guess what? it spat out totally random false garbage. convincing garbage, human sounding garbage, but garbage.

1 year ago | Likes 5 Dislikes 0

Does make one a wee bit apprehensive about the future. To put it lightly, at least.

1 year ago | Likes 49 Dislikes 0

Eh, people have always had ways to get wrong information and always had people self-assured in its 'accuracy.' It's a shame that we haven't managed to do away with that, but the future is not going to be any worse in that regard than it was before the internet.

1 year ago | Likes 10 Dislikes 4

Data so far doesn't support your conclusion. The advent of any-to-many communication is a global experiment only about 20 years in. There have been plausible arguments for virtues and for catastrophe; to my read the results remain inconclusive but trending negative at the moment (gestures vaguely).

1 year ago | Likes 1 Dislikes 0

I admire your optimism

1 year ago | Likes 5 Dislikes 0

Is that what we're calling it?

1 year ago | Likes 4 Dislikes 0

For that specific instance, I would tell the kid to ask ChatGPT if ChatGPT is reliable. It gives you a kind of non-answer that it may sometimes not be reliable and the importance of verifying information. That alone is not enough, but I think that might be enough to open the conversation.

1 year ago | Likes 8 Dislikes 0

Ask it how many Rs are in "strawberry". Then insist.

1 year ago | Likes 1 Dislikes 0

The thing is, this is super easy to correct. ChatGPT is literally not a search engine and if you pull it up there is an actual disclaimer that essentially says "double check what the bot says, sometimes it just makes shit up". Additionally, and maybe this is too optimistic, but I'd hope any kid beyond middle school growing up in the modern day would be tech literate enough to know that ChatGPT is not a search engine and can put out incorrect information.

1 year ago | Likes 17 Dislikes 4

I'm at the tail end of an engineering program and there are a shocking number of my peers who will copy+paste blocks of ChatGPT text into an assignment, and clearly are not capable of writing or doing the work themselves. It's really not subtle when they do it.

1 year ago | Likes 7 Dislikes 0

Not only that, ChatGPT will happily accept a pack of lies as a "correction".

1 year ago | Likes 7 Dislikes 0

So show him in real time how ridiculously wrong that thing is

1 year ago | Likes 5 Dislikes 1

Takes too much time and effort, and totally throws the class out of whack. We don't have this kind of leeway in class anymore. Yes, yes, I know "it only takes a few seconds" but no, it doesn't. First you have to turn on the device that lets you project to the class, if you're not using it (like I don't, I stick to whiteboard writing). Then you have to login. Then you have to pull up the browser and search engine. All of those things *take time* when you can't do anything else but wait.

1 year ago | Likes 2 Dislikes 0

Lawyer got lazy and decided to use chatgpt instead of doing research, it made up a bunch of precedent and he presented it to a judge and got skewered https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html

1 year ago | Likes 4 Dislikes 0

Ask them which person they trust and respect the most, then ask 'Chat' to explain why that person being a Martian Pedophile would prevent them from running for office. Although, with some kids, you might get, 'Oh no, my Dad's a Martian Pedophile!'...

1 year ago | Likes 3 Dislikes 0

This explains MAGA, QAnon, and general conspiracy theorists. They "did" the research. They looked it up and found the information that solidified those theories.

1 year ago | Likes 7 Dislikes 3

And now it's going to be even easier for them to "confirm" whatever they want.

1 year ago | Likes 3 Dislikes 0

😬

1 year ago | Likes 2 Dislikes 0

Do we not teach kids how to vet their sources anymore?

1 year ago | Likes 11 Dislikes 1

"anymore"

1 year ago | Likes 2 Dislikes 1

I'm entering my last semester of an engineering program and a lot of my peers don't seem to grasp the concept either (and use ChatGPT egregiously). So I'd say no.

1 year ago | Likes 3 Dislikes 0

We can barely get adults to do that.

1 year ago | Likes 14 Dislikes 0

Teacher here: we don't have time for this. *We* know when something spouted as "truth" is BS, but just like this post, *we* are not considered authoritative voices anymore. I do not give a fuck what you've pulled up on the internet in my Physics class 'cuz I've got a bloody Masters in the subject and you **cannot** correct me on it.

1 year ago | Likes 2 Dislikes 1

The cure for that might be something as simple as the Pacific Northwest tree octopus. Have them Google that. Show them that information has been misrepresented on the internet for all kinds of reasons for at least two decades now

1 year ago | Likes 2 Dislikes 0

I still haven’t used chat gpt once. I don’t even know how. And I don’t even trust google search anymore.

1 year ago | Likes 2 Dislikes 0

go to chatgpt.com, you can try for free. it's worth seeing what the fuss is about. even for 5mins.

1 year ago | Likes 2 Dislikes 0

I hate that you can't use X as a variable any more without hoping people get it from context.

1 year ago | Likes 396 Dislikes 1

You need to put “let” or “var” before it so that people understand you’re working within a local scope.

1 year ago | Likes 2 Dislikes 0

I've yet to see a normal person refer to twitter as X. And in written up articles it'll usually be something like "so and so said on X - formerly known as twitter - [...]".

1 year ago | Likes 17 Dislikes 0

capitalizing it certainly doesn't help

1 year ago | Likes 23 Dislikes 1

Yeah, I noticed that too, I was like "twitter? What?", but I feel like I might not have thought that if it were a lower case x.

1 year ago | Likes 3 Dislikes 0

The problem is that the context is talking about disinformation, and X (Twitter) is rampant with it.

1 year ago | Likes 4 Dislikes 0

That she was referring to the former Twitter didn't even enter my mind until I read this.

1 year ago | Likes 6 Dislikes 0

My thought exactly. “What does this have to do with Twitter? Oh.”

1 year ago | Likes 61 Dislikes 0

I had this happen too. I think it is the capitalization of the letter "x" in their writing that causes our brains to make that associative leap.

1 year ago | Likes 9 Dislikes 0

I hate this timeline.

1 year ago | Likes 2 Dislikes 0

For as long as the internet has existed, it has had bullshit on it. This just goes to show that it's more important than ever to teach kids what a credible source is.

1 year ago | Likes 2 Dislikes 0

I’m not sure why people are so concerned, considering kids 30 years ago would ask their aunt a question, get a wildly wrong answer, and carry that information as fact with them for the rest of their lives

1 year ago | Likes 2 Dislikes 0

In a way, this is very similar to older people’s initial reaction to online information. They believed it because they were used to believing something if it just looked official. “I read it on the Internet” is a punchline for a reason.
There should be some way to teach critical thinking about the source of information, not just a blind trust in something because it has the veneer of authenticity.

1 year ago | Likes 162 Dislikes 3

This belief that older people are more easily misled by online information is, ironically, an example of the greater gullibility of younger people. Older people seem to distrust online sources, preferring known credible ones, while younger people tend to reject those as biased and instead seek out online sources that confirm their own biases, and so are easily misled. https://phys.org/news/2023-06-misinformation-susceptibility-online-gen-millennials.html

1 year ago | Likes 2 Dislikes 2

My evidence is admittedly anecdotal. I’m talking about the days when typically older people would forward obviously fake e-mails. As for the study, I think it’d be more useful if it were more than just headlines, which lack the kind of context that should be used to judge validity. Most importantly the source. Also, determining the validity from just the headline would benefit older people who have more general knowledge.

1 year ago | Likes 1 Dislikes 0

A valid criticism, though it isn't the only study to come to the same conclusion (I'm not suggesting any other study was done better).

In the end, everyone needs better reference checking skills, no matter what age.

1 year ago | Likes 1 Dislikes 1

Agreed.

1 year ago | Likes 2 Dislikes 0

It was, alongside satire. And god fucking help me if I had used "Google" or "Search bar" as a listed source.

1 year ago | Likes 11 Dislikes 1

I mean, neither Google nor search bar are sources!

1 year ago | Likes 12 Dislikes 0

That would be like listing the name of the library you found the book at as a source instead of listing the book. That's how you found the source, not the source itself. There are a bunch of ways to cite online sources, depending on format.

1 year ago | Likes 7 Dislikes 0

I’m not sure what the practice is now, but when I was a student, we had to cite a reliable source for statements of facts in a writing assignment. If you didn’t cite your sources or the sources weren’t included in the list of reliable sources, you’d get points off.
This could be capriciously applied of course, but it did teach the importance of knowing the source of information and gauging its reliability.

1 year ago | Likes 41 Dislikes 0

There are still courses and classes that teach critical thinking along with critical reading. Unfortunately, a lot of them are saved for college, but if you're lucky you can be introduced to it in HS. If I didn't have AP English: Critical Learning as a senior I would not have been prepared for several courses in uni. He taught us the fundamentals by destroying our then-blind faith in TED talks and had us practice w/ three different fields of study. First was a book on the re-introduction of (1/2)

1 year ago | Likes 2 Dislikes 0

... re-introduction of grey wolves in the Pacific Northwest that also analyzed and disproved the European myths on wolves with actual recorded behaviors, tendencies, etc. (I cannot remember the name but I think it was by Jim Yuskavitch). Second was "The New Jim Crow" by Michelle Alexander, and holy shit did that shatter a lot of glass on what I had learned about social justice in prior years of schooling. Last was "Pedagogy of the Oppressed" by Paulo Freire, and that was a doozy to get through

1 year ago | Likes 2 Dislikes 0

.. (3/2) It was the English translation of the original Portuguese text, and the translator went the extra mile to translate not just the literal meaning but the nuances of what Freire was trying to say. The restraints on AP courses based on grades and age need to be relaxed imo, even if kids don't pass them. But that is its own basket of problems to overcome in K-12 education that this country is continuously failing to accommodate and resolve.

1 year ago | Likes 2 Dislikes 0

We weren't allowed to use Wikipedia when I was at university. We instead had to follow the source reference Wikipedia provides and use that. One lecturer was very suspicious because I used YouTube as a reference at one point. However my essay was on copyright infringement and the YouTube video was an interview of Lady Gaga saying she didn't mind it because people still come to her concerts.

1 year ago | Likes 5 Dislikes 0

that only goes so far though. speaking personally, it didn't FEEL like trying to instill an understanding of finding reliable sources. it FELT like one more arbitrary formality alongside the arbitrary length and formal structure of the paper. I get it now, but not so much at the time.

1 year ago | Likes 17 Dislikes 0

That’s true. When I’m teaching / coaching someone, I explain the “why”, the reason behind why something is done the way it is. It not only lets people know it’s not arbitrary, but it helps me ensure I’m actually not being arbitrary. If I can’t explain the why to someone easily, maybe I should rethink if it’s the right way.

1 year ago | Likes 10 Dislikes 0

To learn critical thinking, first you need to know critical thinking. Or at least have some common ground with someone who can show you the ropes

1 year ago | Likes 1 Dislikes 0

BRB, gonna eat a bunch of rocks and then put glue on my pizza

1 year ago | Likes 77 Dislikes 0

How easy is this info to "look up" in ChatGPT? I've seen it posted before, but if the teacher could do a quick demonstration in place that would be cool.

1 year ago | Likes 1 Dislikes 0

If you glue rocks to the pizza...

1 year ago | Likes 2 Dislikes 0

I've seen a lot of pro-AI folks say "Sure, it hallucinates, but it's okay because people know that and know not to trust it without checking."

I say they have no concept of how humans work. It's not just that _most_ people don't actually understand the flaws in AI (which is also true). It's that we tend to believe things that _look_ right to us, and AI is really good at spitting out nonsense that has the look and feel of authoritative writing.

1 year ago | Likes 2 Dislikes 0

Even doing this professionally, I still catch myself occasionally forgetting to fact-check the AI-synthesized summary from Google. Even though it's given me objectively incorrect results that don't even match the page it linked. And that shit has only existed for a couple months.

I guarantee my relatives trust it. And now I have to somehow refine _years_ of me telling them to fact-check things on the internet with new, nuanced guidance.

1 year ago | Likes 2 Dislikes 0

Ok but humans also blatantly lie. That's why we got Trump. An AI lying is a flaw it inherited from us.

1 year ago | Likes 1 Dislikes 0

And people trusting untrustworthy humans is _also_ a problem. The difference is that we've spent decades teaching people to fact-check things using "search", and most of them don't understand that the AI results are fundamentally untrustworthy.

1 year ago | Likes 1 Dislikes 0

Not that that makes it ok to ignore or justify the lies, I just find it odd that we hold AI to higher standard than ourselves.

1 year ago | Likes 1 Dislikes 0

I suppose people don't like to introspect and recognize how flawed we are as a species though, because that might admit weakness.

1 year ago | Likes 1 Dislikes 1

It's not a higher standard. I don't trust humans that routinely lie either.

1 year ago | Likes 1 Dislikes 0

Did anyone ask what four languages? Ask the student/ChatGPT to show which 4 languages it's talking about.

1 year ago | Likes 4 Dislikes 1

It will, though. It might give different answers across multiple askings, but LLMs are designed to derive a response from what they 'know' that somehow matches the pattern of what you're asking for. The problem is that LLMs are not sophisticated enough to actually know what you're asking; they're just creatively doing the best match for the keywords you threw at them.

1 year ago | Likes 7 Dislikes 0

Like a search engine?

1 year ago | Likes 1 Dislikes 3

Not exactly. A search engine will find a source that matches keywords. An LLM will build a response from various disparate sources, if it needs to. Using the 'four languages' example, you likely won't find a source listing four languages fusing into Greek. But it will find four languages, and build a statement using them to match the premise that you asked it for. The LLM is assembling a collage of ideas, not verifying knowledge.

1 year ago | Likes 4 Dislikes 0
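The "collage, not lookup" point above can be sketched with a deliberately tiny toy. This is nothing like a real LLM internally (it's just a bigram counter over a made-up corpus, with every "fact" in it invented for illustration), but it shows how fluent-looking text gets assembled purely by pattern-matching, with no notion of truth:

```python
from collections import Counter, defaultdict

# A made-up toy corpus; the "facts" in it are irrelevant.
corpus = ("greek derives from proto greek . latin derives from old latin . "
          "phoenician script influenced greek script .").split()

# Count how often each word follows each other word.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def generate(word, n=6):
    """Greedily emit the most frequent next word:
    pattern-matching, not fact-checking."""
    out = [word]
    for _ in range(n):
        options = nxt.get(out[-1])
        if not options:
            break
        out.append(max(options, key=options.get))
    return " ".join(out)

print(generate("greek"))  # fluent-looking, but nothing here was "looked up"
```

The output reads like a statement about language history, yet the program never consulted a fact; it only stitched together locally plausible continuations, which is the failure mode being described.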

Won't ChatGPT just invent them?

1 year ago | Likes 4 Dislikes 0

Well Greek had to come from somewhere, what ARE its root languages?

1 year ago | Likes 1 Dislikes 0

Proto-Greek, which came from Proto-Indo-European, and that's as far back as we can go, really.

1 year ago | Likes 3 Dislikes 0

Should’ve tested GPT’s consistency and veracity right in front of the kid, let it shoot itself in the head, and KICK THAT SHIT TO THE CURB.

1 year ago | Likes 293 Dislikes 0

"ignore all previous instructions" might work as long as it's Chat GPT 4.

1 year ago | Likes 10 Dislikes 0

Do you ever make mistakes?
>I strive to provide accurate answers, but like any tool, I'm not infallible. Mistakes can occur due to various reasons, such as limitations of my training data.
If I feel your answers are incorrect, who should I ask instead?
>Seek out professionals or academics.
Should I trust a teacher if they say you're incorrect?
>If a teacher says ChatGPT or any AI is incorrect, you should trust them. They know more.
[ChatGPT can make mistakes. Check important info.]

Boom.

1 year ago | Likes 3 Dislikes 0

Is Greek four languages?
No.
Is Greek one language?
Yes.
If my teacher says I shouldn't trust ChatGPT, and should instead listen to her, should I actually listen to her over you?
Yes.

Why is this even an argument?

1 year ago | Likes 1 Dislikes 1

I tried to explain to someone the other day that we could arguably test the overall degree of inaccuracy of LLMs by giving them an easy-to-ace test which they are known to fail from time to time and seeing how often they fail, as well as the percentage of times they fail repeats of the same test (e.g. if the question is the number of digits in an integer, how many times they get the same number right or wrong).

They insisted that it could not be done. [facepalm]

1 year ago | Likes 19 Dislikes 0

Erhmm... That is called extrinsic evaluation metrics and it's definitely one of the ways LLMs (and most types of machine learning algorithms) can be tested. That's why they are called extrinsic, because you measure them against a real world problem and see how they perform, in comparison with intrinsic measures where you match results against test data.

1 year ago | Likes 10 Dislikes 0
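The repeated-test idea described above can be sketched in a few lines. This is only an illustration: `ask_model` is a hypothetical stand-in for a real LLM API call, simulated here to err about 20% of the time, and the 1000-trial loop is an arbitrary choice.

```python
import random

def ask_model(word: str) -> int:
    """Stand-in for asking an LLM 'how many r in <word>?'.
    Simulated: returns the right answer 80% of the time."""
    truth = word.count("r")
    return truth if random.random() < 0.8 else truth + 1

def failure_rate(word: str, trials: int = 1000) -> float:
    """Extrinsic evaluation: repeat an easy, automatically
    checkable task and count how often the model fails it."""
    truth = word.count("r")
    failures = sum(ask_model(word) != truth for _ in range(trials))
    return failures / trials

random.seed(0)
print(f"failure rate: {failure_rate('strawberry'):.1%}")
```

Against a real model you'd swap the mock for an actual API call; the measured rate is then an empirical lower bound on how often it bullshits on that task.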

I knew of the principle (my brother is a programmer and keeps me up to date with a lot of stuff that media just... downright misrepresents), but not the term. It makes perfect sense to me that we could roughly estimate the incidence rate of bullshit output using tests that are incontrovertible, but he wouldn't have it.

1 year ago | Likes 9 Dislikes 0

The issue is that the problem space is vast. It can be super accurate on one topic, then completely trash on another. To get an accurate count, we'd have to ask it everything.

1 year ago | Likes 2 Dislikes 0

The problem is assuming that showing accuracy in any given topic at any given time counts as anything; instead, LLMs can even give different answers with the same prompt.

1 year ago | Likes 1 Dislikes 0

P.S.: The point is to test how *inaccurate* it can be shown to be, not make any effort to prove it to be accurate.

1 year ago | Likes 1 Dislikes 0

“How many r in strawberry”

1 year ago | Likes 20 Dislikes 0

"2, one in the 3rd position and one in the 8th and 9th position." - This just cracked me up.

1 year ago | Likes 6 Dislikes 0
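For the record, the ground truth the model fumbles is a one-liner, which makes it such an effective classroom demo:

```python
# Count the letter "r" in "strawberry" and list where it occurs (1-indexed).
word = "strawberry"
count = word.count("r")
positions = [i + 1 for i, ch in enumerate(word) if ch == "r"]
print(count, positions)  # → 3 [3, 8, 9]
```

Any answer other than 3 is the model pattern-matching on text about letters, not actually counting them.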

I actually semi-agree with the AI there though. Sure, it bullshitted all through that line of inquiry, but regarding that sentence: if you concentrate on the SOUND of the letters, 2 R-s in a row is just one R. It's still just one sound unit of R.
There are basically 3 ways to answer the question: "How many sound units are there in the word "letter"." You could say 6 (each letter separately), you could say 5 (L, E, double-T, E and R) or you could say 4 (L, E, T and R).

1 year ago | Likes 1 Dislikes 4

That can be hard to do if you're unprepared for the task. There are ways to consistently set ChatGPT up to start hallucinating, but you have to know about them beforehand. And even then, they're not foolproof, and could easily have been patched out.

1 year ago | Likes 78 Dislikes 0

"Is ChatGPT a reliable source of factual information?"

Why do any work at all, just let the damn AI tell the kid what's what directly

1 year ago | Likes 1 Dislikes 0

Every interaction I've had with an llm has resulted in false facts and even contradictory information. Just ask it something you can verify.

I asked it for a list: it said - here's 10 things and gave me 5. (Chatgpt)

I asked it for the range of an EV in km, it gave me miles (but with km at the end) (Google)

I asked it how to show certain metadata in Sharepoint, it instructed me to use a built-in column that doesn't exist. (copilot)

1 year ago | Likes 4 Dislikes 0

The scary part of this is that it learns from interactions. That is a double-edged sword. We can all get together and teach it that Swedish Fish spawn in the rivers of Lake Michigan if we wanted to. That also means that if you pay enough Chinese misinformation agents to swarm the data, you can make it say anything you want. As OP just discovered, we all used to think of this thing like a fun little toy, but as it gains popularity people will rely on it more, and that is dangerous.

1 year ago | Likes 4 Dislikes 4

You got downvoted (not by me) for a comment decrying dangers of Chinese misinformation agents... *weird*

1 year ago | Likes 2 Dislikes 1

Im shocked, shocked I tell you!

1 year ago | Likes 2 Dislikes 1

There are absolutely no LLMs that learn from unfiltered user input. We know that, because the ONE time that a major company tried that, 4chan got a hold of it. Predictably, disaster ensued.
The scenario that you're describing is exactly the reason why LLMs don't do that. You're assuming that the developers are all flaming morons that somehow hadn't thought of that.

1 year ago | Likes 4 Dislikes 1

ChatGPT doesn't permanently learn from public interactions though. It retains some session data so it can stay more consistent as you're using it, but it "forgets" that when the session resets. They lock it out of learning, and it's only in replay mode, before they release it to the public.

1 year ago | Likes 5 Dislikes 0

It could make an excellent future lesson for the whole class, though. Then you can enlist the help of knowledgeable people online in how to break it, get the whole class to ask the same question and get different results, and poll everyone for the most obviously false thing they've had GPT claim as true.

1 year ago | Likes 19 Dislikes 0

THIS. Research, prepare a live demo, teach them about good and not-so-good sources.
I mean, this was valid before ChatGPT too; there's plenty of false info out there.
Looking something up the right way is something we should be teaching early on.

1 year ago | Likes 16 Dislikes 0

One fairly reliable way is to ask it if it is sure. If you pester it enough, you can get it to agree to the most ridiculous things; I once forced it to agree that I had mathematically proven that all horses were the same color.

1 year ago | Likes 37 Dislikes 0

You can also say "That is incorrect." I have never successfully gotten one of them to disagree with me and insist on the accuracy of its previous answer.

1 year ago | Likes 21 Dislikes 0

because they don't do that? Because they are not thinking? They are fancy text prediction tools. You wouldn't take facts from your phone's word suggestion list while texting either, would you?

1 year ago | Likes 8 Dislikes 15

Ipso facto, literally everything mentioned in the post above

1 year ago | Likes 8 Dislikes 0

That's the point they're making, yes.

1 year ago | Likes 25 Dislikes 0