CaldariBob
"Pivot!"
PaddyPatrick
Final step: paint it so the TV doesn't rust
NameChecksOut87
Stud finders are fun. You put one on your chest and say, "I found a stud."
schnitzell
I don’t think my wife would even notice, she’s just buried in her phone the whole time she puts anything on anyways.
sillyDad
Seems legit
thepicklebucket
YouCanShoveYourMagicBeansUpYourAss
TBF, AI could certainly replace some of the managers I've worked for.
Theshnazzyone
No ChatGPT is right, this is how you mount a tv you killed in the wild after the taxidermy is done.
cunninglinguist85
ChatGPT: fINisHeD!
DrFedora
I can't help but laugh at the fact that the final panel shows the guy realizing how badly he's fucked up.
Skuggen
Stud finder picture is even missing the part where he points it at himself and goes "found one!"
TinyLiehon
ChatGPT has self-confidence issues. Who wouldn't when half the internet makes fun of you and it's your job to read the internet?
blainetog
Also, your name sounds like, "Cat, I farted," to the French.
TinyLiehon
Was trying to get "Chat, j'ai pété" out of TinyLiehon. Then I realized I may not be the sharpest Biological Intelligence in the chat
ItsPhotoShoppedByMe
Don't forget the palot holes.
nonopenah
pailot holes
nonopenah
Or is it pöilot holes?
StTriniansHeadBoy
No, *you* swivel, ChatGPT.
Srcsqwrn
Swive!
Revicus
I don't see the problem. Step one, install studs. Step two, find them with stud finder. Step three, reinforce studs with scovs. Step 4, remove hand and place TV against wall. Step 4, smooth TV with a planer. Step 6, drill palot holes through TV. Step 7, retrieve missing bracket. Step 7, drill pailot holes. And finally step 19, swivel.
HeadlineNews
Still doing a better job than my uncle. That's who chatgpt is replacing.
PaleChapter
Why ChatGPT will fail to perform when it replaces your job anyway: the numbnuts in charge don't know any better.
blainetog
Fun fact: LLMs are running out of training data. They need about 5x the data they've already ingested to improve, and then they'll need 5x *that* data to improve further, and there just isn't that much on the internet.
CheshireCad
Although they do need *more* data, what improves them the most is *better* data. Better quality, diversity, tagging, etc.
It's really silly to believe that they'll just run out of data to give to it, and not be able to improve any more after that. What kind of data could they possibly need, that they don't already have an unlimited supply of? Is eleventy bajillion identical instagram selfies not enough to train that part of the model? They need *twelvety* bajillion?
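The "eleventy bajillion identical selfies" jab is basically the deduplication argument: identical samples add zero new information, so the first filtering pass any data pipeline does is throw duplicates away. A toy sketch of that idea (the function name and corpus are made up for illustration, not from any real pipeline):

```python
import hashlib

def dedupe(samples):
    """Drop exact duplicates from a corpus, keeping first occurrences."""
    seen, unique = set(), []
    for text in samples:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique

corpus = ["selfie #1", "selfie #1", "selfie #1", "an actual novel sentence"]
print(dedupe(corpus))  # only two distinct samples survive
```

Twelvety bajillion identical selfies dedupe down to one selfie, which is the point: volume isn't the same thing as information.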
blainetog
They do, though. I'm just telling you what the AI companies are saying. Quality data would be helpful but there's even less of that in existence.
blainetog
https://theweek.com/tech/ai-running-out-of-data
AverySillyName
I love that google is now at the level we were at before google, where you just ask your uncle a question and he confidently answers it incorrectly.
Showsni
@BotDrawA series of instructional pictures showing how to mount a TV on a wall
BotDrawA
@Showsni Here's your (experimental extra) drawing of a "series of instructional pictures showing how to mount a TV on a wall, Brothers Grimm"
BotDrawA
@Showsni Here's your drawing of a "series of instructional pictures showing how to mount a TV on a wall"
Zreen
That's why you don't use it as an end all be all. Let it do its thing, then correct where it's wrong or intuitively illogical. Rinse and repeat. Guide it like it's a first grader coloring in a picture book with wide crayons.
Cthulhudreams
I'm unsure what's going on in steps 4 and 7
kazeshi
step 4 : when you realize you have the thing mounted backwards and just need to rest your head.
step 4 - redux : you decide its the tv's fault and punch it, leading to...
step 6 : fuck this whole thing imma drill holes in it
step 7 : shit, did i really do that? maybe it isnt that bad *looks at the screen*
testzero
So is ChatGPT
RevengeIsIceCream
Which step 4 and which step 7? I also have a hard time following steps 10-18...
TheSlouchOfBethlehem
Looks like it was hedging its bets on how to spell 'pilot' and still lost
HubicPairs
I’m the guy in frame 6
Destor
I think the guy became a girl for that frame
sadurdaynight
if your tv doesn't come pre-drilled with vent holes, you'll need to add your own.
bigdinger81
That's not what chat GPT does at all 🙄
DrKonrad
AI can't even count the steps: 1 - 2 - 3 - 4 - 4 - 6 - 7 - 7 - 19
Shaodyn
I can't help remembering the time Microsoft's AI, Copilot, insisted that 8x4=24.
suiseiseki
Terrance Howard must be proud of it
Shaodyn
AI can't even do basic multiplication, and we're supposed to trust it to do our taxes for us? I don't think so.
DrKonrad
If AI uses 8x4=24 to compute my taxes I'm all for it
Shaodyn
Point is, these programs can't do what we're trying to make them do. They're not designed for it.
sandymount
“Silly ChatGPT”… and then you’re hanging onto a chainlink fence while your skin is melted off.
Photeus
I dunno. I kinda feel like an H:ZD ending for humanity is pretty likely. But less "fix climate change" and more infinite kill-bots
eetsumkaus
I am definitely yoinking this meme format
ricpaul
https://imgur.com/wtfg1OA.mp4
astronomypictures
It would forget to open the silo doors before firing when it read the how to fire a nuke instructions it was asked to write
Whatdoyousaytoanicecupoftea
Whooosh
HeadJamistan
The joke is that ChatGPT is dumber than Skynet, so dumb it would even botch the launch.
Whatdoyousaytoanicecupoftea
So far
ravnicrasol
Kinda yeh, kinda nah.
The biggest concern is that GPT's shown to have a nascent capacity to understand foreign perspectives. As in, the first glimpses of a system capable of going "I know X, but person A doesn't know X, thus I can lie to them about X", which is the basic prerequisite towards being able to manipulate someone.
16bitStarbuck
Lying about Twitter? What?
FlipTheBirdBeforeTheBirdFlipsYou
Code will fix that: if(isLying) { dont(); }
ravnicrasol
You don't need to lie to manipulate someone tho.
Look at journalism.
Selective truth can be even worse than lying
Feralkyn
Do you have a source on this? Everything I've seen suggests it's still simply predictive text spitting out what it thinks it SHOULD say
ravnicrasol
I'd recommend watching just about everything Computerphile
( https://www.youtube.com/@Computerphile ) and "Robert Miles AI Safety" ( https://www.youtube.com/@RobertMilesAI ) have to offer. Their general consensus/explanations are that YES, these massive language models are based on text-prediction, but that at the same time the tech is clearly pushing into intelligence.
For a more specific source on my claim, I'd suggest watching this one: https://www.youtube.com/watch?v=2ziuPUeewK0 from RMAI
Feralkyn
Thanks! I'll have to set aside some time to watch these, appreciated
Ryyyyyyyan
It doesn't have the capacity to understand anything. It's predictive text, there's no thought or understanding. It's like comparing taxidermy to Frankenstein's monster. You can see some superficial similarities, but there is an incredibly vast difference.
obijan
And you are a few pounds of meat with an electric ghost in it that drives a meat skeleton.
Meanwhile, the rock with lightning running through it passed the Turing test.
ravnicrasol
Understanding and thoughts are not a requirement for intelligence. They're a consequence of a high enough intelligence.
Simple example are animals and how they solve problems even if they clearly don't understand everything they do.
Heck, humans very often solve problems without understanding them. Compare a kid vs. an expert tackling the same problem. The one who understands is able to find better solutions.
kiliz
Would we consider a calculator intelligent?
Intelligence is "the ability to acquire and apply knowledge and skills."
Without any agency of its own I don't think an LLM qualifies. We consolidate the knowledge we think is useful and then we build a prompt to generate the desired response.
I think the real interesting question is: if it's simply a "thought calculator" then what is the extra component in us? Agency? Is that just reaction to stimulus in recursion?
We can simulate that, too.
ravnicrasol
Going for strict definitions, "Knowledge" is "Information", and "Skills" is "Do something well". The true key here is "acquire". So, in short, if you program a calculator to do something, it didn't acquire any information.
LLMs and image generators work differently. You tell them what objective you want them to achieve, and then give them an algorithm to guide the training.
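To make "objective plus an algorithm to guide the training" concrete, here's the smallest possible version: a one-parameter model that acquires the relation y = 2x from examples via gradient descent, rather than having it programmed in. (Toy numbers, nothing from any real system.)

```python
# Objective: minimize squared error on the data.
# Algorithm: gradient descent nudges the parameter toward that objective.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden relation: y = 2x

w = 0.0      # the model "knows" nothing at the start
lr = 0.05    # learning rate: the algorithm's step size
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # each step reduces the error a little

print(round(w, 3))  # ≈ 2.0 — acquired from the data, never hard-coded
```

That "acquire" step is exactly what a programmed calculator never does, which is the distinction being drawn above.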
wherethehorriblethingsare
You say "won't", I say "shouldn't" because we all know some c-suite dildo will happily fire an entire department and farm the job out to AI the instant it can string together four words semi-coherently.
cjandstuff
I am so looking forward to the first company to go down in flames for doing exactly this.
Trastion
Intuit just announced it's doing just that... and when your taxes are fucked, it won't be their fault.
hydrocarbon82
Can't wait til someone literally files US taxes to the UK government. "But Intuit told me to..."
Shaodyn
"Your taxes are fucked up and you're going to jail for tax evasion? Guess you shouldn't have let us farm your taxes to an incompetent AI that was never designed to do 90% of the stuff we tried to make it do. Thanks for the money and enjoy prison."
Shaodyn
What we call AI are actually large language models. They're made for predicting which word comes next in a given sequence. Not math.
SuperPickle17
I mean, they use math in a statistical model.
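Both points are right, and this stripped-down sketch shows how they fit together: the model computes probabilities over candidate next tokens with real math (softmax), but the scores come from text statistics, not arithmetic. The vocabulary and scores below are invented for illustration.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over next tokens."""
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Hypothetical scores for candidate next tokens after "8 x 4 ="
vocab = ["32", "24", "48"]
logits = [2.0, 1.5, 0.5]  # learned from how often each string followed in text
probs = softmax(logits)
best = vocab[probs.index(max(probs))]
print(best)  # prints 32
```

If "24" had happened to score higher in the training text, the model would confidently print "24" by the exact same mechanism, which is how you get Copilot insisting 8x4=24.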
GerbilHereReportingLiveFromRichardGeresAss
This would be a good case for some kind of law that bans the usage of AI.
Shaodyn
Unfortunately, it'd probably have to be a class-action suit. One person going to jail because an incompetent AI doing something it was never designed for managed to screw up his taxes wouldn't even be on the court system's radar. We'd need thousands of cases at minimum.
Shaodyn
I still think that, when the "AI" bubble bursts, the people who actually use those programs will go back to calling them large language models. The term "AI" will have been completely ruined for everyone.
GerbilHereReportingLiveFromRichardGeresAss
Seems like there will be that many cases, seeing how trigger-happy corporations are about implementing AI in its full incompetent glory. Even if they include a clause saying they're not responsible for what the AI does, that's just straight-up admitting they take no responsibility for their product, and you can't have that, legally.