I thought vibe coding was when you sit down one evening with snacks, beer and Led Zeppelin&Metallica and just get into the zone and code to hell and back..
I tried bouncing a few programming ideas off an AI earlier today. The feature I wanted is already O(n) in my code but runs slower than I'd like. It offered an O(n²) alternative, then an O(n log n) one, and then suggested something that was demonstrably wrong. I know for a fact there's at least one other way that is also inefficient that it didn't suggest, and I'm not sure if that means its knowledge is incomplete or whether it decided that was even worse.
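The comment above doesn't say what the feature was, so as a purely hypothetical illustration of that complexity ladder, here are three ways to answer one question (does a list contain a duplicate?) at O(n²), O(n log n), and O(n):

```python
def has_dup_quadratic(xs):
    """O(n^2): compare every pair of elements."""
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_dup_sorting(xs):
    """O(n log n): sort a copy, then scan adjacent neighbours."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

def has_dup_linear(xs):
    """O(n): one pass with a hash set."""
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Even here the constant factors matter: for tiny inputs the "worse" versions can be faster in practice, which is exactly the kind of nuance that gets lost when a model is reciting the statistically average answer.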
My students don't want to learn coding any more, because they are persuaded that AI will produce all the code they need in the future. I'm gonna show them this video
A friend of mine is a high-level programmer. Think architect level. He uses AI tools a lot, but he said he had to study A LOT to figure out how to use them right. He said AI for programming is only good when the user is very experienced with it.
I've used it to code simple things. Even so it took me about 3-4 hours and I had to repeatedly add 'do not summarize or truncate code' because, yes, it kept trying to summarize all that boring code into '// put code here'
I tried to get it to write a custom loot table for Minecraft. It got the versioning wrong. Then it got the file path wrong. Then it corrected itself on the file path, but got the versioning wrong. Then it got that right, but got the mechanics of dropping cooked meat wrong, and told me it was only possible to drop cooked meat if the player is on fire when the mob dies. Then I told it how the default loot table manages to do it, at which point it agreed and then got the versioning wrong.
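For context, the vanilla mechanic being described (meat drops cooked when the *mob*, not the player, dies while on fire) is expressed in the default loot tables with a `furnace_smelt` function gated by an `entity_properties` condition. Key names and condition syntax have shifted across game versions (the exact versioning trap described above), so treat this as a sketch of the shape, not a drop-in file:

```json
{
  "type": "minecraft:entity",
  "pools": [
    {
      "rolls": 1,
      "entries": [
        {
          "type": "minecraft:item",
          "name": "minecraft:beef",
          "functions": [
            {
              "function": "minecraft:furnace_smelt",
              "conditions": [
                {
                  "condition": "minecraft:entity_properties",
                  "entity": "this",
                  "predicate": { "flags": { "is_on_fire": true } }
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```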
Like every technology ever it has strengths and weaknesses and some key limitations. There are absolutely tasks which it excels at but people keep expecting miracles. It is very useful at creating documentation from code especially if you leave it hints. It also can do a pretty good job at writing unit tests for code you wrote. If you have stringent and documented code style guidelines or rules you can have it do a first pass on PRs that is useful.
Reminded me of how using OpenAPI standards and tools to generate pages with interactive documentation of your API was a novel revolution. AI is nice, when used by people who already know what they are doing
I was with it until the sending it for review bit. I know there are idiots that actually do that, but you can't blame the LLM for that since you clearly didn't bother to review the code yourself.
The only thing worse than a programmer that doesn't review what an LLM writes for code are those that accept an LLM's code review and approve without looking themselves.
except by constantly letting LLMs spit out code-blobs instead of working to understand the problem and its solutions, we're rapidly reaching a point where AI-assisted developers are no longer capable of reviewing code. Instead of developing coding skills, they are developing LLM-wrangling skills.
Yeap. I fear we are barreling towards another "get old programmers out of retirement" mess like Y2K after this "just use 'AI'" garbage hits its peak. They'll need those of us with real talent to come in, figure it out, and then clean up the mess.
The really fun thing will be that there will be less non-LLM-generated code out there as this goes on, which will mean LLMs learning from LLMs. We know how that goes...
AI should be a time-saving measure and nothing else. If you rely on it to be accurate and correct, you will be sorely disappointed. If you rely on it to be creative and innovative, you will be sorely disappointed. If you rely on it at all, you will be sorely disappointed.
Treat AI like a chef treats a microwave oven. If you find a niche use-case, then great I guess. If you have to use it to cook every meal, then you're probably running an Applebees and you should reconsider your life choices.
AI has done exactly that for me though. Just got through a project in 1/3 the time it usually takes me because of AI, including debugging and writing code that isn't the usual standard boilerplate. It even gave me ideas to improve my algorithm when I was stuck on a problem. I'm not a dev by trade (Computer Scientist), so it helped a LOT when you don't have all these things memorized. The trick is to only make it produce code that is easy for you to check.
As long as you know that you're playing with fire. If it's just rearranging things you already understand, it's a time saver. But if it provides code that works, and you don't understand how it works, that's what gets you in trouble. Debugging finicky code you didn't write is like being swept out to sea without a paddle. And god knows where the AI copy/pasted that code from to begin with.
I don't really think it's playing with fire. After all, it's just a step beyond finding random code or code fixes on Stack, or even coding in something you yourself are unfamiliar with. Closure processes exist for a reason.
There's a lot of times I need a snippet of bog standard code that I haven't used in half a year or more... like an SQL connection string. Doesn't come up a lot in what I do. Used to be, I'd go out to Stack Overflow, look one up, remember how it works, and configure it for what I'm doing. Now I ask the AI, remember how it works, and clean up the AI's suggestion. It takes about the same amount of time, maybe a hair less. That's my niche use case.
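For readers in the same boat: a minimal sketch of assembling that kind of connection string in Python. The driver name and keyword spellings below are typical examples, not something to trust from memory (mine or an AI's); check your driver's documentation:

```python
def build_sqlserver_conn_str(server: str, database: str, trusted: bool = True) -> str:
    """Assemble a typical ODBC connection string for SQL Server.

    The driver name here is one common example; the exact name and the
    available keywords depend on which ODBC driver you have installed.
    """
    parts = [
        "DRIVER={ODBC Driver 17 for SQL Server}",
        f"SERVER={server}",
        f"DATABASE={database}",
        "Trusted_Connection=yes" if trusted else "Trusted_Connection=no",
    ]
    return ";".join(parts) + ";"
```

The point of the comment stands either way: whether the template comes from Stack Overflow or an AI, you still have to remember how it works and adapt it to your setup.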
I feel this. I'm a tired, cranky, old software engineer trying to free up more time. I've never liked or trusted AI, but keep hearing about people completing actual projects with it, so I figured I'd try. This video doesn't even come close to covering all the nonsense I got out of Claude and the amount of frustration I felt. I finally just cancelled my sub... instead of making me more productive, it made me less productive.
Because it's autocomplete, it's not intelligent; it cannot solve a problem. It just gives you the statistically probable answer, the average. We're programmers, we know average is nonsense
I read about a recent experiment that compared the time required for a programming task with and without AI assistance. The task took longer with AI assist because of the time required to check and debug the AI product.
Maybe use Claude? I haven't had the problem of the AI doing completely the wrong thing (removing the function?); it does something close, but with bugs, or verbosely.
Today I asked Claude to fix an error where a zip file could not be read. It added a check for the extension being `.zip`, wrote an error message saying the file wasn't valid, and called it a fix.
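For contrast, an actual fix for "the zip file could not be read" means attempting the read and surfacing the real failure, not renaming the symptom. A minimal Python sketch (the function name is hypothetical):

```python
import io
import zipfile

def read_zip_names(data: bytes) -> list:
    """Return the member names of a zip archive, or raise a clear error.

    Unlike checking for a '.zip' extension, this actually validates the
    archive by parsing it.
    """
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return zf.namelist()
    except zipfile.BadZipFile as exc:
        raise ValueError(f"not a valid zip archive: {exc}") from exc
```

An extension check passes every corrupt `.zip` and rejects every valid archive with the wrong name; parsing the bytes handles both.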
Not really. Vibe coding is "I have no idea how this is working but I've explained what I want at a feature level, and you made SOMETHING that seems to work kind of right. I will ask no more questions". This is more like an assistant for people who know broadly what they are doing but want it done quicker. As a Dev with 10+ years experience, they're useful when something is going to be annoying to fix manually, but I can check what it does for correctness. Never trust the machine on its own.
Vibe coding is using only AI to create crap code - fast. If something does not work, you ask AI to re-do it. All these new vibe "coders" are completely unable to make safe and performant software. You might get a demo out fast, but it just doesn't work for any business use.
Vibe coding is using a code editor such as Cursor or Windsurf to prompt your way to finished code rather than typing it yourself. It shifts the majority of the effort to reviewing rather than typing, which is a lot faster if you know how to do it efficiently and review and test the code well. We moved all of our 25 developers to Cursor and the productivity is through the roof. Devs have written crap code since the dawn of coding, proper reviewing was always the answer and it still is.
Well, yes. Most code in production is mediocre, buggy, insecure crap. Which all the LLMs have been trained on. What else would you expect them to emit? If you've got a decent review culture (the most effective strategy to date I've seen was "has anybody besides the author observed this code working?") you may as well let the robots fart out the code since they're faster at it
Not exactly.. Vibe coding was originally a tech term used for a quick and messy implementation to test an overall idea (with the intent on coming back later to organise and refactor)
The phrase was then co-opted by the slop jockeys into the new context of just letting the AI do your thinking for you.
This is exactly vibe coding. Vibe coding means building with AI and prompts first vs traditional development where you type the code itself. Vibe coding wasn't a phrase before this.
Sounds like a good way to create lots of unforeseen bugs and unintended behavior in your code. Coding is very intentional. Compiling and testing is kind of the name of the game. I could see AI being great for slapping something together, but picking the bugs out would offset the time it saved you, imo.
There still exists a comment from a senior dev (i.e. founder) that reads basically "yeah, this is a weird way to do this but I've been up at night listening to techno, and I need a shower." I will reject any PR that attempts to alter it.
I'm sure you've heard of the lazy, incompetent lawyers who used AI to compile citations relevant to their case? They submitted that to the judge without checking them, and -- surprise! -- some of the citations don't exist. The judge was NOT amused.
Don't worry, someone I know was hired to write legal textbooks using AI. He knows nothing about the law. They're literally giving degrees to lawyers who study AI legal books. We're fucked.
"Some" is underselling it. The judge specifically described the lawyer's submission as, and this is a direct quote: "replete with citations to non-existent cases".
Based on the experience some of my colleagues got with AI assisted coding while dealing with things like improving code compliance to X or Y, there was never a situation where AI provided a better answer than mine, in the best case scenario it was the same thing or something equivalent and I am not even a great programmer :|
It's good at optimizing existing code. It's REALLY bad at generating new, working code from natural language, as shown. I was tasked with using a tool to build a front end since I am NOT a front end dev. The tool struggled with making changes to a scroll bar it made to fix its hideous mouse-over behavior. It was flashing bright white, old school rectangular when it was otherwise a styled dark-mode implementation. After an hour, "Oh, it looks like we weren't using the component I've been editing"
I was tired and just wanted to add something to a bit of python script for a Blender add-on to make a little button to add a category, identical to the one that already existed in another part of the UI, but I'm not very familiar with python and so it takes me a bit. So I tried asking ChatGPT to do it, gave it the code I'd already done, and said "don't otherwise change the UI". Every single time, it changed the UI. Sometimes removed it almost entirely. Took me maybe 10 minutes once I'd slept.
We had some presentations about how to use AI effectively with coding, and the main takeaway seemed to be that you had to already know exactly how to write a bit of code to be able to get anything out of it and be sure it didn't just make things up. At best it can be used as an alternative to delegating a task to an inexperienced junior colleague... and doing that in the long term is self-defeating, because all experienced coders need to have spent some time writing code as inexperienced junior coders
I look at using CoPilot (and whatnot) the same way I use Stack Overflow, but I can ask more specific questions. I still verify it actually does what I expect and look for places to improve. It’s great for boilerplate solutions, but verify verify verify
Claude's been pretty good, but I'm also just tinkering with code for a hobby and letting it do the boring things like the alpha UI layouts so I can focus on methods behind the scenes. It also does a pretty good job of keeping data management up to date.
Do not try to get it to do anything more logically complicated than basic arithmetic though, it's a little loopy.
I have found that AI assists can be useful in learning how to use certain things, but that you always have to gain that understanding and review the entire product before using it.
Like, I've learned a lot about VBA scripting in Excel, enough to use it without an AI assist, but I usually start by asking the AI how to go about something, because sometimes it'll show me something new.
The real trick is recognizing early when you're dealing with a code problem that fights against AI biases.
I've only used it a bit, and basically found it useful as a starting point for something I don't know how to do generating relatively simple scripts. I don't even use it within the project, just ask it to generate a function that does X then take it and fix it.
I've done the same, to wildly mixed results. I like to think of it as a highly motivated, extremely prolific, and almost completely incompetent junior programmer. Except instead of them taking two weeks to fuck up the implementation of your spec, now it happens in seconds
AI is a tool, and just like in every other situation in the history of mankind, a tool is only as good as the retard holding it in the wrong direction. When I became a software developer 20 years ago, the first thing I did was build tools to make my life easier. The fact that we have developers here using tools generated for them instead of using the API to build their own tools speaks volumes to the quality of developers, not the tools.
LLMs can really easily spit out templates and common code blocks - tasks that the data that they were trained on has in spades. When the codebase gets too big for its input, it breaks. When you ask it to do something that's unusual or novel, it breaks. When you ask it to make a small change, but keep everything around it the same, it breaks. You can either spend days trying to prompt your way around the limits of LLMs, or you can write it yourself and know you can trust the code.
It's good enough for boilerplate code. From there I validate and edit, which still saves me time in the long run because I can review faster than I can author. I really think of it as a souped-up version of autocomplete and nothing else.
One trick is to ask it to document the code in a plain text file. Like a design doc outlining all the features and caveats. Then start from scratch and tell it to follow the design doc. It's still not viable for large projects, but smaller ones can be kept on track like this.
I am an expert at neither AI nor programming, but I have a question: in the case of the ‘small fix’, wouldn’t it be better to just do it yourself than tie yourself into knots trying to get AI to do it?
It's almost always easier to solve the problem yourself than use AI, but that's only with sufficient experience. Someone that AI-coded themselves into the problem with no knowledge is almost guaranteed to be more successful AI-coding their way out of the problem.
Coder here - "small" can be relative, but generally you're right. The same kind of applies for big projects and big fixes. But then there's isolated stuff that's easy to test but hard or boring to develop. Things you might not think are worth your time but are really nice to have. There's some advanced physics interactions I'm too dumb to program myself but can see whether they work correctly instantly. For those kinds of isolated testable tasks, it's great!
To add a bit of context in the situations I was referring to the objective was to change existing code to improve compliance to certain rules due to issues that had been detected by static analysis tools, not to make code from scratch. So my colleagues were essentially trying to check if they could get a more adequate solution than the one I had suggested after analysing the problem for a couple minutes. That is why the response being the same or less adequate is an issue here.
I am not a coder. In fact, I dropped C++ at two different schools. However, I play an 80s game called Tradewars 2002, which is so repetitive, someone named Xide created a scripting language called TWX Proxy. I have tried to write scripts with very limited success for 15 years or so. I'll ask an LLM to write a script in TWX Proxy and it will immediately produce a script which looks awesome for a fraction of a second. "That says Python." "That says perl." "That looks like TWX Proxy, but x, y,
and z are invalid commands." "That is not how you use these commands. This is what the script resource says." "I figured out 90% of it, but now I have this problem." "What did you do?! It doesn't compile anymore!" Then either I figure it out or run out of time. Unfortunately, my scripts never seem to work the next time I try them.
And by Java, it means.. Bad $0.99 gas station java. That you still have to go to the gas station yourself to pour. Boom! The future is here! And it tastes like $0.43! Blech!
From what I hear, the way to use an AI coder is to ask it to give you an IF statement and then have the actual programmer fill in the details. (Okay, an if statement might be a bit too easy, but something like that, because darn it if I can't remember how to write a standard function and I can't be bothered to get the programming book to look it up.)
Use an Anthropic Model, they can actually generate code decently. The Google models don't do a bad job either. Just remember that AI can't design architecture and it is about as skilled as the worst coder in your graduating class copy and pasting from Stackoverflow.
Yeah, it's good to comprehend/summarize code, refactor the code, but for new code --- just to get you started. Helps me get started on a task I'm not eager to start because I don't remember the APIs. At least I'm not starting with a blank sheet.
I use Anthropic actually! It's by far the least infuriating but it does get under my skin a lot of times. I tried Gemini and it was the worst one out of the 4 I tested. Maybe it's better nowadays. Your analogy is pretty much spot on.
I was a hater for the longest time until I tested out Copilot a few months ago. It was saving me like 20 min a day, so I changed to Cursor and started testing Claude 3.7. I significantly decreased the amount of time I was spending reviewing generated code as it had far fewer stupid errors. Now Gemini 2.5 Pro is my go-to, but it is too expensive to use all the time. If I am using chat I always get it to explain what it is doing with sources. It is a powerful tool in a professional's hands.
Oh yeah there are definitely areas where it will save time and even solves issues faster than I could figure out. My friend, who's a veteran senior programmer gets more mileage out of even chatGPT than I do from Anthropic's pro model. For same kind of work. I'm just a self-taught and an amateur compared to him. I will look into Gemini 2.5 Pro. I appreciate the shout!
I'm betting experience varies pretty wildly based upon which model you're using - they seem to range from actively harmful to useless to being "a knowledgeable and motivated assistant who also happens to have a major head injury" (in the words of someone I know).
Yea, and more generally it uses best practices, effective shorthand and comments in the code it generates. It's hard to not immediately see the benefits of writing code in a similarly effective way.
yeah, I was a bit surprised since I haven't used AI to code since 2022, and now it comments much better than I do, and even handles corner cases I haven't thought about! I come from an Electronic Design Automation background though, so I went through my education and training used to a machine doing technical work for me. It might be harder for people who come from a pure coding background to come around.
that goes for most of us who only have to code as a side really. Imagine a system architect who can get a model off the ground before it gets to the devs. It opens up possibilities to more complex systems, and iterating on them before the specs hit the devs and reduces the turnaround time when they find a bug in the spec.
It doesn't have to work to replace good workers. AI execs just need to convince idiot capital investors that it can replace workers. Nothing is going to function anymore, and we'll still get billed for broken services.
Yeah, this is the scary part. Even worse when I see people thinking it is great. Take notes and write a recap? Sure. Google something sure. Turn full code writing over? I still trust my juniors more, and only have to check their code twice. AND at least some will continue to be Seniors, or switch to other IT areas where they will thrive, but people believe “anyone can do it”, which might be true…but can they do it well? Right, and they are still better that AI
But in theory it saves corps money, and people who feel entitled to other people's work - including corpos - get to use other people's work for their own benefit while participating in the exploitation of labour, which regular citizens don't often get to do since they're usually the ones being exploited, so everyone benefits, especially the corpos. And we need to think about the poor execs and their billions: if they had to pay for work, they'd lose money :(
I spend more time fighting and fixing what the AI messes up than the time "saved" using AI for coding. I swear to god I have never gotten as angry in my life as I have when having to deal with stupid AI for work.
I'll give AI credit, trying to use it to make a simple python script got me to learn a bit of python, because it was easier to learn it from scratch than constantly fight to get little changes without it inexplicably changing other things for no good goddamn reason.
Pretty much same. It was faster to learn more of C# than fight the AI. It can still write basic code faster than me and solve some issues I can't. And troubleshoot my mistakes on many occasions. Most of the other stuff I have to do myself from scratch.
Can someone please come up with a new term for the thing we have now that we call AI that's not actually AI at all?? Comment after comment on this post about how bullshit AI is, yet we still insist that the Intelligence part of the term Artificial Intelligence is valid. It's artificial sure, but it's not intelligent, why are we giving it that honorific title?
I’m afraid you’re incorrect; it *is* AI, just that phrase technically covers technologies dating back to the 1940s. Video game NPCs use pathing algorithms that are AI. Intelligence can be narrow and shallow, and the latest models still meet both those descriptors.
It’s not what sci-fi calls “AI”, which is more properly called artificial general intelligence (AGI).
I get that people feel the current stuff doesn’t deserve the name, but it’s what the *field* is called.
Two wrongs don't make a right! So a mistake was made....FIX IT! I'm asking for a NEW term that would accurately describe what we have at the moment, not an additional explanation for why we ended up at this shitty place
Do not entrust to AI a task any more complex than you would give to the high school intern who just happens to be related to a management level employee...
I find LLMs really useful for writing scripts and such, because it takes the LLM a few seconds to write something that would take me hours of "how to do x in powershell" searches (because all my experience is in C/ASM). Also, I noticed that the more detailed and explicit the prompt, the more likely you are to get functional code on the first try. Ironically, this means you already need a programmer mindset to think of a full solution to your problem so you can describe it in detail to the LLM.
Treating LLMs as a place to easily search for information and get help with problems you’re struggling with has been the only useful way I’ve found to use them. And even then, I reword my phrasing or continue to ask more questions to make sure it understands exactly what I’m looking for. I would not trust it with my own work.
Doesn't the follow-up required fact checking completely negate any usefulness it might have? Like, you can take the entire AI step out and NOT have to keep asking follow-up questions (when the AI might change its mind and start making up new "facts").
Often times it is still faster than “googling”. I used it heavily for symbolic logic when I couldn’t understand the next step. Not to solve the entire thing for me, just to help me see where to go next. The follow up questions were absolutely necessary because it doesn’t always understand the parameters of say Fitch, or how the problem is being worked out.
But in even less technical queries, it will give you answers that are almost correct, but maybe not the one you are looking for. Especially if it involves stuff that has a ton of information. For example, try asking it about a very specific song or instrument that the Beatles used during a performance. It will give you an answer because they used so many instruments, but if you really wanted to nail it down it may give you the wrong answer over and over.
The problem of course being that the only reason you're handing tasks to a high school intern is for the benefit of the intern since they need the experience. Giving a task to an AI isn't really nurturing the next generation of workers so at that point there's just no reason to get it involved.
Give it to AI, fly the marketing team to the client’s location, extend the business expense account so that the client can get invited by the marketing team to improve the bond, and shelve the product. Everybody’s happy, and on paper we created 20,000,000 of market value, so the shareholders are also happy. EzPz lemon squeezy, welcome to the world of corporate economics, where nothing matters and stocks always go up.
I use it as an advanced rubber duck. And it gave me a really neat idea using rxjs for limiting the number of calls to a backend. I was really, really happy with that solution. But so far, I don't let it touch my code.
I just recently started using AI to help me with some hobby hardware code. I don't let it do everything though. I already have existing code then ask for best practice with structuring, and if something doesn't work in my circuit, I use it to help troubleshoot and bounce ideas off of. But all of the code is all me and I still leverage forums, YT, and sites to help with my code/circuit design.
It's definitely helpful for learning code. It often gave me better ways to do things than I was doing. If you just copy and paste code though and hope for the best, you're going to have a bad time. It's also really good for finding stupid errors that you keep overlooking.
AI if used correctly is a great tool, but can't give you a solution unless trained. You can do some advanced code with AI, but you'll first have to write thousands of lines of code yourself and not trust the AI to do it for you. You can use Copilot and have it recognize your code and add suggestions as it learns from you as you go. Claude however is pretty good at generating code, but again you'll have to write your own code or it will just keep making errors if you don't know how to correct it.
Personally, I use it to help learn how to work with a language I am not familiar with, but I ask for explanations on just about every line so I can understand and when I think the answer is bunk, I go looking for better documentation.
As someone who works in a university though, those young coders are ALSO using LLMs, which enables them to gain competence and experience faster. We basically have to switch our training methods from getting the mechanical processes into memory to evaluating problems from higher up the abstraction layers. Young coders learning off of code they can actually run is light years faster than searching for a StackOverflow answer, then failing to set anything up because OP couldn't be arsed to make a MWE.
Bro, strong disagree. Teaching kids to think / understand there are layers of abstraction is the hardest part of making people actually productive.
To be clear - those young coders are NOT "learning off code" from the LLMs. They're checking off boxes: it compiles, 90% code coverage, A/C are satisfied.
How is that different from production work, you say.
Granted. But college profs aren't code reviewing from the standpoint that they'll have to maintain the shit 5 years from now.
so code review. That's why that exists. It's much easier to teach someone what does and doesn't work from code that actually runs, rather than waiting for them to sort out setup and the minute details, and THEN learn software engineering. Any org that has the bandwidth to code review everything for 5 years down the line, should have the bandwidth to shepherd newbies through coding with LLMs.
That's a very interesting position, possibly borne out of your place in academia. You fundamentally misunderstand what code reviews are for in a production environment and exactly how professional software development shops work.
Really feels like I'm having a conversation with leadership about how using CoPilot is (or is not) going to allow my devs and QA to deliver 2x the story points each sprint.
And it's starting to look like the math on savings is actually becoming a net loss. AI code requires so much supervision and correction that coders using it are actually less productive.
I use it plenty to speed up processes, mostly mundane loops; it just saves time typing shit out, and generally it does that well and it's easy to check. Anything complex though? You need to review every little thing it does; you seriously can't trust it not to pull the kind of bullshit it does in the clip, so it ends up being faster doing it yourself
The people making these choices can only understand immediate savings, which LLMs WILL give them because they won't have to pay workers. Any costs accrued, monetary or otherwise, will be ignored until they become a very present problem, at which point they will blame someone else or simply get "fired" with a golden parachute
Same issue that happened with the "no-code/low-code" frameworks from years ago that I'm now replacing. I expect in 5 to 10 years (assuming we're not in a Mad Max style apocalypse) to be doing the same for AI/vibe-coded apps... assuming they don't cause the owning entities to crash and burn due to security flaws long before that
Nice seeing others think that way. I thought I was the equivalent of a crazy cat lady for thinking one of the realer problems of AI is that we're becoming dependent on it by cutting off our supply line of knowledge workers
The LLMs and tools advance very quickly. Your statement would have been correct 6 months ago, but now it's a little dated. Claude Code (different from regular Claude), for instance, can do a lot with very little instruction. The newer reasoning models and MCPs help a lot. It's basically AI using AI to understand, break down, implement, and verify tasks independently and repetitively.
Yeah, it's a bit surprising how quickly it's come. Last year I was using it to write simple loops parsing strings. This year, it gave me code to package a forgotten C++ library into python and even debugged a namespace issue that occurred because of the packaging library. Even the loops it gives me now correctly handle corner cases.
TeslaSupreme
But sure.. ai coding is vibe coding i guess..
blzrdphoto
Saucy sauce. Alberta Tech. https://youtube.com/shorts/ql56K3sveqo
DrKonrad
My students don't want to learn coding any more, because they are persuaded that AI will produce all the code they need in the future. I'm gonna show them this video
UltraHellboy
A friend of mine is a high-level programmer. Think architect level. He uses AI tools a lot, but he said he had to study A LOT to figure out how to use them right. He said AI for programming is only good when the user is very experienced with it.
shawnmilo
You can't do anything *with* an LLM ("AI" doesn't exist) that you can't do *without* one.
gnomegenome
If you can't code without AI, then you can't code.
eetsumkaus
Fine by me. I'm not a coder by profession (Computer Scientist). It helps a ton.
CedricDur
I've used it to code simple things. Even so, it took me about 3-4 hours and I had to repeatedly add 'do not summarize or truncate code' because, yes, it kept summarizing all that boring code into '// put code here'
IlluminaBlade
Oh the security fiascos from people using AI code will be glorious.
RevRagnarok
I'm moving my money tonite; it's on my TODO list. https://developers.slashdot.org/story/25/07/17/1918220/robinhood-ceo-says-majority-of-companys-new-code-written-by-ai
lightfoot2
Yep, job for all eternity in security.
tesseract4d2
I tried to get it to write a custom loot table for Minecraft. It got the versioning wrong. Then it got the file path wrong. Then it corrected itself on the file path, but got the versioning wrong. Then it got that right, but got the mechanics of dropping cooked meat wrong, and told me it was only possible to drop cooked meat if the player is on fire when the mob dies. Then I told it how the default loot table manages to do it, at which point it agreed and then got the versioning wrong.
JerBearington
Luckily management isn’t pushing us to involve AI in the coding process, so far. Hearing a lotta anecdotes like this
drinkthederpentine
Like every technology ever, it has strengths and weaknesses and some key limitations. There are absolutely tasks at which it excels, but people keep expecting miracles. It is very useful for creating documentation from code, especially if you leave it hints. It can also do a pretty good job of writing unit tests for code you wrote. If you have stringent, documented code style guidelines or rules, you can have it do a useful first pass on PRs.
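As a sketch of that unit-test use case (Python; the `slugify` function is invented here, standing in for "code you wrote"):

```python
# A small function you wrote yourself...
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# ...and the kind of first-pass tests an LLM can draft for it.
# They are cheap to review against the function; you still check
# for cases it missed rather than trusting the list is complete.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_extra_whitespace():
    assert slugify("  Hello   World ") == "hello-world"

def test_slugify_already_lowercase():
    assert slugify("already-lower") == "already-lower"
```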
Thojira
Reminded me of how using OpenAPI norms and tools to generate pages with interactive documentation of your API was a novel revolution. AI is nice, when used by people that already know what they are doing
normalizebeingalone
Using AI, like chatgpt, wastes natural resources. Just do your job.
shawnmilo
AI doesn't exist. They're LLMs. And they're useful as tools, but are MUCH stupider than you (even if you're *really* stupid).
unluckyandbored
If you are using "AI" to do anything even remotely important, you're a moron.
shawnmilo
AI doesn't exist. They're LLMs. And they're useful as tools, but are MUCH stupider than you (even if you're *really* stupid).
iamgnat
I was with it until the sending it for review bit. I know there are idiots that actually do that, but you can't blame the LLM for that since you clearly didn't bother to review the code yourself.
The only thing worse than a programmer that doesn't review what an LLM writes for code are those that accept an LLM's code review and approve without looking themselves.
Cataleast
Review?! Surely, the AI knows what it's doing! It's 4:50pm on a Friday! It's drinks o'clock, bud! Push to prod! Chop, chop!
phobosorbust
except by constantly letting LLMs spit out code-blobs instead of working to understand the problem and its solutions, we're rapidly reaching a point where AI-assisted developers are no longer capable of reviewing code
instead of developing coding skills, they are developing LLM-wrangling skills.
iamgnat
Yeap. I fear we are barreling towards another "get old programmers out of retirement" mess like Y2K after this "just use 'AI'" garbage hits its peak. They'll need those of us with real talent to come in, figure it out, and then clean up the mess.
The really fun thing will be that there will be less non-LLM-generated code out there as this goes on, which will mean LLMs learning from LLMs. We know how that goes...
ApothecaryGrant
AI should be a time-saving measure and nothing else. If you rely on it to be accurate and correct, you will be sorely disappointed. If you rely on it to be creative and innovative, you will be sorely disappointed. If you rely on it at all, you will be sorely disappointed.
Treat AI like a chef treats a microwave oven. If you find a niche use-case, then great I guess. If you have to use it to cook every meal, then you're probably running an Applebees and you should reconsider your life choices.
eetsumkaus
AI has done exactly that for me though. Just got through a project in 1/3 the time it usually takes me because of AI, including debugging and writing code that isn't the usual standard boilerplate. It even gave me ideas to improve my algorithm when I was stuck on a problem. I'm not a dev by trade (Computer Scientist), so it helped a LOT when you don't have all these things memorized. The trick is to only make it produce code that is easy for you to check.
ApothecaryGrant
As long as you know that you're playing with fire. If it's just rearranging things you already understand, it's a time saver. But if it provides code that works, and you don't understand how it works, that's what gets you in trouble. Debugging finicky code you didn't write is like being swept out to sea without a paddle. And god knows where the AI copy/pasted that code from to begin with.
eetsumkaus
I don't really think it's playing with fire. After all, it's just a step beyond finding random code or code fixes on Stack, or even coding in something you yourself are unfamiliar with. Closure processes exist for a reason.
ShadusMacAoidh
There's a lot of times I need a snippet of bog standard code that I haven't used in half a year or more... like an SQL connection string. Doesn't come up a lot in what I do. Used to be, I'd go out to Stack Overflow, look one up, remember how it works, and configure it for what I'm doing. Now I ask the AI, remember how it works, and clean up the AI's suggestion. It takes about the same amount of time, maybe a hair less. That's my niche use case.
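For illustration, the sort of "bog standard" snippet in question, sketched with Python's stdlib `sqlite3` so it stays self-contained (a real connection string for your server and driver will look different; the table and function names here are invented):

```python
import sqlite3

def fetch_user_count(db_path: str = ":memory:") -> int:
    """Open a connection, ensure a table exists, insert a row, count rows."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)"
        )
        conn.execute("INSERT INTO users DEFAULT VALUES")
        (count,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
        return count
    finally:
        conn.close()
```

This is exactly the kind of boilerplate that is easy to verify at a glance once it's in front of you, which is why the "ask, remember, clean up" workflow works.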
ProfessorBanesworth
I feel this. I'm a tired, cranky, old software engineer trying to free up more time. I've never liked or trusted AI, but keep hearing about people completing actual projects with it, so I figured I'd try. This video doesn't even come close to covering all the nonsense I got out of Claude and the amount of frustration I felt. I finally just cancelled my sub... instead of making me more productive, it made me less productive.
Hexidimentional
Because it's autocomplete, it's not intelligent; it cannot solve a problem. It just gives you the statistically probable answer, the average. We're programmers; we know the average is nonsense
fumptrucker
I read about a recent experiment that compared the time required for a programming task with and without AI assistance. The task took longer with AI assist because of the time required to check and debug the AI product.
ValhallaPaperBoy
https://media2.giphy.com/media/v1.Y2lkPWE1NzM3M2U1d3h0M2U4cHJxNnpodWhjYXl4NW96cHRiZTIydXZkYXcyanM5ZXV0MCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/i2I3sLKrJQsfvv0Tyc/200w.webp
spliffen
this is so true, its not even funny
seheim
Except you don't send it for review until you've reviewed it yourself. That's true even if you write it without assistance.
spliffen
of course, was referring to how bad llm's are at "helping"
gsynth
Emperor's clothes so much
jt42
Maybe use Claude? I haven't had the problem where the AI does completely the wrong thing (removes the function?); it does something close but with bugs, or verbosely.
spliffen
feed enough lines, and it will happen
shawnmilo
Today I asked Claude to fix an error where a zip file could not be read. It added a check for the extension being `.zip`, wrote an error message saying the file wasn't valid, and called it a fix.
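A sketch of the gap being described (Python stdlib; function names invented): checking the file name is cosmetic, while a real check inspects the bytes.

```python
import io
import zipfile

def looks_like_zip(filename: str) -> bool:
    # The cosmetic "fix": anyone can name any file .zip.
    return filename.endswith(".zip")

def is_actually_zip(data: bytes) -> bool:
    # An actual validity check: inspect the content itself.
    return zipfile.is_zipfile(io.BytesIO(data))
```

A corrupt or mislabeled file passes the first check and fails the second, which is why the extension check fixes nothing.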
TheLuminousBanana
Forgive my ignorance but is this what they refer to as vibe coding?
anonymous
Vibe coding is when you let an AI write the code for you. Or like the other person said, it's exactly what you see in this video.
PushPullMagnet
Yep.
KevinStrexcorp
Not really. Vibe coding is "I have no idea how this is working but I've explained what I want at a feature level, and you made SOMETHING that seems to work kind of right. I will ask no more questions". This is more like an assistant for people who know broadly what they are doing but want it done quicker. As a Dev with 10+ years experience, they're useful when something is going to be annoying to fix manually, but I can check what it does for correctness. Never trust the machine on its own.
drinkthederpentine
Yes
givemepickles
I love your username.
L4t3xs
Vibe coding is using only AI to create crap code - fast. If something does not work, you ask AI to re-do it. All these new vibe "coders" are completely unable to make safe and performant software. You might get a demo out fast, but it just doesn't work for any business use.
Skeksify
Vibe coding is using a code editor such as Cursor or Windsurf to prompt your way to finished code rather than typing it yourself. It shifts the majority of the effort to reviewing rather than typing, which is a lot faster if you know how to do it efficiently and review and test the code well.
We moved all of our 25 developers to Cursor and the productivity is through the roof. Devs have written crap code since the dawn of coding, proper reviewing was always the answer and it still is.
0xDEC0DE
Well, yes. Most code in production is mediocre, buggy, insecure crap. Which all the LLMs have been trained on. What else would you expect them to emit? If you've got a decent review culture (the most effective strategy to date I've seen was "has anybody besides the author observed this code working?") you may as well let the robots fart out the code since they're faster at it
pothocket
Tony Stark is a vibe coder
Ijustheartthings
This is exactly what vibe coding is.
Kryppers
Not exactly.. Vibe coding was originally a tech term for a quick and messy implementation to test an overall idea (with the intent of coming back later to organise and refactor)
The phrase was then co-opted by the slop jockeys into the new context of just letting the AI do your thinking for you.
drinkthederpentine
This is exactly vibe coding. Vibe coding means building with AI and prompts first vs traditional development where you type the code itself. Vibe coding wasn't a phrase before this.
WiiShaker
Sounds like a good way to create lots of unforeseen bugs and unintended behavior in your code. Coding is very intentional. Compiling and testing is kind of the name of the game. I could see AI being great for slapping something together, but picking the bugs out would offset the time it saved you, imo.
comacomacomacomachameleon
BACK IN MY DAY vibe coding was the 3am Red Bull slamming eurobeat jamming work sessions
dasAchteck
There still exists a comment from a senior dev (i.e. founder) that reads basically "yeah, this is a weird way to do this, but I've been up all night listening to techno, and I need a shower." I will reject any PR that attempts to alter it.
zushiba
Vibe coding doesn't necessarily mean good vibes... I mean.. It's coding.
SpammersAreScum
I'm sure you've heard of the lazy, incompetent lawyers who used AI to compile citations relevant to their case? They submitted that to the judge without checking them, and -- surprise! -- some of the citations don't exist. The judge was NOT amused.
applesforjuice
Don't worry, someone I know was hired to write legal textbooks using AI. He knows nothing about the law. They're literally giving degrees to lawyers who study AI legal books. We're fucked.
Colopty
"Some" is underselling it. The judge specifically described the lawyer's submission as, and this is a direct quote: "replete with citations to non-existent cases".
GeekyLina
Based on the experience some of my colleagues got with AI assisted coding while dealing with things like improving code compliance to X or Y, there was never a situation where AI provided a better answer than mine, in the best case scenario it was the same thing or something equivalent and I am not even a great programmer :|
EvPointMaster
Maybe this is just an edge case, but it doesn't seem bad https://www.youtube.com/watch?v=20s9hWDx0Io
AveryLynel
It's good at optimizing existing code. It's REALLY bad at generating new, working code from natural language, as shown. I was tasked with using a tool to build a front end since I am NOT a front end dev. The tool struggled with making changes to a scroll bar it made to fix its hideous mouse-over behavior. It was flashing bright white, old school rectangular when it was otherwise a styled dark-mode implementation. After an hour, "Oh, it looks like we weren't using the component I've been editing"
Forosnai
I was tired and just wanted to add something to a bit of python script for a Blender add-on to make a little button to add a category, identical to the one that already existed in another part of the UI, but I'm not very familiar with python and so it takes me a bit. So I tried asking ChatGPT to do it, gave it the code I'd already done, and said "don't otherwise change the UI". Every single time, it changed the UI. Sometimes removed it almost entirely. Took me maybe 10 minutes once I'd slept.
Youhavinagiraffe
We had some presentations about how to use AI effectively with coding, and the main takeaway seemed to be that you had to already know exactly how to write a bit of code to be able to get anything out of it and be sure it didn't just make things up. At best it can be used as an alternative to delegating a task to an inexperienced junior colleague... and doing that in the long term is self-defeating, because all experienced coders need to have spent some time writing code as inexperienced juniors
RatsLiveOnNoEvilStar
I look at using CoPilot (and whatnot) the same way I use Stack Overflow, but I can ask more specific questions. I still verify it actually does what I expect and look for places to improve. It's great for boilerplate solutions, but verify, verify, verify
Stringgeek
My mum asked what the issue with AI was and I told her remember when any of us kids were toddlers.
TheForgewright
Claude's been pretty good, but I'm also just tinkering with code for a hobby and letting it do the boring things like the alpha UI layouts so I can focus on methods behind the scenes. It also does a pretty good job of keeping data management up to date.
Do not try to get it to do anything more logically complicated than basic arithmetic though, it's a little loopy.
MisterFluffi
Yes, but does your way of doing it use a bathtub of fresh water and a full car battery of power, I think not!
BishlamekGurpgork
I have found that AI assists can be useful in learning how to use certain things, but you always have to gain that understanding and review the entire product before using it.
Like, I've learned a lot about VBA scripting in Excel, enough to use it without an AI assist, but I usually start by asking the AI how to go about something, because sometimes it'll show me something new.
The real trick is recognizing early when you're dealing with a code problem that fights against AI biases.
IconicM
I've only used it a bit, and basically found it useful as a starting point for something I don't know how to do generating relatively simple scripts. I don't even use it within the project, just ask it to generate a function that does X then take it and fix it.
0xDEC0DE
I've done the same, with wildly mixed results. I like to think of it as a highly motivated, extremely prolific, and almost completely incompetent junior programmer. Except instead of them taking two weeks to fuck up the implementation of your spec, it now happens in seconds
Oogibah
Sounds like you might also be suffering from mild imposter syndrome
FartyMcDumpstein
AI is a tool, and just like in every other situation in the history of mankind, a tool is only as good as the retard holding it in the wrong direction. When I became a software developer 20 years ago, the first thing I did was build tools to make my life easier. The fact that we have developers here using tools generated for them instead of using the API to build their own tools speaks volumes to the quality of developers, not the tools.
Navrodel
LLMs can really easily spit out templates and common code blocks - tasks that the data that they were trained on has in spades.
When the codebase gets too big for its input, it breaks.
When you ask it to do something that's unusual or novel, it breaks.
When you ask it to make a small change, but keep everything around it the same, it breaks.
You can either spend days trying to prompt your way around the limits of LLMs, or you can write it yourself and know you can trust the code.
SeditiousBit
It's good enough for boilerplate code. From there I validate and edit, which still saves me time in the long run because I can review faster than I can author. I really think of it as a souped-up version of autocomplete and nothing else.
CarpoolTunnelSyndrome
One trick is to ask it to document the code in a plain text file. Like a design doc outlining all the features and caveats. Then start from scratch and tell it to follow the design doc. It's still not viable for large projects, but smaller ones can be kept on track like this.
AlabamaNerd
I am an expert at neither AI nor programming, but I have a question: in the case of the ‘small fix’ wouldn’t be better to just do it yourself than tie yourself into knots trying to get AI to do it?
FartyMcDumpstein
It's almost always easier to solve the problem yourself than use AI, but that's only with sufficient experience. Someone that AI-coded themselves into the problem with no knowledge is almost guaranteed to be more successful AI-coding their way out of the problem.
CarpoolTunnelSyndrome
Coder here - "small" can be relative, but generally you're right. The same kind of applies for big projects and big fixes. But then there's isolated stuff that's easy to test but hard or boring to develop. Things you might not think are worth your time but are really nice to have. There's some advanced physics interactions I'm too dumb to program myself but can see whether they work correctly instantly. For those kinds of isolated testable tasks, it's great!
GeekyLina
To add a bit of context in the situations I was referring to the objective was to change existing code to improve compliance to certain rules due to issues that had been detected by static analysis tools, not to make code from scratch. So my colleagues were essentially trying to check if they could get a more adequate solution than the one I had suggested after analysing the problem for a couple minutes. That is why the response being the same or less adequate is an issue here.
MenloPart
I am not a coder. In fact, I dropped C++ at two different schools. However, I play an 80s game called Tradewars 2002, which is so repetitive, someone named Xide created a scripting language called TWX Proxy. I have tried to write scripts with very limited success for 15 years or so. I'll ask an LLM to write a script in TWX Proxy and it will immediately produce a script which looks awesome for a fraction of a second.
"That says Python."
"That says perl."
"That looks like TWX Proxy, but x, y, and z are invalid commands."
"That is not how you use these commands. This is what the script resource says."
"I figured out 90% of it, but now I have this problem."
"What did you do?! It doesn't compile anymore!"
Then either I figure it out or run out of time.
Unfortunately, my scripts never seem to work the next time I try them.
danlei
What the AI takes 500 lines of code to do, you could probably do in 50.
drinkthederpentine
Oh I didn't know it only writes Java
hiyo365
And by Java, it means.. Bad $0.99 gas station java.
That you still have to go to the gas station yourself to pour. Boom! The future is here! And it tastes like $0.43! Blech!
grosscol
And will helpfully add a couple of logic security flaws and a potential race condition for free!
ZackWester
From what I hear, the way to use an AI coder is to ask it to give you an IF statement and then have the actual programmer fill in the details (okay, an if statement might be a bit too easy, but something like that, because darn it if I can't remember how to write a standard function and I can't be bothered to get the programming book to look it up).
grosscol
Burning the money and power used by LLMs to essentially look up syntax is like swatting flies with hand grenades.
ZackWester
Yep, but that is the level the AI can do currently; anything beyond that and the AI breaks down.
StarSumiaki
Pssh, the AI doesn't do it in 500 lines of code. It does it in 15 lines of code with 7 libraries it hallucinated into existence.
UWAGAGABLAGABLAGABA
This is the correct one.
shiftingbits
It also included your OAUTH token in commit along with using an insecure version of a library that has been compromised for years.
Asadsadsadclown
While also opening up vulnerabilities through the hallucinated libraries as hackers start building fake libraries ahead of the AI.
RatsLiveOnNoEvilStar
That hackers then deploy libraries called those hallucinations so you download malware and include it in your project.
hiyo365
AI in the sky with diiiaammooonnnddsss!!
Frederf
If I had more time I'd have written a smaller program.
SaphraxTheFirstVillain
Use an Anthropic Model, they can actually generate code decently. The Google models don't do a bad job either. Just remember that AI can't design architecture and it is about as skilled as the worst coder in your graduating class copy and pasting from Stackoverflow.
jt42
Yeah, it's good to comprehend/summarize code and refactor code, but for new code --- just to get you started. Helps me get started on a task I'm not eager to start because I don't remember the APIs. At least I'm not starting with a blank sheet.
danlei
I use Anthropic actually! It's by far the least infuriating but it does get under my skin a lot of times. I tried Gemini and it was the worst one out of the 4 I tested. Maybe it's better nowadays. Your analogy is pretty much spot on.
SaphraxTheFirstVillain
I was a hater for the longest time until I tested out Copilot a few months ago. It was saving me like 20 min a day, so I changed to Cursor and started testing Claude 3.7. I significantly decreased the amount of time I was spending reviewing generated code as it had far fewer stupid errors. Now Gemini 2.5 Pro is my goto, but it is too expensive to use all the time. If I am using chat I always get it to explain what it is doing with sources. It is a powerful tool in a professionals hands.
danlei
Oh yeah, there are definitely areas where it will save time and even solve issues faster than I could figure out. My friend, who's a veteran senior programmer, gets more mileage out of even ChatGPT than I do from Anthropic's pro model, for the same kind of work. I'm just self-taught and an amateur compared to him. I will look into Gemini 2.5 Pro. I appreciate the shout!
schizznatt
My husband has had the opposite experience. AI has been fantastic for writing code.
6313326
I'm betting experience varies pretty wildly based upon which model you're using - they seem to range from actively harmful to useless to being "a knowledgeable and motivated assistant who also happens to have a major head injury" (in the words of someone I know).
Colopty
That's saying something about your husband for sure.
tantallous
Ugh. Dang work making me miss the timing here. Maybe I should have had AI do it for me... lol
CarpoolTunnelSyndrome
I find it useful too. It's helped me get in the habit of writing much cleaner code.
eetsumkaus
it automates a lot of the less familiar processes and gives you more time to think about how you'd structure the whole thing.
CarpoolTunnelSyndrome
Yea, and more generally it uses best practices, effective shorthand and comments in the code it generates. It's hard to not immediately see the benefits of writing code in a similarly effective way.
eetsumkaus
yeah, I was a bit surprised since I haven't used AI to code since 2022, and now it comments much better than I do, and even handles corner cases I haven't thought about! I come from an Electronic Design Automation background though, so I went through my education and training used to a machine doing technical work for me. It might be harder for people who come from a pure coding background to come around.
porphyre1e00
What does your husband do for a living?
schizznatt
He is an economist and coded extensively for his PhD. He is extremely good at what he does and would not settle for shitty code.
porphyre1e00
An economist wouldn't know shitty code
CarpoolTunnelSyndrome
Works on imgur servers
WizBardBarian
Copies and pasted shitty code, apparently
eetsumkaus
That goes for most of us who only have to code on the side, really. Imagine a system architect who can get a model off the ground before it gets to the devs. It opens up possibilities for more complex systems, and iterating on them before the specs hit the devs reduces the turnaround time when they find a bug in the spec.
jfd8u438fdsfkds
The point isn't that it's "better than yours". The point is that it does it for you.
sofako41404
The actual point is if you can have it do your work for you, then there is no point in not replacing you with it.
jfd8u438fdsfkds
Yes, that's the goal!
LordOfThePenguin
Which is only helpful if what it produces is useful.
jfd8u438fdsfkds
and it is useful
freakdiablo
Remember one of the main rules of coding - computers make very fast, very accurate mistakes. Now apply that to AI.
RageZamu
Saving this for reasons (am senior engineer).
thatkoreanguy
LOL, love it
thatsbadmmkay
It doesn't have to work to replace good workers. AI execs just need to convince idiot capital investors that it can replace workers. Nothing is going to function anymore, and we'll still get billed for broken services.
Darprice
Yeah, this is the scary part. Even worse when I see people thinking it is great. Take notes and write a recap? Sure. Google something? Sure. Turn full code writing over? I still trust my juniors more, and only have to check their code twice. AND at least some of them will go on to be seniors, or switch to other IT areas where they will thrive. But people believe "anyone can do it", which might be true... but can they do it well? Right. And they are still better than AI
Fidregore
This is about how Millennials and Gen Z can't afford to have kids
ThatOtherTransGirl
Don't use AI to write your code.
MaleProstateMilker88
But in theory it saves corps money, and people who feel entitled to other people's work - including corpos - get to use other people's work for their own benefit while participating in the exploitation of labour, which regular citizens don't often get to do since they're usually the ones being exploited, so everyone benefits, especially the corpos, and we need to think about the poor execs and their billions. If they had to pay for work, they'd lose money :(
danlei
I spend more time fighting and fixing what the AI messes up than the time "saved" using AI for coding. I swear to god I have never gotten as angry in my life as I have when having to deal with stupid AI for work.
Forosnai
I'll give AI credit, trying to use it to make a simple python script got me to learn a bit of python, because it was easier to learn it from scratch than constantly fight to get little changes without it inexplicably changing other things for no good goddamn reason.
danlei
Pretty much same. It was faster to learn more of C# than fight the AI. It can still write basic code faster than me and solve some issues I can't. And troubleshoot my mistakes on many occasions. Most of the other stuff I have to do myself from scratch.
CarpoolTunnelSyndrome
Same.
CatChef
https://media0.giphy.com/media/v1.Y2lkPWE1NzM3M2U1a2JhaTgwbjB3NGU4aWJ3N3Q0cXpiNWViNGhrMTVqeWtheWdreHQxbyZlcD12MV9naWZzX3NlYXJjaCZjdD1n/kvmGozJIFULg91pxsv/200w.webp
Columbus43219
YES! Janet, is this just a cactus?
iamgnat
If LLMs were like normal Janet, that would be awesome.
What we have are early iterations of Derek.
Evenmoreuselessname
What're you talking about? Derek is rad!
iamgnat
He was rad after a few million reboots. Initially he was a fucking moron, which is about where we are with LLMs.
JohnSatclaire
You are massively insulting Derek with that comparison.
CitrusyGarlic
Can someone please come up with a new term for the thing we have now that we call AI that's not actually AI at all?? Comment after comment on this post about how bullshit AI is, yet we still insist that the Intelligence part of the term Artificial Intelligence is valid. It's artificial sure, but it's not intelligent, why are we giving it that honorific title?
lightfoot2
It's called marketing crap and it makes companies a lot of money. Remember OLE? Cyberdog? Yeah
CedricDur
It already exists. It may be pedantic, but the term is LLMs. It's the only term I use for them since there is no intelligence in there.
shawnmilo
This is the only correct answer.
brazzy42
That ship sailed long ago. The term has been used for decades to describe many things where it was FAR less appropriate.
shawnmilo
Two hills I will fucking DIE on: The things called "hoverboards" and "AI" are FUCKING NOT!
nihiltres
I’m afraid you’re incorrect; it *is* AI, just that phrase technically covers technologies dating back to the 1940s. Video game NPCs use pathing algorithms that are AI. Intelligence can be narrow and shallow, and the latest models still meet both those descriptors.
It’s not what sci-fi calls “AI”, which is more properly called artificial general intelligence (AGI).
I get that people feel the current stuff doesn’t deserve the name, but it’s what the *field* is called.
shawnmilo
There is no consciousness. Therefore, there is no intelligence. "AI" does not exist.
CitrusyGarlic
Two wrongs don't make a right! So a mistake was made....FIX IT! I'm asking for a NEW term that would accurately describe what we have at the moment, not an additional explanation for why we ended up at this shitty place
SteveMND
You would think people involved in the world of programming would be more familiar with the idea of GIGO.
CitrusyGarlic
The programmers are, their bosses with business degrees are not
phobosorbust
a developer would understand that, yes.
a bot operator cosplaying as a developer would not.
realrealluckless
Do not entrust to AI a task any more complex than you would give to the high school intern who just happens to be related to a management level employee...
ProxyPlayerHD
I find LLMs really useful for writing scripts and such, because it takes the LLM a few seconds to write something that would take me hours of "how to do x in powershell" searches (because all my experience is in C/ASM). Also, I noticed that the more detailed and explicit the prompt, the more likely you are to get functional code on the first try. Ironically, that means you already need a programmer's mindset to think of a full solution to your problem so you can describe it in detail to the LLM.
ProcrastinatingWork
Just make sure you check their work before you show anyone!
RevRagnarok
I think of it as "a well trained pigeon" but an intern works too.
iquestionthepinappleeveryday
Treating LLMs as a place to easily search for information and get help with problems you're struggling with has been the only useful way I've found to use them. And even then, I reword my phrasing or continue to ask more questions to make sure it understands exactly what I'm looking for. I would not trust it with my own work.
MagicalScientist
Doesn't the follow-up required fact checking completely negate any usefulness it might have? Like, you can take the entire AI step out and NOT have to keep asking follow-up questions (when the AI might change its mind and start making up new "facts").
iquestionthepinappleeveryday
Oftentimes it is still faster than "googling". I used it heavily for symbolic logic when I couldn't understand the next step. Not to solve the entire thing for me, just to help me see where to go next. The follow-up questions were absolutely necessary because it doesn't always understand the parameters of, say, Fitch, or how the problem is being worked out.
iquestionthepinappleeveryday
But in even less technical queries, it will give you answers that are almost correct, but maybe not the one you are looking for. Especially if it involves stuff that has a ton of information. For example, try asking it about a very specific song or instrument that the Beatles used during a performance. It will give you an answer because they used so many instruments, but if you really wanted to nail it down it may give you the wrong answer over and over.
TheInternetNeedsMoreCats
ChatGPT couldn't even make me an accurate grocery list based on 4 recipes I gave it. Not faster than I could, anyway.
Colopty
The problem of course being that the only reason you're handing tasks to a high school intern is for the benefit of the intern since they need the experience. Giving a task to an AI isn't really nurturing the next generation of workers so at that point there's just no reason to get it involved.
benderfreak
...except for the point that you just clarified; training the AI.
Colopty
The AI is not in training mode when you're giving it tasks.
johnxbear
Sooooo mopping the floor then? Can AI do that? That's what AI should be doing.
drinkthederpentine
There are indeed robot mops
Fn0rd
Give it AI, fly the marketing team to the client's location, extend the business expense account so marketing can wine and dine the client and improve the relationship, and shelve the product. Everybody's happy, and on paper we created 20,000,000 of market value, so the shareholders are also happy. EzPz lemon squeezy, welcome to the world of corporate economics, where nothing matters and stocks always go up.
LocalSemiFriendlyRabbitGal
RoboMop, the future of cleaning! Your move, creep.
jekath
I use it as an advanced rubber duck. And it gave me a really neat idea using rxjs for limiting the number of calls to a backend. I was really, really happy with that solution. But so far, I don't let it touch my code.
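[Editor's note: for readers curious what that call-limiting idea looks like, an RxJS pipeline would typically use an operator such as `throttleTime` or `exhaustMap`. Below is a minimal plain-TypeScript sketch of the same throttling idea, written without the library so it runs standalone; all names are hypothetical.]

```typescript
// Minimal sketch of limiting calls to a backend: at most one forwarded
// call per windowMs. (An RxJS version would use throttleTime instead.)
// All names here are hypothetical, not from the original comment.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  windowMs: number
): (...args: T) => boolean {
  let last = -Infinity; // timestamp of the last call that went through
  return (...args: T): boolean => {
    const now = Date.now();
    if (now - last >= windowMs) {
      last = now;
      fn(...args); // forward the call to the backend
      return true; // call went through
    }
    return false; // suppressed: still inside the throttle window
  };
}
```

Wrapping the backend call this way means a burst of UI events only triggers one real request per window, which is the effect the RxJS operator gives you declaratively.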
DarthKillian
I just recently started using AI to help me with some hobby hardware code. I don't let it do everything, though. I already have existing code, then ask for best practices with structuring, and if something doesn't work in my circuit, I use it to help troubleshoot and bounce ideas off of. But all of the code is me, and I still leverage forums, YT, and sites to help with my code/circuit design.
etcnotect
It's definitely helpful for learning to code. It often gave me better ways to do things than I was doing. If you just copy and paste code, though, and hope for the best, you're going to have a bad time. It's also really good for finding stupid errors that you keep overlooking.
TheFoodBaby
AI, if used correctly, is a great tool, but it can't give you a solution unless it's been trained on your context. You can do some advanced code with AI, but you'll first have to write thousands of lines of code yourself and not trust the AI to do it for you. You can use Copilot and have it recognize your code and add suggestions as it learns from you as you go.
Claude, however, is pretty good at generating code, but again you'll have to write your own code or it will just keep making errors if you don't know how to correct it.
nihil777
Personally, I use it to help learn how to work with a language I am not familiar with, but I ask for explanations on just about every line so I can understand and when I think the answer is bunk, I go looking for better documentation.
thevicker
I mean... if it does get good at coding, boom. Singularity. Don't need the meat sacks any more.
thetinymonarch
I use it just to write basic Python scripts and to write simple but tedious Excel formulas.
Ricdesan
You are being WAAYY too ambitious with that craziness
Sorcatarius
I've heard, "Only trust to AI what you'd trust to a trained pigeon," and I feel that's a solid standard to have.
AnOceanOfStars
You mean the intern who's using AI and can't even be bothered trying to cover the most obvious traces of it?
lordnequam
I mean, what are you gonna do? Tell his management-level relative to fire him?
porphyre1e00
While this is a humorous statement, it's also 100% true and that's a big problem.
Current LLM tools need to be managed (given clear instructions and code reviewed thoroughly) just like a first-year college grad.
Meaning you must have someone competent with some experience as babysitter.
The problem is that the LLMs are taking a lot of the jobs that would traditionally go to young coders for them to gain competence and experience.
We're cutting off our own supply lines.
eetsumkaus
As someone who works at a university, though, those young coders are ALSO using LLMs, enabling them to gain competence and experience faster. We basically have to switch our training methods from getting the mechanical processes into memory to evaluating problems from higher up the abstraction layers. Young coders learning off of code they can actually run is light years faster than searching Stack Overflow, then failing to set anything up because OP couldn't be arsed to make an MWE.
porphyre1e00
Bro, strong disagree. Teaching kids to think / understand there are layers of abstraction is the hardest part of making people actually productive.
To be clear - those young coders are NOT "learning off code" from the LLMs. They're checking off boxes: it compiles, 90% code coverage, A/C are satisfied.
How is that different from production work, you say.
Granted. But college profs aren't code reviewing from the standpoint that they'll have to maintain the shit 5 years from now.
eetsumkaus
so code review. That's why that exists. It's much easier to teach someone what does and doesn't work from code that actually runs, rather than waiting for them to sort out setup and the minute details, and THEN learn software engineering. Any org that has the bandwidth to code review everything for 5 years down the line, should have the bandwidth to shepherd newbies through coding with LLMs.
porphyre1e00
That's a very interesting position, possibly borne out of your place in academia. You fundamentally misunderstand what code reviews are for in a production environment and exactly how professional software development shops work.
Really feels like I'm having a conversation with leadership about how using CoPilot is (or is not) going to allow my devs and QA to deliver 2x the story points each sprint.
Arkandos
And the same issues are hitting many sectors. You cut out the new blood, meaning you decrease the future supply.
bertchstudio
We are literally being attacked and doing nothing about it but trying to work with the broken tools they've given us.
I cannot stress enough how much all of this was planned from the fucking 90s y'all
dragoonwraith
I would trust a reasonably-enthusiastic college grad FAR more than an LLM.
MenloPart
They are teaching codependency?
januskincaid
Code-dependency, amiright? I'll see myself out.
iWillAlwaysBoopTheSnoot
Comment of the day right here. I can officially go touch grass now.
DMSledge
And it's starting to look like the math on savings is actually becoming a net loss. AI code requires so much supervision and correction that coders using it are actually less productive.
Z0op
I use it plenty to speed up processes, mostly mundane loops; it just saves time typing shit out, and generally it does that well and is easy to check. Anything complex, though? You need to review every little thing it does; seriously, you can't trust it not to pull the kind of bullshit it does in the clip, to the point that it ends up being faster to do it yourself.
LanceSackless
The people making these choices can only understand immediate savings, which LLMs WILL give them because they won't have to pay workers. Any costs accrued, monetary or otherwise, will be ignored until they become a very present problem, at which point they will blame someone else or simply get "fired" with a golden parachute.
Lynkfox
Same issue that happened with the "no code/low code" frameworks from years ago that I'm now replacing. I expect in 5 to 10 years (assuming we're not in a Mad Max-style apocalypse) to be doing the same for AI/vibe-coded apps... assuming they don't cause the owning entities to crash and burn due to security flaws long before that.
poscduke
That sounds like a problem for 5 years from now, meanwhile I was able to fire a bunch of people to make my stock options slightly more valuable! /s
PArthica2
Nice seeing others think this way. I thought I was the equivalent of a crazy cat lady for thinking one of the realer problems of AI is that we're becoming dependent on it by cutting off our supply line of knowledge workers.
6313326
Yep, this same creator has a bit about just that.
wildlycurious1
Link?
6313326
I went to go find it right after replying but can't remember her name
:(
TheSharpieOne
The LLMs and tools advance very quickly. Your statement would have been correct 6 months ago, but now it's a little dated. Claude Code (different from regular Claude), for instance, can do a lot with very little instruction. The newer reasoning models and MCPs help a lot. It's basically AI using AI to understand, break down, implement, and verify tasks independently and repetitively.
eetsumkaus
Yeah, it's a bit surprising how quickly it's come. Last year I was using it to write simple loops parsing strings. This year, it gave me code to package a forgotten C++ library into python and even debugged a namespace issue that occurred because of the packaging library. Even the loops it gives me now correctly handle corner cases.
coffeeandprozac
So, no tasks outside of sharpening pencils?
Scorpion451
And only if you want the pencils to be sharpened on the eraser end.