AI coding agents are removing programming language barriers
railsatscale.com | 145 points by Bogdanp 3 days ago
Counterpoint: AI makes mainstream languages (for which a lot of data exists in the training set) even more popular, because those are the languages it knows best (i.e., has the lowest error rate in), regardless of whether they are typed (in fact, many are dynamic, like Python, JS, Ruby).
The end result? Non-mainstream languages don't get much easier to get into, because the average Joe isn't already proficient enough in them to catch the AI's bugs.
People often forget the bitter lesson of machine learning, which plagues transformer models as well.
It’s good at matching patterns. If you can frame your problem so that it fits an existing pattern, good for you. It can show you good idiomatic code in small snippets. The more unusual and involved your problem is, the less useful it is. It cannot reason about the abstract moving parts in a way the human brain can.
>It cannot reason about the abstract moving parts in a way the human brain can.
Just found 3 race conditions in 100 lines of code. From the UTF-8 emojis in the comments I'm really certain it was AI generated. The "locking" was just abandoning the work if another thread had already started something; the "locking" mechanism also had TOCTOU issues; and the "locking" didn't actually lock concurrent access to the resource that needed it.
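Roughly this shape, if you want a picture of it (a minimal Python sketch of the pattern, not the actual code; the names are made up):

  import threading

  work_started = False   # shared flag; nothing actually guards it
  results = []           # the resource that really needed the lock

  def worker(item):
      global work_started
      # TOCTOU: the check and the "claim" are two separate steps, so two
      # threads can both see False here and both carry on.
      if work_started:
          return          # "locking" = silently abandoning the work
      work_started = True
      # The shared resource itself is never guarded, so even the "winner"
      # still races with any thread that slipped past the check above.
      results.append(item)

  threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()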
> UTF-8 emojis in the comments
This is one of the "here be demons" type signatures of the LLM code generation age, along with comments like:
  // define the payload
  struct payload {};
Yes, that was my point. Regardless of the programming language, LLMs are glorified pattern matchers. A React/Node/MongoDB address book application exposes many such patterns and they are internalised by the LLM. Even complex code like a B-tree in C++ forms a pattern because it has been done many times. Ask it to generate some hybrid form of a B-tree with specific requirements, and it will quickly get lost.
"Glorified pattern matching" does so much work for the claim that it becomes meaningless.
I've copied thousands of lines of complex code into an LLM asking it to find complex problems like race conditions and it has found them (and other unsolicited bugs) that nobody was able to find themselves.
Oh it just pattern matched against the general concept of race conditions to find them in complex code it's never seen before / it's just autocomplete, what's the big deal? At that level, humans are glorified pattern matchers too and the distinction is meaningless.
LLMs are good at needle in the haystack problems, specifically when they have examples in the corpus.
The counterpoint is that LLMs can't find a missing line in a poem even when they are given the original.
PAC learning is basically existential quantification... it has the same limits too.
But being a tool that can find a needle is not the same as finding all of them, or even reliably finding a specific needle.
Being a general programming agent requires much more than just finding a needle.
> The counter point is how LLMs can't find a missing line in a poem when they are given the original.
True, but describing a limitation of the tech doesn't justify the sort of sweeping dismissals we see people make wrt LLMs.
The human brain has all sorts of limitations like horrible memory (super confident about wrong details) and catastrophic susceptibility to logical fallacies.
> super confident about wrong details
Have you not had this issue with LLMs? Because I have. Even with the latest models.
I think someone upthread was making an attempt at
> describing a limitation of the tech
but you keep swatting them down. I didn’t see their comments as a wholesale dismissal of AI. They just said they aren’t great at sufficiently complex tasks. That’s my experience as well. You’re just disagreeing on what “sufficiently” and “complex” mean, exactly.
> humans are glorified pattern matchers too and the distinction is meaningless.
I'm still convinced that this is true. The more advances we make in "AI", the more I expect we'll discover that we're not as creative and unique as we think we are.
I suspect you're right. The more I work with AI, the clearer the trajectory becomes.
Humans generally have a very high opinion of themselves and their supposedly unique creative skills. They are not eager to have this illusion punctured.
maybe you aren't...
Whether or not we have free will is not a novel question. I simply come down on the side of us being more deterministic than we realize: our experiences and current hormonal state shape our output drastically.
Even our memories are mutable. We will, with full confidence, recite memories or facts we learned just moments ago that are entirely fictional. Normal, healthy adults.
Humans can't be glorified pattern matchers because they recognize that they aren't.[1]
[1] https://ai.vixra.org/pdf/2506.0065v1.pdf
The paper is satire, but it's a pretty funny read.
LLMs should definitely be used for brute-force searches, especially of branching spaces. Use them for what they do best.
“Pattern matching” is thought of as linear, but LLMs are doing something more complex, and it should be appreciated as such.
> it has found them (and other unsolicited bugs) that nobody was able to find themselves.
How did you evaluate this? Would be interested in seeing results.
I am specifically interested in the amount of false issues found by the LLM, and examples of those.
Well, how do you verify any bug? You listen to someone's explanation of the bug and double-check the code. You look at their solution pitch. Ideally you write a test that verifies the bug, and then again the fix.
There are false positives, and they mostly come from the LLM missing relevant context like a detail about the priors or database schema. The iterative nature of an LLM convo means you can add context as needed and ratchet into real bugs.
But the false positives involve the exact same cycle you do when you're looking for bugs yourself. You look at the haystack and you have suspicions about where the needles might be, and you verify.
> Well, how do you verify any bug?
You do or you don't.
Recently we've seen many "security researchers" doing exactly this with LLMs [1]
1: https://www.theregister.com/2025/05/07/curl_ai_bug_reports/
Not suggesting you are doing any of that, just curious what's going on and how you are finding it useful.
> But the false positives involve the exact same cycle you do when you're looking for bugs yourself.
In my 35 years of programming I never went just "looking for bugs".
I have a bug and I track it down. That's it.
Sounds like your experience is similar to using deterministic static code analyzers, but more expensive, more time-consuming, more ambiguous, and prone to hallucinating non-issues.
And that you didn't get a report to save and share.
So is it saving you any time or money yet?
Oh, I go bug hunting all the time in sensitive software. It's the basis of test synthesis as well. Which tests should you write? Maybe you could liken that to considering where the needles will be in the haystack: you have to think ahead.
It's a hard, time-consuming, and meandering process to do this kind of work on a system, and it's what you might have to pay expensive consultants to do for you, but it's also how you beat an expensive bug to the punch.
An LLM helps me run all sorts of considerations on a system that I didn't think of myself, but that process is no different than what it looks like when I verify the system myself. I have all sorts of suspicions that turn into dead ends because I can't know what problems a complex system is already hardened against.
What exactly stops two in-flight transfers from double-spending? What about when X? And when Y? And what if Z? I have these sorts of thoughts all day.
I can sense a little vinegar at the end of your comment. Presumably something here annoys you?