AI makes you boring

marginalia.nu

282 points by speckx an hour ago


aeturnum - an hour ago

I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write," and I think that pretty much sums it up. Writing and programming are both a form of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at letting you skip the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.

JohnMakin - an hour ago

> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had

Honestly, I agree, but the rash of "check out my vibe-coded solution for perceived $problem I have no expertise in whatsoever and built in an afternoon" posts, and the flurry of domain experts responding with "wtf, no one needs this," is kind of schadenfreude, though I feel a little guilty for enjoying it.

josefresco - an hour ago

While I agree overall, I'm going to offer some mild pushback here: I'm working on a "vibe" coded project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues," AI frees my human brain to think about goals, features, concepts, user experience, and "big picture" stuff.

tptacek - an hour ago

That may be, but it's also exposing a lot of gatekeeping: the implication that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.

For actual prose writing, though, there's no question: don't let a single word an LLM generates land in your document; even if you like it, kill it.

kouru225 - 17 minutes ago

This issue exists in art and I want to push back a little. There has always been automation in art even at the most micro level.

Take, for example (an extreme example), the paintbrush. Do you care where each bristle lands? No, of course not. The bristles land randomly on the canvas, but it’s controlled chaos. The cumulative effect of many bristles landing on a canvas is a general feel or texture. This is an extreme example, but the more you learn about art, the more you notice just how much of art works via unintentional processes like this. This is why the Trickster Gods, Hermes for example, are both the Gods of art (lyre, communication, storytelling) and the Gods of randomness/fortune.

We used to assume that we could trust the creative to make their own decisions about how much randomness/automation was needed. The quality of the result was proof of the value of a process: when Max Ernst used frottage (rubbing paper over textured surfaces) to create interesting surrealist art, we retroactively re-evaluated frottage as a tool with artistic value, despite its randomness/unintentionality.

But now we’re in a time where people are doing the exact opposite: they find a creative result that they value, but they retroactively devalue it if it’s not created by a process that they consider artistic. Coincidentally, these same people think the most “artistic” process is the most intentional one. They’re rejecting any element of creativity that’s systemic, and therefore rejecting any element of creativity that has a complexity that rivals nature (nature being the most systemic and unintentional art).

The end result is that the creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets. Their audiences become prey for creative predators. They idolize the art because they see it as something they can’t make, but the truth is there’s always a method by which the creative is cheating. It’s accessible to everyone.

serf - 38 minutes ago

AI doesn't make people boring; boring people use AI to make projects they otherwise never would have.

Non-boring people are using AI to make things that are ... not boring.

It's a tool.

Other things we wouldn't say because they're ridiculous at face value:

"Cars make you run over people." "Buzzsaws make you cut your fingers off." "Propane torches make you explode."

An exercise left to the reader: is a non-participant in Show HN less boring than a participant with a vibe-coded project?

lasgawe - an hour ago

The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.

jcalvinowens - an hour ago

Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.

The number of people who I see having E-mail conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize the ten paragraphs into two sentences, is becoming genuinely alarming to me.

zinodaur - 12 minutes ago

Using AI to write your code doesn't mean you have to let your code suck, or not think about the problem domain.

I review all the code Claude writes, and I don't accept it unless I'm happy with it. My coworkers review it too, so there is real social pressure to make sure it doesn't suck. I still make all the important decisions (IO, consistency, style). The difference is I can try it out 5 different ways and pick whichever one I like best, rather than spending hours on my first thought, realizing I should have done it differently once I can see the finished product, but shipping it anyway because the tickets must flow.

The vibe coding stuff still seems pretty niche to me though - AI is still too dumb to vibe code anything that has consequences, unless you can cheat with a massive externally defined test suite, or an oracle you know is correct.
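
By "oracle" I mean something like this (a minimal hypothetical sketch in Python; fast_sort stands in for whatever the agent wrote): check the generated code against a slow-but-trusted reference on a pile of random inputs, and only accept output that passes.

    import random

    def fast_sort(xs):
        # Stand-in for the agent-generated implementation under test;
        # here, a naive quicksort for illustration.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (fast_sort([x for x in rest if x < pivot])
                + [pivot]
                + fast_sort([x for x in rest if x >= pivot]))

    def reference_sort(xs):
        # The oracle: slow but trusted, correct by inspection.
        return sorted(xs)

    # Externally defined check the agent's output must pass before it counts.
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert fast_sort(xs) == reference_sort(xs)
    print("oracle checks passed")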

max8539 - 26 minutes ago

Before vibe coding, I was always interested in trying different new things. I’d spend a few days researching and building some prototypes, but very few of them survived and were actually finished, at least in a beta state. Most of them I left non-working, just enough to satisfy my curiosity about the topic before moving on to the next interesting one.

Now, these days, it’s basically enough to use agent programming to handle all the boring parts and deliver a finished project to the public.

LLMs have essentially broken the natural selection of pet projects and allow even bad or not very interesting ideas to survive, ideas that would never have been shown to anyone under the pre-agent development cycle.

So it’s not that LLMs make programming boring, they’ve allowed boring projects to survive. They’ve also boosted the production of non-boring ones, but they’re just rarer in the overall amount of products

BiraIgnacio - an hour ago

One of the downsides of Vibe-Coded-Everything that I am seeing is that it reinforces the "just make it look good" culture. Just create the feature that the user wants and move on. It doesn't matter if, next time you need to fix a typo in that feature, it will cost 10x as much as it should.

That has always been a problem in software shops. Now it might be even more frequent because of LLMs' ubiquity.

Maybe that's how it should be, maybe not. I don't really know. I was once told by people in the video game industry that games were usually buggy because they were short lived. Not sure if I truly buy that but if anything vibe coded becomes throw away, I wouldn't be surprised.

taude - an hour ago

AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use it wisely.

EDIT: also, just as you create AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that helps preserve your voice and style. Don't be lazy just because you're leaning on LLMs.
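
For instance, a sketch of the kind of style prompt I mean (the wording here is hypothetical; adapt it to your own voice):

    Write in my voice: short declarative sentences, first person, concrete
    examples over adjectives. No filler openers ("in today's fast-paced
    world"), no em-dashes, no bullet lists unless asked. If you don't know
    a fact, leave a TODO instead of inventing one.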

TheDong - an hour ago

We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.

daxfohl - an hour ago

And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"

nemomarx - an hour ago

I've seen a few people use AI to rewrite things, and the change from their writing style to a more "polished" generic LLM style feels very strange. A great averaging and evening-out of future writing seems like a bad outcome to me.

overgard - 35 minutes ago

Totally agree with this. Smart creators know that inspiration comes from doing the work, not the other way around. I.e., you don't wait for inspiration and then go do the work; you start doing the work and eventually you become inspired. You rarely just "have a great idea": it comes from immersing yourself in a problem, being surrounded by constraints, and finding a way to solve it. AI completely short-circuits that process. Constraints are a huge part of creativity, and removing them doesn't mean you become some unstoppable creative force; it probably just means you run out of ideas, or your ideas kind of suck.

glitchc - an hour ago

It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.

Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.

The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a potential new privacy-enhancing technique for public discourse is a welcome addition.

Kalpaka - an hour ago

The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.

The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?

I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.

The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.

Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.

pelagicAustral - an hour ago

I 100% agree with the sentiment, but as someone who has worked on Government systems for a good amount of time, I can tell you: boring can be just about right sometimes.

In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.

dang - 44 minutes ago

Recent and related:

Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (422 comments)

discreteevent - an hour ago

> Original ideas are the result of the very work you’re offloading on LLMs. Having humans in the loop doesn’t make the AI think more like people, it makes the human thought more like AI output.

There was also a comment [1] here recently that "I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!"

Both of them reminded me of Picasso saying in 1968 that "Computers are useless. They can only give you answers."

Of course computers are useful. But he meant that they are useless for a creative. That's still true.

[1] https://news.ycombinator.com/item?id=47059206

adverbly - 37 minutes ago

Whoa there. Let's not oversimplify in either direction here.

My take:

1. AI workflows are faster - saving people time

2. Faster workflows involve people using their brain less

3. Some people use their time savings to use their brain more, some don't

4. People who don't use their brain are boring

The end effect here is that people who use AI as a tool to help them think more will end up being more interesting, but those who use AI as a tool to help them think less will end up being more boring.

jrmg - an hour ago

The issue with the recent rise in Show HN submissions, from the perspective of someone on the ‘being shown’ side, is that they are, in many different ways, lower quality than they used to be.

They’re solving small problems, or problems that don’t really exist, usually in naive ways. The things being shown are ‘shallow’. And it’s patently obvious that the people behind them will likely not support them in any meaningful way as time goes on.

The rise of Vibe Coding is definitely a ‘cause’ of this, but there’s also a social thing going on - the ‘bar’ for what a Show HN ‘is’ is lower, even if they’re mostly still meeting the letter of the guidelines.

fredliu - an hour ago

We are in this transition period where we'll see a lot of these, because the effort of creating "something impressive" is dramatically reduced. But once it stabilizes (which I think is already starting to happen, and this post is an example), and people are "trained" to recognize the real effort behind creating something, even with AI help, the value of that final work will shine through. In the end, anything that is valuable is measured by the human effort needed to create it.

ossa-ma - 42 minutes ago

Sorry to hijack this thread to promote but I believe it's for a good and relevant cause: directly identifying and calling out AI writing.

I was literally just working on a directory of the most common tropes/tics/structures that LLMs use in their writing and thought it would be relevant to post here: https://tropes.fyi/

Very much inspired by Wikipedia's own efforts to curb AI contributions: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

Lmk if you find it useful, will likely ShowHN it once polished.

iambateman - an hour ago

We are going to have to find new ways to correct for low-effort work.

I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.

As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.

pbmango - an hour ago

Along these same lines, I have been trying to get better at knowing when my work could benefit from reversion to the "boring" general mean, and when outsourcing thought or planning would cause a reversion to the mean (downwards).

This echoes the comments here about enjoying not writing boilerplate. The catch is that our minds are programmed to offload work when we can, and redirecting all the saved boilerplate time into going even deeper on the parts of the problem that benefit from original hard thinking is rare. It is much easier to get sucked into creating more boilerplate, and all the gamification of Claude Code and the incentives of service providers increase this.

darod - 16 minutes ago

"You don’t get build muscle using an excavator to lift weights. You don’t produce interesting thoughts using a GPU to think." Great line!

stopachka - 44 minutes ago

> Original ideas are the result of the very work you’re offloading on LLMs.

I largely agree that if someone put less work into making a thing than it takes you to use it, it's probably not going to be useful. But I disagree with the premise that using LLMs will make you boring.

Consider the absurd version of the argument. Say you want to learn something you don't know: would using Google Search make you more boring? At some level, LLMs are like a curated Google Search. In fact, if you use Deep Research et al., you can consume information that's more out of distribution than what you _would_ have consumed had you done only Google Searches.

fdefitte - 29 minutes ago

The article nails it but misses the flip side. AI doesn't make you boring, it reveals who was already boring. The people shipping thoughtless Show HN projects with Claude are the same people who would have shipped thoughtless projects with Rails scaffolding ten years ago. The tool changed, the lack of depth didn't.

mym1990 - an hour ago

Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically zero. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way (with or without LLMs).

daxfohl - an hour ago

I want to say this is even more true at the C-suite level. Great, you're all-in on racing to the lowest common denominator of AI-generated most-likely-next-tokens as your corporate vision, and you want your engineering teams to behave likewise.

At least this CEO gets it. Hopefully more will start to follow.

jihadjihad - an hour ago

> Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates. Prompting an AI model is not articulating an idea.

I agree, but the very act of writing out your intention/problem/goal/whatever can crystallize your thinking. Obviously if you are relying on the output spat out by the LLM, you're gonna have a bad time. But IMO one of the great things about these tools is that, at their best, they can facilitate helpful "rubber duck" sessions that can indeed get you further on a problem by getting stuff out of your own head.

Retr0id - an hour ago

This aligns with an article I titled "AI can only solve boring problems"[0].

Despite the title I'm a little more optimistic about agentic coding overall (but only a little).

All projects require some combination of "big thinking" and tedious busywork. Too much busywork is bad, but reducing it to 0 doesn't necessarily help. I think AI can often reduce the tedious busywork part, but that's only net positive if there was an excess of it to begin with - so its value depends on the project / problem domain / etc.

[0]: https://www.da.vidbuchanan.co.uk/blog/boring-ai-problems.htm...

acjohnson55 - an hour ago

This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.

Take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.

We have to make strategic decisions about where we want our attention to linger, because those are the places where we will likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can exercise agency.

dabedee - 44 minutes ago

Anecdotally, I haven't been excited by anything published on Show HN recently (with the exception of the barracuda compiler). I think it's a combination of what the author describes: surface-level solutions and mostly vibe-coded projects whose authors haven't actually thought that hard about what real problem they are solving.

matsemann - an hour ago

If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that. It just changes the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post, and you can now do it in 5 minutes, that most likely means it's a boring post to read. Spend the 2 hours still; with the help of AI, the post should now be better.

solarisos - an hour ago

This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.

crawshaw - an hour ago

It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.

grimgrin - an hour ago

I land on this thread to ctrl-f "taste" and will refresh and repeat later

That is for sure the word of the year, true or not. I agree with it, I think

redwood - 9 minutes ago

AI is groupthink. Groupthink makes you boring. But then the same can be said about mass culture. Why do we all know Elvis, Frank Sinatra, Marilyn Monroe, the Beatles, etc., when there were countless others who came before and after them? Because they happened to emerge at the right time in our mass culture.

Imagine how dynamic the world was before radio, before TV, before movies, before the internet, before AI. I mean, imagine a small-town theater, musician, comedian, or anything else before we had all homogenized to mass culture. It's hard to know what it was like, but I think it's what makes for the great appeal of things like Burning Man, or other contexts that encourage you to tune out the background and be in the moment.

Maybe the world wasn't so dynamic and maybe the gaps were filled by other cultural memes like religion. But I don't know that we'll ever really know what we've lost either.

How do we avoid group think in the AI age? The same way as in every other age. By making room for people to think and act different.

minimaxir - 38 minutes ago

> Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.

This is reductive to the point of being incorrect. One of the misconceptions of working with agents is that the prompts are typically simple: it's more romantic to think that someone gave Claude Code "Create a fun Pokemon clone in the web browser, make no mistakes" and then just shipped the one-shot output.

As some counterexamples, here are two sets of prompts I used for my projects which very much articulate an idea in the first prompt with very intentional constraints/specs, and then iterating on those results:

https://github.com/minimaxir/miditui/blob/main/agent_notes/P... (41 prompts)

https://github.com/minimaxir/ballin/blob/main/PROMPTS.md (14 prompts)

It's the iteration that is the true engineering work, as it requires enough knowledge to a) know what's wrong and b) know whether the solution actually fixes it. Those projects are what I call super-Pareto: the first prompt got 95% of the work done... but 95% of the effort was spent afterwards improving it, with manual human testing being the bulk of that work rather than watching the agent generate code.

notatoad - an hour ago

I think about this a lot with respect to AI-generated art. Calling something "derivative" used to be a damning criticism. Now, we've got tools whose whole purpose is to make things that are very literally derivative of the work that has come before them.

Derivative work might be useful, but it's not interesting.

elif - an hour ago

And AI denial makes you annoying.

Your preference is no more substantial than people saying "I would never read a book on a screen! It's so much more interesting on paper"

There's nothing wrong with having pretentious standards, but don't confuse your personal aversion with some kind of moral or intellectual high ground.

gAI - an hour ago

I'm self-aware enough to know that AI is not the reason I'm boring.

hnlmorg - an hour ago

I’ve been bashing my head against the wall with AI this week, because it has utterly failed to even get close to solving my novel problems.

And that’s when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.

This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side-projects that clearly won’t be around in 6 months time.

Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person’s dull project, even though those authors spent next to zero effort themselves in creating it.

It’s so dumb.

Oarch - an hour ago

Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable, or was even desirable. I ignored it. We're all fed up with this productivity theatre.

ryandrake - 43 minutes ago

Honestly, most people are boring. They have boring lives, write boring things, consume boring content, and, in the grand scheme of things, have little-to-no interesting impact on the world before they die. We don't need AI to make us boring, we're already there.

turnsout - an hour ago

I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.

Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.

Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.

Sol- - an hour ago

The headline should be qualified: maybe AI makes you boring compared to the counterfactual world where you would somehow have developed into an interesting auteur or craftsman instead, which few people in practice would do.

As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing - isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?

I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AI will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic talk a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements from the firms, there's not much to say to convince you anyway.)

BurningFrog - an hour ago

OK, but maybe we only notice the mediocre uses of AI, while the smart uses come across as brilliant people having interesting insights.

logicprog - an hour ago

I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.

However, I've spent years sometimes thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks and writing long handwritten design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get out a prototype of my ideas — AI now makes that possible.

Once that prototype is out, I can look at it, inspect it from all angles, tweak it to understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using by demonstration: what their limitations and affordances are, what's easy and concise for the AI to implement and what requires brute-forcing with hacks and huge reams of code, what's performant and what isn't, and what leads to confusing architectures versus clean ones.

I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.

So really, it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it to explore the problem space, it still ends up interesting.

Most of my recent AI projects have just been small tools for my own use, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about in my notebooks, dozens of them.

apexalpha - an hour ago

Meh.

Being 'anti AI' is just hot right now and lots of people are jumping on the bandwagon.

I'm sure some of them will actually hold out. Just like those people still buying vinyl because Spotify is 'not art' or whatever.

Have fun, all; meanwhile, I built 2 apps this weekend purely for myself. That would've taken me weeks a few years ago.

nickysielicki - an hour ago

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow surface-level understanding than from deeply contemplating the problem.

tonymet - an hour ago

When apps were expensive to build, developers at least had the excuse that they were too busy to build something appealing. Now they can cope by pretending to be artisanal hand-built software engineers, and still fail at making anything appealing.

If you want to build something beautiful, nothing is stopping you, except your own cynicism.

"AI doesn't build anything original". Then why aren't you proving everyone wrong? Go out there and have it build whatever you want.

AI has not yet rejected any of my prompts by saying I was being too creative. In fact, because I'm spending way less time on mundane tasks, I can focus way more time on creativity, performance, security, and the areas I am embarrassed to have overlooked on previous projects.

himata4113 - an hour ago

I've actually run into a few blogs that were incredibly shallow while sounding profound.

I think when people use AI to, for example, compare Docker to k8s without ever having used k8s, that's how you get horrible articles that sound great but, to anyone who has experience with both, are complete nonsense.

hhsuey - an hour ago

Another clickbait title produced by a human. Most of your premises could easily be countered. Every comment here is essentially an example.

add-sub-mul-div - an hour ago

Also sounds likely that it's the mediocre who gravitate to AI in the first place.

clint - an hour ago

Yet another boring, repetitive, unhelpful article about why AI is bad. Did the 385th iteration of this need to be written by yet another person? Why did this person think it was novel or relevant to write? Did they think it espouses some kind of unique point of view?

wagwang - an hour ago

Isn't this just flat-out untrue, since bots can pass Turing tests?

stuckinhell - an hour ago

I mean, can't you just… prompt-engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.

elliotbnvl - an hour ago

I was on board with the author until this paragraph:

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

The author comes off as dismissive of the potential benefits of the interactions between users and LLMs, rather than open-minded. This is a degree of myopia that causes me to retroactively question the rest of his conclusions.

There's an argument to be made that rubber-ducking, and just having a mirror to help you navigate your thoughts, is ultimately more productive and produces more useful thinking than operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of existing ideas already baked into their weights).

The author also strawmans the usage of LLMs:

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Who says you aren't spending time thinking about a problem when using LLMs? The same users who didn't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.

I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.

nickjj - an hour ago

Look at the world Google is molding.

Here's a guy who has had an online business dependent on ranking well in organic search for ~20 years and has 2.5 million subs on YouTube.

Traffic to his site was enough to sustain his business this whole time, up until about 2-3 years ago, when AI took over the search results and his site stopped ranking.

He used Google's AI to rewrite a bunch of his articles to make them more friendly towards what ranks nowadays and he went from being ghosted to being back on the top of the first page of results.

He told his story here https://www.youtube.com/watch?v=II2QF9JwtLc.

NOTE: I had never seen him in my YouTube feed until the other day, but his story resonated a lot with me because I've had a technical blog for 11 years and was able to sustain an online business for a decade, until the last 2 years or so, when traffic to my site nosedived. That took a very satisfying lifestyle business to almost $0. I haven't gone down the path of rewriting all of my posts with AI to remove my personality yet.

Search engines want you to remove your personal take on things and write in a very machine-oriented, keyword-stuffed way.