The bottleneck was never the code

thetypicalset.com

437 points by Anon84 2 days ago


nayroclade - 9 hours ago

It's hilarious to me to see the same kind of engineer who, throughout my career, has constantly bitched and moaned about team meetings, agile ceremonies, issue trackers, backlogs, Slack, emails, design reviews, and anything else that disrupted the hours of coding "flow state" they claimed as their most essential and sacred activity, to be protected at all costs, suddenly, and with no hint of shame, start preaching about the vital importance of collaborative activities and the apparent inconsequence of code and coding, the moment a machine could do the latter faster than them. I mean, they're not even wrong, but the nakedly hypocritical attitude of people who, until a year ago, were the most antisocial and least collaborative members of any team they were on is still extraordinary.

jugg1es - 9 hours ago

I think veteran engineers have always known that the real problems with velocity are more organizational than technical. The inability of the business to define a focused, productive roadmap has always been the problem in software engineering. Constantly jumping to the next shiny thing that yields almost no ROI, while never allowing systemic tech debt to be addressed, has crippled many companies I have worked at in the long term.

coldtea - 11 minutes ago

>The goal was to test our structured-generation algorithms and their open-source counterparts, replacing the naive “does it accept this string?” with something closer to the real problem: “does it produce the right token distribution?” The experiment kept coming up in conversation, then returning to the roadmap. Last month, I spent half an hour explaining the method to Codex. A few hours later, it had produced a working first version. That’s all it took.

Proving that the bottleneck was, in fact, the code. It's just that the AI writes it now.

The person who thought "the bottleneck wasn't the code" already had the goal discussed and coherent in their mind.

If the code wasn't the bottleneck, they could have just sat down and written it themselves. But they didn't want to spend the time and effort of coding it themselves, knowing it would take far longer than with the LLM.

(And even when you don't have a clear final spec in mind, the exploratory code + check + discard + retry-new-design loop is also faster with an LLM, precisely because the "code" part is.)

In other words, the code was the bottleneck.

The post appears AI-generated itself, just with instructions to avoid obvious constructions, which still makes for tedious reading.

jmilloy - 6 hours ago

Code is a liability.

I think it can be easy to look at code as an asset, but fundamentally it is a liability. Some of the "bottlenecks" to new code are in place to make sure that the yield outweighs the increased liability. Agents that produce more code faster are producing more liability faster. Much of the excitement and much of the skepticism about coding agents is about whether the immediate increased productivity (new features) and even immediate yield (new products or new revenue) outweighs the increased long-term liabilities. I'd say we won't find out for another 1-3 years, and of course the answer will differ in different domains.

From this perspective, attempting to build these bottlenecks into the agentic workflow directly makes some sense. Supplying coding agents with additional context that values a coherent project vision and that pushes back against new features or unconstrained processes would be valuable.

Is this what the article is trying to get at? Is this attempting to make some agents essentially take on product management responsibilities, synthesizing as much as possible into a cohesive product vision and reminding the coding agents of that vision as strictly as possible? Should these agents review new proposals and new pull requests for "adherence to the full picture", whether you want to call this "context" or "vision" or something else?

I think these agents might do an exceptionally good job at synthesizing context and presenting a cohesive roadmap that appears, linguistically, to adhere to the team values and vision. But I'm doubtful that they can have the discernment that a quality manager or team can have. Rapidly and convincingly greenlighting a particular roadmap could do more harm than good.

trjordan - 2 hours ago

Yes, but: writing code always teaches you something.

I've worked at founder-sized startups and $xxB public companies. I've never read a product spec, a pitch deck, or a PRD that describes a solution that, if implemented as described, would solve the problem. Building the thing teaches you how it should behave.

Software is a complex, interactive medium. Iterating in the code, with people who understand the problem and care to see it solved, is the only way I've seen valuable products get created. Meetings and diagrams help, but it's not until you write some working software that you know whether you have something.

paldepind2 - 8 hours ago

From the article:

> Jevons Paradox: when something gets cheaper, you tend to use more of it, not less.

That's a butchering of Jevons paradox. What's stated is not a paradox, but a very natural effect. Obviously usage of something goes up when it gets cheaper.

What Jevons paradox actually describes is the situation where usage of a resource becomes more efficient (which means less of it is needed for a given task), but still the total usage of that resource increases.
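
A toy worked example with made-up numbers (a sketch, not from the article): per-task usage halves, but elastic demand more than doubles, so total consumption rises.

    # Jevons paradox, hypothetical numbers: efficiency doubles (half the
    # coal per task), demand triples because tasks got cheaper, and total
    # coal usage goes UP despite the efficiency gain.
    coal_per_task = 1.0
    tasks = 100
    total_before = coal_per_task * tasks    # 100.0

    coal_per_task /= 2                      # efficiency improvement
    tasks *= 3                              # elastic demand response
    total_after = coal_per_task * tasks     # 150.0

    print(total_before, total_after)        # 100.0 150.0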

nilirl - 9 hours ago

Bottleneck for what? More features?

I don't think amount of software is what determines whether a company does well.

I don't think capturing quantity of context is that important either.

Now, quality of context. How well do the humans reason?

Then, attitude. How well do the humans respond to bad situations?

Then, resource management. How well does the company treat people and money?

Finally, luck. How many of the uncontrollables are in our favor?

Those are pretty good bottlenecks for a company. I doubt an agent is fixing any of those. At least any time soon.

ChrisMarshallNY - 7 hours ago

> Software is what’s left over after a group of humans finishes negotiating with each other about what the system should do.

Love that.

I agree, in particular, about the context. That’s where long-retention, experienced teams pay off.

I managed one of those for decades. When they finally rolled up our department, the engineer with the least seniority had ten years.

When a team is together for that long, the communication overhead drops to an almost negligible level.

That’s what I find most upsetting about the current culture of mayfly-lifespan employment tenures.

Nowadays, I work mostly alone. I’m highly productive, but my scope is really limited.

I miss being on a good team.

Antibabelic - 9 hours ago

What kind of projects are people working on, where understanding what features the management wants is the only difficult part and the rest can just be "typed out" (or, today, offloaded to an LLM)? If that's what you do, then I'm not surprised so many people on HN think LLMs can replace them.

rudyp_dev - 9 hours ago

I think the argument here misses a critical nuance: there is a difference between code that is used to implement a product and code that _is_ the product.

It goes without saying that agents have little to no product sense in any discipline. If you're building a game or an app or a business, your creative input still matters heavily! And the same is true for code; if the software is your product, then absolutely the context missed by skipping the writing process will degrade your output.

That doesn't mean that writing code wasn't a bottleneck even for creating well-structured software projects. Being able to try multiple approaches (which would previously have been prohibitively expensive) can in many instances produce something a room of bickering humans never would have reached.

syntax-sailor - 7 hours ago

An awful lot of problems can, in fact, be solved by 'more code'. People seem to straw-man this in terms of product feature surface.

A lot of places skip creation and maintenance of decent observability - that's code.

We can now easily use advanced, code-heavy testing techniques like property testing (see the sketch after this list) - code.

We can create environmental simulations to speed up and improve integration testing - code.

We can lift up internal abstraction levels, replace boiler plate with frameworks, DSLs - code.
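
For the property-testing point, a minimal sketch assuming the Python hypothesis library (the encode/decode round-trip property is a hypothetical example, not from the thread):

    # Property-based testing with `hypothesis` (pip install hypothesis).
    # Instead of hand-picked cases, we state a property that must hold
    # for ALL inputs; hypothesis generates hundreds, edge cases included.
    from hypothesis import given
    from hypothesis import strategies as st

    def encode(s: str) -> bytes:
        return s.encode("utf-8")

    def decode(b: bytes) -> str:
        return b.decode("utf-8")

    @given(st.text())
    def test_round_trip(s: str) -> None:
        assert decode(encode(s)) == s  # the property: decode undoes encode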

neosat - 3 hours ago

"What slows down a team where agents do the implementation is the production of specifications precise enough for an agent to pick up and run. Roadmap, written down. Acceptance criteria, written down. The “what we actually want” forced into precision, be it via a test suite, a ticket, or a written design."

This is merely speed of development, not the velocity of a company towards higher value. There are many PMs confidently writing these up elaborately (using the same AI tools) without a clear, deep understanding of the user problems, of why the requirements will be adopted by their target users, or even of who the target users really are.

So yes, this will lead to faster end-to-end execution. But whether the product gets used or sits unused will depend on things beyond the above.

oxag3n - 2 hours ago

It shares some ideas with Peter Naur's "Programming as Theory Building".

Quote from the article: "To quote Michael Polanyi: we know more than we can tell. Some load-bearing context exists precisely because it was never put into words, and writing it down would change what it is."

Imagine how much knowledge exists only in the heads of software engineers, with code being just a functioning footprint of that "Theory". I know an SRE at a FAANG who told me that a multi-billion-dollar system is supported by tribal knowledge within their group, and that for years, even pre-AI, this was protection against automation.

j16sdiz - 10 hours ago

(not related to the article)

The flashing red dot on the web page is very annoying. Is there some design reason for that?

edit: I meant the <svg> inside `trail-map-container`

frollogaston - 8 hours ago

Doesn't add up. I used to spend more than half my time coding, as did others. Besides the obvious cost, that coding took wall-clock time, which meant talks had to wait. Sure, a poor collaborator will jam things up a ton, but a team of at least OK collaborators used to be bottlenecked on code.

tlkan - 8 hours ago

One of the bottlenecks has always been the code. That code has been stolen and is being laundered, while companies rely on mediocre engineers who have never written anything of value to promote the burglary tools and call the process "writing software".

It is the same as putting an Einstein paper on a photocopier and calling the process "writing a paper".

I agree with the point of the article though: code generation does not really work, the results are bloated and often wrong, and people already had more features than they could absorb in 2020.

The solution to this mess is to have 18 year olds boycott studying computer science altogether, since the industry (and mediocre fellow "engineers") will treat them like human garbage.

keithnz - 2 hours ago

I think this is the wrong conclusion.

Whether code is the bottleneck likely depends on the organization. In mine, code was the bottleneck; AI has pushed things so that validation is now the bottleneck. If the devs are "middlemen" who can't spec things themselves, then whoever can spec things is likely the bottleneck.

ZeWaka - 9 hours ago

> Producing easily consumable context is precisely the thing humans don’t like to do.

I don't think this sentence speaks for me. This is the sort of thing I love to do.

lysium - 10 hours ago

Can someone explain the title? I think the author illustrates that the code was the bottleneck and it has shifted to context. What am I missing?

TedDallas - 3 hours ago

Ask yourself what monks did when scribes were replaced by the printing press.

If I was a scribe at the time I’d be thrilled because of all that extra time available to work on beer productivity metrics.

pu_pe - 8 hours ago

Sometimes code is definitely the bottleneck. For example, some organizations have a very bureaucratic process guarding which projects get access to a development team and when. That's not needed if implementation is now faster/cheaper.

I'm also skeptical that development velocity is so separate from all those other things (context, stakeholder alignment, etc.). It's much easier to get actionable feedback when you have a prototype.

skiing_crawling - 2 hours ago

As software engineers, we should collectively realize that this is all cope. Every article or comment about how AI will never be smart enough, etc., will only be true until it's not. One of our main valuable skill sets is now partially automated. Some of us are completely obsolete, and it's coming for the specialists and more experienced ones within a decade, tops. You're not going to convince anyone that "um, actually we're better because we bikeshed more".

Stuff like this is ridiculous and comes off as frantically trying to save your ass. It's pretty obvious at this point that we will just throw more matmuls at it until it can do this or something equivalent.

> Agents cannot do osmosis. They do not get context by being in the room, by half-hearing the planning conversation, or by carrying the memory of the last incident.

web-cowboy - 5 hours ago

I'm finding counterexamples of this constantly now that I can have an agent rewrite large sections of my codebase that have been sorely needing it.

- Moving to a newer and more modern test library

- Refactoring my data layer so it's easier to read, based on years of organic changes that need to be baked in and simplified

- Porting some functionality to another language to vastly improve performance

I agree with the overall sentiment, but having an agent at my fingertips that can really crank out large-scale, involved code changes is unclogging quite a few backburnered todos for me lately.

kylestlb - 8 hours ago

Absolutely matching the gut feel I've had lately. We've always been pretty good at producing bad code very fast. All of the other stuff - dependency management, learning what's valuable, ownership & boundaries, context switching costs, etc. - has always been the bottleneck, and it's just more obvious now.

kadhirvelm - 7 hours ago

Totally agree, we wrote our own piece similar to this: https://productnow.ai/blogs/teams-that-coordinate

I really think that as code becomes cheap, misalignment between people, teams, and organizations is going to hurt a lot more, especially when everyone is trying to move at breakneck speed.

I also think a big piece of this is human attention and inertia. Aka, why bother doing the hard work of coordinating with others when you can just ship whatever you're thinking? I think whichever organizations can figure out the human and cultural aspects of this will do phenomenally.

jorisw - 9 hours ago

> Agents that consume context need agents that produce it. Once that loop is running, the organization has a written substrate it would never have produced on its own.

I'm not sure a business is helped by documentation distilled by agents from (hopefully present) PR descriptions and comments in JIRA, or wherever else this context is supposed to be reverse-engineered from.

theptip - 6 hours ago

> They are waiting on the next well-formed spec

Is this actually true? Maybe in a widget factory. I think it’s an anti-pattern for the new world.

When you look at places that are shipping at an insane pace (like Anthropic), the secret is not accelerating the writing down of a roadmap and a well-groomed backlog; it's empowering smart individuals to run their own end-to-end product improvement loops.

You can slightly reframe the OP by saying "the bottleneck is product ideas", but "well-formed backlog items" IMO frames it as more structured and hierarchical than it should be.

charlieflowers - 4 hours ago

I have been thinking about this a lot lately. How do you capture the key factors succinctly, and, even harder, keep them succinct as they evolve?

The shrinking that property-based testing does when it finds an issue is kind of what we need for specs/context.
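
A toy sketch of that shrinking behavior, assuming the Python hypothesis library (the property is deliberately false so the shrinker has something to find):

    # When a property fails, hypothesis shrinks the failing input to a
    # minimal counterexample before reporting it. The bogus property
    # "every list of ints has a positive sum" shrinks down to xs=[].
    from hypothesis import given
    from hypothesis import strategies as st

    @given(st.lists(st.integers()))
    def test_sum_is_positive(xs: list) -> None:
        assert sum(xs) > 0  # false: sum([]) == 0 is the minimal counterexample

The spec/context analogue would do the same: boil an evolving pile of requirements down to the smallest set of constraints that still bind.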

stego-tech - 8 hours ago

The bottleneck has always been the human element. I too used to be one of those up-my-own-ass engineers who thought the most important part of my work was the machine, and it wasn’t until I began actually listening to others and their problems that I realized my function was far more than mere technology scaffolding.

That said, I’m also increasingly aware that puts me in a minority group. I got to see this first hand in a recent org where their codebase and product design hadn’t meaningfully evolved in nearly thirty years. NAT was a “game changer” to them - and one they refused to implement without tons of extraneous testing they would deliberately undermine, stall, and sabotage so they didn’t have to modernize their code accordingly. It was easier for the developers and stakeholders to preserve their own status quo rather than entertain alternatives, to the point of open hostility (name calling, insults, screaming, and a few threats) to anyone suggesting otherwise.

The human element has always been, and always will be the bottleneck. Stakeholders who don’t contribute updated or accurate datasets to automation systems, or who hold back development to preserve personal status and power, or who otherwise gum up the works on purpose to game their own careers.

That’s not to make the argument of “replace all humans with machines”, mind you. Just stating that an organization that incentivizes bad behavior will be slowed down versus ones that incentivize collaborative outcomes, and AI is just going to turbocharge that by removing the friction associated with code creation and shifting that elsewhere.

jaccola - 9 hours ago

The company website linked in the article is broken https://www.dottxt.ai/ on (mobile and desktop) Safari. Looks like your cert doesn’t cover the www subdomain.

gavinh - 4 hours ago

If I read “load-bearing” or “blast radius” one more time…

randallsquared - 6 hours ago

So managers are overwhelmed because the code is now happening a lot faster? It sounds like the immediate bottleneck really was the code, at least frequently. Now it seems the bottleneck is managerial.

zabzonk - 9 hours ago

> Real programmers don’t document their programs.

Probably true, but I, for one, have always liked documenting how the code I've written should be used, whether programmers calling APIs I've created, or end-users actually making use of a program's executable. I find writing the docs just as interesting and creative as writing code.

sharts - 7 hours ago

Velocity, velocity, velocity! Ah yes, velocity always seems to matter except to those that don’t need to worry about it.

Brysonbw - 5 hours ago

Everything in life revolves around people, and even more so today

blueTiger33 - 8 hours ago

the bottleneck was never the software, that is the ship we ride,

people are part of a team focused on a goal; they work together because they believe the ship is worth riding and will reach its destination,

the ship should carry food people want,

the team decides what food will be consumed,

the captain tries the food first,

if the food is good and people want it, people buy more

BrokenBuild - 9 hours ago

I can see the division here already, and the cogs are afraid. As a dev of 25+ years, currently working for a small company after coming from a global company, I see both sides. I'm very excited about AI and love seeing my projects come to life so much faster. I still love the craft of code, but it's always been about the product for me.

luodaint - 9 hours ago

The post hits the nail right on the head, but it misses the mark on the next constraint: how to decide what to build.

In the old days, when writing code took a lot of resources, the constraint was self-correcting: three months of work on the wrong feature made the error obvious enough to catch. Today, you could burn through five wrong efforts in the time it used to take to implement one.

nibab - 5 hours ago

the tediousness of keeping documentation up to date and the natural tendency towards short attention spans have always acted as a tax on organizational efficiency: complicated org structures, legibility exercises, communication tollgates, etc. there is real value in reducing the friction of the former so that the latter becomes less of a burden.

at the same time, context poisoning is a real cognitive problem for humans too, and I can't tell you the number of times I've seen irrelevant details become a drag on execution. my fear is that having too much context will only cause bikeshedding and the revisiting of prior decisions.

frankly, our organizational structures were already pretty good at creating mechanisms for eliciting the right implicit context at large scales. it is possible that we're just going to come up with the same mechanisms from first principles...

wesm - 10 hours ago

See also https://wesmckinney.com/blog/mythical-agent-month/

spiderfarmer - 9 hours ago

For me it was. Solo entrepreneurs are the ones who profit the most from AI-assisted development.

lynx97 - 10 hours ago

If that's true, I am sure some C-suite manager knows it already. Assuming management knows what it's doing; after all, they're getting paid for this. The time when engineers try to educate the people above them should be over. Management gets paid for the big decisions. If they tank the company, so be it. I no longer care.

keeda - 2 hours ago

The bottleneck was ALWAYS the code, which is why everything was built around it.

This is the key line right here:

> Negotiating, agreeing, communicating the shared picture of what we are building has become the work. And it’s just as hard as it was.

But if software (via code) is what we ultimately produce and sell, how did we get here? The main reason is the following lemma:

Lemma A: "The loss of fidelity of what can fit in any one person's head scales superlinearly (exponentially?) as the scope of work scales up." Or more colloquially: "It is impossible to fit a large scope of work in any one person's head." This is largely because any non-trivial task is a fractal of smaller dependencies.

The chain of logic to today's situation is then obvious:

1. Writing code requires humans who are slow and expensive.

2. To do large things we need large groups of humans.

3. As the number of humans grows (beyond 5? 10?), it becomes impossible to keep them aligned, largely because of Lemma A.

4. We need to coordinate these humans, so: enter managers!

5. But even a manager can't manage too many people and coordinate with all the other managers because of, again, Lemma A. Enter hierarchy!

6. As the size of the organization grows, so does the coordination overhead (exponentially, if the Google AI overview is to be believed) until, as that quote surmises, the majority of the work is just that.

7. Coordination costs (or "Conway overhead", as I call them) are very well understood in the literature, but this also brings in undesirable dynamics like bureaucracy, politics, and organizational metrics (also due to Lemma A, but now triggering Goodhart's law!), and eventually territorial disputes and empire-building. Lots of friction and subtle misalignments.

As you can see, the overhead scales superlinearly with the number of leaf workers added. And for the same reason, once the leaf workers are decimated because one worker can now do the work of a whole team, the entire organizational overhead above them goes away, which is also a superlinear change! Assume a conservative 2:1 reduction in ICs and a 1:5 manager:reportee ratio; a simplistic hierarchy that was:

1 CEO -> 5 VPs -> 25 Dirs -> 125 Managers -> 625 ICs

now becomes something like:

1 CEO -> 12 SVPs -> 60 Sr. Managers -> 310 Sr. ICs.

Not only did that eliminate over 300 ICs (mostly junior, I suspect), it took out some 60 managers and removed an entire layer of Directors from the hierarchy! Worse, the leaf layer will probably get decimated 5:1, not 2:1, and this will also eliminate coordination-specific roles like Program Managers. The rest of the hierarchy is much smaller but consists mostly of more experienced (or politically savvy) people. They will be paid more, but not superlinearly more, of course; what do you think this is, socialism?
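
A back-of-the-envelope sketch of that arithmetic (a toy model: the span of 5 and the 2:1 IC cut are the hypothetical numbers above, and exact layer sizes vary with rounding):

    # Build a management hierarchy bottom-up with a fixed span of control.
    def hierarchy(ics: int, span: int = 5) -> list[int]:
        levels = [ics]
        while levels[-1] > span:
            levels.append(-(-levels[-1] // span))  # ceiling division
        levels.append(1)                           # CEO on top
        return levels[::-1]

    before = hierarchy(625)          # [1, 5, 25, 125, 625]
    after = hierarchy(625 // 2)      # [1, 3, 13, 63, 312]
    print(sum(before), sum(after))   # 781 392: overhead shrinks with the leaves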

It's very much a pyramid-scheme house of cards built on that one bottleneck. And this bottleneck applies to pretty much all knowledge work. Once it opens up, everything collapses. This is why I fear that the coming job changes are going to be much more disruptive than people realize, something I'm extra concerned about as a parent of high-schoolers.

stevefan1999 - 3 hours ago

Before going to work, we're fed algorithms and data structures, told they are the bottleneck behind wasteful machine use and taught how to utilize them; only to learn from hard stories that the actual bottleneck always comes from the people, the H-factor, except this time H stands for human.

Insane amounts of bureaucracy and paperwork, and missed deadlines that push us to write shit code, so the quick and dirty solutions never get replaced.

Algorithms and data structures, therefore, are more about helping you utilize the machine economy better; they don't have any meaningful impact on the social aspect. That's a hard lesson I had to learn from my two previous jobs, though now I'm considering starting my own small business, just to make enough of a living to survive.

But now my ADHD has kicked in, I'm still lazy, and I have so many concerns: whether the market validation is there, how to deal with situations where I break customers' stuff, how to gain (and hopefully never have to regain) trust if anything bad happens, and what if I want to go on vacation and suddenly the server breaks and goes code zero (the highest level of alert I termed internally: alertmanager flashing everything red, network storage down, corruption everywhere) during a trip to the Bahamas.

I'm still at the watershed of deciding whether to really do this or not, but the job market is filled with ghost jobs that aren't worth my time either. I'm basically "deadlocked" right now and have to make a decision quickly.

Either choice is fucked for me, as I started to notice after going to work. I have some really interesting ideas in tech, but I'm not a charismatic person, so I can't really bring those ideas to fruition, because no one wants to listen to me and implement them together. So I'm pretty sure it's impossible for me to be a great leader (tech lead, probably, but CEO-level leadership, coordinating and steering the grand scheme of things, nah, I pretty much can't do that).

Now the problem is, even if you're pretty sure to get fucked either way, you should choose the option that inflicts the minimum pain. So far, having my own business seems like the less painful way to die and go bankrupt, and I'm preparing to sell off some of my stuff to take a last dip into my fortunes and have fun. We'll see how it looks. Bankruptcy is a nothingburger in this modern society, perhaps.

Now you see how the bottleneck can't even be the code anymore, and goes beyond code, despite having the same core template: I don't even have to code to repeat the same "quick and dirty" kind of mindset in another domain, in another instance. That's something LLMs, heck, not even AGI, can solve: decision-making in situations with limited time and resources, whether personal, organizational, or even structural.

This is very much not going to be solvable by a bunch of lines and statements and expressions; it really needs some time to dig in and compromise. Pick your kool-aid and drink it.

freejazz - 8 hours ago

It seems like so many developers know this, yet here we are. SV pushing this AI slop economy. More code! Faster! Less testing! Less understanding! It's what we NEED!

HarHarVeryFunny - 7 hours ago

> What may save us is that agents are unreasonably good at reading exhaustively. An agent will read every PR comment, every closed issue, every commit message, every stale design doc ...

> Not just “this module exists,” but “this module is weird because the migration had to preserve old behavior,” or “this benchmark matters because a previous optimization silently changed the distribution.”

The thesis here is that an LLM will document code better than a human (although based on human artifacts), since churning through huge quantities of text is what they are good at.

A few thoughts:

1) Yes, an LLM may be able to pull comments out of commits and PR comments and put them back in the code where they belong, but I question how often a developer too lazy to put a vital comment in the code would put it in a commit message instead!

2) "The truth is in the code" has always been true, and will always remain true. If the comments differ from the code, the code defines the truth. Pulling comments from stale external documentation and putting them in the code does more harm than good.

3) Comments that can be auto-generated from the code don't add much value (lda #1 ; load 1 into the accumulator).

4) Comments about the purpose or motivation of the code, distinct from 3), such as the "we had to preserve backwards compatibility" example, or "this code does this non-obvious tricky thing because ...", are where the value is, but the LLM is highly unlikely to be able to discern any unwritten motivation by itself. If the human developer left a comment somewhere then great (assuming it is still relevant).
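
A tiny illustration of the difference between 3) and 4) (both snippets hypothetical):

    retries = 3

    # 3) Low-value: the comment merely restates the code.
    retries += 1  # add one to retries

    # 4) High-value: the comment records motivation the code cannot express.
    retries += 1  # the first attempt usually hits a cold upstream cache,
                  # so the extra retry is deliberate, not an off-by-one bug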

Most of the discussion we see about LLM coding is how fast it can churn out thousands of LOC on a greenfield project, or how good they can be at finding bugs, but neither of these are very relevant to the main job of developers which is maintaining and extending existing codebases. It would be lovely if most projects were greenfield, but they are not.

In any large project that has been maintained over a few years or more, there will inevitably be an ever growing accumulation of bug fixes and patches for specific issues that have been discovered in production, likely poorly documented and out of sync with any original documentation that may have existed (which anyway tends to be more idealistic and architectural in nature, not capturing these types of post-deployment detail and special cases).

The natural tendency of an LLM is to rewrite code to match the statistics of what it was trained on, and they need to be reined in via prompting to resist this and not touch more code than is minimally needed for what is being asked. Of course, asking an LLM to do something is a bit like asking a dog to do something: sometimes it will, and sometimes it won't. I expect over the next few years we'll be experiencing, and reading about, more and more cases where LLMs have introduced bugs and regressions into mature codebases because of this, rewriting code that should have been left alone. The general rule is that if you are tempted to rewrite something, you'd better first understand why it is there, coded the way it is, in the first place.

I can't help but compare the current state of "AI" (LLMs) to the early days of things like computer speech recognition or language translation when they were considered amazing, and everyone was gushing about them, but at the end of the day the accuracy still wasn't good enough to make them very useful - that would take another 10-20 years.

Another historical lesson/perspective would be expert systems, which at the time were considered AI and the future of machine intelligence (the Japanese "5th generation systems" were going to take over the world, CYC promised to offer human-level intelligence), but in retrospect were far less important. It won't be until we move on from LLMs to something more brain-like, deserving to be called AGI, that LLMs will be put in their historical perspective.

At the moment DeepMind seems to be the only one of the big labs admitting/recognizing that scaling LLMs isn't going to achieve AGI and that "a few more transformer-level breakthroughs" are needed. Hassabis has however talked about LLMs (GPTs) still being a part of what they are envisaging, which one could either regard as a pragmatic stepping stone to real AGI, or perhaps that they are not being ambitious enough - building something that still needs to be spoon-fed language rather than being capable of learning it from scratch.

auggierose - 7 hours ago

I cringe every time I read the word "load-bearing" in an article.