AI tools are making me lose interest in CS fundamentals

55 points by Tim25659 4 hours ago


With powerful AI coding assistants, I sometimes feel less motivated to study deep computer science topics like distributed systems and algorithms. AI can generate solutions quickly, which makes the effort of learning the fundamentals feel less urgent.

For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?

jmward01 - 2 hours ago

There are two aspects to this. The desire to learn and the utility of learning. These are two very different things. Arguably the best programmers I have known have been explorers and hopped around a lot. Their primary skills have been flexibility and curiosity. The point here was their curiosity, not what they were curious about. Curiosity enabled them to attack new problems quickly and find solutions when others couldn't. Very often those solutions had nothing to do with skip lists or bubble sort. Studying algorithms is useful for general problem solving and hey, as a bonus, it helps sometimes when you are solving a real world problem, but staying curious is what really matters.

We have seen so many massive changes to software engineering in the last 30 years that it is hard to argue the clear utility of any specific topic or tool. When I first started it really mattered that you understood bubble sort vs quicksort because you probably had to code it. Now very few people think twice about how sort happens in python or how hashing mechanisms are implemented. It does, on occasion, help to know that, but not like it used to.

So that brings it back to what I think is a fundamental question: If CS topics are less interesting now, are you shifting that curiosity to something else? If so, then I wouldn't worry too much. If not, then that is something to be concerned about. So you don't care about red-black trees anymore, but you are getting into auto-generating Zork-like games with an LLM in your free time. You are probably on a good path if that is the case. If not, then find a new curiosity outlet and don't beat yourself up about not studying the limits of a single-stack automaton.

kccqzy - 4 hours ago

Because AI still hallucinates. Since you mentioned algorithms, today for fun I decided to ask Claude a pretty difficult algorithm problem. Claude confidently told me a greedy solution was enough, until I gave it a counterexample, at which point it switched to dynamic programming instead.
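
(Not the problem I asked, but a classic toy illustration of the same failure mode: minimum-coin change with denominations 1, 3, and 4. The greedy strategy of always taking the largest coin that fits is wrong here; dynamic programming is not.)

```python
def greedy_coins(amount, coins=(4, 3, 1)):
    # Repeatedly take the largest denomination that still fits.
    count = 0
    for c in coins:
        count += amount // c
        amount %= c
    return count

def dp_coins(amount, coins=(1, 3, 4)):
    # dp[a] = fewest coins summing to a.
    dp = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        dp[a] = min(dp[a - c] + 1 for c in coins if c <= a)
    return dp[amount]

print(greedy_coins(6))  # 3 coins (4 + 1 + 1)
print(dp_coins(6))      # 2 coins (3 + 3)
```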

If you haven't learned the fundamentals, you are not in a position to judge whether AI is correct or not. And this isn't limited to AI; you also can't judge whether a human colleague writing code manually has written the right code.

babas03 - 4 hours ago

AI is great at giving you an answer, but fundamentals tell you if it's the right answer. Without the basics, you're not a pilot; you're a passenger in a self-driving car that doesn't know what a red light is. Stay strong in the fundamentals so you can be the one holding the steering wheel when the AI hits a hallucination at 70mph.

wcfrobert - 2 hours ago

To borrow a concept from Simon Willison: you need to “hoard things you know how to do”. You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it’s empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.

hedora - 3 hours ago

Fundamentals are the only thing left to learn in our field.

Either the AI doesn’t understand them, and you need to walk it down the correct path, or it does understand them, and you have to be able to have an intelligent conversation with it.

royal__ - 3 hours ago

These tools actually make me more interested in CS fundamentals. Having strong conceptual understanding is as relevant as ever for making good judgement calls and staying connected with your work.

entrox - an hour ago

It used to be that you had to have a strong understanding of the underlying machine in order to create software that actually worked.

Things like cycle times of instructions, pipeline behavior, registers and so on. You had to, because compilers weren’t good enough. Then they caught up.

You used to manage every byte of memory and utilize every piece of underlying machinery, like the different chips, DMA transfers and so on, because that’s what you had to do. Now it’s all abstracted away.

These fundamentals are still there, but 99.9% of developers neither care nor bother with them. They don’t have to, unless they are writing a compiler or a kernel, or just because it’s fun.

I think what you’re describing is also going to go away in the future. Still there, but most developers are going to move up one level of abstraction.

TehShrike - 3 hours ago

There are two types of CS fundamentals: the ones that help in making useful software, and the rest of them.

AI tools still don't care about the former most of the time (e.g. maybe we shouldn't do a loop inside a loop every time we need to find a matching record; maybe we should just build a hashmap once).

And I don't care if they care about the latter.
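
To make the nested-loop point concrete (a made-up record-matching example, not from any particular codebase):

```python
# Joining orders to users by id, two ways.
users = [{"id": i, "name": f"user{i}"} for i in range(1000)]
orders = [{"user_id": i % 1000, "total": i} for i in range(5000)]

def join_nested(orders, users):
    # O(n * m): scan all users for every order.
    out = []
    for o in orders:
        for u in users:
            if u["id"] == o["user_id"]:
                out.append((u["name"], o["total"]))
                break
    return out

def join_hashed(orders, users):
    # O(n + m): build the hashmap once, then each lookup is constant time.
    by_id = {u["id"]: u for u in users}
    return [(by_id[o["user_id"]]["name"], o["total"]) for o in orders]
```

Same output, wildly different scaling as the record counts grow.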

oefrha - an hour ago

Fundamentals should have even higher weight in learning budgets now, because AI can’t reliably reason about complex architectural problems. It’s the surface-level APIs you don’t have to learn/memorize.

Maybe you mean “AI tools are making me lose interest in learning anything”, which is… a common reaction, I suppose.

takwatanabe - an hour ago

Knowing fundamentals gives you deeper intuition about the technology, at every layer. When compilers appeared, you no longer needed to understand assembly and registers. But knowing how assembly and registers actually work makes you better at C. When Python came along, low-level languages felt unnecessary. But understanding C's memory management is what lets you understand Python's limitations. Now LLMs write the implementation. LLMs abstract away the code. But knowing how algorithms work, even in a high-level language like Python, is exactly how you catch LLM mistakes and inefficiencies.
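
One small example of the kind of thing lower-layer knowledge explains: Python's list aliasing gotcha, which is obvious the moment you think of variables as references (pointers, at the C level):

```python
# Two ways to build a 3x3 grid of zeros.
grid_wrong = [[0] * 3] * 3                 # three references to the SAME inner list
grid_right = [[0] * 3 for _ in range(3)]   # three distinct inner lists

grid_wrong[0][0] = 1
grid_right[0][0] = 1

print(grid_wrong)  # [[1, 0, 0], [1, 0, 0], [1, 0, 0]] -- every "row" changed
print(grid_right)  # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```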

Knowledge builds on knowledge. We learn basic math before advanced math for a reason. The pyramid keeps accumulating from what came before. Understanding the fundamentals still matters, I think.

atonse - 3 hours ago

Read this article from the Bun people about how they used CS fundamentals (and that way of thinking) to improve Bun install's performance.

https://bun.com/blog/behind-the-scenes-of-bun-install

Then look at how Anthropic basically acquihired the entire Bun team. If CS fundamentals didn't matter, why would they bother?

Even Anthropic needs people that understand CS fundamentals, even though pretty much their entire team now writes code using AI.

And since then, Jarred Sumner has been relentlessly shaving performance bottlenecks from Claude Code. I have watched startup times come way down in the past couple of months.

Sumner might be using CC all day too. But an understanding of those fundamentals (more a way of thinking than specific algorithms) still matters.

tartoran - 3 hours ago

I think that AI, particularly LLMs, can be quite effective for learning, especially if you maintain a sense of curiosity. CS fundamentals, in particular, are well-suited for learning through LLMs because models have been trained on extensive CS material. You can explore different paradigms in various ways, ask questions, and dissect both questions and answers to deepen your understanding or develop better mental models. If you're interested in theory, you can focus on theoretical questions; if you're more hands-on, you can take a practical approach and ask for code examples, etc. If you finish a session and feel there's something you want to retain, ask for flash cards.

Tim25659 - 4 hours ago

One follow-up question I’ve been thinking about:

In the AI era, is it still worth spending significant time reading deep CS books like Designing Data-Intensive Applications by Martin Kleppmann?

Part of my hesitation is that AI tools can generate implementations for many distributed system patterns now. At the same time, I suspect that without understanding the underlying ideas (replication, consistency, partitioning, event logs, etc.), it’s hard to judge whether the AI-generated solution is actually correct.

For those who’ve read DDIA or similar books, did the knowledge meaningfully change how you design systems in practice?

remarkEon - 3 hours ago

I was wondering about this. I do not write software to pay the mortgage; I just write the occasional python script, some SQL stuff to update various dashboards, and R in my spare time when I'm getting ready to look at baseball stats or something. AI has had pretty much the opposite effect for me. Watching it write something has made me ask questions, get answers, and dig into more details about things I never had the time to google on my own or spend an hour or several looking through Stack Overflow for.

I'd say my ability to write code has stayed about the same, but my understanding of what's going on in the background has increased significantly.

Before someone comes in here and says "you are only getting what the LLM is interpreting from prior written documentation": sure, yeah, I understand that. But these things are writing code in production environments now, are they not?

skybrian - 2 hours ago

I think it's important to stay curious and keep learning, but there's a lot to be curious about and all sorts of different skills you could work on improving. Going deep on algorithms or distributed systems are two possible directions, but there are others.

pkulak - 3 hours ago

I find I'm going even deeper lately. I, obviously, have to completely and _totally_ understand every line written before I will commit it, so if AI spits something out that I haven't seen before, I will generally get nerd sniped pretty good.

ankurdhama - 3 hours ago

Did you learn arithmetic in school even though calculators exist?

__rito__ - 2 hours ago

I will keep learning fundamentals.

I studied Physics fundamentals even though I had a microwave or could buy an airplane ticket. And I deeply enjoyed it. I still do.

I will keep doing it with CS fundamentals. Simply because I enjoy it too much.

nilirl - 3 hours ago

CS fundamentals are about framing an information problem so that it's solvable.

That'll always be useful.

What's less useful, and what's changed in my own behavior, is that I no longer read tool-specific books. I used to devour books from Manning, O'Reilly, etc. I haven't read a single one since LLMs took off.

codance - 3 hours ago

I'd argue the opposite — AI tools make fundamentals more important, not less. When you can generate code instantly, the bottleneck shifts to knowing what to ask for and evaluating whether the output is correct.

serf - 3 hours ago

Watching the difference between a non-CS person and a CS person using an LLM is all you need to reaffirm the belief that fundamentals are still a massive benefit, if not a requirement, for the deeper software work.

petersonh - 3 hours ago

If you really love CS, there's a future in it. If AI becomes the new substrate for civilization, we'll always need people who fundamentally understand, to some degree, how these systems work.

j3k3 - 3 hours ago

Ultimately humans are the judge of reality, not LLMs.

How can you be a good judge? You must have very strong foundations and fundamental understanding.

jbrozena22 - 4 hours ago

> why do you think it’s still important to stay strong in CS fundamentals?

I don't think anyone at any level has any idea what the future holds with this rapid pace of change. What some old-timers think is going to be useful in a post-Claude world isn't really meaningful.

I think if I had limited time to prioritize learning at the moment, I would prioritize comfort with AI tooling (e.g. getting comfortable doing 5 things shallowly in parallel) over going super deep in understanding.

wayfwdmachine - 3 hours ago

"With powerful computers, I sometimes feel less motivated to study deep mathematical topics like differential equations and statistics. Computers can do math quickly, which makes the effort of learning the fundamentals feel less urgent. For those who have been in the industry longer, why do you think it’s still important to stay strong in mathematical fundamentals?"

Because otherwise you are training to become a button pressing cocaine monkey?

add-sub-mul-div - 4 hours ago

It reminds me of the situation with self-driving that expects you to keep your full attention on the road while not driving so that you can take over at any time. It's clearly unrealistic.

It's not a failing of yours or anyone else's, but the idea that people will remain intellectually disciplined when they can use a shortcut machine is just not going to work.

dmitrygr - an hour ago

> For those who have been in the industry longer, why do you think it’s still important to stay strong in CS fundamentals?

Dictionaries have made me feel like studying languages is pointless. People, why do you think it’s still important to stay strong in languages when dictionaries exist?

esafak - an hour ago

So you know a bad idea when you see one, and can ask the AI to do it the right way?

j45 - 2 hours ago

AI tools are used more effectively when you're working from CS fundamentals. Knowing what to ask and what to avoid is critical. You can power through or past problems without that knowledge, but the incorrect areas in the context can compound and multiply.

Ycros - 3 hours ago

How can you possibly make any informed statement about the solutions AI generates for you if you don't understand them?

bluefirebrand - 4 hours ago

I view this a bit like asking "why bother getting a job when I could just get rich at the slot machines"

Knowledge is still power, even in the AI age. Arguably even more so now than ever. Even if the AI can build impressive stuff, it's your job to understand the stuff it builds. Also, it's your job to know what to ask the AI to build.

So yes. Don't stop learning for yourself just because AI is around

Be selective with what you learn, be deliberate in your choices, but you can never really go wrong with building strong fundamentals

Edit: What I can tell you almost for certain is that offloading all of your knowledge and thinking to LLMs is not going to work out very well in your favor

anonym29 - 3 hours ago

babas03 put it best IMO - https://news.ycombinator.com/item?id=47394432

I'd also second bluefirebrand's point that "it's your job to know what to ask the AI to build" - https://news.ycombinator.com/item?id=47394349

Those are great answers to the question you did ask, but I'd also like to answer a question you didn't ask: can AI improve your learning, rather than diminish it? The answer is absolutely a resounding yes. You have a world-class expert you can ask to explain a difficult concept in a million different ways with a million different diagrams; you have a tool that will draft a syllabus for you; you have a partner you can converse with to probe the depth of your understanding of a topic you think you know, who can help you find the edges of your own knowledge, tell you what lies beyond those edges, tell you what books to check out at your library to study those advanced topics, and so much more.

AI might feel like it makes learning irrelevant, but I'd argue it actually makes learning more engaging, more effective, more impactful, more detailed, more personalized, and more in-depth than anyone's ever had access to in human history.

tayo42 - 3 hours ago

For now, yeah, because you still need to direct the AI correctly. Either with planning, or by fixing its mistakes and identifying when it did something correct but not optimal.


diven_rastdus - 2 hours ago

The motivation loop breaking is the real problem, not the fundamentals themselves becoming less useful. Fundamentals feel rewarding to learn when you immediately apply them — you learn how a hash map works, you write one, you feel the difference. AI short-circuits that feedback loop by giving you the answer before you've built the intuition. The fix isn't to avoid AI tools; it's to deliberately impose a delay — implement the thing yourself first, then compare with what the model produces. The gap between the two is where the learning still happens.
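
That "write a hash map yourself" exercise is smaller than it sounds. A minimal sketch (fixed bucket count, separate chaining, no resizing; the class name is made up):

```python
class TinyHashMap:
    def __init__(self, n_buckets=8):
        # Each bucket is a list of (key, value) pairs (separate chaining).
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Hash the key down to one of the buckets.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

Write that, then compare it with what the model produces, and you've closed the loop the comment describes.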