The Junior Hiring Crisis
people-work.io
211 points by mooreds 7 hours ago
> We used to have a training ground for junior engineers, but now AI is increasingly automating away that work. Both studies I referenced above cited the same thing - AI is getting good at automating junior work while only augmenting senior work. So the evidence doesn’t show that AI is going to replace everyone; it’s just removing the apprenticeship ladder.
Was having a discussion the other day with someone, and we came to the same conclusion. You used to be able to make yourself useful by doing the easy/annoying tasks that had to be done but that more senior people didn't want to waste time dealing with. In exchange you got on-the-job experience, until you were able to handle more complex tasks and grow your skill set. AI means that those 'easy' tasks can be automated away, so there's less immediate value in hiring a new grad.
I feel the effects of this will take a while to be felt (5 years?); mid-level -> senior-level transitions will leave a hole behind that can't be filled internally. It's almost like the aftermath of a war killing off 18-30 year olds and leaving a demographic hole, or the effect of covid on education for certain age ranges.
Adding to this: it's not just that the apprenticeship ladder is gone—it's that nobody wants to deal with juniors who spit out AI code they don't really understand.
In the past, a junior would write bad code and you'd work with them to make it better. Now I just assume they're taking my feedback and feeding it right back to the LLM. Ends up taking more of my time than if I'd done it myself. The whole mentorship thing breaks down when you're basically collaborating with a model through a proxy.
I think highly motivated juniors who actually want to learn are still valuable. But it's hard to get past "why bother mentoring when I could just use AI directly?"
I don't have answers here. Just thinking maybe we're not seeing the end of software engineering for those of us already in it—but the door might be closing for anyone trying to come up behind us.
> Now I just assume they're taking my feedback and feeding it right back to the LLM.
This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."
Part of the challenge (and I don't have an answer either) is that there are some juniors who use AI to assist... and some who delegate all of their work to it.
It is especially frustrating that the second group doesn't become much more than a proxy for an LLM.
New juniors can still progress in software engineering - but they have to take the road of disciplined use of AI and make sure that they're learning the material rather than delegating all their work to it... and delegating is very tempting... especially if that's what they did in college.
I must ask once again why we have these 5+ round interview cycles yet can't filter for the qualities the work actually requires. What are all those rounds for if the engineers coming out the end of the pipeline aren't what the team needs?
It's the cargo cult kayfabe of it all. People do it because Google used to do it; now it's just spread like a folk religion. But nobody wants guilds or licensure, so we have to make everyone do a week-long take-home and then FizzBuzz in front of a very awkward committee. Might as well just read chicken bones; at least that would be less humiliating.
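(For anyone who hasn't met it: FizzBuzz is the canonical trivial screening exercise. A minimal sketch, in Java for concreteness since no language is specified here:)

    // FizzBuzz: print 1..100, substituting "Fizz" for multiples of 3,
    // "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
    class FizzBuzz {
        public static void main(String[] args) {
            for (int i = 1; i <= 100; i++) {
                if (i % 15 == 0)     System.out.println("FizzBuzz");
                else if (i % 3 == 0) System.out.println("Fizz");
                else if (i % 5 == 0) System.out.println("Buzz");
                else                 System.out.println(i);
            }
        }
    }

The joke, of course, is that this is roughly the entire technical bar all those rounds end up verifying.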
I can understand such a process for a fresh graduate, but for an industry veteran with 10+ years of experience, with recommendations from multiple senior managers?
And yet welcome to leetcode grind.
> there are some juniors who use AI to assist... and some who delegate all of their work to it.
Hmmm. Is there any way to distinguish between these two categories? Because I agree, if someone is delegating all their work to an LLM or similar tool, cut out the middleman. Same as if someone just copy/pasted from Stackoverflow 5 years ago.
I think it is also important to think about incentives. What incentive does the newer developer have to understand the LLM output? There's the long term incentive, but is there a short term one?
Dealing with an intern at work who I suspect is doing exactly this, I discussed it with a colleague. One way seems to be to organize a face-to-face meeting where you test their problem-solving skills without AI use; another may be to question them about their thought process as you review a PR.
Unfortunately, the use of LLMs has brought about a lot of mistrust in the workplace. Earlier you’d simply assume that a junior making mistakes is part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice, as they see it as sermonizing when an “easy” process to get “acceptable” results exists.
> Earlier you’d simply assume that a junior making mistakes is part of being a junior and can be coached; whereas nowadays said junior may not be willing to take your advice
Hot take: This reads like an old person looking down upon young people. Can you explain why it isn't? Else, this reads like: "When I was young, we worked hard and listened to our elders. These days, young people ignore our advice." Every time I see inter-generational commentary like this (which is inevitably from personal experience), I am immediately suspicious. I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way. Why would this be any different in the current generation? In my experience, it isn't.

On a positive note: I can remember mentoring some young people and watching them comb through blogs to learn about programming. I am so old that my shelf is/was full of O'Reilly books. By the time I was mentoring them, few people under 25 were reading O'Reilly books. It opened my eyes to the fact that how people learn changes more than what they learn. Example: Someone is trying to learn about access control modifiers for classes/methods in a programming language. Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.
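(To make the unchanging "what" concrete, a minimal sketch; Java is my assumption here, since the example doesn't name a language:)

    // Java's access modifiers: the "what" stays the same whether you
    // learned it from a book, a blog, or a chatbot.
    public class Account {
        private double balance;      // visible only within Account
        protected String owner;      // visible to subclasses and the same package
        String branch;               // package-private (default): same package only
        public double getBalance() { // visible everywhere
            return balance;
        }
    }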
> Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT.
The answer to this (throughout the ages) should be the same: read the authoritative source of information. The official API docs, the official language specification, the man page, the textbook, the published paper, and so on.
Maybe I am showing my age, but one of the more frustrating parts of being a senior mentoring a junior is when they come with a question or problem, and when I ask: “what does the official documentation say?” and get a blank stare. We have moved from consulting the primary source of information to using secondary sources (like O’Reilly, blogs and tutorials), now to tertiary sources like LLMs.
> Old days: Get the O'Reilly book for that programming language. Look up access modifiers in the index. 10 years ago: Google for a blog with an intro to the programming language. There will be a tip about what access modifiers can do. Today: Ask ChatGPT. In my (somewhat contrived) example, the how is changing, but not the what.
The tangent to that is that it also changes how much one internalizes about the problem domain and is able to apply that knowledge later. Hard-fought knowledge from the old days is something that shapes how I design systems today.
However, the tendency of people today to reach for ChatGPT to solve a problem results in them making the same mistakes again the next time, since the information is so easy to access. It also makes larger things more difficult... the "how do you architect this larger system" question is something you learn by building the smaller systems and learning about them, so that their advantages and disadvantages become an inherent part of how you conceive of the system as a whole. Being able to have ChatGPT do it means people often don't think about the larger problem or how it fits together.
I believe it is harder for a junior who is using ChatGPT to advance to being a mid-level or senior developer than it was for a junior from the old days, because of the lack of retention of the knowledge of the problems and solutions.
> I can assure you that when I was young, I did not listen to older people's advice and I tried to do everything my own way.
Hot take: This reads like a person who was difficult to work with.
Senior people have responsibility, therefore in a business situation they have authority. Junior people who think they know it all don't like this. If there's a disagreement between a senior person and a junior person about something, they should, of course, listen to each other respectfully. If that's not happening, then one of them is not being a good employee. But if they are, then the supervisor makes the final call.
There are some definite signs of over-reliance on AI: emojis in comments, updates completely unrelated to the task at hand. And if you ask "why did you make this change?", you'll typically get no answer.
I don't mind if AI is used as a tool, but the output needs to be vetted.
What is wrong with emojis in comments? I see no issue with it. Do I do it myself? No. Would I pushback if a young person added emojis to comments? No. I am looking at "the content, not the colour".
I think GP may be thinking that emojis in PR comments (plus the other red flags they mentioned) are the result of copy/paste from LLM output, which might imply that the person who does mindless copy/pasting is not adding anything and could be replaced by LLM automation.
The point is that heavy emoji use means AI was likely used to produce a changeset, not that emojis are inherently bad.
Exactly. Use LLMs as a tutor, a tool, and make sure you understand the output.
Just like anything, anyone who did the work themself should be able to speak intelligently about the work and the decisions behind its idiosyncrasies.
For software, I can imagine a process where junior developers create a PR and then run through it with another engineer side by side. The short-term incentive would be that they have to be able to do it, or else they'd get exposed.
Is/was copy/pasting from Stackoverflow considered harmful? You have a problem, you do a web search and you find someone who asked the same question on SO, and there's often a solution.
You might be specifically talking about people who copy/paste without understanding, but I think it's still OK-ish to do that, since you can't make an entire [whatever you're coding up] by copy/pasting snippets from SO like you're cutting words out of a magazine for a ransom note. There's still thought involved, so it's more like training wheels that you eventually outgrow as you get more understanding.
> Is/was copy/pasting from Stackoverflow considered harmful?
It at least forces you to tinker with whatever you copied over.
Pair programming! Get hands-on with your junior engineers and their development process. Push them to think through things and not just ask the LLM everything.
I've seen some overly excessive pair programming initiatives out there, but it does baffle me why more people who struggle with this don't do it. Take even just 30 minutes to pair program on a problem and see their process, and it can reveal so much.
But I suppose my question is rhetorical. We're laying off hundreds of thousands of engineers and making the existing ones do the work of 3-4 engineers. Not much time to help the juniors.
Having dealt with a few people who just copy/pasted from Stackoverflow, I really feel that using an LLM is an improvement.
That is, at least for the people who don't understand what they're doing; the LLM tends to come out with something I can at least turn into something useful.
It might be reversed, though, for people who know what they're doing. If they know what they're doing, they might theoretically be able to put together some Stackoverflow results that make sense and build something up from that better than what gets generated by an LLM (I am not asserting this would happen, just thinking it might be the case).
However, I don't know, as I've never known anyone who knew what they were doing who also just copy/pasted from Stackoverflow or delegated to an LLM significantly.
> Is there any way to distinguish between these two categories?
Yes, it should be obvious. At least at the current state of LLMs.
> There's the long term incentive, but is there a short term one?
The short term incentive is keeping their job.
> This is especially annoying when you get back a response in a PR "Yes, you're right. I have pushed the fixes you suggested."
I've learnt that saying this exact phrase does wonders when it comes to advancing your career. I used to argue against stupid ideas but not only did I achieve nothing, but I was also labelled uncooperative and technically incompetent. Then I became a "yes-man" and all problems went away.
I was attempting to mock Claude's "You are absolutely right" style of response when corrected.
I have seen responses to PRs that appear to be a copy and paste of my feedback into an LLM, and a copy and paste of the response and fixes back into the PR.
It may be that the developer is incorporating the mannerisms of Claude into their own speech... that would be something to delve into (that was intentional). However, more often than not in today's world of software development, such responses indicate a copy and paste of LLM-generated content.
This. May you have great success! The PR comments I get are so dumb. I can put the most obvious bugs in my code, but people are focused on the colour of the bike shed. I am happy to repaint the bike shed whatever colour they need it to be!
I get that. I think that getting to know juniors outside of work, at a recurring meetup or event, in a setting where you can suss out their motivation level and teachability level, is _a_ way of going about it. That way, if your team is hiring juniors, you have people you have already vetted at the ready.
IMO teachability/curiosity is ultimately orthogonal to the more base question of money-motivation.
In a previous role I was a principal IC trying to mentor someone who had somehow been promoted up to senior but was still regularly turning in code for review that I wouldn't have expected from an intern. It was an exhausting, mind-numbing process trying to develop some sense of engineering taste in this person, and all of this was before LLMs. This person was definitely not just there for the money; they really looked up to the top-level engineers at our org and aspired to be there, but everything just came across as extremely shallow, like engineering cosplay: every design review or bit of feedback was soundbites from a how-to-code TED talk or something. Lots of regurgitated phrases about writing code to be "maintainable" or "elegant" but no in-the-bones feeling about what any of that actually meant.
Anyway, I think a person like this is probably maximally susceptible to the fawning ego-strokes that an AI companion delivers alongside its suggestions; I think I ultimately fear that combination more than I fear a straight up mercenary for whom it's a clear transaction of money -> code.
I had one fairly-junior teammate at Google (had been promoted once) who was a competent engineer but just refused to make any choices about what to work on. I was his TL and I gave him a choice of 3 different parts of the system to work on, and I was planning to be building the other two. He got his work done adequately, but his lack of interest / curiosity meant that he never really got to know how the rest of the system operated, and got frustrated when he didn't advance further in his career.
Very odd. It was like he only had ever worked on school projects assigned to him, and had no actual interest in exploring the problems we were working on.
In my experience, curiosity is the #1 predictor of the kind of passionate, high-level engineer that I'm most interested in working with. And it's generally not that hard to evaluate this in a free-form interview context where you listen to how a person talks about their past projects, how they learned a new system, or how they advocated for and onboarded a tool at their company.
But it can be tricky to evaluate this in the kind of structured, disciplined way that big-company HR departments like to see, where all interviewees get a consistent set of questions and are "scored" on their responses according to a fixed rubric.
That does not even sound like a problem? Like, when people are that picky about what exact personality the junior must have, to the point that good work is not enough... then there is something wrong with us.
I don't think it's beyond the call of duty to expect someone to acquire context beyond their immediate assignments, especially if they have ambitions to advance. It's kind of a key prerequisite to the kind of bigger-picture thinking that says "hey I noticed my component is duplicating some functionality that's over there, maybe there's an opportunity to harmonize these, etc"
When presenting the three projects, I gave pros and cons about each one, like "you'll get to learn this new piece of technology" or "a lot of people will be happy if we can get this working". Absolutely no reaction, just "I don't care, pick one".
This guy claimed to want to get promoted to Senior, but didn't do anything Senior-shaped. If you're going to own a component of a system, I should be able to ask you intelligent questions about how you might evolve it, and you should be able to tell me why someone cares about it.
I am honestly totally fine with a person like that. Sounds like someone easy to work with. I dunno, not having a preference between working on three parts of the system is not abnormal. Most people choose randomly anyway.
Just pick the two you like the most.
>not having a preference between working on three parts of the system is not abnormal.
I suppose it depends on the team and industry. This would be unheard-of behavior in games, for example. Why are you taking a pay cut and likely working more hours just to say "I don't know, whatever works"? You'd ideally be working towards some sort of goal: management, domain knowledge, or just being able to solve hard problems.
Welp, to each their own I suppose.
Yeah, a lot of software developers I’ve worked with, across the full spectrum of skill levels, didn’t have a strong preference about what code they were writing. If there is a preference, it’s usually for the parts they’ve already worked on, because they’re already ramped up. A strong desire to work on a specific piece of the code (or to not work on one) might even in some cases be a red flag.