English professors double down on requiring printed copies of readings
yaledailynews.com | 147 points by cmsefton 2 days ago
I have mentioned this in a few comments: for my CS classes I have gone from a historical 60-80% projects / 40-20% quizzes grade split to a 50/50 split, and have moved my quizzes from being online to being in-person, pen-on-paper, with one sheet of hand-written notes allowed.
Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:
https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac
And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walkthroughs, etc.
Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.
AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
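To give a concrete flavor of what I mean by the two's complement example (a minimal sketch with illustrative names, not the actual visualization), the walkthrough the animation steps through boils down to something like:

```python
# Minimal sketch (illustrative names, not the actual visualization):
# compute the n-bit two's-complement bit pattern of a signed integer,
# the kind of step an AI-generated animation can walk through.
def twos_complement_bits(value: int, width: int = 8) -> str:
    """Return the width-bit two's-complement representation of value."""
    if not -(1 << (width - 1)) <= value < (1 << (width - 1)):
        raise ValueError(f"{value} does not fit in {width} signed bits")
    # Masking with 2**width - 1 wraps negative values into their
    # two's-complement bit pattern.
    return format(value & ((1 << width) - 1), f"0{width}b")

for n in (5, -5, -1, -128):
    print(f"{n:5d} -> {twos_complement_bits(n)}")
# prints:
#     5 -> 00000101
#    -5 -> 11111011
#    -1 -> 11111111
#  -128 -> 10000000
```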
"It is impossible to design a system so perfect that no one needs to be good."
I had planned to move towards projects counting towards the majority of my CS class grades until ChatGPT was released; now I've stuck with a 50/50 split. This year I said they were free to use AI all they liked (as if I can do anything about it anyway), then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas.
Would like to give them some guidance on how to get AI to help prepare them for their interviews next year, will definitely take a look at your AGENTS.md approach.
What's your student feedback on it been like?
> Then ran interviews with the students about their project work, asking them to explain how it works etc. Took a lot of time with a class of 60 students, but worked pretty well, plus they got some experience developing the important skill of communicating technical ideas.
This is amazing, and I wish professors had done this back when I did CS in the late 1990s.
I would absolutely love to do individual interviews, but I have three classes of 50-80 students each and, at 10 minutes per interview, that would be ~35 hours worth of interviewing, and there just isn't time to do that given the schedules of the students, etc.
The feedback on the in-person quizzes has been pretty good; we just had our first set.
> then ran interviews with the students about their project work, asking them to explain how it works etc
Was there something fundamentally different between those who used "AI" a "lot" and those who didn't?
Did they mention the issue of hallucination and how they addressed it?
> AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations
I feel like this is still underappreciated. Awesome, meaningful diagrams with animations that would take me days to make in even a basic form can now be generated in under an hour with all the styling bells and whistles. It's amazing in practice, because those things can deliver lots of value but still weren't worth the effort before. Now you just tell the LLM to use anime.js and it will do a decent job.
I'm not sure I agree with the example interactions.
If a lecturer prepared slides with, basically, an x86 assembly example to show how to loop, what is so bad about an AI regurgitating that and possibly even annotating it with the inner workings?
You seem like a great professor (/“junior baby mini instructor who no one should respect”, knowing American academic titles…). Though as someone who's been on the other end of the podium a bit more recently, I will point out the maybe-obvious:
> Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good. IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…
For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)
Yes, I expect that pressure will be there, and project grades will be near 100% going forward, whether the student did the work or not.
This is why I'm moving to in-person, written quizzes: to differentiate between the students who know the material and those who are just using AI to get through it.
I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.
> I hated the high-pressure midterms/finals of my undergrad
The pressure was what got me to do the necessary work. Auditing classes never worked for me.
> I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz.
Isn't that what the lectures and homework are for?
The quizzes are still somewhat difficult (and fairly frequent), so you still have to get your stuff done, and more consistently than the cramming encouraged by a big midterm/final.
I do spaced repetition in lectures; my homeworks are typically programming problems and, as I said in the OP, rely on the student committing to doing them w/o AI. So spaced repetition of the most important topics on quizzes seems reasonable. (It's an experiment this semester.)
> When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night…
Millions of students prior to the last few years figured out how to manage conflicting class requirements.
> Millions of students prior to the last few years figured out how to manage conflicting class requirements.
Sure, and they also didn't have an omniscient entity capable of doing all of their work for them in a minute. The point of the GP comment, in my reading, is that the temptation is too great.
Oh? I never did actually. I had to keep cutting down my goals and even took summer classes.
The irony is that the lack of on-time completion is probably the #1 source of project failure in the real world.
> have moved my quizzes from being online to being in-person, pen-on-paper with one sheet of hand-written notes
I guess it depends quite a bit on what the answers to these questions look like, but in college nothing frustrated me more than being asked to write a C program on paper. Even back then IDE autocomplete was something I depended on heavily, and I felt forcing me to memorize arcane syntax was a complete waste of everyone's time. It's not at all representative of work in the real world, nor is memorizing exact syntax, IMHO.
Now, if you are being asked to write pseudocode or just answer questions it's a bit different, but I really hate writing by hand. My handwriting has never been great, but why should I care? I've been typing on a computer since elementary school. Being asked to use paper and pencil in a computer class always rubbed me the wrong way.
I hear the concerns on AI/LLMs/cheating but I can't help but feel like there must be a better solution.
Do you find advocating for AI literacy to be controversial amongst peers?
I find, as a parent, that when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical thinking skills around the tools, while simultaneously having skills that assume no AI. I don’t want the school to be blindly “anti-AI”, as I’m aware it will be a part of the economy our kids are brought into.
There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously idiotically uncritical pro-AI stances too, but I doubt educators risk holding those.)
AI is extremely dangerous for students and needs to be used intentionally, so I don't blame people for just going with "ban it" when it comes to their kids.
Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.
I think there will be a painful adjustment period. I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department), but I am just a lowly instructor.
Honestly, defining what to teach is hard.
People need to learn how to do research with LLMs, how to code with LLMs, and how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations of context, and that models hallucinate and become sycophantic; how they need guardrails and strict feedback mechanisms if let loose; AI safety when connecting to external systems; etc.
You're right that few high school educators would have any sense of all that.
I don't know anyone who learned arithmetic from a calculator.
I do know people who would get egregiously wrong answers from misusing a calculator and insisted it couldn't be wrong.
Yes, but I was also taught to use a calculator, and in particular the advanced graphing calculators.
Not to mention programming is a meta-skill on top of “calculators”.
I certainly practiced a lot on a calculator. Oh, and I was very interested in how this concept, doing math using equipment rather than brains, worked.
The sycophancy is an artifact of how they RLHF-train the popular chat models to appeal to normies, not fundamental to the tool. I can't remember encountering it at all since I started using Codex, and in fact it regularly fills in gaps in my knowledge and corrects areas that I misunderstand. The professional tool has a vastly more professional demeanor. None of the "that's the key insight!" crap.
> AI is extremely dangerous for students and needs to be used intentionally
Can you expound on both points in more detail please, ideally with some examples?
I asked my students in a take-home lab to write tests for a function that computes the Collatz sequence. Half of the class returned AI-generated tests that tested the algorithm with floating-point and negative numbers (for "correct" results, not for input validation). I am not doing anything take-home anymore.
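For contrast, here is roughly what a sensible take-home answer looks like (a sketch; the function and test names are illustrative, not from the actual assignment). The point is that the tests stay on positive integers, the only inputs the sequence is defined for, and treat everything else as an input-validation case:

```python
# Sketch of a reasonable answer (illustrative names, not the real assignment).
# The Collatz sequence is defined only for positive integers, so the tests
# check known sequences and reject invalid inputs instead of "testing" floats
# and negatives for "correct" results.
def collatz_sequence(n: int) -> list[int]:
    if n < 1:
        raise ValueError("Collatz sequence is defined for positive integers only")
    seq = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        seq.append(n)
    return seq

def test_known_sequence():
    assert collatz_sequence(6) == [6, 3, 10, 5, 16, 8, 4, 2, 1]

def test_trivial_case():
    assert collatz_sequence(1) == [1]

def test_rejects_invalid_input():
    # Negative (and zero) inputs should be rejected, not fed to the algorithm.
    try:
        collatz_sequence(-7)
    except ValueError:
        pass
    else:
        assert False, "expected ValueError for negative input"

if __name__ == "__main__":
    test_known_sequence()
    test_trivial_case()
    test_rejects_invalid_input()
    print("all checks passed")
```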
If a student uses AI to simply code-gen without understanding the code (e.g. in my compilers class, if they just generate the recursive-descent parser with Claude, fixing all the tests), then they are robbing themselves of the opportunity to learn how to code.
In the OP I shared an AGENTS.md file I give my students. I think that is using AI in a manner productive for intellectual development.
Not OP, but I would imagine (or hope) that this attitude is far less common amongst peer CS educators. It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. The best-positioned students will be the ones who can operate these tools effectively but with a critical mindset, while also being able to do without AI as needed (which of course makes them better at directing AI when they do engage it).
That said, I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population, and it's much more of a touchy, emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level versus college.
> It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now,
That's true, but you can't use AI in coding effectively if you don't know how to code. The risk is that students will complete an undergraduate CS degree and become very proficient in using AI, but won't know how to write a for loop on their own. Which means they'll be helpless to interpret AI's output or to jump in when the AI produces suboptimal results.
My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder.
> My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder
Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say it's as hard as learning how to do integration by summing.
> but you can't use AI in coding effectively if you don't know how to code
Depends on the LLM. I have a fine-tuned version of Qwen3-Coder where, if you ask it to show you how to compare two strings in C/C++, it will, but then it will also suggest you look at a version that takes Unicode into account.
I have stumbled across very few software engineers who even know what Unicode codepoints are and why legacy ASCII string comparison fails.
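To make that concrete, here is a small illustrative sketch (in Python rather than C/C++) of why naive comparison falls over once Unicode is involved: the same visible text can be encoded as different codepoint sequences, so byte- or codepoint-level equality is not enough.

```python
# Sketch: identical-looking strings that compare unequal until normalized.
import unicodedata

composed = "caf\u00e9"     # 'é' as a single codepoint (U+00E9)
decomposed = "cafe\u0301"  # 'e' followed by a combining acute accent (U+0301)

print(composed == decomposed)                                   # False: different codepoints
print(composed.encode("utf-8") == decomposed.encode("utf-8"))   # False: different bytes

# A Unicode-aware comparison (the kind the model suggests looking at) has to
# normalize both sides to a canonical form first, NFC here.
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))                 # True
```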
> but won't know how to write a for loop on their own. Which means they'll be helpless to interpret AI's output or to jump in when the AI produces suboptimal results
That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
> Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I would say it's as hard as learning how to do integration by summing.
My students don't seem to have a problem using AI: it's quite adequate to the task of completing their homework for them. I therefore don't feel a need to complete my buzzword bingo by promoting an "AI-first classroom." The concern is what they'll do when they find problems more challenging than their homework.
> I have stumbled across very few software engineers who even know what Unicode codepoints are and why legacy ASCII string comparison fails.
You are proving my point. If the programmer doesn't know what Unicode is, then the AI's helpful suggestion is likely to be ignored. You need to know enough to be able to make sense of the AI beyond a superficial measure.
> That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
We still teach that stuff. Being an engineer requires understanding the whole machine. I'm not talking about mid-level marketroids who are excited that Claude can turn their Excel sheets into PowerPoints. I'm talking about actual engineers who take responsibility for their code. For every helpful suggestion that AI makes, it botches something else. When the AI gives up, where do you turn?
> It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia.
No, it's not.
Nothing around AI past the next few months to a year is clear right now.
It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.
None of that is certain, of course! That's the whole point: we don't know what's coming.
I get a slow-but-usable ~10 tk/s on a Kimi 2.5 2b-ish quant on a high-end gaming / low-end workstation desktop (RTX 4090, 256 GB RAM, Ryzen 7950). Right now the price of RAM is silly, but when I built it, it was similar in price to a high-end MacBook - which is to say it isn’t cheap, but it’s available to just about everybody in Western countries. The quality is of course worse than what the bleeding-edge labs offer, especially since heavy quants are particularly bad for coding, but it is good enough for many tasks: an intelligent duck that helps with planning, generating bog-standard boilerplate, Google-less interactive search/Stack Overflow (“I ran flamegraph and X is an issue, what are my options here?” etc).
My point is, I can get somewhat-useful ai model running at slow-but-usable speed on a random desktop I had lying around since 2024. Barring nuclear war there’s just no way that AI won’t be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you’d still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out.
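For anyone curious, the local setup is roughly this kind of thing (a sketch using llama-cpp-python; the model file name and offload numbers are placeholders, not my exact config):

```python
# Rough sketch of local inference with partial GPU offload via llama-cpp-python.
# A model this large mostly lives in system RAM, with some layers on the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="kimi-2.5-q2_k.gguf",  # placeholder: some heavily quantized GGUF file
    n_ctx=8192,                       # context window
    n_gpu_layers=20,                  # offload whatever fits in 24 GB of VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "I ran flamegraph and X is an issue, what are my options here?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```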
Then you're missing my point.
Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that.
But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.)
And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc.
So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring.
I've never understood anyone's push to throw AI slop coding around everywhere. Do they think, in the back of their heads, that this means coding jobs are going to come back on-shore? Because AI is going to make up for the savings? No, what it means is tech bro CEOs are going to replace you even more and replace at least a portion of the off-shore folks that they're paying.
The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire.
Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective?
There's a lot of variability here, and literally nobody has any idea how any of it is going to go.
It is clear that AI has already transformed how we do our jobs in CS.
The genie is out of the bottle; it's never going back.
It's a fantasy to think it will "dry up" and go away.
Some other guarantees over the next few years we can make based on history: AI will get better, faster, and more efficient, like everything else in CS.
That is not remotely clear. Every shred of evidence I've seen is that AI at best is a net zero on productivity. More often it drains productivity (if people are checking up on it as they should) or makes the software shitty (if they don't). I genuinely don't understand how people are willing to take this broken-ass tool and go "oh yeah this has transformed the industry".
Yes, the genie is out of the bottle, but it could go right back in when it starts costing more, a whole lot more. I'm sure there's an amount of money for a monthly subscription at which you'd either scale back your use or consider other alternatives. LLMs as a technology are indeed out of the bottle and here to stay, but the current business around them is not quite clear.
I've pondered that point, using my monthly car payment and usage as a barometer. I currently spend 5% on AI compared to my car, and I get far more value out of AI.
> Yes, the genie is out of the bottle, but it could go right back in when it starts costing more, a whole lot more.
Local models are already good enough to handle some meaningful programming work, and they run very well on an expensive-but-not-unattainable PC. You could cheat your way through an undergrad CS curriculum with Qwen 80b, certainly, including most liberal-arts requirements.
The genie is not going back in the bottle no matter what happens, short of a nuclear war. There is no point even treating the possibility hypothetically.
Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc.
Plenty of tech becomes exploitative (or more exploitative).
I don't know if you noticed, but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.
Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.
You're conflating products with technology, plus doing some cherry-picking of personal perspectives.
I personally think GSuite is much better today than it was a decade ago, but that is separate.
The underlying hardware has improved, as have the network, the security, and the provenance.
Specific to LLMs:
1. We have seen rapid improvements, and there are a ton more you can see in the research that will impact the next round of the model train/release cycle. Both algorithms and hardware are improving.
2. Open-weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what the frontier is doing today. This has huge democratization potential.
I'd rather see AI as an opportunity to break the oligarchy and the corporate hold over the people. I'm working hard to make that a reality (I'm also working on atproto).
Every time I hear "democratization" from a techbro I keep thinking that the end state is technofeudalism.
We can't fix social problems with technological solutions.
Every scalable solution takes us closer to Extremistan, which is inherently anti democratic.
Read The Black Swan by Taleb.
Jumping from someone using a word to assigning a pejorative label to them is by definition a form of bigotry.
Democratization, the way I'm using it without all the bias, is simply most people having access to build with a tool or a technology. Would you also argue that everyone having access to the printing press is a bad thing? The internet? The right to repair? The right to compute?
Why should we consider AI access differently?