Developing our position on AI
recurse.com
77 points by jakelazaroff 2 days ago
> RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them.
This raises an important point about a LOT of tools, which many people don't talk about: namely, with a tool as powerful as AI, there will always be a minority of people with a healthy and thoughtful attitude towards its use, but a majority who use it improperly, because its power is too seductive and human beings on average are lazy.
Therefore, even if you "strive to be more rigorous", you WILL be in a minority helping to drive a technology that is just too powerful to have any positive impact on the majority. The majority will suffer, because they need an environment where they are forced not to cheat in order to learn and gain basic competence, which I'd argue is far more crucial to a society than the top few having a lot of competence.
The individualist will say that this is an inevitable price of freedom, but in practice, I think it's misguided. Universities, for example, NEED to monitor the exam room, because otherwise cheating would be rampant, even if there is a decent minority of students who would NOT cheat, simply because they want to maximize their learning.
With tools as powerful as AI, we need to think beyond our individualistic tendencies. The disciplined will often tout their balanced philosophy as justification for using such tools, as this Recurse post does here, but what they forget is that promoting such a philosophy lends more legitimacy to the use of AI, which the world at large is not capable of handling.
In a fragile world, we must take responsibility beyond ourselves, and not promote dangerous tools even if a minority can use them properly.
This is why I am 100% against AI – no compromise.
Wait, you're literally advocating for handicapping everyone because some people can't handle the tools as well as others.
"The disciplined minority can use AI well, but the lazy majority can't, so nobody gets to use it" I feel like I read this somewhere. Maybe a short story?
Should we ban calculators because some students become dependent on them? Ban the internet because people use it to watch cat videos instead of learning?
You've dressed up "hold everyone back to protect the incompetent" as social responsibility.
I never actually thought I would find someone who read Harrison Bergeron and said "you know what? let's do that!" But the Internet truly is a vast and terrifying place.
A rather shallow reply, because I never implied that there should be enforced equality. For some reason, I constantly get these sorts of "false dichotomy" replies here, where the dichotomy is wildly exaggerated. Maybe it's due to the computer scientist's constant use of binary, who knows.
Regardless, I only advocate for restricting technologies that are too dangerous, much in the same way that atomic weapons are highly restricted but people can still own knives and even use guns in some circumstances.
I have nothing against the most intelligent using their intelligence wisely and doing more than the less intelligent, provided that wise use is even possible. In the case of AI, I submit that it is not.
Why are you putting down a well-reasoned reply as shallow? Isn't that... shallow? Is it because you don't want people to disagree with you or point out flaws in your arguments? Because you seem to take an absolutist black/white approach and disregard any sense of nuance.
I do want people to argue or point out flaws. But presenting a false dichotomy is not a well-reasoned reply.
Who decides what technologies are too dangerous? You, apparently.
AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?
You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."
If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.
> Who decides what technologies are too dangerous? You, apparently.
I see takes like this from time to time about everything.
They didn't say that.
As with all similar cases, they're allowed to argue that something is dangerous, and you're allowed to say it isn't; the people who decide are all of us collectively, and when we're at our best we do so on the basis of the actual arguments.
> AI isn't nukes - anyone can train a model at home.
(1) They were using an extreme to illustrate the point.
(2) Anyone can make a lot of things at home. I know two distinct ways to make a chemical weapon using only things I can find in a normal kitchen. That people can do a thing at home doesn't mean the thing can't be prohibited.
> Who decides what technologies are too dangerous? You, apparently.
Again, a rather knee-jerk reply. I am opening up the discussion, and putting out my opinion. I never said I should be God and arbiter, but I do think people in general should have a discussion about it, and general discussion starts with opinion.
> AI isn't nukes - anyone can train a model at home. There's no centralized thing to restrict. So what's your actual ask? That nobody ever trains a model? That we collectively pretend transformers don't exist?
It should be something to consider. We could stop it by spreading a social taboo around it, denigrating its use, etc. It's possible. Many non-techies already hate AI, and mob force is not out of the question.
> You're dressing up bog-standard tech panic as social responsibility. Same reaction to every new technology: "This tool might be misused so nobody should have it."
I don't have that reaction to every new technology personally. But I think we should ask the question of every new technology, and especially ones that are already disrupting the labor market.
> If you can't see the connection between that and Harrison Bergeron's "some people excel so we must handicap everyone," then you've missed Vonnegut's entire point. You're not protecting the weak - you're enforcing mediocrity and calling it virtue.
What people call excellent and mediocre these days is often just the capacity to be economically over-ruthless, rather than to contribute any good to society. Even if we eradicated AI, we would still have a wealth of ways for people to excel, so there would be no real limitation on intelligent individuals being excellent. Your argument really doesn't hold.
Edit: my goal isn't to protect the weak. I'd rather have everyone protected, including the very intelligent who still want to have a place to use their intelligence on their own and not be forced to use AI to keep up.
Second reply to your expanded comment: I think some technologies are just versions of the prisoner's dilemma, where no one is really better off with the technology. One must decide on a case-by-case basis, similar to how the Amish decide what is best for their society case by case.
Again, even your expanded reply reeks of false dichotomy. I never said ban every possible technology, only ones that are sufficiently dangerous.
(Author here.)
This was a really fascinating project to work on because of the breadth of experiences and perspectives people have on LLMs, even when those people all otherwise have a lot in common (in this case, experienced programmers, all Recurse Center alums, all professional programmers in some capacity, almost all in the US, etc). I can't think of another area in programming where opinions differ this much.
Thank you Nick.
As a Recurse alum (s14 batch 2) I loved reading this. I loved my time at Recurse and learned a lot. This highlight from the post really resonates:
“Real growth happens at the boundary of what you can do and what you can almost do. Used well, LLMs can help you more quickly find or even expand your edge, but they risk creating a gap between the edge of what you can produce and what you can understand.
RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them.”
The e-bike analogy in the article is a good one. Paraphrasing: use it if you want to cover distance with low effort. But if your goal is fitness, then the e-bike is not the way to go.
It is a good one. I'm going to keep it in my pocket for future discussions about AI in education, as I might have some say in how a local college builds policy around AI use. My attitude has always been that it should be proscribed in any situation in which the course is teaching what the AI is doing (freshman writing courses, intro to programming courses, etc.) and that it should be used as little as possible in later courses where it isn't as clearly "cheating". My rationale is that, for both writing and coding, one of the most useful aspects of a four-year degree is that you gain a lot from constantly exercising these rudimentary skills.
I feel like John Holt, the author of Unschooling, who is quoted numerous times in the article, would not be too keen on seeing his name in a post that legitimizes a technology that uses inevitabilism to insert itself into all domains of life.
--
"Technology Review," the magazine of MIT, ran a short article in January called "Housebreaking the Software" by Robert Cowen, science editor of the "Christian Science Monitor," in which he very sensibly said: "The general-purpose home computer for the average user has not yet arrived.
Neither the software nor the information services accessible via telephone are yet good enough to justify such a purchase unless there is a specialized need. Thus, if you have the cash for a home computer but no clear need for one yet, you would be better advised to put it in liquid investment for two or three more years." But in the next paragraph he says "Those who would stand aside from this revolution will, by this decade's end, find themselves as much of an anachronism as those who yearn for the good old one-horse shay." This is mostly just hot air.
What does it mean to be an anachronism? Am I one because I don't own a car or a TV? Is something bad supposed to happen to me because of that? What about the horse and buggy Amish? They are, as a group, the most successful farmers in the country, everywhere buying up farms that up-to-date high-tech farmers have had to sell because they couldn't pay the interest on the money they had to borrow to buy the fancy equipment.
Perhaps what Mr. Cowen is trying to say is that if I don't learn how to run the computers of 1982, I won't be able later, even if I want to, to learn to run the computers of 1990. Nonsense! Knowing how to run a 1982 computer will have little or nothing to do with knowing how to run a 1990 computer. And what about the children now being born and yet to be born? When they get old enough, they will, if they feel like it, learn to run the computers of the 1990s.
Well, if they can, then if I want to, I can. From being mostly meaningless, or, where meaningful, mostly wrong, these very typical words by Mr. Cowen are in method and intent exactly like all those ads that tell us that if we don't buy this deodorant or detergent or gadget or whatever, everyone else, even our friends, will despise, mock, and shun us: the advertising industry's attack on the fragile self-esteem of millions of people. This using of people's fear to sell them things is destructive and morally disgusting.
The fact that the computer industry and its salesmen and prophets have taken this approach is the best reason in the world for being very skeptical of anything they say. Clever they may be, but they are mostly not to be trusted. What they want above all is not to make a better world, but to join the big list of computer millionaires.
A computer is, after all, not a revolution or a way of life but a tool, like a pen or wrench or typewriter or car. A good reason for buying and using a tool is that with it we can do something that we want or need to do better than we used to do it. A bad reason for buying a tool is just to have it, in which case it becomes, not a tool, but a toy.
"On Computers," Growing Without Schooling #29, September 1982,
by John Holt.
> author of Unschooling
You say this like it should give him more credibility. He created a homeschooling methodology that scores well below structured homeschooling in academic evaluations. And that's generously assuming it's being practiced in earnest, rather than the way I've seen people actually do it (effectively just child neglect with a high-minded justification).
I have absolutely no doubt that a quack like John Holt would love AI as a virtual babysitter for children.
I don't agree with your characterization of my post, but I do appreciate your sharing this piece (and the fun flashback to old, oversized issues of GWS). Such a tragedy that Holt died shortly after he wrote that; I would have loved to hear what he thought of the last few decades of computing.