The Writers Came at Night
metropolitanreview.org
29 points by ctoth 7 hours ago
I am not sure what the general point of this is. For a good chunk of their conversation it seems to show why AI will fail in the arts: the AI is incapable of understanding the writers' frustration with it, as the conversation itself demonstrates; it misses the humanity of the situation and can only state it, and state it as a weird sort of concession. But the ending seems to undercut that by making the whole thing out as futile and the writers pretentious and/or the AI cruel, which leaves the whole rather thin. The final prompt to the AI had a great chance for a bit of recursive metafictional fun, but it does not seem to be used; it could be a hint at a subtle bit of indirect metafiction, but I don't think it was.
It spoke to me as someone who's not jazzed about LLMs but also not convinced by the "it's violating our precious copyright!" arguments against them.
I think there's something in there with the character hierarchy of screenwriter vs novelist vs poet; it seems like the screenwriter in the story writes to make a living, the novelist does it for prestige, and the poet does it largely for the love of the game. The screenwriter is on board with AI until he realizes it'll hurt him more than it'll help him--ironic since he had been excited about being able to use different actors' likenesses!--and the whole time he's looking down on the poet like "Oh, god, if all this takes off I'm going to be as poor and pathetic as that guy." (Which raises interesting questions about the poet's stake in all of this: he doesn't actually have much to lose here, considering how little money or recognition he gets in the first place, but he's helping the other two guys anyway.) The novelist is railing against the AI, but he's also initially disappointed to find out that his work wasn't important enough to use in its training data... and then later gets a kind of twisted thrill when it does actually quote his own work back at him. I dunno. I think it's a messy story in the same way that the conversation about AI and the arts is itself messy, which I like. And I always appreciate a story that leaves me with questions to mull over instead of trying to dump a bunch of platitudes in my lap :P
What I meant by not being sure about the point was not that the platitudes he was trying to convey were unclear, just that I was not sure what he was trying to say, which includes what questions he was trying to raise. The story gives the reader something to think about primarily through the messiness you noticed, rather than by raising questions and ideas that work off of each other; the ending simply undercuts any nuance in the AI failing to get their frustration instead of building on it or changing our perspective on it.
For example, if it had ended a few sentences earlier and used that potential bit of metafiction, it would suggest that the story we just read was, or at least could be, the story the AI wrote for the novelist: the AI does now understand their frustration but represented itself as not understanding it. That gives us a great deal to think about and builds a second perspective into the entire piece, the perspective of the AI. But as written, that only works with the conversation part of the story, and those last few lines make it not work at all.
Edit: I think you could make the case that the metafiction is utilized just as I outlined above. It kind of works with the general pretentious ass that ChatGPT is in the story; things like the mace and the writers' general lack of preparedness kind of work with those last few lines in that context. But that raises other issues and likely has some rather ugly/messy ramifications for the whole, I think. I will probably reread it when I get home, but on a quick check of a few things I strongly suspect my initial view is the more accurate one and I am just having fun with analysis at this point.
That's fair! I guess I didn't feel the same frustration with the last few lines, because they did raise further questions, at least for me. The AI in the story is so bitter and cruel that it makes me wonder whether it does possess the capacity for human experience/emotion that the writers claim it doesn't have, and therefore might actually have a shot at replacing them. Without that final zinger I don't know that I would've felt the same way. (And I did think it was a funny jab at the novelist's own elitism, especially since it adds another dimension of pitting him against other humans in addition to pitting him against the AI.)
Like, I don't think it's an amazing ending, but it did leave me on a contemplative note in a way that a "the AI wrote this all along" ending wouldn't have, at least for me personally. Although I would've still preferred that to an "and then they did, in fact, behead Sam Altman" ending :P
And I definitely respect having fun with analysis, lol. If nothing else I think the story was successful on that front... I don't think the successfully-beheading-Sam-Altman ending would've sparked this kind of discussion!
The hierarchy of writers you mentioned earlier really suggests archetypes of writers; how do you feel about that last jab in that context? What about in the context of the AI being a pretentious ass? These are some of the things I had issues with, things I felt contributed to the general messiness of the story, the sense that the author never considered the piece as a whole. If I dig into it and look closely, instead of looking at the whole and noticing things like the unrealized metafiction, I would end up hating it; it would make it impossible for me to see the author as anything but a pretentious ass, or pandering. That's at least in the context of this one short story; generally I do not judge writers on short stories unless they are exclusively writers of short stories, since there is a pragmatism required of the novelist writing a short story.
Beheading Altman could have been made to work quite well if the story had used the metafiction.
I understand the hand-wringing in the humanities, but even if LLMs can output decent stories, they can't really ever get past pulp fiction levels.
Question: why do people write and buy new books? There are millennia of amazing works, but people prefer to read new things. Some gems persist, but even my kids read entirely different YA novels than I did as a kid.
Art is communication. A big part of why we like new art is that it reflects the culture and mores of the now. LLMs can only give you the most likely next-token prediction, which is antithetical to what we want when we interact with art.
Not to say it can't spit out reasonable plots for shows; I just think it will continue to serve best as an aid to human writers, even more so than in coding. Maybe artists will turn into LLM wranglers the way software engineers are starting to, but a human in the loop is going to remain valuable.
Telling stories is as old as humanity; no machine will ever make storytelling obsolete. It will change it, for sure, but change is as constant as the waxing and waning of the moon.
This was a really entertaining read. Do any of you have similar contemporary stories to share?
I think the story makes a good point, but I'm not sure it's even the primary point the story was trying to make.
> “Writing a book is supposed to be hard,” he said.
> “Is it, though?” said the AI. The novelist wasn’t sure, but he thought he detected a touch of exasperation in the machine’s voice.
> “Perseverance is half the art,” he said. He hadn’t had much natural talent and had always known it, but he had staying power.
It's this right here. I don't think any LLM-based AI is going to be able to replace raw human creativity any time soon, but I do think it can dramatically reduce the effort it takes to express your creativity. And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept its corollary: people who have been left behind despite unique insights and intelligence because of a lack of work ethic will be propelled forward.
I think a lot of the Luddite-esque response to AI is actually a response to this realization happening at a subconscious level. From the gifted classes in middle school until I was done with schooling, I remember there always being two types of students: those who didn't work very hard but succeeded on their talents, and those who were otherwise unexceptional beyond their organizational skills and work ethic. Both groups thought they were superior to the other, of course, and the latter group has gone on to have more external success in their lives (at least among the student peers I maintain contact with decades later). To wit, the smart lazy people are high-ranking individual contributors, but the milquetoast hard workers are all management whom the smart lazy people who report to them bitch about. The inversion of that power dynamic in creative and STEM professions... it's not even worth describing the implications, they're so obvious.
Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI; how does my prompt for an epic space opera vary meaningfully from yours? In other words, everything is reduced to an individual's unique perspective of things (and how they encode it into their communication to the AI) because the AI has normalized everything else away (access to vocabulary, access to media, time to create, everything). In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased.
We can hand-wring about whether that reality can ever exist, whether it exists now, whatever, but the truth is that's how AI is being sold, and I think that's the reality people are reacting to.
> And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept its corollary: people who have been left behind despite unique insights and intelligence because of a lack of work ethic will be propelled forward.
I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.
The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.
> I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.
That's the truth right now, but it's merely a limitation of the technology, particularly if you imagine arbitrarily wide context windows such that the LLM can usefully begin to infer your specific preferences and implications over time.
> The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.
Those are the people I mention at the end, those that clear the bar into being uniquely special. From what I hear from my friends that have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.
No previous force multipliers have lifted the "lazy but smart" over the "smart and NOT lazy". That's not how lazy works, or how taste/expectations work. The "smart and NOT lazy" will evolve their preferences, perspectives, and point of view over time much faster than the "smart and lazy" will so even if they have these agents doing all their work for them, the people motivated to introspect much more on that work will be the ones driving the trends and leading the edge of creative production.
It's like conventions in art: you could make Casablanca much more easily today than in 1942. But if you made it today it would be seen as lazy and cliche and simplistic, because it's already been copied by so many other people. If you make something today, it needs to take into account that everyone has already seen Casablanca + nearly 85 additional years of movies and build on top of that to do something interesting that will surprise the viewer (or at least meet their modern expectations). "The best created human works" changes over time; in your proposed world, it will change even faster, and so you'll have to pay even more attention to keep up.
So if you're content to let your AI buddy cruise along making shit for you while you just put in 1 hour a day of direction, and someone else with about equal natural spark is hacking on it for 10 hours a day—watching what everyone else is making, paying much more active attention to trends, digging in and researching obscure emerging stuff—then that second person is going to leave you in the dust.
> Those are the people I mention at the end, those that clear the bar into being uniquely special. From what I hear from my friends that have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.
Again, it's a false dichotomy. What you described was just "super super smart", not what I suggested as "smart + hard worker": "In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased." That's not hard work + smart, that's "generationally smart genius." And that set is much smaller than the set I'm talking about. It's very easy to coast on "gifted but lazy" to perpetually be a big fish in a small pond school-wise. But there are ponds out there full of people who do both. Twenty or thirty years ago this was the difference between a 1540 SAT score, As/Bs in high school, and going to a very good school, versus a 1540 SAT score, As in high school with a shitload of AP courses, and significant positions in extracurricular activities, and going to MIT. I don't know what it looks like for kids today - parents have cargo-culted all the extracurriculars so that it now reflects their drive more than the kids' - but those kids who left the pack behind to go to the elite institutions were grinders AND gifted.
Of course talent + effort is better than either alone, but it seems strange to argue that there will be zero effect on the value of having just one of them. AI may not raise the talented lazy person straightforwardly above the hard-working grinder, but it seems likely that it will alter their relative position, in favor of talent.
What does it even mean to say "having just one of them"? I think the false dichotomy just torpedoes the ability to predict the effect of new tools at all. There's already a world of difference between the janitor who couldn't learn how to read but does his best to show up and clean the place as well as he can every day, and the middle-manager engineer with population-median math or engineering abilities but a 12-hour-day work ethic that has let him climb the ladder a bit. And the effect of the AI tools we're considering here is going to be MUCH larger on one than the other - it's gonna be worse here for the smarter one, until the AIs are shoveling crap around with human-level dexterity. (Who knows, maybe that's next.)
Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture" by not flunking out of high school or college and not quitting their office job to bum around... and so at that point there's not that same black-and-white "it'll help this group but hurt that group" shortcut classification.
EDIT: here's a scenario where it'll already be harder to be lazy as a software engineer, not even in the "super AI" future: in the recent past, if you were quicker than your coworkers and lazy, you could fuck around for 3 hours, then knock something out in 1 hour and look just as productive as, or more productive than, many of your coworkers. If everyone knows - even your boss - that it actually should only take 45 minutes of prompting and then reviewing code from the model, and can trivially check that in the background themselves if they get suspicious, then you might be in trouble.
> Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works.
This requires the machine to understand a whole bunch of things. You're talking about AGI; at that point there will be blood in the streets, and screenplays will be the least of our problems.
> Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI
It's an insightful point, but I think there's more going on. It seems that quite a lot of the people consuming media and art do actually care how much it's the product of a human mind vs. generated by a machine. They want connection with the artist. Maybe it's a bit like organic produce. If you give me a juicy white peach, I probably can't tell whether it's an organic one, lovingly raised and harvested by a farmer with a generations-in-the-family orchard, or one that's been fertilized, pesticide-sprayed, and genetically engineered by a billion-dollar corporation. But there's a very good chance I care about the difference. I'm increasingly getting the impression that a big swathe of consumers prefer human-made art, probably a bigger share than the percentage that insist on organic produce. There will be a market for human-created works because that's something consumers want. Yes, some authors will cheat. Some will get away with it. It'll start to look a lot like how we think of plagiarism.
Maybe the strength of that preference varies in different parts of the industry. Maybe consumers of porn or erotica or formulaic romance or guilty pleasure pop songs don't care as much about it being human-produced. Probably no one cares about the human authenticity of the author of a technical manual. But I suspect the voters at the Oscars and Grammys and Pulitzers will always care. The closer we are to calling something "art", the more it seems we care about the authenticity and intention of the person behind it.
The other thing I think is missing from the debate is the shift from mass-market works to personalized ones. Why would I buy someone else's ChatGPT-generated novel for twenty bucks when I could spend a few cents to have it generate one to my exact preferences? I'd point to the market for romance novels as one where you can already see the seeds of this. It's already common for them to be tagged by trope: "why choose", "enemies to lovers", "forced proximity", etc. Readers use those tags to find books that scratch their very specific itch. It's not a big jump from there to telling the AI to write you a book that even more closely matches your preferences. It might look even less like a traditional "book" and more like a companion or roleplay world that's created by the AI as you interact with it. You can see seeds of that out there too, in things like SillyTavern and AI companion apps.