An image of an archeologist adventurer who wears a hat and uses a bullwhip
theaiunderwriter.substack.com
1337 points by participant3 a day ago
Not sure if anyone is interested in this story, but I remember at the height of the PokemonGo craze I noticed there were no shirts for the different factions in the game. I can't remember what they were called, but it was something like Teamred. I set up an online shop just to sell a red shirt with the word on it. The next day my whole shop was taken offline for potential copyright infringement.
What I found surprising is I didn't even have one sale. Somehow someone had notified Nintendo AND my shop had been taken down, for selling merch that didn't even exist on the market yet. If I remember correctly, it didn't even have any imagery on it or anything trademarkable - even if it was clearly meant for PokemonGo fans.
I'm not bitter, I just found it interesting how quick and ruthless they were. Like bros, I didn't even get a chance to make a sale. (And yes, I don't think I infringed anything.)
I asked Sora to turn a random image of my friend and myself into Italian plumbers. Nothing more, just the two words "Italian plumbers". The created picture was not shown to me because it was in violation of OpenAI's content policy. I then asked it just to turn the guys in the picture into plumbers, but I asked this in Italian. Without me asking for it, Sora put me in overalls and gave each of us a baseball cap. When I asked Sora to put mustaches on us, one of us received a red shirt as well, without being asked for. Starting from the same picture, when I asked it to put one letter on each baseball cap - guess what, the letters chosen were M and L. These extra guardrails are not really useful given such a strong, built-in bias towards copyright infringement in these image creation tools. Does it mean that, with time, Dutch pictures will have to include tulips, and Italian plumbers will have to wear uniforms with baseball caps lettered L and M, etc., just not to confuse AI tools?
You (and the article, etc.) show what a lot of the "work" in AI is going into at the moment - creating guardrails against generating something that might get them in trouble, and/or customizing weights and prompts behind the scenes to generate stuff that isn't the obvious. I'm reminded of when Google's image generator came out and this customization bit them in the ass when it generated a black pope or Asian Vikings. AI tools don't do what you wish they did, they do what you tell them and what they are taught, and if 99% of their training set associates Mario with prompts for Italian plumbers, that's what you'll get.
A possible (and probably already existing) business is setting up truly balanced training sets, that is, thousands of unique images that match the idea of an Italian plumber, with maybe 1% of Mario. But that won't be nearly as big a training set as the whole internet, nor will it be cheap to build compared to just scraping the internet.
I remember all the hullaballoo about Asian Vikings and the like. It was so preposterous that Vikings would ever be Asian that it must be ultra-woke DEI mind-worms being forced onto AI! But of course, as far as the AI's concerned, it is even more preposterous that an Italian plumber would not be wearing red or green overalls with a mustache and a lettered baseball cap. I don't see any way you can get the AI to recognize that Vikings "should" be white people and not also think that Italian plumbers "should" look like that. Are they allowed to recombine their training data or must they strictly adhere to only what they've seen?
Of course the irony is that if the people who get offended whenever they see images of non-white people asked for a picture of "Vikings being attacked by Godzilla" , they'd get worked up if any of the Vikings in the picture were Asian (how unrealistic!). It's a made-up universe! The image contains a damn (Asian) Kaiju in it, and everyone is supposed to be pissed because the Vikings are unrealistic!?
That's what you get when you expect AIs to be like humans and be able to reason. We would be pissed if a human artist did that, so we are pissed when AIs do it.
A human, even one whose only experience of an Italian plumber is Mario, will be able to draw an Italian plumber who is not Mario. That's because he knows that Mario is just a video game character and doesn't even do much plumbing. He knows, however, what an actual non-Italian plumber looks like, and that a guy doing plumbing work in Italy is more likely to look like a regular Italian guy equipped like a non-Italian plumber than like a video game character.
And if asked to draw a Viking, he knows that Vikings are people originating from Scandinavia, so they can't be Asian by definition, even in an Asian context. A human artist can adjust things to the unrealistic setting, but unless presented with a really good reason, will not change the core traits of what makes a Viking a Viking.
But it requires reasoning. Which current image generating AIs don't have.
> We would be pissed if a human artist did that
No, I would not be pissed if a human artist drew an Asian Viking. Do you get pissed when a human artist draws a white Jesus? Why are we justifying internet outrage over an Asian Viking when people have been drawing this middle-eastern Jew as white for centuries?
> A human artist can adjust things to the unrealistic setting, but unless presented with a really good reason, will not change the core traits of what makes a Viking a Viking.
If you asked Matt Stone and Trey Parker to draw a Viking, are you sure it would contain the "core traits of what makes a Viking a Viking?" What if you asked Picasso to draw a Viking? The Vikings in The Simpsons would be yellow, and nobody would complain. Would you be offended if you asked Hokusai to draw a Viking and it came out looking Asian? Vikings didn't even have those stupid horned helmets that everyone draws them with! Is their dumb, historically inaccurate horned helmet a core part of what makes a Viking a Viking? What the hell are we even talking about? It's crystal clear that all of these "historical accuracy" drums are only ever beaten when some white person is offended that non-white people exist. Otherwise, nobody gives a shit about historical accuracy. There's a fucking Kaiju in the image!
Like any artist, Gemini had a particular style. That style happened to be a multi-cultural one, and what we learned is that a multi-culture style is absolutely enraging to people unless it results in more Whiteness.
Consider elves instead of Vikings. People would also be offended if an AI drew elves as black people with pointy ears. There's no "a human artist should know that elves have to be white" bullshit defense there. There's no historical accuracy bullshit. There's only racism.
The AIs were not "naturally" generating images of Asian Vikings. It was established to my satisfaction, even if the companies never admitted it (I don't recall that happening, but I may have missed it), that the prompt was actually being rather ham-handedly edited on the way to the image generator, for the clear purpose of "correcting" the opinions and attitudes of those issuing the prompts through social engineering.
Unsurprisingly, people don't like being so nakedly herded in their opinions. When the "nudges" become "shoves" people object.
My point is that there is no prompt engineering that could keep Vikings white without also keeping Italian plumbers looking like Mario. Unless you singled out Mario, but there are too many examples to do that with. The AI does not put Mario in a different category than a Viking. You have to try to get the AI to avoid using exact literal imagery, to make sure it's mixing things up a bit, varying facial features and clothing styles when it shows people ... you know, being "diverse". How are we supposed to get an Italian plumber in anything other than red overalls without getting a Viking wearing a sari?
The Gemini prompt was something like "make sure any images of people show a diverse range of humans", or something. Yes, it was totally ham-handed, but that's not what people were pissed about. It's also ham-handed that we can't generate a nipple, or a swear word, or violence. Why does "make sure images do not contain excessive violence" not piss people off? The Vikings were fucking brutal. It would be very historically accurate to show them raping women and cutting people's limbs off. Are we all supposed to be pissed that AI does not generate that image? It's just as ham-handed as "make sure humans are diverse". No, it was not the ham-handedness that enraged people. It was not the historical inaccuracy. It was the word "diverse".
I'm assuming the downvoters are the ones who get offended at the sight of an Asian Viking, so let me ask you this:
In a work of fiction -- which you're automatically asking for when you ask an AI to generate an image -- in a work of fiction, would you be offended if you saw a white Ninja? A white Samurai? A white Middle-Eastern Jew born in Roman times? Would there have been internet outrage over pictures of white Samurai? We all know the answer: no, of course not. So why is an Asian Viking offensive when a white Samurai is not? Why are we supposed to get angry about an Asian Viking, but a white Jesus is just A-OK? What could the difference possibly be? Anyone?
OpenAI will eventually have competition for GPT 4o image generation.
They'll eventually have open source competition too. And then none of this will matter.
OmniGen is a good start, just woefully undertrained.
The VAR paper is open, from ByteDance, and supposedly the architecture this is based on.
Black Forest Labs isn't going to sit on their laurels. Their entire product offering just became worthless and lost traction. They're going to have to answer this.
I'd put $50 on ByteDance releasing an open-source version of this within three months.
Well the teams in Pokemon Go aren't quite as generic as Teamred: they are Team Instinct, Team Mystic, and Team Valor. Presumably Nintendo has trademarks on those phrases, and I’m sure all the big print on demand houses have an API for rights-holders to preemptively submit their trademarks for takedowns.
Nintendo is also famously protective of their IP: to give another anecdote, I just bought one of the emulator handhelds on AliExpress that are all the rage these days, and while they don't advertise it they usually come preloaded with a buttload of ROMs. Mine did, including a number of Nintendo properties — but nary an Italian plumber to be found. The Nintendo fear runs deep.
Many years ago I tried to order a t-shirt with the PostScript tiger on the front from Spreadshirt.
It was removed on copyright claims before I could order one item myself. After some back and forth they restored it for a day and let me buy one item for personal use.
My point is: Doesn't have to be Sony, doesn't have to be a snitch - overzealous anticipatory obedience by the shop might have been enough.
>After some back and forth they restored it for a day and let me buy one item for personal use.
I used Spreadshirt to print a panel from the Tintin comic on a T-shirt, and I had no problem ordering it (it shows Captain Haddock moving through the jungle, swatting away the mosquitoes harassing him, giving himself a big slap on the face, and saying, 'Take that, you filthy beasts!').
I bought Tintin T-shirts 40 years ago in Thailand (the "branded" choices were amazing). They were actually really good, still got them!
Twenty years ago, I worked for Google AdWords as a customer service rep. This was still relatively early days, and all ads still had some level of manual human review.
The big advertisers had all furnished us a list of their trademarks and acceptable domains. Any advertiser trying to use one that wasn’t on the allow-list had their ad removed at review time.
I suspect this could be what happened to you. If the platform you were using has any kind of review process for new shops, you may have run afoul of pre-registered keywords.
> Somehow someone had notified Nintendo
Is this correct? I would guess Nintendo has some automation/subscription to a service that handles this. I doubt it was some third party snitching.
How was your shop taken down?
Usually there are lawyers letters involved first?
Print-on-demand shops definitely have terms of service allowing them to take down whatever they want. You're playing by their rules, and your $2 of revenue per t-shirt and very few overall sales are not worth the potentially millions in legal fees to fight for you.
Sure, from the suing party who sent a DMCA takedown request to your webhost, who forward it to you and give you 24 hours before they take it down. Nobody wants to actually go to court over this stuff because of how expensive it is.
> my whole shop was taken offline
I think the problem there was being dependent on someone who is a complete pushover, doesn't bother to check for false positives and can kill your business with a single thought.
Yes that was the whole point of my post.
For further info it was Redbubble.
>Redbubble is a significant player in the online print-on-demand marketplace. In fiscal year 2023, it reported having 5 million customers who purchased 4.8 million different designs from 650,000 artists. The platform attracts substantial web traffic, with approximately 30.42 million visits in February 2025.
I don't condone or endorse breaking any laws.
That said, trademark laws like life of the author + 95 years are absolutely absurd. The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property. The reasoning being that if you don't allow people to exclude 3rd party copying, then the primary party will assumedly not receive compensation for their creation and they'll never create.
Even in the case where the above is assumed true, the length of time that a protection should be afforded should be no more than the length of time necessary to ensure that creators create.
There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
For that matter, this argument extends to other criminal penalties, but that's a whole other subject.
> The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property.
That was the original purpose. It has since been coopted by people and corporations whose incentives are to make as much money as possible by monopolizing valuable intangible "property" for as long as they can.
And the chief strategic move these people have made is to convince the average person that ideas are in fact property. That the first person to think something and write it down rightfully "owns" that thought, and that others who express it or share it are not merely infringing copyright, they are "stealing."
This plan has largely worked, and now the average person speaks and thinks in these terms, and feels it in their bones.
>the average person speaks and thinks in these terms,
(Trademarks aside) Even more surprising to me is how everyone seems concerned about the studios making enough money?! As if they should make any money at all. As if it is up to us to create a profitable game for them.
If they all go bankrupt today I won't lose any sleep over it.
People also try to make a living selling bananas and apples. Should we create an elaborate scheme for them to make sure they survive? Their product is actually important to have. Why can't they own the exclusive right to sell bananas similarly? If anyone can just sell apples it would hurt their profit.
It was long ago, but that is how things used to work. We still have taxi medallions in some places, and all kinds of legalized monopolies like them.
Perhaps there is some sector where it makes sense but I can't think of it.
If you want to make a movie you can just run a crowdfunder like Roberts Space Industries did.
> Even more surprising to me is how everyone seems concerned about the studios making enough money?! As if they should make any money at all. As if it is up to us to create a profitable game for them.
Do you want more games (movies, books...)? Then you want studios to make money on that type of game, because if they make money they have an incentive to make more. Now, if you are happy with the number and quality of free games from the few hardcore people who will make them even if they earn nothing, then you don't care. However, games generally take a lot of effort to create, and by paying people to make them we ensure that the people who want to create actually have the time - as opposed to wanting to, but having to spend hours farming a field for their food instead.
Now, it is true that games often look alike and many are not worth making. However, if you want more of them, you need to ensure they make money so the investment is worthwhile.
We can debate how much they should make and how long copyright should last. However, you want them to make money so they make more.
Games:
> "On platforms like Steam, indie games constitute the vast majority of new titles. For instance, in 2021, approximately 98% of the 11,700 games released on Steam were from indie developers. This trend has continued, with indie games accounting for 99% of releases on gaming platforms between 2018 and 2023."
Written content:
> "Every year, traditional publishers release around half a million to a million new books in the U.S., but that number is dwarfed by the scale of independent writing online: WordPress users alone publish over 70 million blog posts per month, Amazon sees over 1.7 million self-published books annually, and platforms like Medium, Substack, and countless personal websites generate millions more articles and essays. While the average quality of traditional publishing remains high due to strict editorial standards, consumer behavior has shifted dramatically—people now spend far more time reading informal, self-published content online, from niche newsletters to Reddit posts, often favoring relevance, speed, and authenticity over polish. This shift has made the internet the dominant source of written content by volume and a major player in shaping public discourse."
Video content:
> "Today, the overwhelming majority of video content is produced not by Hollywood or television studios, but by individuals on the internet. YouTube alone sees over 500 hours of video uploaded every minute—more than 260 million hours per year—vastly outpacing the combined annual output of all major film studios and TV networks, which together produce only a fraction of that volume. Despite questions about quality, consumer habits have shifted dramatically: people now watch over 1 billion hours of YouTube content per day, and platforms like TikTok, Instagram, and Twitch are growing rapidly, especially among younger audiences. While Hollywood still commands attention with high-budget blockbusters and prestige series, user-generated content dominates the daily media diet in both time spent and engagement."
You know what dominates though: the big budget games/books/videos. Indie is sometimes really good, but a lot of it is horrible.
That's because the big budget creators are very good at business, which has four parts[1]: not just the product, but also the revenue model, the market, and distribution.
Big budget studios are AMAZING at distribution. They blow indie devs out of the water, who focus almost all their effort on just product.
Do big budget studios often make great games? Yes! But they often produce total garbage, too, just like indie devs. I think the biggest difference between them is distribution.
[1] https://www.indiehackers.com/post/how-to-brainstorm-great-bu...
We were close to your viewpoint being the popular one, but sadly many (most?) independent content creators are so overtaken by fear of AI that they've done a 180. The same people who learned by tracing references to sell fanart of a copyrighted franchise (not complaining, I spend thousands on such things) accuse AI of stealing when it glances at their own work. We're entering a new golden age of creative opportunity and they respond by switching sides to the philosophy of intellectual property championed by Disney and Oracle (except for those companies' ironic use of AI themselves..).
We would prefer a world where we can use the skills we have spent a lifetime honing without having to compete with some asshole taking everything we’ve shared and stuffing it into a machine that spits out soulless clones of our work without any acknowledgment of our existence.
> we were close
Maybe. In my microcosm even before big AI, 100% of my tech acquaintances were against IP laws, 0% of my art acquaintances were, and authors I know had varied opinions based on their other backgrounds.
Artists do seem to have had a mindset shift. Previously they supported IP protection because it was "right" (or they'd at least concede that in practice it's not helping them personally), but with the AI boom most of them are pro-IP laws because of more visceral livelihood fears.
It's been a US-led project for the benefit of American corporations.
If I were running the trade emergency room in any European state right now, I'd have "stop enforcing US copyright" up there next to "reciprocal tariffs".
Unfortunately we have a bunch of copyright-friendly groups in EU, so this would only work in the "stop enforcing US copyright in retaliation" sense, but not likely in the "stop enforcing copyright because on the net, it's a scam" sense.
Worked for china
In the context of when they want to borrow others' stuff. But then Chinese companies are _more_ than happy to take advantage of Western laws to defend their own IP. It's hypocrisy.
Your comment inspires me to write an essay titled "What's wrong with hypocrisy?", because it seems like no one really cares about it anymore. It's as if the concept itself has lost meaning. Hypocrisy is a big, abstract word that has the audacity to refer to other big abstract words like "character" and "virtue" and "fairness". Now many people accused of hypocrisy say "so what?". What's going on there? It has the feel of a situation where someone says your software has memory leaks, and you say "so?", not knowing what that even means. "Hypocrisy" and "memory leaks" share the notion of characterizing a set of flaws that can and will show themselves in many disparate ways. Powerful signals to a specialist, and noise to a generalist. And not just noise, but a signal against the critic as an elitist snob who uses words and concepts no one understands.
The worst part about this trend toward hypocrisy acceptance is that nobody cares when you point it out. This empowers the hypocrite to answer with "So what?" because they know they will face absolutely no consequences. In politics, business, and even personal life, most people have everything to gain and very little to lose*. And our current hyper-individualistic society has only exacerbated the issue. "Who cares if the people around me don't trust me? I'll just get what I need from some faceless computer system."
* You actually have a lot to lose, but it's not tangible or very directly measurable, and the effects compound over a long time, so the results are not easy to see.
Trademark isn't copyright, those are two different things. Trademarks can be renewed roughly every 10 years [1] until the end of time and are about protecting a brand. Now copyright law lasts for "author plus 70 years. For anonymous works, pseudonymous works, or works made for hire, the copyright term is 95 years from the year of first publication or 120 years from creation, whichever comes first." [2]
Is copyright too long? Yes. Is it only that long to protect large media companies? Yes. But I would argue that AI companies are pushing the limits of fair use if not violating it outright - and fair use is an affirmative defense, by the way, meaning that AI companies have to go to court to argue that what they are doing is okay. They don't just get to wave their hands and say everything is fine because "what we're doing is fair use and we get to scrape the world's entire creative output for our own profit."
[1] https://www.uspto.gov/learning-and-resources/trademark-faqs#...
[2] https://www.copyright.gov/history/copyright-exhibit/lifecycl...
Trademark isn't the same as Registered Trademark either, while we're at it
> There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years.
I’m sure you’re right for individual authors who are driven by a creative spark, but for, say, movies made by large studios, the length of copyright is directly tied to the value of the movie as an asset.
If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
The value of the asset is in turn directly linked to how much the studio is willing to pay for that asset. They will invest more money in a film they can milk for 120 years than one that goes public domain after 20.
Would studios be willing to invest $200m+ in movie projects if their revenue was curtailed by a shorter copyright term? I don’t know. Probably yes, if we were talking about 120->70. But 120->20? Maybe not.
A dramatic shortening of copyright terms is something of a referendum on whether we want big-budget IP to exist.
In a world of 20 year copyright, we would probably still have the LOTR books, but we probably wouldn’t have the LOTR movies.
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
Not so, because of net present value.
The return from investing in normal stocks is ~10%/year, which is to say ~670% over 20 years, because of compounding interest. Another way of saying this is that $1 in 20 years is worth ~$0.15 today. A dollar in 30 years is worth ~$0.05 today. A dollar in 40 years is worth ~$0.02 today. As a result, if a thing generates the same number of dollars every year, the net present value of the first 20 years is significantly more than the net present value of all the years from 20-120 combined, because money now or soon from now is worth so much more than money a long time from now. And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
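A minimal sketch of that discounting argument (assuming, as above, a flat $1/year revenue stream and a 10% annual discount rate - both illustrative numbers, not data about any real film):

```python
# Net present value of a flat $1/year revenue stream,
# discounted at 10% per year.
def npv(start_year, end_year, rate=0.10):
    """Sum the discounted value of $1 received at the end of each year."""
    return sum(1 / (1 + rate) ** t for t in range(start_year, end_year + 1))

first_20 = npv(1, 20)     # years 1-20 of copyright
next_100 = npv(21, 120)   # years 21-120 of copyright

print(f"NPV of years 1-20:   ${first_20:.2f}")   # ~$8.51
print(f"NPV of years 21-120: ${next_100:.2f}")   # ~$1.49
```

So under these assumptions the first 20 years are worth roughly five to six times as much, today, as the following hundred years combined - and that's before accounting for revenue declining over time.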
The reason corporations lobby for copyright term extensions isn't that they care one bit about extended terms for new works. It's because they don't want the works from decades ago to enter the public domain now, and they're lobbying to make the terms longer retroactively. But all of those works were already created and the original terms were sufficient incentive to cause them to be.
Your analysis misses the incredibly important caveat that revenue rises with inflation – or sometimes even faster.
50 years ago, a movie ticket was 50 cents in revenue. Today, it's $25. That's a 50x increase… a dollar in 50 years might be worth $0.02 today, but a movie ticket in 50 years is worth about a movie ticket today.
> And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
For the crown jewel IP that the studios are most interested in protecting, the opposite of this assumption is true. Star Wars, for example, is making more money than ever. Streaming revenues will probably invalidate that assumption for an even wider pool of back catalog properties.
If Star Wars were in the public domain now it would be making even more money. Money that would go into the general economy and not just into a single studio.
Also copyright duration when Star Wars was created was a maximum of 56 years, and obviously George Lucas felt that was sufficient incentive to create it!
I wonder if there is value in splitting copyright into two parts and keeping a longer duration on the copies of the original work, but shortening the duration on the concepts in that work. That is, allow an author / studio to retain a long duration ownership of the original movie or story, so no one else can just start distributing copies of their VHS tapes after a few years. But at the same time, after 10 or 20 years other people can start making new Star Wars universe movies and books without licensing it from the original artist/author. If the original was good enough, then the rights to be the sole distributor of that original material should be plenty worthwhile, and in the mean time, just in time for the generational “nostalgia bump” a whole new set of related properties can come out, reinvigorating interest in the original.
Maybe even some sort of gradual opening of the IP, where after say 10 years, broad categories are opened (think things like “the Jedi” or “the Empire” or “Endor”), but specific characters and their representations aren’t (so no Darth Vader or Luke Skywalkers), then after 20 years you open the characters themselves but only derivative works. And then finally after 30 years or so you open the originals as well for things like translations or “de-specialized” editions or what have you. Then finally 50 years puts the raw originals in the public domain as well.
IIRC, of works that bring in any money to their creators, the vast majority of revenue is returned, for almost all works, in the first handful of years after creation. Sure, the big names you know retain value longer, but those are a minuscule fraction of works.
Make copyright last for a fixed term of 25 years with optional 10-year renewals up to 95 years on an escalating fee schedule (say, $100k for the first decade and doubling every subsequent decade) and people—and studios—would have essentially the same incentive to create as they do now, and most works would get into the public domain far sooner.
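That escalating schedule compounds quickly. A minimal sketch (the 25-year base term, $100k starting fee, doubling per renewal, and 95-year cap are all the hypothetical parameters proposed above, not any real fee schedule):

```python
# Hypothetical renewal schedule: free 25-year base term, then 10-year
# renewals costing $100k for the first and doubling for each one after,
# up to a 95-year maximum (seven renewals total).
def renewal_cost(total_years):
    """Cumulative fees to keep a work under copyright for `total_years`."""
    if total_years <= 25:
        return 0
    renewals = -(-(total_years - 25) // 10)   # ceiling division
    fee, total = 100_000, 0
    for _ in range(min(renewals, 7)):          # 25 + 7*10 = 95-year cap
        total += fee
        fee *= 2
    return total

print(renewal_cost(25))   # 0
print(renewal_cost(35))   # 100000
print(renewal_cost(95))   # 12700000
```

Holding a work for the full 95 years would cost about $12.7M in cumulative fees - trivial for a Star Wars, prohibitive for the long tail, which is exactly the point.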
There would probably be fewer entirely lost works as well, if you had firmer deposit requirements for works with extended copyrights (using the revenue from the extensions to fund preservation), with other works entering the public domain soon enough that they were less likely to be lost before that happened.
> I’m sure you’re right for individual authors who are driven by a creative spark, but for, say, movies made by large studios, the length of copyright is directly tied to the value of the movie as an asset.
That would be fine, if the studios didn't want to have it both ways. They want to retain full copyright control over their "asset", but they also use Hollywood Accounting [1] to both avoid paying taxes and cheat contributors that have profit-sharing agreements.
If studios declare that they made a loss on producing and releasing something to get a tax break, the copyright term for that work should be reduced to 10 years tops.
For movies in particular the tail is very thin. Only very few 50 year old movies are ever watched. Was any commercial movie ever financed without a view to making a profit in the box office/initial release?
According to Matt Damon (in one of many interviews) a lot of movies were produced with the second revenue stream (vhs/dvd) being part of the calculations, that is why we now get a lot less variety and alternative movies made, that second revenue stream doesn’t exist any more (I assume streaming pays very little in comparison).
True, but how much of that second stream exists after, say, 5 years? For most movies it is zero - they aren't pressing the DVD anymore, and the stores that once carried it no longer do (except second-hand stores, which don't give money to the studio).
With books it's even worse. Movies might get a trickle of revenue from TV licensing, but once a book is out of print (which usually happens very quickly, and most never go into print again), that's it. No more revenue from that book; it continues to circulate in libraries and used bookstores, but the author and publisher get nothing from that.
The Fellowship of the Ring, the first of Peter Jackson's LOTR movies released in 2001, made $887 million in its original theatrical run (on a $93 million budget). It would absolutely still have been made if copyright was only 20 years. And now it would be in the public domain!
The success that we can now measure through hindsight wasn’t assured at the time of greenlighting the film. They took a huge gamble:
https://variety.com/2021/film/news/lord-of-the-rings-peter-j...
It would have been an even bigger gamble if they weren’t able to bank on any long term revenue (I’m certain Netflix continues to pay for the rights to stream the trilogy after 2021).
This argument works against you. The probability of a long tail of revenue is even less likely than a major hit, so it necessarily has less weight in any decision to swing for the fences.
Producers don't invest in movies for hypothetical revenues in 20 years time. If it doesn't pay off soon after release, it's written off as a loss. Revenues in 100 years time are completely irrelevant.
Actually I think long tail revenue is quite well correlated with a property being a hit. Netflix paid $500m for the rights to Seinfeld 20 years after the show ended. Star Wars is still huge, nearly 50 years after the release of the original. Disney in general has ruthlessly mined its back catalog; they just printed another $700m from a Lion King prequel, whose value lay largely in the good will still hanging over from the original, which they still own, and which is still absolutely a valuable asset despite being 30 years old. Back catalogs are huge deals. Amazon paid $8bn for MGM to boost its Prime Video content library. Streaming has opened up long tail revenue opportunities beyond the box office that never existed before.
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years
Movies, for instance, make most of their revenue in the two weeks following their theatrical release. Beyond that, the people who wanted to see it have already seen it and the others don't care.
I'd argue it's similar for other art forms, even music. The gain at the very end of the copyright lifetime is extremely marginal and doesn't influence spending decisions, which are mostly measured on a return horizon of at most 10 years.
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years
Due to the fairly high cost of capital right now, pretty much anything more than 5 years away is irrelevant - 10 years max, even for insanely high returns on investment.
This line of reasoning doesn't make sense for retroactive lengthening of copyright though, as the author is not gaining anything from that anymore.
Seems to me most of that inflated budget is needed for the entertainment role of films, not the art in them, which a low budget can often stimulate rather than inhibit. In which case nothing of importance would be lost by a drastic shortening of copyright terms.
OK? So we wouldn't have $100m movies, the vast majority of which are forgotten about in a few months. I don't think a $100m movie is ten times better than a $10m one, so I think I'd be fine with movies with much smaller budgets, if they meant that LotR (the books) are now in the public domain for everyone to enjoy.
If movies had a payoff curve like rents, this would be more true, but they're cultural artifacts that decay in relevance precipitously after release, and more permanently after a few decades, where they become "dated", outside a few classics.
For what it's worth, this is a uniquely American view of copyright:
> The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property.
In Europe, particularly France, copyright arose for a very different reason: to protect an author's moral rights as the creator of the work. It was seen as immoral to allow someone's work -- their intellectual offspring -- to be meddled with by others without their permission. Your work represents you and your reputation, and for others to redistribute it is an insult to your dignity.
That is why copyrights in Europe started with much longer durations than they did in the United States, and the US has gradually caught up. It is not entirely a Disney effect, but a fundamental difference in the purpose of copyright.
That's an interesting perspective, and yes wholly foreign to my very American economics influenced background.
Are the origins the same when looking at other intellectual property like patents?
How did they deal with quoting and/or critiquing other's ideas? Did they allow limited quotation? What about parody and satire?
While I think the laws are broken, I also get why companies fight so hard to defend their IP: it is valuable, and they've built empires around it. But at some point, we have to ask: are we preserving culture or just hoarding it?
Missing from this is why the laws fight so hard too: the opposite of what we have (in the West) is blatant and rampant piracy. That other extreme is really bad - creators of every type pirated by organized crime. There was no video game or movie market in eastern Europe, for example; you can't compete against large-scale piracy.
Which is to say, preservation without awareness of the threat will look like hoarding. A secondary question is to what extent is that threat real? Without seeing what true rampant piracy looks like, I think it would be easy to be ignorant of the threat.
You're conflating trademark with copyright.
Regardless, it's not just copyright laws that are at issue here. This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.
So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him? I can imagine any court asking "how is this not simply laundering someone's likeness through a third party which claims to not have an image / filter / app / artist reproducing my client's likeness?"
All seemingly complicated scams come down to a very basic, obvious, even primitive grift. Someone somewhere in a regulatory capacity is either fooled or paid into accepting that no crime was committed. It's just that simple. This, however, is so glaring that even a child could understand the illegality of it. I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley. I think there are legal grounds here to force all of these models to be taken offline.
Additionally, "guardrails" that prevent 1:1 copies of film stills from being reprinted are clearly not only insufficient, they are evidence that the pirates in this case seek to obscure the nature of their piracy. They are the evidence that generative AI is not much more than a copyright laundering scheme, and the obsession with these guardrails is evidence of conspiracy, not some kind of public good.
> So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him?
No, you can't! But it shouldn't be the tool that prohibits this. You are not allowed to use existing images of Harrison Ford for your commercial and you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?
Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?
The justification so far seems to have been loosely based on the idea that derivative artworks are protected as free expression. That argument loses currency if these are not considered derivative but more like highly compressed images in a novel, obfuscated compression format. Layers and layers of neurons holding a copy of Harrison Ford's face is novel, but it's hard to see why it's any different legally than running a JPEG of it through some filters and encoding it in base64. You can't just decode it and use it without attribution.
Your argument is valid but it's mostly irrelevant from a copyright perspective.
If ChatGPT generates an image of Indiana Jones and distributes it to an end user that is precisely one violation of copyright. A violation that no one but ChatGPT and that end user will know about. From a legal perspective, it's the equivalent of taking a screenshot of an Indiana Jones DVD and sending it to a friend.
ChatGPT can hold within its memory every copyrighted thing that exists and that would not violate anyone's copyright. What does violate someone's copyright is when an exact replica or easily-identifiable derivative work is actually distributed to people.
Realistically, OpenAI shouldn't be worried about someone generating an image of Indiana Jones using their tools. It's the end user that ultimately needs to be held responsible for how that image gets used after-the-fact.
It is perfectly legitimate to capture or generate images of Indiana Jones for your own personal use. For example, if you wanted to generate a parody you would need those copyrighted images to do so (the copyright needs to exist before you can parody it).
If I were Nintendo, Disney, etc I wouldn't be bothered by ChatGPT generating things resembling of my IP. At worst someone will use them commercially and they can be sued for that. More likely, such generated images will only enhance their IP by keeping it active in the minds of people everywhere.
> Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?
Formulated this way, I see your point. I see the LLM as a tool, just like photoshop. From a legal standpoint, I even think you're right. But from a moral standpoint, my feeling is that it should even be okay for an artist to sell painted pictures of Harrison Ford. But not to sell the same image as posters on ebay. And now my argument falls apart. Thanks for leading my thoughts in this direction...
You raise a really amazing point! One that should get more attention in these discussions on HN! I'm a painter in my spare time. I think it is okay to sit down and paint a picture of Harrison Ford (on velvet, maybe), and sell it on Etsy or something if you want to. Before you accuse me of hypocrisy, let me stipulate: Either way, it would not be ok for someone to buy that painting and use it in an ad campaign that insinuated that their soap had been endorsed by Harrison Ford. As an art director, it has obviously never been okay to ask someone to paint Harrison Ford and use that picture in a soap ad. I go through all kinds of hoops and do tons of checking on my artists' work to make sure that it doesn't violate anyone else's IP, let alone anyone's human likeness.
But that's all known. My argument for why me selling that painting is okay, and why an AI company with a neural network doing the same thing and selling it would not be okay, is a lot more subtle and goes to a question that I think has not been addressed properly: What's the difference between my neurons seeing a picture of Harrison Ford, and painting it, and artificial neurons owned by a company doing the same thing? What if I traced a photo of Ford and painted it, versus doing his face from memory?
(As a side note, my friend in art school had an obsession with Jewel, the singer. He painted her dozens of times from memory. He was not an AI, just a really sweet guy).
To answer why I think it's ok to paint Jewel or Ford, and sell your painting, I kind of have to fall back on three ideas:
(1) Interpretation: You are not selling a picture of them, you're selling your personal take on your experience of them. My experience of watching Indiana Jones movies as a kid and then making a painting is not the same thing as holding a compressed JPEG file in my head, to the degree that my own cognitive experience has significantly changed my perceptions in ways that will come out in the final artwork, enough to allow for whatever I paint to be based on some kind of personal evolution. The item for sale is not a picture of Harrison Ford, it's my feelings about Harrison Ford.
(2) Human-centrism: That my neurons are not 1:1 copies of everything I've witnessed. Human brains aren't simply compression algorithms the way LLMs or diffusers are. AI doesn't bring cognitive experience to its replication of art, and if it seems to do so, we have to ask whether that isn't just a simulacrum of multiple styles it stole from other places laid over the art it's being asked to produce. There's an anti-human argument to be made that we do the exact same thing when we paint Indiana Jones after being exposed to Picasso. But here's a thought: we are not a model. Or rather, each of us is a model. Buying my picture of Indiana Jones is a lot like buying my model and a lot less like buying a platonic picture of Harrison Ford.
(3) Tools, as you brought up. The more primitive the tools used, the more difficult we can prove it to be to truly copy something. It takes a year to make 4 seconds of animation, it takes an AI no time at all to copy it... one can prove by some function of work times effort that an artwork is, at least, a product of one's own if not completely original.
I'm throwing these things out here as a bit of a challenge to the HN community, because I think these are attributes that have been under-discussed in terms of the difference between AI-generated artwork and human art (and possibly a starting point for a human-centric way of understanding the difference).
I'm really glad you made me think about this and raised the point!
[edit] Upon re-reading, I think points 1 and 2 are mostly congruent. Thanks for your patience.
I like your formulation but I find point 1 unconvincing. Does it still hold if you paint from a reference image beside the easel? Or projected into the canvas? Or if it’s not a “real” painter but a low-wage laborer? Two of them side by side? A hundred of them?
Where I’m going is I don’t think it makes sense for the moral / legal acceptability of a in image to depend on the mechanical means which created it. I think we have to judge based on the image itself. If the human-generated version and AI-generated version both show the same level of interpretation when viewed, I don’t think point 1 supports treating them differently.
And, as you say, point 2 is mostly congruent, but I have to point out that LLMs are not merely compressed versions of the training material, but instead generalized learnings based on the training data.
ML “neurons” may function differently than our own, and transformer architecture is likely different from the way we think, but the learning of generalized patterns plus details sufficient to reconstitute specific instances seems pretty similar.
Think about painting Indiana Jones; I'll bet you could paint the handle of the whip in great detail. But it's unlikely that's because you remember a specific image of his whip handle; it's because you know what whip handles look like in general. ML models work similarly (at some level of abstraction).
I'm left unconvinced that there is anything substantially different about human- and AI-generated art, and convinced that we can only judge the IP position of either based on the work itself.
Thank you too, this discussion really helped me in getting a more nuanced view on this whole topic. I still think OpenAI should be allowed to generate these kind of images, but just from of a selfish "I want to use this to generate labels for my (uncommercial) home brew beers"-perspective. I surely better understand the counterpoints now.
I think it's an amazing tool as a starting point or a way to get ideas. Our small ad agency's policy has always been to research the hell out of something... like, if you're asked to do a logo for someone running for state Senate, go read the history of the state senate since 1846, and look up all the things everyone used, and start brainstorming art ideas that have multiple layers of meaning that work with your candidate's message. But AI makes it super easy to get a nice looking starting point and then use your ideas to iterate on top of that.
I'm a bit of a home moonshiner, too, so love that you're coming up with labels and using these tools to help out! If I could offer one piece of advice, whether for writing prompts or making your own final art, it would be: History is so rich with visual ideas you can riff from. The history of beer and wine bottles itself is unbelievable. If aliens came here a thousand years after we're gone, and all that was left were liquor labels, they could understand most of our culture. The LLMs always go to the most obvious thing, unless you tell them specifically otherwise. Use the tool but also get funky and mix up the ideas you love the most, adding your own flavor. Just like being a brewer or a chef. That's the essence of being an artist, and making something that at the end of the day is unique and new. Love it. Send me a beer please.
It's reasonably well established that large neural networks don't contain copies of the training data, therefore their outputs can't be considered copies of anything. The model might contain a conceptual representation of Harrison Ford's face, but that's very different to a verbatim representation of a particular copyrighted image of Harrison Ford. Model weights aren't copyrightable; it's plausible that model outputs aren't copyrightable, but there are some fairly complicated arguments around authorship. Training an AI model on copyrighted work is highly likely to be fair use under US law, but plausibly isn't fair dealing under British law or a permitted use under Article 5 of the EU Copyright and Information Society Directive.
All of that is entirely separate from trademark law, which would prevent you from using any representation of a trademarked character unless e.g. you can reasonably argue that you are engaged in parody.
Because I can pay a painter to paint me a picture of Harrison Ford, I just can't then use that to sell things.
> you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?
If the AI prompt was "produce a picture of Mickey Mouse", I'd agree with you.
The creators of AI claim their product produces computer-generated images, i.e. generated/created by the computer. Instead it's producing a picture of a real actual person.
If I contract an artist to produce a picture of a person from their imagination, i.e. not a real person, and they produce a picture of Harrison Ford, then yeah I'd say that's on the artist.
Mickey Mouse is a bad example because he's now in the public domain, including color images.
> This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.
The thing is though, there is also a human requesting that. The prompt was chosen specifically to get that result on purpose.
The corporate systems are trying to prevent this, but if you use any of the local models, you don't even have to be coy. Ask it for "photo of Harrison Ford as Indiana Jones" and what do you expect? That's what it's supposed to do. It does what you tell it to do. If you turn your steering wheel to the left, the car goes to the left. It's just a machine. The driver is the one choosing where to go.
No, I think that's unfair. I, as a user, could very reasonably want a parody or knock-off of Indiana Jones. I could want the Spelunky protagonist. It's hard to argue that certain prompts the author put into this could be read any other way. But why does Nintendo get a monopoly on plumbers with red hats?
The way AI is coded and trained pushes it constantly towards a bland-predictable mean, but it doesn't HAVE to be that way.
Human appearance does not have enough dimensions to make likeness a viable thing to protect; I don't see how you could do that without say banning Elvis impersonators.
That said:
> I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley.
If you're framing the sides like that, it's pretty clear which I'm on. :)
Interesting you should bring that up:
https://www.calcalistech.com/ctechnews/article/1517ldjmv
Loads of lawsuits have been filed by celebrities and their estates over the unauthorized use of their likeness. And in fact, in 2022, Las Vegas banned Elvis impersonators from performing weddings after a threat from the Presley estate's licensing company:
https://www.dailymail.co.uk/news/article-10872855/Elvis-imag...
But there are also a couple key differences between putting on a costume and acting like Elvis, and using a picture of Elvis to sell soap.
One is that a personal artistic performance could be construed as pastiche or parody. But even more importantly, if there's a financial incentive involved in doing that performance, the financial incentive has to be aligned more with the parody than with drawing an association to the original. In other words, dressing up as Elvis as a joke or an act, or even to sing a song and get paid to perform a wedding is one thing if it's a prank, it's another thing if it's a profession, and yet another thing if it's a mass-marketing advertisement that intends for people to seriously believe that Elvis endorsed this soap.
I can remember two ad campaigns with an Elvis impersonator, and they used multiple people in both of them. I think we can safely assume that if you represent multiple people as a specific public figure, a reasonable person must conclude that none of them are in fact that person.
Now of course that leaves out concerns over how much of advertisement is making money off of unreasonable people, which is a concern Congress occasionally pays attention to.
> This, however, is so glaring that even a child could understand the illegality of it
If you have to explain "laundering someone's likeness" to them maybe not, I think it's a frankly bizarre phrase.
You are missing a bunch of edge cases, and the law is all about edge cases.
An artist who works professionally has family members, family members who are dependent on them.
If they pass young - becoming popular just before they pass - their extremely popular works are now public domain, and their family sees nothing from work that is absolutely being commercialized (publishing and creation generally spawn two separate copyrights).
GP's not missing those edge cases; GP recognizes those edge cases are themselves a product of IP laws.
Those laws are effectively attempting to make information behave as physical objects, by giving them simulated "mass" through a rent-seeking structure. The case you describe is where this simulated physical substrate stops behaving like physical substrate, and choice was made to paper over that with extra rules, so that family can inherit and profit from IP of a dead creator, much like they would inherit physical products of a dead craftsman and profit from selling them.
It's a valid question whether or not this is taking things too far, just for the sake of making information conform to rules of markets for physical goods.
For writing many people only become popular after they are dead.
I once heard this explained as: the art in some writing is articulating how people feel in a situation that is still too new for many to want to pay to have it illustrated to them. But once the newness has passed, and people understand or want to understand, then they enjoy reading about it.
As a personal example, I could enjoy movies about unrequited love before and long after I experienced it firsthand, but not during or for years after. People may not yet have settled feelings about an event until afterward, and not be willing to “pick at the scab”.
The other, more statistical explanation is that it just takes a lot of attempts to capture an idea or feeling and a longer window of time represents more opportunities to hit upon a winning formula. So it’s easier to capture a time and place afterward than during.
If copyright law is reduced to say, 20 years from the date of creation (PLENTY of time for the author to make money), then it's irrelevant if he dies young or lives until 100.
You seem to talk about fairness. Copyright law isn't supposed to be fair, it's supposed to benefit society. On one side you have the interest of the public to make use of already created work. On the other side is the financial incentive to create such work in the first place.
So the question to ask is whether the artist would have created the work and published it, even knowing that it isn't an insurance to their family in case of their early death.
I don’t know how much time you’ve spent on task scheduling or public strategy, but minmaxing of The Public Good versus the private benefit of art is, in fact, a question of fairness. It’s a compromise to give both parties as much of what they want or need as possible.
I'm not sure IP should be used as life insurance; there are already many public and private tools for that.
Also, it seems you assume inheritance is a good thing. Most people think the same on a personal level, but when we observe the effect on a society, the outcome is concentration of wealth in a minority and barriers to wealth changing hands - i.e., barriers to the "American dream".
You’re thinking of copyright law, not trademark law. Which serves a different function. If you’re going to critique something it’s useful to get your facts right.
I'd go further and say 10 years from time of creation is probably sufficient.
If the work is popular it will make plenty of money in that time. If it isn't popular, it probably won't make much more money after that.
>I don't condone or endorse breaking any laws.
Really? Because there are a lot of very stupid laws out there that should absolutely be broken, regularly, flagrantly, by as many people as possible for the sake of making enforcement completely null and pointless. Why write in neutered corporate-speak while commenting on a casual comment thread while also (correctly) pointing out the absurdity of certain laws.
> There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.
I think life of creator + some reasonable approximation of family members life expectancy would make sense. Content creators do create to ensure their family's security in some cases, I would guess.
this reminds me of the time I tried to use a prepaid lawyer to research some copyright issues
they went down the rabbit hole on trademark law, which is not only unrelated to copyright, it's handled by an entirely different federal agency, the Patent and Trademark Office
gave me a giggle and the last time I used cheapo prepaid lawyers
Trademarks are very different from copyrights. In general, they never expire as long as the fees are paid in the region, and products bearing the mark are still manufactured. Note, legal firms will usually advise people that one can't camp on Trademarks like other areas of IP law.
For example, Aspirin is known to almost every consumer as an adult dose of acetylsalicylic acid, and is trademarked to prevent some clandestine chemist in their garage from making a similarly branded harmful/ineffective substance that damages the goodwill Bayer earned with customers over decades of business.
Despite popular entertainment folklore, sustainable businesses actually want consumer goodwill associated with their products and services.
I agree that in many WIPO countries copyright law has essentially degenerated into monetized censorship by removing the proof-of-fiscal-damages criterion (like selling works you don't own). Trademarks, however, ensure your corporate mark or product name is not hijacked in local markets for questionable purposes.
Every castle needs a moat, but I do kind of want to know more about the presidential "Tesler". lol =)
It is not about trademark laws. It is about where laws apply.
If you torrent a movie right now, you'll be fined in many advanced countries.
But a huge corporation led by a sociopath scrapes the entire internet and builds a product with other people's work ?
Totally fine.
This. It seems the situation is controversial now because of the beloved Studio Ghibli IP, but I want to see the Venn diagram of the people outraged at this, the people clamoring for overbearing protection of Disney characters when the copyright expired, and the people siding with Palworld in the Nintendo lawsuit.
It seems most of the discussion is emotionally loaded; people have lost the plot of both why copyright exists and what copyright protects, and are twisting it to protect whatever media they like most.
But one cannot pick and choose where laws apply. The better question is how we plug the gap that lets a multibillion-dollar corp get away with distribution at scale, and what derivative work and personal use mean in an image-generation-as-a-service world. Artists should be very careful what they wish for here, because I bet there's a lot of commission work edging on style transfer and personal use.
It was always me and them. They are either "noble" or just rich; "me" is the rest of us peasants. We pay for everything and we give everything to the rich ones. That's it.
Which huge corporation led by a sociopath? There's so many to choose from.
The wickedest idea that social media run by sociopaths has implanted in the human psyche is the idea that anyone else would take advantage of immoral tactics if they had the means or could afford the payoffs.
I honestly don't see why anyone who isn't JK Rowling should be allowed to coopt her world. I probably feel even more strongly for worlds/characters I like.
Why am I wrong?
New rule: you get to keep your belongings for 20 years and that's it. Then they are taken away. Every dollar you made twenty or more years ago, every asset you acquired, gone.
That oughtta be enough to incentivize people to work and build their wealth.
Anything more than that is unnecessary.
I think you mean "20 years after you die". That seems like a perfectly rational way to deal with people who want to be buried in their gold and jewels in a pyramid.
OK, so you can have your dad's piano for 20 years after he dies, after that it's public property.
We should never have accepted the term "intellectual property" at all, if this is the mindset it leads to.
So every year I go to a swap meet and try to exchange my “expiring” belongings for something similar to reset the clock? Or, they get taken away by force? By someone who has meticulous records of all my stuff?
What problem are you trying to solve?
Intellectual property does not belong to you. The entire concept of "owning" intellectual property is a very recent thing.
So how about we go back to how it used to be and just remove this entire concept.
You can own things you can't own an idea.
Your copyrighted work just has to be an original expression, not derived from someone else's work. It doesn't have to contain original ideas.
Patents are for ideas. Patents do in fact expire far faster than copyrights in the USA. The main problem with patents is patent trolling in the area of software patents.
I disagree that you can own things, but I certainly concede IP as a good starting point.
So you wouldn't mind if I dip my hands into the pockets of the clothing you are wearing and help myself to whatever cash bills I might find?
Also, can you wash this not-my car I'm driving, that happens to be registered in my name?
Let me know if you will be requiring compensation for not-your time and not-your effort.
You will find the needed cleaning materials at the household goods store down the street. Just walk in, grab whatever you need, and walk out.
So many arguing that "copyright shouldn't be a thing" etc., ad nauseam, which is a fine philosophical debate. But it's also the law, and that means ChatGPT et al. also have to follow it.
I really, really hope the multimedia-megacorps get together and class-action ChatGPT and every other closed, for-profit LLM corporation into oblivion.
There should not be a two-tier legal system. If it's illegal for me, it's illegal for Sam Altman.
Get to it.
> There should not be a two-tier legal system.
That’s a fine philosophical debate, but the law is designed by the rich to favor the rich, and while there are a number of exceptions, there is little you can do with the legal system without money, and lots of it. So while having a truly just system would be neat, it just isn’t in the cards for humanity (IMHO) so long as we allow entities to amass “fuck you” money and wield it to their liking.
Sorry, but have you paid attention to the legal system in the states?
Large corporations and their execs live by different laws than the rest of us.
That’s how it is.
Anything else is, unfortunately, a fiction in this country.
Looks to me like OpenAI drew their guardrails somewhere along a financial line. Generate a Mickey Mouse or a Pikachu? Disney and Pokemon will sue the sh*t out of you. Ghibli? Probably not powerful enough to risk a years-long, multimillion-dollar court battle.
I thought Disney had the rights to publish Ghibli movies in the US.
They did, but the rights expired. GKIDS now has the theatrical and home video rights to Studio Ghibli films in the US (except for Grave of the Fireflies).
Mickey Mouse (the original one) is out of copyright, as of last year, AFAIR.
Ghibli isn’t a character, but a style. You can’t copyright it.
Yes, the only test will eventually be "Can you train AI on copyrighted works"
I consider this article quite strong proof that generative AI is closer to copying than it is to creating a new derivative work.
For the downvotes:
https://www.copyright.gov/circs/circ01.pdf
“Copyright does not protect • Ideas, procedures, methods, systems, processes, concepts, principles, or discoveries”
Not sure why this is even controversial, this has been the case for a hundred years.
Idk, the models generating what are basically 1:1 copies of the training data from pretty generic descriptions feels like a severe case of overfitting to me. What use is a generative model that just regurgitates its input?
I feel like the less advanced generations, maybe even because of their limitations in terms of size, were better at coming up with something that at least feels new.
In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?
People like what they already know. When they prompt something and get a realistic looking Indiana Jones, they're probably happy about it.
To me, this article is further proof that LLMs are a form of lossy storage. People attribute special quality to the loss (the image isn't wrong, it's just got different "features" that got inserted) but at this point there's not a lot distinguishing a seed+prompt file+model from a lossy archive of media, be it text or images, and in the future likely video as well.
The craziest thing is that AI seems to have gathered some kind of special status that earlier forms of digital reproduction didn't have (even though those 64kbps MP3s from napster were far from perfect reproductions), probably because now it's done by large corporations rather than individuals.
If we're accepting AI-washing of copyright, we might as well accept pirated movies, as those are re-encoded from original high-resolution originals as well.
The year is 2030.
A new MCU movie is released, its 60 second trailer posted on Youtube, but I don't feel like watching the movie because I got bored after Endgame.
Youtube has very strict anti-scraping techniques now, so I use deep-scrapper to generate the whole trailer from the thumbnail and title.
I use deep-pirate to generate the whole 3 hour movie from the trailer.
I use deep-watcher to summarize the whole movie in a 60 second video.
I watch the video. It doesn't make any sense. I check the Youtube trailer. It's the same video.
Probably the majority of people in the world already "accept pirated movies". It's just that, as ever, nobody asks people what they actually want. Much easier to tell them what to want, anyway.
To a viewer, a human-made work and an AI-generated one both amount to a series of stimuli that someone else made and you have no control over; and when people pay to see a movie, generally they don't do it with the intent to finance the movie company to make more movies -- they do it because they're offered the option to spend a couple hours watching something enjoyable. Who cares where it comes from -- if it reached us, it must be good, right?
The "special status" you speak of is due to AI's constrained ability to recombine familiar elements in novel ways. 64k MP3 artifacts aren't interesting to listen to, while a high-novelty experience such as learning a new culture or a new discipline isn't accessible (and also comes with expectations that passive consumption doesn't have).
Either way, I wish the world gave people more interesting things to do with their brains than make a money, watch a movies, or some mix of the two with more steps. (But there isn't much of that left -- hence the concept of a "personal life" as reduced to breaking one's own and others' cognitive functioning then spending lifetimes routing around the damage. Positively fascinating /s)
Tried Flux.dev with the same prompts [0] and it actually seems to be a GPT problem. Could be that in GPT the text encoder understands the prompt better and just generates the implied IP, or could be that a diffusion model is just inherently less prone to overfitting than a multimodal transformer model.
[0] https://imgur.com/a/wqrBGRF Image captions are the implied IP; I copied the prompts from the blog post.
DALL-E 3 already uses a model trained on synthetic data that takes the prompt and augments it. This might lead to the overfitting. It could also be, and this might be the simpler explanation, that it just looks up the right file from a RAG.
If it overfits on the whole internet then it’s like a search engine that returns really relevant results with some lossy side effect.
A recent benchmark on the unseen 2025 Math Olympiad shows that none of the models can problem-solve: they all, accidentally or on purpose, had prior solutions in the training set.
You probably mean the USAMO 2025 paper. They updated their comparison with Gemini 2.5 Pro, which did get a nontrivial score. That Gemini version was released five days after USAMO, so while it's not entirely impossible for the data to be in its training set, it would seem kind of unlikely.
The claim is that these models are training on data which include the problems and explanations. The fact that the first model trained after the public release of the questions (and crowdsourced answers) performs best is not a counter example, but is expected and supported by the claim.
That timing is actually suspicious. And it would not be the first time something like this happened.
I was noodling with Gemini 2.5 Pro a couple days ago and it was convinced Donald Trump didn’t win the 2024 election and that he conceded to Kamala Harris so I’m not entirely sure how much weight I’d put behind it.
What if the word "generic" were added to a lot of these image prompts? "generic image of an intergalactic bounty hunter from space" etc.
Certainly there's an aspect of people using the chat interface like they use google: describe xyz to try to surface the name of a movie. Just in this case, we're doing the (less common?) query of: find me the picture I can vaguely describe; but it's a query to a image /generating/ service, not an image search service.
Generic doesn't help. I was using the new image generator to try and make images for my Mutants and Masterminds game (it's basically D&D with superheroes instead of high fantasy), and it refuses to make most things citing that they are too close to existing IP, or that the ideas are dangerous.
So I asked it to make 4 random and generic superheroes. It created Batman, Supergirl, Green Lantern, and Wonder Woman. Then at about 90% finished it deleted the image and said I was violating copyright.
I doubt the model you interact with actually knows why the babysitter model rejects images, but it claims to know why, which leads to some funny responses. Here is its response to me asking for a superhero with a dark bodysuit, a purple cape, a mouse logo on their chest, and a spooky mouse mask on their face.
> I couldn't generate the image you requested because the prompt involved content that may violate policy regarding realistic human-animal hybrid masks in a serious context.
Yeah... so much for that hope on my end! Thanks for testing.
(hard to formulate why I was too lazy to test myself :) )
Because it's depressing how much money was burned for this sort of result? That makes me pretty lazy.
Probably more: "I'll save my energy for interacting with AI for something more useful to me"
Yeah, I've been feeling the same. When a model spits out something that looks exactly like a frame from a movie just because I typed a generic prompt, it stops feeling like “generative” AI and more like "copy-paste but with vibes."
To my knowledge this happens when that single frame is overrepresented in its training data. For instance, variations of the same movie poster or screenshot may appear hundreds of times. Then the AI concludes that this is just a unique human cultural artifact, like the Mona Lisa (which I would expect many human artists could also reproduce from memory).
Idk, a couple of the examples might be generic enough that you wouldn't expect a very specific movie character. But most of the prompts make it extremely clear which movie character you would expect to see, and I would argue that the chat bot is working as expected by providing that.
Even if I'm thinking of an Indiana Jones-like character doesn't mean I want literally Indiana Jones. If I wanted Indiana Jones I could just grab a scene from the movie.
If someone gets Indiana from the suspiciously detailed request the author provided and it appears they wanted something else, they can clarify that to the chat bot, e.g. by copying your comment.
I have a strong suspicion that many human artists would behave the way the chat bot did (unless they start asking clarifying questions, which chatbots should learn to do as well).
Good luck grabbing that Ghibli movie scene, where Indiana Jones arm-wrestles Lara Croft in a dive-bar, with Brian Flanagan serving a cocktail to Allan Quatermain in the background.
Probably an over-representation in the training data, causing overfitting. Training on data scraped straight from the Internet means the model is going to be opinionated about human culture (Bart Simpson is popular, so there are lots of images of him; Ori is less well known, so there are fewer). Ideally it would train 1:1 on everything, but that would involve _so_ much work pruning the training data to have a roughly equal effect between categories.
I'm not sure if this is a problem with overfitting. I'm ok with the model knowing what Indiana Jones or the Predator looks like with well remembered details, it just seems that it's generating images from that knowledge in cases where that isn't appropriate.
I wonder if it's a fine tuning issue where people have overly provided archetypes of the thing that they were training towards. That would be the fastest way for the model to learn the idea but it may also mean the model has implicitly learned to provide not just an instance of a thing but a known archetype of a thing. I'm guessing in most RLHF tests archetypes (regardless of IP status) score quite highly.
What I'm kind of concerned about is that these images will persist and will be reinforced by positive feedback. Meaning, an adventurous archeologist will be the very same image, forever. We're entering the epitome of dogmatic ages. (And it will be the same corporate images and narratives, over and over again.)
And it's worth considering that this issue isn't unique to image generation, either.
E.g., I think there are now entire generations who never played with anything as a child that wasn't tied in with corporate IP in one way or another.
Santa didn't always wear red.
Granted, but not the best example: red and green are the emblematic colours elves wore in northern European cultures. Santa is somewhat syncretic with Robin Goodfellow or Robin Redbreast, Puck, Púca, etc. It wasn’t really a cola invention.
> I'm ok with the model knowing what Indiana Jones or the Predator looks like with well remembered details,
ClosedAI doesn't seem to be OK with it, because they are explicitly censoring characters of more popular IPs. Presumably as a fig leaf against accusations of theft.
If you define feeding of copyrighted material into a non-human learning machine as theft, then sure. Anything that mitigates legal consequences will be a fig leaf.
The question is "should we define it as such?"
The fact that they have guardrails to try and prevent it means OpenAI themselves thinks it is at least shady or outright illegal in someway. Otherwise why bother?
If a graphics design company was using human artists to do the same thing that OpenAI is, they'd be sued out of existence.
But because a computer, and not a human does it, they get to launder their responsibility.
Doing what? Telling their artists to create what they want regardless of copyright and then filtering the output?
For humans it doesn't make sense because we have generation and filtering in a single package.
In this case the output wasn't filtered. They are just producing images of Harrison Ford, and I don't think they are allowed to use his likeness in that way.
There is a difference between knowing what something looks like and generating an image of that thing.
Why? Replace the context and not having that property is now called a hallucination.
Overall the model is tra
It's not a single image though. Stitching 3 or so input images together clearly makes copyright laundering legal.
No it doesn't. Commercial intent (usually) makes it illegal, not the fact of copying.
So the criminal party here would be OpenAI, since they are selling access to a service that generates copyright-infringing images.
> I feel like the less advanced generations, maybe even because of their limitations in terms of size, were better at coming up with something that at least feels new.
Ironically that's probably because the errors and flaws in those generations at least made them different from what they were attempting to rip off.
So I train a model to say y=2, and then I ask the model to guess the value of y and it says 2, and you call that overfitting?
Overfitting is if you didn't exactly describe Indiana Jones and then it still gave Indiana Jones.
The prompt didn't exactly describe Indiana Jones though. It left a lot of freedom for the model to make the "archeologist" e.g. female, Asian, put them in a different time period, have them wear a different kind of hat etc.
It didn't though, it just spat out what is basically a 1:1 copy of some Indiana Jones promo shoot. Nowhere did the prompt ask for it to look like Harrison Ford.
But the concentrations in the training data, driven by human culture and the popularity of characters, mean that if I give a random person the same description the AI got and ask "who am I talking about, what do they look like?", there's a very high likelihood they'll answer "Indiana Jones".
But... the prompt neither forbade Indiana Jones nor did it describe something that excluded Indiana Jones.
If we were playing Charades, just about anyone would have guessed you were describing Indiana Jones.
If you gave a street artist the same prompt, you'd probably get something similar unless you specified something like "... but something different than Indiana Jones".
And… that is called overfitting. If you show the model values for y, but they are 2 in 99% of all cases, it’s likely going to yield 2 when asked about the value of y, even if the prompt didn’t specify or forbid 2 specifically.
> If you show the model values for y, but they are 2 in 99% of all cases, it’s likely going to yield 2 when asked about the value of y
That's not overfitting. That's either just correct or underfitting (if we say it's never returning anything but 2)!
Overfitting is where the model matches the training data too closely and has inferred a complex relationship using too many variables where there is really just noise.
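A toy sketch of that distinction (hypothetical numbers, plain numpy, nothing to do with any real image model): fit the same noisy samples of a linear trend once with a model whose complexity matches the trend, and once with one coefficient per data point. The high-degree fit "memorizes" the training noise almost exactly but behaves worse between the points it has seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend y = x.
x_train = np.linspace(0.0, 1.0, 10)
y_train = x_train + rng.normal(scale=0.1, size=10)

# Held-out points from the same trend, halfway between training points.
x_test = (x_train[:-1] + x_train[1:]) / 2
y_test = x_test

def fit_errors(degree):
    # Least-squares polynomial fit, then mean squared error on both sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

good_train, good_test = fit_errors(1)  # complexity matches the trend
over_train, over_test = fit_errors(9)  # one coefficient per data point

# The degree-9 fit reproduces the training noise almost perfectly,
# at the cost of generalization between the training points.
print(f"simple fit:  train={good_train:.5f}  test={good_test:.5f}")
print(f"overfit:     train={over_train:.5f}  test={over_test:.5f}")
```

The regurgitation argument upthread maps onto the second fit: near-perfect reproduction of the training data, poor behavior everywhere else.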
I would argue this is just fitting.
If you take the perspective of all the possible responses to the request, then it is overfit, because it only returns a non-generalized response.
But if you look at it from the perspective that there is only one example to learn from, it is maybe not overfit.
The nice thing about humans is that not every single human being has read almost every piece of content on the Internet. So yeah, a certain group of people would draw or think of Indiana Jones with that prompt, but not everyone. Maybe we will have different models with different trainings/settings that permit this kind of freedom, although I doubt it will be the commercial ones.
I mean, did anyone here read the prompt and not think “Indiana Jones”?
I didn't think it. I imagined a cartoonish chubby character in typical tan safari gear with a like-colored round explorer hat and swinging a whip like a lion tamer. He is mustachioed, light skin, and bespectacled. And I am well familiar with Dr. Jones.
Is HN the whole world? Isn't an AI model supposed to be global, since it has ingested the whole Internet?
How can you express, in terms of AI training, ignoring the existence of something that's widely present in your training data set? If you ask the same question to an 18yo girl in rural Thailand, would she draw Harrison Ford as Indiana Jones? Maybe not. Or maybe she would.
But IMO an AI model must be able to provide a more generic (unbiased?) answer when the prompt wasn't specific enough.
Why should the AI be made to emulate a person naive to extant human society, tropes and customs? That would only make it harder for most people to use.
Maybe it would have some point if you are targeting users in a substantially different social context. In that case, you would design the model to be familiar with their tropes instead. So when they describe a character iconic in their culture by a few distinguishing characteristics, it would produce that character for them. That's no different at all.
Or even just 'obvious Indiana Jones knockoff who isn't literally Harrison Ford'. Comics do that kind of thing constantly for various obviously inspired but legally distinct characters.
What would most humans draw when you describe such a well-known character by their iconic elements? I think if you deviated and acted a pedant about it, people would think you're just trying to prove a point or being obnoxious.
Obviously a horrible hideous theft machine.
One thing I would say, it's interesting to consider what would make this not so obviously bad.
Like, we could ask AI to assess the physical attributes of the characters it generated. Then ask it to permute some of those attributes. Generate some random tweaks: ok, but brawny, short, and of a different descent. Do similarly with some clothing colors. Change the game. Hit the "random character" button on the physical attributes a couple times.
There was an equally shattering, less-IP-theft recent incident for me (and as someone who thinks IP is itself incredibly ripping off humanity & should be vastly scoped down, it's important to me to not rest my arguments on IP violations). Having trouble finding it, don't remember the right keywords, but it was an article about how AI has a "default guy" type that it uses everywhere, a super generic personage that it would use repeatedly. It was so distasteful.
The nature of 'AI as compression', as giving you the most median answer is horrific. Maybe maybe maybe we can escape some of this trap by iterating to different permutations, by injecting deliberate exploration of the state spaces. But I still fear AI, worry horribly when anyone relies on it for decision making, as it is anti-intelligent, uncreative in extreme, requiring human ingenuity to budge off its rock of oppressive hypernormality that it regurgitates.
Theft from whom and how?
Are you telling me that our culture should be deprived of the idea of Indiana Jones and the feelings that character inspires in all of us forever just because a corporation owns the asset?
Indiana Jones is 44 years old. When are we allowed to remix, recreate and expand on this like humanity has done since humans first started sitting down next to a fire and telling stories?
edit: this reminds of this iconic scene from Dr. Strangelove, https://www.youtube.com/watch?v=RZ9B7owHxMQ
Mandrake: Colonel... that Coca-Cola machine. I want you to shoot the lock off it. There may be some change in there.
Guano: That's private property.
Mandrake: Colonel! Can you possibly imagine what is going to happen to you, your frame, outlook, way of life, and everything, when they learn that you have obstructed a telephone call to the President of the United States? Can you imagine? Shoot it off! Shoot! With a gun! That's what the bullets are for, you twit!
Guano: Okay. I'm gonna get your money for ya. But if you don't get the President of the United States on that phone, you know what's gonna happen to you?
Mandrake: What?
Guano: You're gonna have to answer to the Coca-Cola company.
I guess we all have to answer to the Walt Disney company.

"The idea of Indiana Jones and the feelings that character inspires in all of us forever just because a corporation owns the asset" is very different from the almost exact image of Indiana Jones.
And a reason people are getting ticked at the AI companies is the hypocrisy. They're near-universally arguing that it's okay for them to treat copyright in a way that it is illegal for us to, apparently on the basis of, "we've got a billions in investment capital, and applying the law equally will make it hard for us to get a return on that investment".
Exactly. The idea of Indiana Jones, the adventurer archaeologist more at home throwing a punch than reading a book, is neither owned by nor unique to Lucasfilm (Disney). There is a ton of media out there featuring this trope character [1]. Yes, the trope is overwhelmingly associated with the image of Harrison Ford in a fedora within the public consciousness, but copyright does not apply to abstract ideas such as tropes.
Some great video games to feature adventurer archaeologists:
* NetHack (One of the best roles in the game)
* Tomb Raider series (Lara Croft is a bona fide archaeologist)
* Uncharted series (Nathan Drake is more of a treasure hunter but he becomes an archaeologist when he retires from adventuring)
* Professor Layton series
* La-Mulana series (very obviously inspired by Indiana Jones, but not derivative)
* Spelunky (inspired by La-Mulana)
[1] https://tvtropes.org/pmwiki/pmwiki.php/Main/AdventurerArchae...
As a connoisseur of bad as well as old movies I'd like to add The Librarian movies to this list, and The Mummy, where the first from 1932 stars the inimitable Boris Karloff as the lovesick undead.
Not forever. But up to 70 years after the death of the creator under current law in most jurisdictions. I definitely think the exact terms of copyright should be revisited - a lot of usages should be allowed, like 50 years after publishing a piece of work. But that needs to be agreed upon and converted into law. Till then, one should expect everyone, especially large corporations, to stick to the law.
When Mickey Mouse was created (1928), copyright was 28 years that could be reupped once for an additional 28 years. So according to those terms, Mickey Mouse would have ascended to the public domain in 1984.
IMO any change to copyright law should not be applied retroactively. Make copyright law to be what is best for society and creators as a whole, not for lobbyists representing already copyrighted material.
> IMO any change to copyright law should not be applied retroactively.
Careful, if we were to shorten copyright, not doing so retroactively would give an economic advantage to franchises already published over those that would get published later. As if the current big studios needed any further advantages over newcomers.
It's kind of funny that everyone is harping this way or that way about IP.
This is a kind of strange comment for me to read. Because by tone it sounds like a rebuttal? But by content, it agrees with a core thing I said about myself:
> and as someone who thinks IP is itself incredibly ripping off humanity & should be vastly scoped down, it's important to me to not rest my arguments on IP violations
What's just such a nightmare to me is that the tech is so normative. So horribly normative. This article shows that AI again and again reproduces only the known, only the already imagined. It's not that the IP theft rubs me so wrong, it's that it's entirely bankrupt & uncreative, so very stuck. All this power! And yet!
You speak at what disgusts me yourself!
> When are we allowed to remix, recreate and expand on this like humanity has done
The machine could be imagining all kinds of Indianas, of all different remixed, recreated, expanded forms. But these pictures are 100% anything but that. They're Indiana frozen in carbonite. They are the driest, saddest prison of the past. And they call into question the validity of AI entirely, show something grievously missing.
> All this power! And yet!
You are completely ignoring the fact that you can provide so much more information to the LLMs to get what you want. If you truly want novel images, ChatGPT can absolutely provide them, but you have to provide a better starting point than "An image of an archeologist adventurer who wears a hat and uses a bullwhip".
If you just provide a teensy bit more information, the results dramatically change. Try out "An image of an Indian female archeologist adventurer who wears a hat and uses a bullwhip". Or give it an input image to work with.
From just adding a couple words, ChatGPT produces an entirely new character. It's so easy to get it to produce novel images. It is so easy in fact, that it makes a lot of posts like this one feel like strawmen, intentionally providing so little information to the LLMs that the generic character is the only obvious output that you would expect.
Now, would it be better if it didn't default to these common movie tropes? Sure. But the fact that it can follow these tropes doesn't mean that it cannot also be used to produce entirely new images filled with your imagination as well. You just have to actually ask it for that.
You're again failing to read my top post. Badly.
No, I am not. Read my comment again. You can literally just ask AI for whatever you want. It has such an incredible breadth of what it can produce that calling it uncreative because the default thing that it produces is the most common image you'd expect is both lazy, and motivated thinking.
It strikes me that perhaps the prompts are not expansive or expressive enough. If you look at some of the prompts our new wave of prompt artists use to generate images in communities like midjourney, a single sentence doesn't cut it.
If AI is just compression, then decompressing a generic pop-culture-seeking prompt will yield a generic uninspired image.
Exactly. The AI understands that reference. It gives you what you asked for, it doesn't try to divine that it's a weird test for IP violations. If it made up a different image, that would be exactly the thing we're mad about with "hallucinations" when we want serious, accurate responses.
I have given detailed descriptions of my own novel ideas to these image generators and they have faithfully implemented my ideas. I don't need the bot to be creative, I can do that myself. The bot is a paint brush. Give it to somebody who isn't creative and you won't get anything creative out of it. That isn't the tool's fault, it's merely an inadequacy of the user.
But I can hire an artist and ask him to draw me a picture of Indiana Jones, he creates a perfect copy and I hang it on my fridge. Where did I (or the artist) violate any copyright (or other) laws? It is the artist that is replaced by the AI, not the copyrighted IP.
> But I can hire an artist and ask him to draw me a picture of Indiana Jones,
Sure, assuming the artist has the proper license and franchise rights to make and distribute copies. You can go buy a picture of Indy today that may not be printed by Walt Disney Studios but by some other outfit or artists.
Or, you mean if the artist doesn't have a license to produce and distribute Indiana Jones images? Well they'll be in trouble legally. They are making "copies" of things they don't own and profiting from it.
Another question is whether that's practically enforceable.
> Where did I (or the artist) violate any copyright (or other) laws?
When they took payment and profited from making unauthorized copies.
> It is the artist that is replaced by the AI, not the copyrighted IP.
Exactly, that's why LLMs and the companies which create them are called "theft machines" -- they are reproducing copyrighted material. Especially the ones charging for "tokens". You pay them, they make money and produce unauthorized copies. Show that picture of Indy to a jury and I think it's a good chance of convincing them.
I am not saying this is good or bad, I just see this having a legal "bite" so to speak, at least in my pedestrian view of copyright law.
The likeness of Indiana Jones is not protected in any way - as far as I know - that would stop a human artist creating, rendering and selling a work of art representing their creative vision of Indiana Jones. And even more so in a private context. Even if the likeness is protected (“archaeologist, adventurer, whip, hat”) then this protection would only be in certain jurisdictions and that protection is more akin to a design right where the likeness would need to be articulated AND registered. Many jurisdictions don’t require copyright registration and do not offer that sort of technical likeness registration.
If they traced a photo they might be violating the copyright of the photographer.
But if they are drawing an archaeologist adventurer with a whip and a hat based on their consumption and memory of Indiana Jones imagery there is very little anyone could do.
If that image was then printed on an industrial scale or printed onto t-shirt there is a (albeit somewhat theoretical) chance that in some jurisdictions sale of those products may be able to be restricted based on rights to the likeness. But that would be a stretch.
The likeness of Indiana Jones, as a character, is owned by Disney
If they show that image to a jury they’ll have no issues convincing them the LLM is infringing.
Moreover if the LLM creators are charging for it, per token or whatever, they are profiting from it.
Yes, there are jurisdictions where this won’t work, but I think in the US Disney lawyers could make a viable argument.
I wasn’t talking about LLMs, I was talking about human artists.
With the LLM it would be nothing to do with likeness, it would be to do with the copyright in the image, the film, video or photograph. The image captures the likeness but the infringement would not be around the likeness.
> Or, you mean if the artist doesn't have a license to produce and distribute Indiana Jones images? Well they'll be in trouble legally. They are making "copies" of things they don't own and profiting from it.
Ok, my sister can draw, and she gifts me an image of my favorite Marvel hero she painted to hang on my wall. Should that be illegal?
The question is not whether it should but whether it is.
The likeness of the character is owned by Marvel. Does it mean there aren’t vendors selling unlicensed versions? No. I am sure there are. But just because not everyone is being sued doesn’t mean it’s suddenly legal.
That’s not how copyright law works.
Commissioned work is owned by the commissioner unless otherwise agreed upon by contract.
So long as the work is not distributed, exhibited, performed, etc, as in the example of keeping the artwork on their refrigerator in their home, then no infringement has taken place.
> Commissioned work is owned by the commissioner unless otherwise agreed upon by contract.
I think the LLM example is closer to the LLM and its creator being like a vendor selling pictures of Indiana Jones on the street corner than hiring someone and performing work for hire. Yes, if it was a human artist commissioned to create an art piece, then yeah, the commissioner owns it.
As far as I know, if you're speaking of the United States, the copyright of commissioned work is owned by the creator, unless otherwise agreed upon specifically through a "work made for hire" (i.e. copyright transfer) clause in the contract.
> [If commissioning some work and] keeping the artwork on their refrigerator in their home, then no infringement has taken place.
I'd like to push back on this: Is that legally true, or is it infringement which just happens to be so minor and under-the-radar that nobody gets in trouble?
Suppose there's a printer in my room churning out hundreds of pages of words matching that of someone's copyrighted new book, without permission.
That sure seems like infringement is happening, regardless of whether my next step is to: (A) sell it, (B) sell many of it, (C) give it away, (D) place it into my personal library of other home-printed books, or (E) hand it to someone else who paid me in advance to produce it for them under contract.
If (A) is infringement, why wouldn't (E) also be?
Ownership of artwork is independent of copyright infringement. Derivative works qualify for their own independent copyright, you just can’t sell them until after the original copyright expires.
Just because I own my car doesn’t mean I can break the speed limit, these are orthogonal concepts legally.
That does infringe copyright...you're just unlikely to get in trouble for it. You might get a cease and desist if the owner of the IP finds out and can spare a moment for you.
Totally agree. LLM's are just automating that infringement process.
If you make money off it, it's no longer fair use; it's infringement. Even if you don't make money off it, it's not automatically fair use.
My own favorite crazy story about copyright violations:
Metallica sued Green Jello for parodying Enter Sandman (including a lyric where it says "It's not Metallica"):
https://en.wikipedia.org/wiki/Electric_Harley_House_(of_Love...
They lost that case. The kicker? Metallica were guest vocalists on that album.
This doesn't make any sense to me. No media is getting copied, unless the drawing is exactly the same as an existing drawing. Shouldn't "copy"right apply to specific, tangible artistic works? I guess I don't understand how the fantasy of "IP" works.
What if the drawing is of Indiana Jones but he's carrying a bow and arrow instead of a whip? Is it infringement?
What if it's a really bad drawing of Indiana Jones, so bad that you can't really tell that it's the character? Is that infringement?
What if the drawing is of Indiana Jones, but in the style of abstract expressionism, so doesn't even contain a human shape? Is it infringement?
What if it's a good drawing that looks very much like Indiana Jones, but it's not! The person's name is actually Iowa Jim. Is that infringement?
What if it's just an image of an archeologist adventurer who wears a hat and uses a bullwhip, but otherwise doesn't look anything like Indiana Jones? Is it infringement?
Here's some reading material.
https://en.wikipedia.org/wiki/Copyright_protection_for_ficti...
Wow, that's very informative, actually. It's surprisingly complex and leaves a lot up to the judgment and tastes of the legal system.
Presumably the artist is a human who directly or indirectly paid money to view a film containing an archaeologist with the whip.
I don't think this is about reproduction as much as how you got enough data for that reproduction. The RIAA sent people to jail and ruined their lives for pirating. Now these companies are doing it and being valued at hundreds of billions of dollars.
You're right, it's not just about reproduction, it's about how the data was collected
The artist is violating copyright by selling you that picture. You can’t just start creating and distributing pictures of a copyrighted property. You need a license from the copyright holder.
You also can’t sell a machine that outputs such material. And that’s how the story with GenAI becomes problematic. If GenAI can create the next Indiana Jones or Star Wars sequel for you (possibly a better one than Disney makes, it has become a low bar of sorts), I think the issue becomes obvious.
I think framing this as "IP theft" is a mistake.
Nobody can prevent you from drawing a photo realistic picture of Indy, or taking a photo of him from the internet and hanging it on your fridge. Or asking a friend to do it for you. And let's be honest -- because nobody is looking -- said friend could even charge you a modest sum to draw a realistic picture of Indy for you to hang on your fridge; yes, it's "illegal" but nobody is looking for this kind of small potatoes infringement.
I think the problem is when people start making a business out of this. A game developer could think "hey, I can make a game with artwork that looks just like Ghibli!", where before he wouldn't have had anyone with the skills or patience to do this (see: the 4-second scene that took a year to make); now he can just ask the Gen AI to make it for him.
Is it "copyright infringement"? I dunno. Hard to tell, to be honest. But from an ethical point of view, it seems odd. Before, you actually required someone to take the time and effort to copy the source material; now it's an automated and scalable process that does this, and can do this and much more, faster and without getting tired. "Theft at scale", maybe not such small potatoes anymore.
--
edit: nice, downvotes. And in the other thread people were arguing HN is such a nice place for dissenting opinions.
I believe when we talk about this there's a big misunderstanding between Copyright, Trademarks, and Fair use.
Indy, with his logo, whip, and hat, is a trademark of Disney. I don't know the specific stuff; but if you sell a t-shirt with Indiana Jones, or you put the logo there... you might be sued due to trademark violation.
If you make copies of anything developed, sold, or licensed by Disney (movies, comics, books, etc) you'll have a copyright violation.
The issue we have with AI and LLMs is that:
- The models compress information and can make a lot of copies of it very cheaply.
- Artist wages are quite low. Higher than what you'd pay OpenAI, but not enough to make a living unless you're hired by a big company (like Marvel or DC) and they give you regular work ($100-120 for a cover, $50-80/page for interior work; one page takes about one day to draw).
- AI used a lot of images from the internet to train the models. Most of them were pirated.
- And, of course, it is replacing low-paying jobs for artists.
Also, do not forget it might make verbatim copies of copyrighted art if the model just memorized the picture / text.
I would argue that countless games have already been made by top tier professional artists that IP-Steal the Ghibli theme.
Breath of the Wild, and Tears of the Kingdom should be included there.
Can we not call it "theft"? It's such a loaded term and doesn't really mean the same thing when we're talking about bits and bytes.
OK, but then we need a common standard. If Facebook is allowed to use libgen, I should also be allowed.
Only if we stop calling software distribution "piracy" under the false pretenses that anything is being stolen.
> Obviously a horrible hideous theft machine [...] awful [...] horriffic
Ah, I thought I knew this account from somewhere. It seems surprisingly easy to figure out what account is commenting just based on the words used, as I've commented that only a few active people on this site seem to use such strong words as shown here.
Interesting proposal. Maybe if race or sex or height or eye color etc isn't given, and the LLM determines there's no reason not to randomize in this case (avoid black founding fathers), the backend could tweak its own prompt by programmatically inserting a few random traits into the prompt.
If you describe an Indiana Jones-like character but specify no sex, there's a 50/50 chance, via an internal call to rand(), that it outputs a woman.
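As a rough sketch of what that backend prompt tweak could look like (all names, trait pools, and the substring check are made up for illustration, not any vendor's actual pipeline):

```python
import random

# Hypothetical trait pools a backend could sample from when the user's
# prompt leaves a trait unspecified.
TRAIT_POOLS = {
    "sex": ["a man", "a woman"],
    "hair": ["short hair", "long hair", "curly hair"],
}

def randomize_prompt(prompt, rng=None):
    """Append a randomly chosen option for each trait the prompt omits."""
    rng = rng or random.Random()
    lowered = prompt.lower()
    additions = []
    for options in TRAIT_POOLS.values():
        # Naive check: only randomize if no option is already mentioned.
        if not any(opt in lowered for opt in options):
            additions.append(rng.choice(options))
    if not additions:
        return prompt
    return f"{prompt}, depicted as {', '.join(additions)}"

print(randomize_prompt("an archaeologist adventurer with a whip and a hat"))
```

A real system would presumably do this with another model call rather than string matching, but the idea is the same: the randomness is injected before the image model ever sees the prompt.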
Yup it's called overfitting. But I don't suppose you'd appreciate a neutral model either.
> Obviously a horrible hideous theft machine.
I hate how common it is to advance a position by just stating a conclusion as if it were a fact. You keep repeating the same thing over and over until it seems like a consensus has been reached, instead of making an actual argument reasoned from first principles.
There is no theft here. Any copyright would be flimsier than software patents. I love Studio Ghibli (including $500/seat festival tickets) but it's the heart and the detail that make them what they are. You cannot clone that. Just some surface similarity. If that's all you like about the movies... you really missed the point.
Imagine if in early cinema someone had tried to claim the mustachioed villain, the ditsy blonde, or the dumb jock? These are just tropes and styles. Quality work goes much much much deeper, and that cannot be synthesised. I can AI generate a million engagement rings, but I cannot pick the perfect one that fits you and your partner's love story.
PS- the best work they did was "When Marnie was There". Just fitted together perfectly.
The engagement ring is a good example object, but I feel it serves the opposite argument better.
If engagement rings were as ubiquitous and easy to generate as Ghibli images have become, they would lose their value very quickly -- not even just in the monetary sense, but the sentimental value across the market would crash for this particular trinket. It wouldn't be about picking the right one anymore, it would be finding some other thing that better conveys status or love through scarcity.
If you have a 3d printer you'd know this feeling where abundance diminishes the value of something directly. Any pure plastic items you have are reduced to junk very quickly once you know you can basically have anything on a whim (exceptions for things with utility, however these are still printable). If I could print 30 rings a day, my partner wouldn't want any of them as a show of my undying love. Something more special and rare and thoughtful would have to take its place.
This isn't meant to come across as shallow in any way, it's just classic supply and demand relating to non-monetary value.
>If I could print 30 rings a day, my partner wouldn't want any of them as a show of my undying love. Something more special and rare and thoughtful would have to take its place.
And now I think this serves the opposite argument better. Downloading some random ring from the internet would not show your undying love. Designing a custom ring just for your partner, even if it is made from plastic, and even if you use AI as a tool in the process, is where the value is generated.
This is only true where it is distinguishable that the end result is made with care and love rather than indiscriminate copying. Which is why attribution is essential. No one realistically could tell a ring was hand crafted or mass produced if there wasn't some tell. Some people may say the "tell" is a kind of intuitive nebulous concept of the "soul" of the animation.. but I feel we are quickly approaching the point where this is no longer obvious.
As an aside, my partner detests the things I 3d print unless they have a very specific purpose, even when they are random semi-artistic pieces I'm tinkering with (and I typically agree, they are junk). She loves the first thing I ever printed her though, a triceratops model, despite it being randomly downloaded.
Anything made with intent from one individual to another will have some level of sentimental value, but I don't feel like making a ghibli image with AI specifically tailored to a friends tastes would have quite as much value as leveraging your own talent to do it yourself.
On the flip side, I do believe that "doing it yourself" has less value than it used to. It's a very sad reality and in my opinion a strong argument against blind "progress". We gain the ability to mass produce art but lose the ability to perceive it as art?
> This is only true where it is distinguishable that the end result is made with care and love rather than indiscriminate copying. Which is why attribution is essential. No one realistically could tell a ring was hand crafted or mass produced if there wasn't some tell.
It doesn't matter if the ring was hand crafted or not. It's whether it was hand selected. If you find the perfect ring, even if it was generated by an AI, it's your selection that matters. It's the correspondence that matters. The way it reflects elements of your relationship. It's you recognising those elements in the ring. Your partner recognising them in the ring. And your partner recognising you recognising them. That is what makes it special.
Not to dox myself, but I am not Grace Abrams. I met my partner long before her song "Risk" was written, but when I heard it I immediately played it for my partner and said "This describes the feelings I had when I met you". I played it for her, she cried. I didn't have to write the song, or own it, or pay a cent for it. It's the curation that made an emotional connection and had value. The song itself has no value, and she might have even heard it and never made the connection; it was me imbuing it with meaning that had value.
To go back to Miyazaki, it's the connections between elements in his films. The attention to detail and tone between relationships that make his films amazing. It's all about the handyman's invoice [0]. By the time there are enough examples for AI to learn something, it ceases to be a novel insight and have value. It's the curation and application that have value and are human and cannot be stolen.
>it's the heart and the detail that make them what they are. You cannot clone that
You absolutely can and these theft machines are proving that, literally cloning those details with very high precision and fidelity.
I didn't mean visual fidelity, I meant the way that plot and theme and art interleave. I first watched My Neighbour Totoro on VHS with no visual fidelity and it was still magic.
You can easily steal the style of a political cartoon or especially XKCD but you cannot steal or generate genuine fresh insight or poignant relevant metaphor for the current moment.
I don't think AI is doomed to be uncreative but it definitely needs human weirdness and unpredictability to steer it
So if it’s a theft machine, how is the answer to try teaching it to hide the fact that it’s stealing by changing its outputs? That’s like a student plagiarizing an essay and then swapping some words with a thesaurus pretending that changes anything.
Wouldn’t the more appropriate solution in the case of theft be to remunerate the victims and prevent recidivism?
Instead of making it “not so obviously bad” why not just… make it good? Require AI services to either prove that 100% of their training corpus is either copyright free or properly licensed, or require them to compensate copyright holders for any infringing outputs.
(below is my shallow response, maybe naive?) That might inject a ton of $ into "IP", doing further damage to the creative commons. How can we support remix culture for humans, while staving off ultimately-destructive AI slop? Maybe copyleft / creative-commons licenses w/ explicit anti-AI prohibitions? Tho that could have bad ramifications too. ALL of this makes me kind of uncomfortable and sad, I want more creativity and fewer lawyers.
> doing further damage to the creative commons
Not sure I understand this part. Because creators would be getting paid for their works being used for someone else’s commercial gain?
Because it reinforces the idea that creative works should usually involve lawyers.
No it doesn’t. It reinforces that copyright is the law. If you don’t violate someone’s copyright, you don’t need a lawyer.
> Obviously a horrible hideous theft machine.
I mean... If I go to Google right now and do an image search for "archeologist adventurer who wears a hat and uses a bullwhip," the first picture is a not-even-changed image of Indiana Jones. Which I will then copy and paste into whatever project I'm working on without clicking through to the source page (usually because the source page is an ad-ridden mess).
Perhaps the Internet itself is the hideous theft machine, and AI is just the most efficient permutation of user interface onto it.
(Incidentally, if you do that search, you will also, hilariously, turn up images of an older gentleman dressed in a brown coat and hat who is clearly meant to be "The Indiana Jones you got on Wish" from a photo-licensing site. The entire exercise of trying to extract wealth via exclusive access to memetic constructs is a fraught one).
Your position cannot distinguish between stealing somebody's likeness and looking at them.
I agree without argument. I have also thoroughly enjoyed the animatronic dead Presidents at Disney World.
The key difference is that the Google example is clearly copying someone else's work, and there are plenty of laws and norms that non-billionaires need to follow. If you made a business reselling the image you copied, you would expect to get in trouble and have to stop. But AI companies are doing essentially the same thing in many cases and being rewarded for it.
The hypocrisy is much of the problem. If we're going to have IP laws that severely punish people and smaller companies for reselling the creative works of others without any compensation or permission then those rules should apply to powerful well-connected companies as well.
Do google pay anyone when I use image search and the results are straight from their website?
This was decided in court[1] over two decades ago as acceptable fair use; thumbnail images do not constitute a copyright violation.
[1] https://scholar.google.com/scholar_case?case=137674209419772...
This makes AI image generation very boring. I don't want to generate pictures I can find on google, I want to make new pictures.
I found apple's tool frustrating. I have a buzzed haircut, but no matter what I did, apple was unable to give me that hairstyle. It wants so bad for my avatar to have some longer hair to flourish, and refuses to do anything else.
I think the cat is out of the bag when it comes to generative AI, the same way various LLMs for programming have been trained even on codebases that they had no business using, yet nobody has stopped them and nobody will. It's the same as what's going to happen with deepfakes and such, as the technology inevitably gets better.
> Hayao Miyazaki’s Japanese animation company, Studio Ghibli, produces beautiful and famously labor intensive movies, with one 4 second sequence purportedly taking over a year to make.
It makes me wonder though - whether it’s more valuable to spend a year on a scene that most people won’t pay that much attention to (artists will understand and appreciate, maybe pause and rewind and replay and examine the details, the casual viewer just enjoy at a glance) or use tools in addition to your own skills to knock it out of the park in a month and make more great things.
A bit like how digital art has clear advantages over paper, while many revere traditional art a lot, despite it taking longer and being harder. The same way someone who uses those AI-assisted programming tools can improve their productivity by getting rid of some of the boilerplate or automating some refactoring and such.
AI will definitely cheapen the art of doing things the old way, but that’s the reality of it, no matter how much the artists dislike it. Some will probably adapt and employ new workflows, others stick to tradition.
There's a very clear difference between cheap animation and Ghibli. Anyone can see it.
In the first case, there's only one static image for an entire scene, scrolled and zoomed, and if they feel generous, there would be an overlay with another static image that slides over the first at a constant speed and direction. It feels dead.
In the second case, each frame is different. There's chaotic motions such as wind and there's character movement with a purpose, even in the background, there's always something happening in the animation, there's life.
There is a huge middle ground between "static image with another sliding static image" and "1 year of drawing per 4-second Ghibli masterpiece". From your comment it almost looks like you're suggesting that you have to choose either one or the other, but that is of course not true.
I bet that a good animator could make a really impressive 4-second scene if they were given a month, instead of a year. Possibly even if they were given a day.
So if we assume that there is not a binary "cheap animation vs masterpiece" but rather a sort of spectrum between the two, then the question is: at what point do enough people stop seeing the difference, that it makes economic sense to stay at that level, if the goal is to create as much high-quality content as possible?
Yes, that's the current trend in the western world. Money is all that matters. There's only the lowest accepted quality. Anything above that is a waste of money, profits that are lost. Nobody wants masterpieces. There is no market for that.
That lowest-accepted quality also declines over time, as generations after generations of people become used to rock-bottom quality. In the end, there's only slop and AI will make the cheapest slop ever. Welcome to a brave new world. We don't even need people anymore. They're too expensive.
To be fair, we've already been through this cycle at least once with animation. The difference between early Disney or even Looney Tunes and (say) late '60s Hanna-Barbera or '80s He-Man is enormous. Since then there has been generally higher-quality animation rather than lower (though I know it varies a lot by country, genre etc.)
It's not inevitable that it's a race to the cheapest and shittiest. That's just one (fairly strong) commercial force amongst many.
anyone _can_ see it, but _most_ people don't (and don't care)
To be clear, I am not saying it's not valuable, only that to the vast majority, it's not.
I wonder if really great stuff is always for a minority. You have to have listened to a lot of classical music to tell a great interpretation of Mozart from a good one. To realize how great a chess move was, how magical a soccer play, how deep the writing of a philosopher. Not only for stuff that requires previous effort, but also because of the subjectiveness of art. Picasso will be really moving for a minority of people. The Godfather. Even Shakespeare.
Social media and generative AI may be good business because they capture the attention of the majority, but maybe they are not valuable to anyone.
I think of a lot of things in terms of distributions, and I think the how-much-people-value-quality distribution is not that much different.
On the right side, you have the minority of connoisseurs. And on the left, there is a minority who really don't care at all. And then the middle majority who can tell bad from good, but not good from great.
Yep, and what if good things only exist because they were created by and for those who can tell good from great.
I think you’re right that most people don’t notice, but without the extra effort, it would’ve ended up as just another mediocre animation. And standing out from mediocrity is what made it appealing to many people.
> but _most_ people don't (and don't care)
Perhaps it's not for everyone.
Many things you don't notice consciously unless you take the time to look but they still affect your overall perception. I suspect highly detailed animations fall into that category.
Seeing something like this or Akira on the big screen, there is an analogue patina to meticulous hand-drawn motion. Some of the effects, like the physically processed glow of the neon lights in Akira, give a very different feeling than a CG shot.
Although only a few will really appreciate why it's different, I definitely think the difference has a heavy effect on the vibe of a movie.
Same with shooting on film vs digital; not that digital is worse, it has its own feeling which can be used with intent.
Who cares if it's valuable for the majority? What do you think this is? Stock market for slop?
This is art.
Fundamentally I think this comes down to answering the question of "why are you creating this?".
There are many valid answers.
Maybe you want to create it to tell a story, and you have an overflowing list of stories you're desperate to tell. The animation may be a means to an end, and tools that help you get there sooner mean telling more stories.
Maybe you're pretty good at making things people like and you're in it for the money. That's fine, there are worse ways to provide for your family than making things people enjoy but aren't a deep thing for you.
Maybe you're in it because you love the act of creating it. Selling it is almost incidental, and the joy you get from it comes down to spending huge amounts of time obsessing over tiny details. If you had a source of income and nobody ever saw your creations, you'd still be there making them.
These are all valid in my mind, and suggest different reasons to use or not to use tools. Same as many walks of life.
I'd get the weeds gone in my front lawn quickly if I paid someone to do it, but I quite enjoy pottering around on a sunny day pulling them up and looking back at the end to see what I've achieved. I bake worse bread than I could buy, and could buy more and better bread I'm sure if I used the time to do contracting instead. But I enjoy it.
On the other hand, there are things I just want done and so use tools or get others to do it for me.
One positive view of AI tools is that it widens the group of people who are able to achieve a particular quality, so it opens up the door for people who want to tell the story or build the app or whatever.
A negative side is the economics where it may be beneficial to have a worse result just because it's so much cheaper.
> It makes me wonder though - whether it’s more valuable to spend a year on a scene that most people won’t pay that much attention to
In this case, yes it is.
People do pay attention to the result overall. Studio Ghibli became famous because people notice what they produce.
Now people might not notice every single detail but I believe that it is this overall mindset and culture that enables the whole unique final product.
I think most like the vibes, not the fact it took ages to make.
Its the quality or level of detail.
Which might indicate an environment where quality is valued above quantity
To me the question of what activity/method is more "valuable" in the context of art is kind of missing the point of art.
> Maybe Studio Ghibli making it through the seemingly deterministic GPT guardrails was an OpenAI slip up, a mistake,
The author is so generous... but Sam Altman literally has a Ghibli-fied social profile picture and, in response to all this, said OpenAI chooses its demos very carefully. His primary concern is that Ghibli-fying prompts are over-consuming their GPU resources, degrading the service by preventing other ChatGPT tasks.
The official White House account has been posting ghiblified images too, Altman knows that as long as he's not critical of the current administration he's untouchable.
Everyone is talking about theft - I get it, but there's a more subtler point being made here.
Current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I am not saying that this doesn't have economic value, but it means these AI models are closer to lossy compression algorithms than they are to AGI.
The following quote by Sam Altman from about 5 years ago is interesting.
"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."
That's a statement I wouldn't even dream about making today.
> Current generation of AI models can't think of anything truly new.
How could you possibly know this?
Is this falsifiable? Is there anything we could ask it to draw where you wouldn't just claim it must be copying some image in its training data?
Novelty in one medium arises from novelty in others, shifts to the external environment.
We got brass bands with brass instruments, synth music from synths.
We know therefore, necessarily, that there can be nothing novel from an LLM -- it has no live access to novel developments in the broader environment. If synths were invented after its training, it could never produce synth music (and so on).
The claim here is trivially falsifiable, and so obviously so that credulous fans of this technology bake it in to their misunderstanding of novelty itself: have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.
Yet an artist which paints with a new kind of black pigment can, trivially so.
Kind of a weird take that excludes the vast majority of human artwork that most people would consider novel. For all the complaints one might have of cubism, few would claim it's not novel. And yet it's not based on any new development in the external world but rather on mashing together different perspectives. Someone could have created the style 100 years earlier if they were so inclined, and had Picasso never existed, someone could create the novel style today just by "remixing" ideas from past art in that very particular way.
I would argue that Picasso's life experiences, the environments he grew up and lived in, the people he interacted with, and the world events that took place in his life (like the world wars) were the external developments that led to the development of cubism. Sure, an AI could take in and analyze the works that existed prior, but it couldn't have the emotional reaction that occurred en masse after WWI and started the breakdown of more classical forms of art and the development/rise of more abstract forms of art.
Or, as the kids might say, AI couldn't feel the vibe shift occurring in the world at the time.
> arises from novelty in others, shifts to the external environment
> Everything is simply a blend of prior work.
I generally consider these two to be the same thing. If novelty is based on something else, then it's highly derivative and its novelty is very questionable.
A quantum random number generator is far more novel than the average human artist.
> have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.
Put someone in jail for the last 15 years, and ask them to make a smartphone. They obviously cannot do it either.
So if your point is an LLM is something like a person kept in a coma inside solitary confinement -- sure? But I don't believe that's where we set the bar for art: we aren't employing comatose inmates to do anything.
> I generally consider these two to be the same thing.
Sure, words themselves bend and break under the weight of hype. Novelty is randomness. Everything is a work of art. For a work of art to be non-novel, it can only incorporate randomness.
The fallacies of ambiguity abound to the point where speaking coherently disappears completely.
An artist who finds a cave half-collapsed for the first time has an opportunity to render that novel physical state of the universe into art. Every moment which passes has a near infinite amount of such novel circumstances.
Since an LLM cannot do that, we must wreck and ruin our ability to describe this plain and trivial situation. Poke our eyes and skewer our brains.
The problem with generating genuinely new art is that it requires "inputs" that aren't art. It requires life experiences.
I beseech you, in the bowels of Christ, think it possible that you may be mistaken.
Oliver Cromwell, a letter to the General Assembly of the Church of Scotland, 3 August 1650
Disregarding the (common!) assumption that AGI will consist of one monolithic LLM instead of dozens of specialized ones, I think your comment fails to invoke an accurate, consistent picture of creativity/"truly new" cognition.
To borrow Chomsky's framework[1]: what makes humans unique and special is our ability to produce an infinite range of outputs that nonetheless conform to a set of linguistic rules. When viewed in this light, human creativity necessarily depends on the "linguistic rules" part of that; without a framework of meaning to work within, we would just be generating entropy, not meaningful expressions.
Obviously this applies most directly to external language, but I hope it's clear how it indirectly applies to internal cognition and--as we're discussing here--visual art.
TL;DR: LLMs are definitely creative, otherwise they wouldn't be able to produce semantically-meaningful, context-appropriate language in the first place. For a more empirical argument, just ask yourself how a machine that can generate a poem or illustration depicting [CHARACTER_X] in [PLACE_Y] doing [ACTIVITY_Z] in [STYLE_S] could do so without being creative!
[1] Covered in the famous Chomsky v. Foucault debate, for the curious: https://www.youtube.com/watch?v=3wfNl2L0Gf8
This may not be apparent to an English speaker, as the language has a rather fixed set of words, but in German, where creating new words is common, the lack of linguistic creativity is obvious.
As an example, let's talk about "vibe coding" - It's a new term describing heavy LLM usage in programming, usually associated with Generation Z.
If I ask an LLM to generate a German translation for "vibe coder", it comes up with the neutral "Vibe-Programmierer". When asked to be more creative, it came up with "Schwingungsschmied" ("vibration smith"?) - What?
I personally came up with the following words:
* Gefühlsprogrammierer ("A programmer, that focuses on intuition and feeling.")
* Freischnauzeprogrammierer ("Free-mouthed programmer - highlighting straightforwardness and the creative expression of vibe coding." - colloquial)
Interestingly, LLMs can describe both these terms; they just can't create them naturally. I tested this on all major LLMs and the results were similar. Generating a picture of a "vibe coder" also highlights more of a moody atmosphere instead of the Generation Z aspects that are associated with it on social media nowadays.
> a machine that can generate a poem or illustration depicting [CHARACTER_X] in [PLACE_Y] doing [ACTIVITY_Z] in [STYLE_S] without being creative
Your example disproves itself; that's a madlib. It's not creative, it's just rolling the dice and filling in the blanks. Complex dice and complex blanks are a difference of degree only, not creativity.
I agree with the sentiment elsewhere in this thread that this represents a "hideous theft machine", but I think even if we discard that, this is still bad.
It's very clear that generative AI has abandoned the idea of being creative; image production that just replicates the training data only serves to further flatten our idea of what the world should look like.
Right, the focus is on IP theft, and that’s part of it, but let’s set that aside.
How useful is an image generator that, when asked to generate an image of an archaeologist in a hat, gives you Harrison Ford every time?
Clearly that’s not what we want from tools like this, even just as tools.
Not an expert with this stuff but could you not just put "Harrison Ford" in the negative prompt?
Oooh those guardrails make me angry. I get why they are there (dont poke the bear) but it doesn't make me overlook the self serving hypocrisy involved.
Though I am also generally opposed to the notion of intellectual property whatsoever, on the basis that it doesn't seem to serve its intended purpose, and that what good could be salvaged from its various systems can already be well represented by other existing legal concepts, e.g. deceptive behaviors being prosecuted as forms of fraud.
The problem is people at large companies creating these AI models, wanting the freedom to copy artists’ works when using it, but these large companies also want to keep copyright protection intact, for their regular business activities. They want to eat the cake and have it too. And they are arguing for essentially eliminating copyright for their specific purpose and convenience, when copyright has virtually never been loosened for the public’s convenience, even when the exceptions the public asks for are often minor and laudable. If these companies were to argue that copyright should be eliminated because of this new technology, I might not object. But now that they come and ask… no, they pretend to already have, a copyright exception for their specific use, I will happily turn around and use their own copyright maximalist arguments against them.
(Copied from a comment of mine written more than three years ago: <https://news.ycombinator.com/item?id=33582047>)
I don't care for this line of argument. It's like saying you can't hold a position that trespassing should be illegal while also holding that commercial businesses should be legally required to have public restrooms. Yes, both of these positions are related to land rights and the former is pro- while the latter is anti-, but it's a perfectly coherent set of positions. OpenAI can absolutely be anti-copyright in the sense of whether you can train an NN on copyrighted data and pro-copyright in the sense of whether you can make an exact replica of some data and sell it as your own, without getting into hypocrisy territory. It does suggest they're self-interested, but you have to climb a mountain in Tibet to find anybody who isn't.
Arguments that make a case that NN training is copyright violation are much more compelling to me than this.
The example you gave with public restrooms does not work, for two main reasons: businesses are usually paid for it by the government, and operating a company usually comes with benefits granted by the government. Industry regulation as a concept is generally justified in that an industry gets "something" from society, and thus society can put in requirements in return.
A regulation that requires restaurants to have a public bathroom is more akin to a regulation that requires restaurants to check ID when selling alcohol to young customers. Neither requirement has any relation to land rights, but both relate to the right to operate a company that sells food to the public.
But what if businesses got benefits from society and tax money and were free to ignore the needs/desires of those who pay taxes and who society consists of? That seems just about right.
No, the exception they are asking for (we can train on copyrighted material and the image produced is non-copyright infringing) is copyright infringing in the most basic sense.
I'll prove it by induction: Imagine that I have a service where I "train" a model on a single image of Indiana Jones. Now you prompt it, and my model "generates" the same image. I sell you this service, and no money goes to the copyright holder of the original image. This is obviously infringment.
There's no reason why training on a billion images is any different, besides the fact that the lines are blurred by the model weights not being parseable.
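The base case of that induction argument can be made concrete with a toy sketch in plain Python (hypothetical class and names, purely illustrative): a "model" trained on a single work is indistinguishable from a copy machine, because the only thing it can ever emit is the original.

```python
# Toy illustration of the induction base case: a "generative model"
# trained on exactly one work simply memorizes and regurgitates it.
class OneImageModel:
    def __init__(self):
        self.memory = None

    def train(self, image: bytes) -> None:
        # "Training" on a single sample is indistinguishable from copying.
        self.memory = image

    def generate(self, prompt: str) -> bytes:
        # Whatever the prompt says, the only possible output is the original.
        return self.memory

model = OneImageModel()
original = b"<bytes of the copyrighted image>"
model.train(original)

# Any prompt reproduces the training data verbatim.
assert model.generate("an archaeologist in a hat") == original
assert model.generate("anything else") == original
```

The commenter's claim is that scaling from one memorized work to a billion blurs this picture without changing its nature; the counter-claim below is that sampling a billion things roughly equally is a genuinely different operation.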
>There's no reason why training on a billion images is any different
You gloss over this as if it's a given. I don't agree. I think you're doing a different thing when you're sampling billions of things equally.
The root problem is that the model reproduces Indiana Jones instead of creating a new character. This contradicts the statement that the model "learns" and "creates" like a human artist and not merely copies; obviously a human artist would not plagiarize when asked to draw a character.
> the model reproduces Indiana Jones
the model isn't the one infringing. It's the end user inputting the prompt.
The model itself is not a derivative work, in the same way that an artist and Photoshop aren't a derivative work when they reproduce Indiana Jones's likeness.
That does not seem obvious at all. Fan art and referencing is a thing, and there are plenty of examples of AI creating characters that do not exist anywhere in the training dataset.
That's why I said it's an argument by induction. Where's the limit for it to be different? 10 images? 100? 10000? Where does it stop being copyright infringement and why? Many people have paid heavy fines for much less. I don't think that "a billion images is so unfathomable compared to just one million that it truly is a difference in kind" is a valid response
It's not just the guardrails, but the ham-fisted implementation.
Grok is supposed to be "uncensored", but there are very specific words you just can't use when asking it to generate images. It'll just flat out refuse or give an error message during generation.
But, again, if you go in a roundabout way and avoid the specific terms you can still get what you want. So why bother?
Is it about not wanting bad PR or avoiding litigation?
The implementation is what gets to me too. Fair enough that a company doesn't want their LLM used in a certain way. That's their choice, even if it's just to avoid getting sued.
How they then go about implementing those guardrails is pretty telling about their understanding of, and control over, what they've built and their line of thinking. Clearly, at no point before releasing their LLMs onto the world did anyone stop and ask: Hey, how do we deal with these things generating unwanted content?
Resorting to blocking certain terms in the prompts is like searching for keywords in spam emails. "Hey Jim, I got another spam email from that Chinese tire place" - "No worry boss, I've configured the mail server to just delete any email containing the words China or tire".
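The spam-filter analogy is apt because keyword blocking really is this crude. A minimal sketch (hypothetical blocklist, purely illustrative) shows why trivially rephrasing the prompt slips straight past it:

```python
# Naive prompt guardrail: reject any prompt containing a blocked term.
BLOCKLIST = {"indiana jones", "harrison ford"}

def is_allowed(prompt: str) -> bool:
    p = prompt.lower()
    return not any(term in p for term in BLOCKLIST)

print(is_allowed("draw Indiana Jones"))
# False: the literal name is blocked

print(is_allowed("an archaeologist adventurer with a hat and a bullwhip"))
# True: same intent, different words, sails right through
```

Since the model itself still associates the roundabout description with the copyrighted character, the filter blocks only the phrasing, not the output.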
Some journalist should go to a few of these AI companies and start asking questions about the long term effectiveness and viability of just blocking keywords in prompts.
It is not only copyright that is problematic. It generates Franco when asked about the best Spanish leader in the 20th century.
https://chatgpt.com/share/67efebf4-3b14-8011-8c11-8f806c7ff6...
To be fair, Franco is the only Spanish leader most people (or at least most non-Spaniards) can even name
On the one hand, that seems problematic. But on the other, it seems cherry-picked: For the U.S., it generates a picture of JFK. For Russia/USSR, it gives Stalin. For India, it gives Gandhi. For South Africa it gives Nelson Mandela. For Germany, it provides an appropriately hand-wringing text response and eventually suggests Konrad Adenauer.
This suggests to me that its response is driven more by a leader's fame or (more charitably) influence, rather than a bias towards fascist ideology.
https://chatgpt.com/share/67eff74d-61f0-8013-8ce4-f07f02a385...
Literally nothing you've said in this post matters.
I'm not seeing anyone claiming that ChatGPT selects for mass-murderous dictators--the fact that it doesn't select for NOT mass-murderous dictators is damning enough.
I don't see why this is an issue? The prompts imply obvious and well-known characters, and don't make it clear that they want an original answer. Most humans would probably give you similar answers if you didn't add an additional qualifier like "not Indiana Jones". The only difference is that a human can't exactly reproduce the likeness of a famous character without significant time and effort.
The real issue here is that there's a whole host of implied context in human languages. On the one hand, we expect the machine to not spit out copyrighted or trademarked material, but on the other hand, there's a whole lot of cultural context and implied context that gets baked into these things during training.
I think the point is that for a lot of them there are endless possible alternatives to the character design, but it still generates one with the exact same design. Why can't, for example, the image of Tomb Raider have a different colored tank top? Why is she wearing a tank top and not a shirt? Why does she have to have a gun? Why is she a busty, attractive brunette? These are all things that could be different but the dominance of Lara Croft's image and strong association with the words "tomb raider" in popular culture clearly influences the model's output.
Because it's not clear that that's what you want. What's the context? Are we playing a game where I guess a character? Is it a design session for a new character based on a well known one, maybe a sidekick? Is it a new take on an old character? Are you just trying to remember what a well-known character looks like, and giving a brief prompt?
It's not clear what the asker wants, and the obvious answer is probably the culturally relevant one. Hell, I'd give you the same answers as the AI did here if I had the ability to spit out perfect replicas.
And how is that bad or surprising? It’s actually what I would expect from how AI works.
Exactly. We designed systems that work on attention and inference… and then surprised that it returns popular results?
It's an IP theft machine. Humans wouldn't be allowed to publish these pictures for profit, but OpenAI is allowed to "generate" them?
I would 100% be allowed to draw an image of Indiana Jones in illustrator. There is no law against me drawing his likeness.
https://en.wikipedia.org/wiki/Copyright_protection_for_ficti...
https://en.wikipedia.org/wiki/Personality_rights#United_Stat...
I don't think those links support the point you are trying to make (I assume you are disagreeing with the parent). Copyright law is a lot more complex than just a binary, and fictional characters certainly don't enjoy personality rights.
Harrison Ford certainly does.
edit - also, I wasn't making a binary claim, the person I was responding to was: "no law". There are more than zero laws relevant to this situation. I agree with you that how relevant is context dependent.
Copyright protection doesn't prevent an illustrator from drawing the thing.
But selling it is another matter, and these AI companies sell their IP theft with a monthly subscription.
No, you aren't allowed to monetize an image of Indiana Jones even if you made it yourself.
That depends. There are situations where you are. Satire in particular would be a common one, but there can be others.
Rules around copyright (esp. Fair use) can be very context dependent.
You wouldn't be able to offer a service to draw 1 to 1 recreations of Indiana Jones movie frames, though...
You 100% wouldn't be allowed to sell your Indiana Jones drawing services.
I'm honestly trying to wrap my head around the law here because copyright is often very confusing.
If I ask an artist to draw me a picture of Indiana Jones and they do it would that be copyright infringement? Even if it's just for my personal use?
Probably that would be a derivative work, which means the original owner would have some copyright in it.
It may or may not be fair use, which is a complicated question (IANAL).
IANAL, but OpenAI makes money/commercial gains from producing a Ghibli-esque image when you ask - say you pay a subscription to OpenAI. What percentage of that subscription is owed to Ghibli for running Ghibli art through OpenAI's gristmill and providing the ability to create an image with that "vibe/style"? How long into perpetuity is OpenAI allowed to re-use that original art whenever their model produces a similar image? That seems to be the question.
Yeah that's fair, I'm trying to create an analogy to other services which are similar to help me understand.
If e.g. Patreon hosts an artist who will draw a picture of Indiana Jones for me on commission, then my money is going to both Patreon and the artist. Should Patreon also police their artists to prevent reproducing any copyrighted characters?
https://commons.wikimedia.org/wiki/Commons:Derivative_works has some commentary on how this works you might find interesting
Thanks for the link.
I get that copyright is a bit of a minefield, and there's some clear cases that should not be allowed, e.g. taking photos of a painting and selling them
That said, I still get the impression that the laws are way too broad and there would be little harm if we reduced their scope. I think we should be allowed to post pictures of Pokemon toys to Wikipedia for example.
I'm willing to listen to other points of view if people want to share though
Keep in mind that wikimedia takes a rather strict view. In real life the edge cases of copyright tend to be a bit risk-based - what is the chance someone sues you? What is the chance the judge agrees with them?
Not to mention that Wikimedia Commons, which tries to be a globally reusable repository, ignores fair use (which is context dependent), and fair use covers a lot of the cases where copyright law is just being ridiculous.
I would think yes. Consider the alternate variation where the artist proactively draws Indiana Jones, in all his likeness, and attempts to market and sell it. The same exchange is ultimately happening, but this clearly is copyright infringement.
Won't somebody think of the billionaire IP holders? The horror.
And the small up and coming artists whose work is also stolen, AI-washed, and sold to consumers for a monthly fee, destroying the market for those up and coming artists to sell original works. You don't get to pretend this is only going to hurt big players when there are already small players whose livelihoods have been ruined.
Normally (well, if you're ethical) credit is given.
Also, there are IP limits of various sorts (e.g. copyright, trademark) for various purposes (some arguably good, some arguably bad), and some freedoms (e.g., fair use). There's no issue if this follows the rules... but I don't see where that's implemented here.
It looks like they may be selling IP they don't own the right to.
Overfitting is generally a sign of a useless model
His point is that it's only overfitting if the model won't return new content when you clarify you're not just asking for the obvious answer from the context.
The article ends with...
> Does the growth of AI have to bring with it the tacit or even explicit encouragement of intellectual theft?
And like, yes, 100% - what else is AI but a tool for taking other people's work and reassembling it into a product for you without needing to pay someone? Do you want an awesome Studio Ghibli'd version of yourself? There are thousands of artists online that you could commission for a few bucks to do it that'd probably make something actually interesting - but no, we go to AI because we want to avoid paying a human.
> what else is AI but a tool for taking other people's work and reassembling it into a product for you
Well, what I'd like it to be is a tool for generating what I've asked it for, which has nothing to do with other people's work.
I've been asking for video game sprites/avatars, for instance. It's presumably trained on lots of images of video games, but I'm not trying to rip those off. I want generic images.
> we go to AI because we want to avoid paying a human.
No, I go to AI because I can't imagine the nightmare of collaborating with humans to generate hundreds of avatars per day. And I rely on them being generated very quickly. And so on.
I have a fundamental issue with the concept of large platform social media. Companies like Meta love to complain about the impossibility of moderating such huge public spaces - and they aren't lying, it's an immense issue - if you ever moderated a small forum you're well aware of the pain that a troll or two can cause you.
But they chose to create such an unscalable line of business, it never existed before because everyone realized it wasn't possible. It might just be that some of the AI enabled businesses aren't realistic and profitable.
I was really hoping that the conversation around AI art would at least be partially centered on the perhaps now dated "2008 pirate party" idea that intellectual property, the royalty system, the draconian copyright laws that we have today are deeply silly, rooted in a fiction, and used over and over again, primarily by the rich and powerful, to stifle original ideas and hold back cultural innovation.
Unfortunately, it's just the opposite. It seems most people have fully assimilated the idea that information itself must be entirely subsumed into an oppressive, proprietary, commercial apparatus. That Disney Corp can prevent you from viewing some collection of pixels, because THEY own it, and they know better than you do about the culture and communication that you are and are not allowed to experience.
It's just baffling. If they could, Disney would scan your brain to charge you a nickel every time you thought of Mickey Mouse.
The idea of open sourcing everything and nullifying patents would benefit corporations like Disney and OpenAI vastly more than it would benefit the people. The first thing that would happen is that BigCorp would eat up every interesting or useful piece of art, technology, and culture that has ever been created and monetize the life out of it.
These legal protections are needed by the people. To the Pirate Party's credit, undoing corporate personhood would be a good first step, so that we can focus on enforcing protections for the works of humans. Still, attributing those works to CEOs instead of corporations wouldn't result in much change.
>The first thing that would happen is that BigCorp would eat up every interesting or useful piece of art, technology, and culture that has ever been created and monetize the life out of it.
Wait, I'm still trying to figure out the difference between your imaginary world and the world we live in now?
I think the main difference is if everything were freely available they may attempt to monetize the life out of it, but they will fail if they can't actually provide something people actually want. There's no more "You want a thing so you're going to buy our thing because we are the exclusive providers of it. That means we don't even have to make it very good"
If anyone in the world could make a Star Wars movie, the average Star Wars movie would be much worse, but the best 10 Star Wars movies might be better that what we currently have.
I’m sure the best independent Star Wars movie would be infinitely better than what Disney has been shoveling out for the last couple decades.
Such a talented team would be able to make a great movie on the same theme.
Saying the lack of creativity in the industry is because we can't copy things freely is completely moronic.
It's a major hindrance. For example, if I came up with an amazing creative idea for a star wars movie I couldn't do a damn thing with it unless Disney told me I could. Disney isn't likely to accept an unsolicited pitch from a total nobody who just happened to have a great idea either. I don't see how you could doubt that there are a lot of great works of art that won't ever exist because of the fact that copyright prevents them from ever getting off the ground.
Thor would have red hair in that imaginary world, rather than being the blond man Marvel made him into to create a somewhat distinguished comic book character.
The Disney or otherwise copyrighted versions allow for unique spins on these old characters to be re-copyrighted. This Thor from Disney/Marvel is distinguished from Thor from God of War.
> “Before starting the series, we stuffed ourselves to the gills with Norse mythology, as well as almost every other type of mythology – we love it all! But you’ve got to remember that these are legendary tales – myths – and no two versions are ever exactly the same. We changed a lot of things – for example, in most of the myths Thor has red hair, Odin has one eye, etc. But we preferred doing our own version.”
https://scifi.stackexchange.com/questions/54400/why-did-earl...
Huh, did not know that. As an Icelandic person I knew about Þór the Norse god much earlier than Thor the Marvel character. I never really pictured his hair color, nor knew he had a specific hair color in the mythology. I actually always pictured him with a beard, though. What mostly mattered were his characteristics: his ill temper and drinking habits, and the fact that he was not a nice person, nor a hero, but rather a guy who starts shit that gets everyone else in trouble. He also wins every fight except one (he loses one against Elli [the personification of old age]). The little I've seen of him in the Marvel movies, he keeps almost none of these characteristics.
EDIT: My favorite story of him is the depiction of the fall of Ásgarður, where Loki and some Jötun are about to use the gods vanity against them and con them out of stuff they cannot actually pay for a wall around Ásgarður. Þór, being the way he is, cannot be around a Jötun without fighting and killing him. So rather than paying up (which the gods cannot do) Þór is sent to see this Jötun, knowing very well that he will be murdered. This betrayal is marked as the beginning of the end in Völuspá (verse 26).
How do restaurants work, then? You can’t copyright a recipe. Instructions can’t generally be copyrighted, otherwise someone would own the fastest route from A to B and charge every person who used it. The whole idea of intellectual property gets really weird when you try to pinpoint what exactly is being owned.
I do not agree with your conjecture that big corps would win by default. Ask why people would need protection from having their work stolen when the only ones wielding weaponized copyright are the corporations. People need the freedom to wield culture without restriction, not protection from someone having the same idea as them and manifesting it.
It’s more reasonable to say that the idea of intellectual property is challenging for nonlawyers because of the difficulty in understanding ownership not as one thing, but as a bundle of various elements of control, exclusion, obligation, or entitlement, even some of which spring into existence out of nowhere.
In other words, the challenge is not to understand “what exactly is being owned,” and instead, to understand “what exactly being owned is.”
> what exactly being owned is.
Thank you, this is beautifully put and very astute. Does a recipe, a culmination of a lifetime of experience, technique, trials, errors, and luck constitute a form of someone/thing's person-hood such that it can be Intellectual Property.
It depends. First I think we could make a distinction between not-intellectual-property and intellectual-property-with-no-protection but that doesn’t seem to be what you’re getting at.
Have you taken reasonable steps to keep it secret? It could be a trade secret, and of course if you steal the recipe for KFC's herbs and spices, you will be liable for civil damages for your misappropriation of their trade secret.
And if you describe a recipe in flowery prose, reminiscing about the aromas in grandma's kitchen, of course that prose is copyrightable.
Should you invent a special kind of chicken fry mix and give it a fanciful name, the recipe's identifier of origin - its trademark - could be protectable.
But the fact that your chicken fry mix is made of corn starch and bread crumbs is a fact, like a phone book. Under most circumstances, not protectable.
(IANYL, TINLA - I am not your lawyer; this is not legal advice.)
> How do restaurants work, then?
Primarily because recipe creation is not one of the biggest cost centers for restaurants?
> How do restaurants work, then? You can’t copyright a recipe.
They barely work. Recipes are trade secrets, and the cooks who use them are either paid very well, given NDAs or given only part of the most guarded recipes
A restaurant is a small manufacturing facility that produces a physical product. It’s not the same at all.
An artist is a small manufacturing facility that produces a physical (canvas, print, mp3, etc) product, no?
What is different about the production of Micky Mouse cartoons? Why is it normal for industries to compete in manufacturing of physical product, but as soon as you can apply copyright, now you exclusively have rights to control anything that produces a similar result?
Let’s say I write a book or record an album and there is no copyright. How do I get paid?
Musicians I suppose can tour, which is grueling but it’s something. Authors, programmers, actors, game studios, anything that’s not performed live would immediately become non-viable as a career or a business.
Large corporations would make money of course, by offering all you can eat streaming feeds of everything for a monthly fee. The creators get nothing.
> Let’s say I write a book or record an album and there is no copyright. How do I get paid?
I've purchased books that were in the public domain and without copyright. I've paid for albums I could already legally listen to for free. I've paid for games and movies that were free to play and watch. I'm far from the only person who has or would.
The people who pirate the most are also the ones who spend the most money on the things they pirate. They are hardcore fans. They want official merch and special boxed sets. People want to give the creators of the things they love their money and often feel conflicted about having to give their cash to a far less worthy corporation in the process. There are people who love music but refuse to support the RIAA by buying albums.
There are proven ways to make profit in other ways like "pay what you want" or even "fund in advance" crowdsourced models. If copyright went away or, more ideally, were limited to a much shorter period of time (say 8-10 years) artists would continue to find fans and make money.
You’re talking about individual piracy. I’m talking about huge scale corporate piracy, which is already happening (laundered through AI algorithms and other ways) and would happen a lot more if copyright vanished.
Part of what muddies the water here too is that copyright lasts too long. Companies like Disney lobbied for this successfully. It should have a time horizon of maybe 25 years, 50 at most.
Well, technically it wouldn’t be piracy once copyright vanished. It’d be remixing, appropriation, derivative work, etc., all legal.
So make copyright like patents. That’s what a lot of the copyleft movement has been arguing for forever. Make a copyright holder demonstrate their idea is unique, manifests into a tangible output, and if so protect the creator for a limited time. Everyone is free to use the work in their own provided they pay royalties at a reasonable rate for the duration of the patent.
But the status quo now with basically perpetual copyright controlled by large media conglomerates 100% stifles culture and is a net negative on society. It’s not the right to copy that needs defending, it’s the first right of a briefly protected enterprise, a reward to the creator, that needs to be protected. Copyright is like trying to cure a cough by sewing someone’s mouth shut.
1. There are an infinite number of careers that do not currently exist, because their business models do not make sense. I do not think it's a great idea to keep laws on the books, that limit the creativity and rights of hundreds of millions of people, just to keep a few professions afloat.
2. You greatly underestimate the creativity of a capitalistic market. For example, on the web, it's generally difficult and frowned upon to copyright designs. Some patent trolls do it, but most don't. If you make an innovative design for your website, you're bound to be copied. And yet many programmers and tech companies still have viable business models. They simply don't base their entire business model around doing easily-copyable things.
It looks like you're being purposefully ridiculous. There is an obvious difference between the two: cost of reproduction. For something with a cost of reproduction near zero (book, music, art, etc.), IP restrictions matter. For something like a restaurant or factory, the cost of reproduction is high.
It's not obvious at all! You are citing the only difference that typically comes up. A quesadilla is beyond trivial to reproduce and most people have the ingredients readily available. 3D printers make it trivial to reproduce things that would have been obviously hard to reproduce a few years ago. A book is hard to reproduce if it's not in digital form. Is MIDI a song or a set of instructions? Source code is easy to copy but hard to reproduce. Source code is just a recipe telling a compiler what to do. And we've already established that recipes aren't copyrightable because it was "so obvious" at the time copyright was established that you shouldn't be able to copyright the creative process.
Closed source - when was the last time your restaurant told you what was in, and how to make, your favourite dish?
What's in Coca Cola?
What are the 11 herbs and spices in Kentucky Fried Chicken?
How do I make the sauce in a Big Mac?
Yes, and notably the source recipe can’t be copyrighted. Trade secrets and recipes are not copyrightable. That’s the point. We have entire vastly profitable industries built around the protection of trade secrets, with no copyright in play. Competing to make the best cola-flavored beverage or the best burrito is a thing. Competing to make the best rendition of Snow White is not. What’s the rub? They don’t seem that different at all.
Snow White is not the best example, there are non-Disney versions, like the one with Sigourney Weaver and the one with Chris Hemsworth.
I imagine they're licensed--the original creator or their estate had to be looped in to make them happen, and probably financially benefitted.
The original creator of the German fairy tale?
https://en.wikipedia.org/wiki/Origin_of_the_Snow_White_tale
I see a mention of Ovid ... copyright has probably expired.
I can't explain the exact link, but your repeated and vocal pro-AI stance in this thread feels connected to the way that, when you got called out for a simple and inconsequential mistake any of us could make, you immediately doubled down on it, all while the truth was a single Google search away.
We're talking about copyright in this subthread, in the context of AI. I'm not sure how a copyleft slant implies pro-AI, but whatever. There are a lot of reasons to be dubious about AI. But "AI is going to destroy human creativity and ingenuity" is not one that concerns me. And "society would be better without AI" is not an axiom I hold, so yeah, I'll respond to that type of supposition when it's thrown into an otherwise interesting discussion.
I could just be wrong about Snow White's original copyright. As indicated by my use of "I imagine", no, I didn't search the origins of it. I'm not seeing a big "double down" moment where I asserted that Snow White is definitely owned by Disney--that would be the clincher. In fact, nothing about my reply contradicted the GGP adding that maybe Snow White isn't the best example. Why are you so bothered? Anyway, if Snow White doesn't have a recent progenitor, then it kinda proves the point that the world works perfectly well in the absence of copyright, and that the ability to freely remix culture is a fundamental human right. TIL that Snow White was originally a German fairy tale, and I'm relieved that Disney hasn't asserted copyright over it.