Goodbye to Sora
twitter.com | 1019 points by mikeocool a day ago
https://xcancel.com/soraofficialapp/status/20365327959847158...
https://www.hollywoodreporter.com/business/digital/openai-sh..., https://archive.ph/ABkeI
Sometimes I think my opinion means nothing on these topics, especially when it's going to get buried in a thread of 500-plus comments. But I think you finally see a little bit of a flaw in the strategy, or a little bit of insight into what was desperation for relevance and an attempt to very quickly attain what other companies have attained. What they're seeing is a gradual reduction in ambition. It's only natural for a lot of companies to overreach, but reality and gravity are pulling them back.

As some other people have mentioned, Wall Street and others see that coding is the prime use case for this, where you can make money and have a really profitable business, with auxiliary functions around it. Driving addictive content is not one that should be at the forefront, and while many will continue to do that and we'll have all this generative content, I think consumers are slightly smarter now and don't want to be drawn into this kind of addictive, toxic content. Over time we're probably going to see some really broad and strong use cases for AI, but in the case of social media or generative content, we have to be a lot more thoughtful about it.

I'm glad they're shutting down this app, as much as it's great to see innovation in technology and to see how far it's been pushed. I prefer to see it when someone like Google does it, because they're really doing it from the standpoint that it has broad applications to something like simulation or training. Not whatever OpenAI was doing, which honestly just doesn't feel very truthful. I feel like they say one thing and do something else, or they say one thing and the agenda is something else. And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth, even if it only benefits one other person to hear it.
> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

The addictive toxic content will go the way of tobacco and explore new markets. Back in 2010 around 11% of the population of Indonesia was connected to the internet. Currently it's closer to 80%, largely via mobile phones. That's approximately 200 million new users. Nigeria and Pakistan are going through the same change; they just started later. Since 2016 India alone added more users than the mentioned countries combined. That's a lot of first-generation users. More than the entire western population.

I'm reminded of a video from the '80s/'90s where researchers took a TV to the Amazon to see how "live off the land" tribes reacted to high technology. Apparently they stopped doing everything and just wanted to watch TV all day. And that was just regular old TV. Short-form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.

I'm gonna try to remember this comment for the next time someone brings up the boiling frog analogy. Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living".

There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.

In the 19th century, many authors lamented the frantic, unhealthy pace of modern life.

This. What really happened is that someone figured out what makes people give something their undivided attention and is profiting handsomely off of this finding.

Can anyone come up with a citation for this? Not to say it's a hallucination, but, by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people think of it in later years, e.g., now.
It's a particularly misleading anecdote. In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery-powered light and/or sound toy from a carnival. And for what it's worth, tomorrow they don't miss whatever "indistinguishable from magic" thing, so no harm done. // grew up near such areas

> coding is the prime use case for this where you can make money

Is it? I have the impression GenAI deteriorates the internet from both a content and a tech perspective. Bots that waste your time because they don't work well or because they are pushing an agenda, and low-quality content that floods social media from people who want to make a quick buck. GitHub and AWS became increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs. Everything just got faster and we got more of it, but none of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and then adding more when you're sure it works.

That's kind of my concern so far. We haven't seen a lot of big AI deployment success cases, but of the few mildly successful ones we HAVE heard of, they're 100% about cost saving / perceived efficiency and never about actually making a _better_ product or service. I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs but, on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases. It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better; it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse. I fear people will just get used to it.
Nobody gets tailored clothing anymore, and people don't question that we have standardized sizes that don't really fit anyone properly. People commonly buy standardized furniture and rarely get something made specifically for their room. If cheaper software (that's mostly what it is) gets the job done, we will probably just keep doing that, even if that means we lose something in the process.

Your analogy is one indirection from being a fit. Factories usually get custom solutions for their production facilities, tailor-made by specialist engineers. They then run the production and deliver mass-produced goods to the markets. We software engineers aren't delivering tailor-made solutions straight to the consumer markets. We are much more like the engineers who set up the machinery in the production facility, and our software is much closer to that machinery than it is to the mass-produced table you buy at Ikea.

Fair. I just have the feeling that it doesn't get the job done anymore. I hope we will see the rise of alternatives.

Fake support contact from companies is another use case. They send you in endless useless circles until you give up. Saves the company a ton of money.

The level to which this stuff can be used against the common person is truly astounding.

Well, tbh I think it's like cloud in 2007-2009. I was highly skeptical and heckling while running on managed bare metal every time there was an outage. But now cloud is the standard model for anything, really. And I think AI becomes the gold standard for code in the long term. So yeah, right now lots of outages. In a couple years it'll be much better. And in ten years people will always default to automation via AI.

> where you can make money and have a really profitable business

I am not convinced. Nobody is making money; every player is losing money hand over fist.

With coding (it's not really coding per se that matters, imo, it's more like dynamic logic writ large) it's a land-grab strategy.
They want to get established as the de facto standard and get a whole bunch of people on their platform, so by the time they need to "get profitable" they have a captive audience and a leg up on other labs. It's a tale as old as time; that's why Ubers used to be cheaper than cost.

It's a strategy as old as time, but it's a strategy that usually fails. Spending a lot of money on customer capture only works when customers are actually solidly captured. Most markets have fairly heavy competition, and customers will only stay captured as long as there is no substantial cost to staying captive. Take Uber as an example: yes, they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough. Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user. AI appears to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.

> it's a strategy as old as time, but it's a strategy that usually fails

I like to call this the "Yahoo Effect"

> They want to get established as the de facto standard and get a whole bunch of people on their platform so by the time they need to "get profitable" they have a captive audience, a leg-up on other labs. It's a tale as old as time, that's why ubers used to be cheaper than cost.

Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented. But most of what we've seen during the "enshittification age" has been burning money until you achieve a critical mass of users.
However, this only really applies to social platforms, where the point is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface and now you feel bad leaving if she doesn't leave at the same time; and, more importantly, who wants to join Google Square if nobody else uses it? That's not going to work for AI platforms. What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave. So far none of them have capitalized on this (probably due to various technical reasons), but I expect it to start eventually.

The lock-in of email platforms is the address. With IMAP you can extract the messages right away and migrate. Yet you would still have to check the old mailbox for stray emails whose senders you must tell to reach you at the new address. And continue doing so for years, or risk missing some critical email. Coincidentally, bringing your own address that can be migrated away is somewhere between impossible and expensive.

No, you can do it on all the major providers for either no or low cost. Disregarding the grandfathered free accounts, your own domain is $7.20/user/month on Gmail and €5/month on Proton. On Microsoft that's a business-tier feature, and AFAIK it's not supported at all on Yahoo.

Zoho Mail Lite is $1/user/mo when billed annually. https://www.zoho.com/mail/zohomail-pricing.html A few DNS hosting companies still bundle in a few free email mailboxes with registration, but that is becoming more rare.

Not because there is no path to profitability (they make a ton of money on inference); they just spend a lot on R&D.

> they make a ton of money on inference

So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.

Agreed, the revenues are big... but very small next to the datacenter bills. Even if only a fraction of those are being used for inference, it's hard to argue they even break even. That's before all the other costs (Super Bowl ads, billions in compensation).

From what I understand, the issue with inference is that it doesn't scale as user count grows the way traditional SaaS scales. In typical SaaS, adding users requires very little additional capacity. With inference, however, supporting more users requires much more capacity to be added. I don't know if it's quite linear, but it certainly requires more infrastructure to support additional LLM users than, say, a web application. And the existing infrastructure routinely struggles for several of the well-known players. You can literally tell when it's getting bogged down by workload. And that's after all the absurdly large datacenters we've already established at significant expense (to both the corporations and the average person).

Afaik Anthropic still loses money on their main product in this space: Claude Code and their Max plans. This became immediately clear to me over the weekend when I used Opus via API key. I had it review the code for my (relatively small) personal blog to create an AGENTS.MD; it cost me $3.26.

Same here... The API costs are absolutely insane for any real usage. This is either high prices to make sure no profitable competitor to a Claude workspace or other agent system emerges, or heavy sponsoring of their own solutions.

Not really. They are burning money on hardware, resources, and payroll without meaningful return prospects.

Frontier model developers don't make money, but inference providers do.
For open-weight models there is a healthy market of inference providers that operate profitably without VC backing.

Such as? Where do we find these open-weight model providers? Why is hardly anyone talking about them or sharing links (here or elsewhere) if they are so wonderful and profitable?

Go to https://models.dev/ and you're going to see plenty of providers. OpenRouter makes it easy to use them; just add credits to your account. I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.

Why is hardly anyone talking about basic web hosting providers or sharing links (here or elsewhere) if they are so wonderful and profitable? Because few people really care much about the commodity hosting world. They're not making waves; they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower-level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable, though; just not headline-reaching numbers in the end.

Yes
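The scaling point made upthread (SaaS capacity vs. per-token inference cost) can be sketched with back-of-the-envelope arithmetic. All prices and usage numbers below are illustrative assumptions, not real figures from any provider:

```python
# Rough contrast between classic SaaS and LLM inference economics.
# Every constant here is an assumption chosen for illustration only.

SAAS_MARGINAL_COST = 0.02      # $/user/month: a few DB rows and requests
TOKEN_PRICE = 15 / 1_000_000   # $/token, assumed blended inference price
TOKENS_PER_USER = 2_000_000    # assumed monthly token usage of a heavy user

def monthly_cost(users: int) -> tuple[float, float]:
    """Return (saas_cost, inference_cost) in dollars for a user count."""
    saas = users * SAAS_MARGINAL_COST
    inference = users * TOKENS_PER_USER * TOKEN_PRICE
    return saas, inference

for n in (1_000, 100_000):
    saas, inf = monthly_cost(n)
    print(f"{n:>7} users: SaaS ~${saas:,.0f}/mo, inference ~${inf:,.0f}/mo")
```

Under these assumptions, inference cost grows linearly at thousands of times the SaaS marginal rate, which is consistent with the comment's observation that each additional LLM user needs real additional capacity.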
They are just pivoting to stuff that loses money more slowly but maybe has a path to profits eventually…

Some of these AI companies that promised AGI are going to find out that they're actually IDE-plugin subscription companies.

Coding is a small minority of total generated tokens. It's easy, swimming in tech waters all day, to think Claude is the pack leader because it writes excellent code, but the reality is that tokens are overwhelmingly coming from OpenAI and Google doing mostly stuff like "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters".

> "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters"

Right, but I think a lot of these use cases aren't replacing any jobs, because it wasn't anyone's job. It's just a little polish on existing work (did spell correction in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.

Both of those things were and are jobs. They're called secretaries and travel agents.

Jobs that have already been killed is my point.

Together that's about four million American jobs, so I'd disagree those jobs have "already been killed".

I think it remains to be seen if LLMs are even 25% as good at everything else as they are at coding... which is fine, if they focus and stop promising the world. That alone is huge, if they let go of their egos about putting the entire white-collar class out of work.

Coding is one topic, but the big one is agentic AI. You will have an agent like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its 'job'. You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever person was doing it before. There might be some transition phase, like verifying the data of the real person vs.
the agentic AI, then moving over to validation only, until the agent is on average as good as a human. Then the human will be gone. Agentic AI will take basic support tasks first (it's actually already doing this), then more complicated things, etc. For this we need an ecosystem, aka the agentic AI platform, the interconnect between agents and tools, and this stuff is currently getting built by someone one way or the other. At scale we need more capacity, and these agents will also cost more money than a $20 subscription. But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.

I see where you are going with this, but IMO this is not a technical problem but a legal problem. Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends? Ultimately, it's going to be you, most likely, because I can't see AI firms taking this responsibility. You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster; however, you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth. An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it. I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently.
And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.

We could argue all day about what should be at the forefront, but addictive content isn't going anywhere, because addicts pay up. In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.

> because addicts pay up.

I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.

Sometimes they do pay up. Google Gemini estimates that 25% of daily active YouTube users pay for ad-free service. I know my wife and I do, and we watch a huge range of YouTube material, more hours a month than all the other streaming services we subscribe to. There is no area of human knowledge or human interest that YouTube doesn't have a ton of material for; and of course, the animal videos… The ironic thing, on the subject of the Sora service being cancelled, is that neither my wife nor I watch AI-generated material.

I think the real answer is that Sora-style AI slop videos just aren't as addictive as we thought they'd be. I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video, and it actually worked. They got bored in like 2 days. It simply doesn't compare well with handcrafted short-form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).

Yes, fortunately slop is pretty unwatchable after the novelty wears off. Even the lowest-common-denominator stuff NFLX churns out is in a different league.

I was talking to other people re: the difference between code and other domains. Code is, for the customer, what it does, not how it does it.
That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do. Many things, like entertainment products, don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters: dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, soundtrack, editing, etc.

For short-format, low-stakes stuff like online ads, though, the AI slop probably does work. Same for, say, making a PowerPoint. LLMs can quickly spit out a passable deck, I am sure. For a lot of BS job use cases, that's actually probably fine.
But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example, I doubt all the AI startups are using AI-generated sales pitches when they go to VCs for funding.

IMO slop fits best for "art that isn't the point". A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal: this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art. Same with PowerPoint - you make a PowerPoint so that everyone knows this decision was made by the type of people who make PowerPoints. A txt file and a png would have gotten the job done. Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.

Agreed, it's good at placeholder art for which entertainment consumption is not the point. Clip art for the new generation.

>> you can get it on a ton of other platforms for free, so people don't want to pay for it.

What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug. OpenAI just proved you cannot burn money indefinitely.

The monetization of social media has always been about steering otherwise non-paying users into making purchases elsewhere. So if the AI slop can make people spend money on other products, that's accomplished the goal.

> "reality and gravity are pulling them back"

I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

They're not; they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on Sora will be reposted to TikTok.

I also prefer seeing a corporation like Google do it, for two reasons: generative content might feed their cash cow also known as "YouTube", and Google already has a good base for coding assistants. Google owns, I think, 25% of Anthropic and earns money selling compute infrastructure to Anthropic. Personally I think Antigravity (with Claude and Gemini) and gemini-cli firmly keep Google in the running as far as AI coding tools go. I want to do business with companies that have a sustainable business plan. Google's AI products for tech work, and ProtonMail's Lumo+ product for all private daily web search and chatbot functionality, are enough for me; I used to chase every commercial AI offering, but not anymore.

For OpenAI that was, and felt like, some side hustle they were playing around with, nothing more. Having Disney on their side was definitely quite a smart/interesting move. At least from one interview, they definitely had resource issues last year and teams had to fight for them. It can easily be that Sora was always prioritized down and they realized it doesn't make sense to spend that much capacity while then not being able to push their main model.

It never made sense and was always just burning resources that OpenAI does not have. It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.

Everyone is doing image generation. It's relatively easy, and I would say it would be a people mover if OpenAI didn't support it. So they need to be able to do image generation, for which they need image data.
They also need to be able to analyze videos for more and better training data, like learning from or teaching their models with YouTube and other sources. So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation. And despite how much money they burn, for a company that size, trying out video generation wasn't that high of a goalpost. I'm really surprised by their move and can only imagine that the progress of other models from Google and Anthropic pulled their teeth, and they no longer want to invest the compute (not money), preferring to leverage their compute for their main models.

Had Waffle House with some friends who mostly work in blue-collar industries. One guy who works at a timber mill used Claude Code to redo their ordering system. Took him about a month to go from knowing nothing about Claude Code to finishing the system. Basically just copied a proprietary software product that costs them upward of $20k a year. They're keeping that other product to cross-check, but so far the Claude-coded item works great, and is of course more custom to their business. The dude's a hero at work because the system is heads and tails better. Obviously caveat emperor, but there are a lot of real-world scenarios like this. I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don't need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.

I'm converging on this as the real end state: it's a "better Excel" for general business work. And it has some of the same limitations: maintainability and security. But there are also plenty of small businesses that run off a shared Excel spreadsheet and a few mailboxes. Nobody ever really solved making CRUD apps easier through better frameworks.
So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.

> caveat emperor

s/emperor/emptor

I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.

Keep dreaming! The best part is that they'll get popped because of it and have zero clue. Anyone building in any frontier provider currently, but who has little background in software, is creating all kinds of new liabilities that didn't exist before. In a school district where I live, the IT department developed a password distribution app using Gemini on Google Apps Script (they didn't even need this part), and sent out links with B64-encoded JSON that included: student name, student email, parent email, and student password. Yet when I found it and told them all the ways it was technically a breach in our state, they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email. The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students' passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP were "not recommended". When pressed on that they were, again, silent. LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
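To make the point about B64-encoded links concrete: base64 is an encoding, not encryption, so anything carried in such a link is effectively plaintext. A minimal sketch, where the field names and values are made up for illustration (not the district's actual payload):

```python
import base64
import json

# Hypothetical payload resembling the kind of data described above.
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.org",
    "parent_email": "parent@example.org",
    "password": "hunter2",
}

# What such an app would embed in the emailed link.
token = base64.b64encode(json.dumps(payload).encode()).decode()

# Anyone who sees the link (relaying mail servers, a forwarded email,
# browser history) recovers everything with one call -- no key needed.
leaked = json.loads(base64.b64decode(token))
print(leaked["password"])  # hunter2
```

Email also travels through multiple intermediate servers and is often forwarded, which is why a decodable token in a mailed link counts as disclosure rather than protection.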
I'm just going to keep building software mostly traditionally, while using "AI" to help me research things quicker (might as well use it while it's here), survive the shitpocalypse, and then laugh as traditional-minded developers become a scarce, sought-after resource again. Either way, the instability of this industry, due to the insane amounts of cargo-culting every time <insert big thing> comes along, has made me really question whether I want to stick around.

I think this is where a lot of freelance contractors could pivot to - basically "last mile" coding, where the LLM does the front-end work, and then high-hourly-pay engineers come in and fix the work. It'd still be cheaper than a lot of the industry-niche software that is usually pretty bad.

thanks for the correction

I hear you, but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.

> Sometimes I think my opinion means nothing on these topics, especially when it's going to get buried in a thread of 500 plus comments.

Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with "Nobody will see this but", and hoping that people try to prove you wrong by reading and commenting on it.

I stopped after the first sentence. People really have to stop with the clickbait vomit way of writing.

> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

Considering the large million-plus view counts I see AI slop getting on FB and YouTube, I'm not seeing this behaviour play out.

I had fun with it for about a week, but the thing that disappointed me the most wasn't the technology, it was the _people_. You have a machine that can make anything you can imagine, and the space of what people were exploring was so _small_.
[flagged]

I'd argue that for informal uses like HN, this is very much okay! It's grammatically correct and gets the point across. And most importantly, these paragraphs read more like someone's personal voice than some pithy but edited-to-death couple of sentences.

> gets the point across

If people don't read because the text is an unreadable mess, none of the points get across.

I'm a people. I read it. If you call this an unreadable mess I really don't know what to say. Language is awesome, and it's awesome we can create infinitely long sentences with it. And like open source, if you don't like it, write the one you like :)

A long time ago on the Myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious, how their posts differed from most others on the forums; but they would have posted with less joy and maybe less overall, and that would have been it.

While I don't agree with the other poster that the comment was a mess, the sentences were so long that I had to focus not to lose the point. I think the top comment read a bit too much like stream of consciousness, which I tolerate much more in spoken speech than in writing. Still, I liked the comment, but agree it could have been improved.

It might be a surprise to you, but there are plenty of people who are willing to read one or two paragraphs of words.

I'm comfortable reading much more than two paragraphs, even in online forums. In this specific case, the unreadability is because of poor sentence structure. I quit in the middle of the second sentence.

tbh I quite like the style, I get the train of thought, and am sure it wasn't written by an LLM.
> I feel like they say one thing and do something else or they say one thing and the agenda or something else.

Nothing is new under the sun.

I had so much fun making videos with my mom when it came out. During the first two weeks, we made over 100 cameo videos together - we were constantly running up against the upload limit. It unleashed tons of genuine creativity, joy, and laughter from us. After those first two weeks though, we just… didn't use it again. The novelty wore off and there wasn't anything really to bring us back. That was the real downfall of Sora.

The problem is that due to the ease with which these can be made, there is also really no reason to make this social. "Why would I look at somebody else's creations when I can make my own?"

I can see some usage for this use case - "look Morty, I turned myself into a pickle!" - but just like image / meme generators, this is like 10-30 seconds of engagement within a friend circle at best (although some might go viral, that won't bring in much money to OpenAI in this case). There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI-generated videos; the question is whether those get enough engagement for the creators to pay for the creation tool.

I'm not an artist or creative person in any sense. My persona is closer to a settings menu than a colorful canvas. The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.

> My persona is closer to a settings menu than a colorful canvas

ah, but what a persona that would be if you were a Kai's Power Tools settings menu!

> The AI art I have seen creatives produce is far beyond anything I have been able to come up with

.. such as? What's the "Mona Lisa of AI art"? Is there, like, a gallery? Awards?
Unfortunately I don't have a solid reference point or checklist for the defining qualities of "good art". And frankly I don't take those who do very seriously. To me art is all about the personal vibes you get from it. So I enjoy Zach London (gossip goblin), Bennet Weisbren, and voidstomper/gloomstomper, if you want something to measure with your "real true art" checklist.

They're different impulses. Some want to consume. Others want to create. TikTok and social media are a strange mix of both, people posting response videos to everything.

Personally, I've stopped subscribing to Spotify, YT Music, etc., because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it's not grating to hear after a few days of that favorite song. The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut. But I guess the desire to create something that others would consume is also different from the desire to simply create.

Sweet Jesus. You realise this is the mental equivalent of stuffing your stomach full of junk food and soda every day?

This is a mainstream break-up song: https://youtu.be/ekzHIouo8Q4

This is a vocaloid break-up song: https://youtu.be/9pQR4a5sisE

The first isn't bad by any means. There are a million break-up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have a billion views. But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break-up. The ending hits a little harder, because break-up songs are about endings. Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda. I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.
That doesn't sound meaningfully different from what people are already doing on Instagram and TikTok all day.

Absolutely correct, and my comment is by no means strictly about the AI slop.

As opposed to the Kardashians and Real Housewives and Chappell Roan?

No, the whole horseshit belongs together of course. Just that the AI slop is the logical culmination of the dumbed-down pop culture of the last 15-ish years or so.

For a lot of people music is a focus aid, not the object of contemplation.

I'm with you here, this resonates so much. I'm so fed up with endless subway tunnels, they all look and sound utterly the same and boring. So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free; they are just good enough to feed my passion for boring tunnels. Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.

you could not waterboard an admission of bad taste like this out of me

> Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist.

The musician in me just shed a tear.

Pink Beatles in a Purple Zeppelin comes to mind.

Had to create an account just to let you know that someone out there got the reference.

I occasionally use Suno to re-imagine songs in different keys, tempos, and genres, and sample them. Most of the output from Suno is slop, but it occasionally has a few good bits you can sample, chop up, re-pitch, and create something totally new from, which also has the added benefit of being unrecognizable to rights algorithms and lawyers from major labels. It's a neat tool for genuine creators, and a crutch for people interested in slop.

Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.
Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?

> Modern music has done this to itself

I get my modern music from Bandcamp. If you can't find good stuff to listen to, that's a 'you' problem.

So true. AI music generators like Suno can't do Paul Shapera's works even remotely, but can recreate a lot of pop or EDM music very faithfully. There's just no distance to close; it's already mainstreamly bad.

> Modern music has done this to itself. When the human product is already pure corporate slop, it's not hard for AI to compete.

What are you talking about? There's lots of modern music that's not corporate slop and that's absolutely great. Never in history was access to great music as easy as it is now.

So find music you like that isn't modern corporate slop. My music right now consists mainly of indie stuff I've found on YouTube and Daft Punk. No plagiarism machine needed, just human-made music.

"No plagiarism machine needed, just human-made music"

From Wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."

Did Daft Punk put in a lot of effort to remix existing sounds to make their own music? Yes. Did they type "pls make french house electronic music number 1 chart" into a text box? No. Did they also credit original authors? Yes.
I've not gone through their whole library, but for example, Edwin Birdsong has a songwriting credit on "Harder, Better, Faster, Stronger".

> the slop from Suno is good enough to replace mainstream music

I wonder what OP categorises as 'mainstream'.

As a classical musician, this breaks my heart.

Many of the things on a top-100 list for the last few decades. That includes plenty of "indies" as well as pop. There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics.

I don't believe this is a top-100 song, though it certainly is popular.

> The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.

There is a fundamental issue of trust here. Facebook has me tagged as a history nerd, so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there. For throwing an uppercut it's even more important: if you follow wrong slop instructions you can end up breaking your wrist or fingers.

Some want to consume... content that they don't think they could make in one minute themselves. They want to consume content made by other humans, even if it's still brain-eating algorithmic fodder, but still.
Sora proved it quite clearly. These clips had ZERO value.

How do you get Suno songs for free? Do you listen to others' or make your own?

Almost nobody listens to others' songs on Suno; that's the entire point. You wouldn't care to order food the way I personally like it -- it might be too spicy (or too bland) for your taste. Suno songs are overtuned for personal preference in the same way.

Sounds like when we first had smartphones with orientation sensors and we could drink a beer from the phone, so cool... for 2 weeks. But now you can vibe the same app 1000 times for root beer, coca cola, ginger ale, even a milkshake, and nobody will ever have to have a new idea again!

I wouldn't be surprised if the beer apps cost less to develop than one AI-generated video.

This is consistent with a lot of AI apps. I fell in love with Gamma and haven't used it in forever. Same with NotebookLM.

I somewhat consistently use NotebookLM for podcasts of academic papers I'm reading in my PhD. You have to go read them yourself afterwards, but it makes better use of time in the gym or doing dishes/groceries.

> You have to go read it yourself afterwards

^ this is important. Otherwise you may very well be missing anything really surprising or novel. See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where

> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.

On one hand, 2024 in AI time was a decade ago. On the other, Google might not have done much to upgrade the podcast feature since then.

It's gotten somewhat better over time, though it's clearly not their top priority.

I found NotebookLM to consistently make up about 20% of its summary.
Entertaining but unreliable. I used it mostly to learn about history. There isn't much damage if it got a 1600s or 1700s detail wrong. My high school teachers got much of it wrong too. I found the bantering of the podcast distracting, and the breathless enthusiasm. I guess there was a way to make it more no-nonsense? I found I lost content if tuned for brevity.

I just use ElevenReader for this. I copy in essays or whatever text I want to listen to and it works decently well. It's far from perfect, but certainly good enough. Sometimes I'll take deep research output and listen to it that way too.

I've found NotebookLM summaries to be too high-level and oversimplified to be useful. Hopefully in a few years they can go deeper.

You can also use NotebookLM as a source for the Gemini app and ask it to do more in-depth summaries with custom prompting. This somewhat makes the whole of NotebookLM less useful, but still.

I also like doing that for topics that I am tangentially interested in. One minor thing that I find annoying is that the narrators switch roles in the middle of the conversation. They start with the female voice explaining a concept to the male voice and suddenly they switch. In the meantime I have identified myself with the voice being explained to.

Just listen to actual audiobooks... literally doing double the work for no benefit... why?

There aren't a lot of highly technical audiobooks, or ones that give the same specificity as an academic paper.

Okay, but the user is describing listening to papers, then having to read the papers because listening to them isn't efficient. So why bother listening in the first place if you're going to read it?

Not yet, but it seems like they're getting to the point of AI narration finally being good enough to make any text an 'audiobook'. Having said that, I absolutely hate the audio format; I only used it when I had to drive or when I swam lanes. But these days I do neither.
No, reading verbatim from a technical paper is way too dense. You need a lot of filler words to slow it down and repetition to make it stick when read aloud.

Writing a book takes like 2-3 years on average. Papers are published every day.

Having a cute two-person "conversational chat" with audio works for a lot of people vs. just reading a paper. "No benefit" to you, perhaps. Don't generalize your lived experience.

It can synthesize and summarize many topics. For example, I can give it 8 papers on best practices in online marketing and it will turn them into a 20-minute podcast. There are errors, but that's also true with real podcasters.

> You have to go read it yourself afterwards

Or before! Either is mandatory to actually learn the content.

Yeah, it's not just the hardware depreciating, it's the social impact of what the model can do.

It's not just software: I use my Vision Pro (now in year 3) less than once a month now, and each time I do, the painful/awkward/unpleasant set-up and prep and difficult interface sours me on the device yet again, until a new blockbuster movie like "Project Hail Mary" appears that, when watched on the VP in 4K on a virtual 40-foot screen, blows my mind.

It's not really that people wouldn't come back - it's that they were losing money on each customer. Those 100 videos probably cost $100+ for them to create. Did you pay them $100+? (Not a criticism, just a re-framing.)

When it launched we all talked about the serving/inference costs being massive. In hindsight, if they had a paywall, it might not have self-imploded so fast, might have stayed aspirational, and they might have a profitable business today. Interesting case study.

https://en.wikipedia.org/wiki/Hedonic_treadmill

24/7 titillation is boring.

The interesting difference here is that other hedonic activities do bring people back even after they first build up a tolerance and get bored. But many of these AI "creative" apps seem like a one-and-done thing.
Once the novelty wears off there isn't anything more deeply rewarding to bring people back.

It's because they are slop, which is only funny through novelty. Stephen Hawking at a skateboard park is funny for a bit, but as soon as the novelty wears off it's just slop.

This tracks my usage exactly. It was like Mad Libs - in that moment it was THE MOST FUN, but after a while it became just a novelty bordering on... creepy. Now I feel kind of guilty for having exposed so many friends to what looks like a data-gathering scheme.

I think it's the same reason why chess tournaments where two AIs play against each other are not as popular as ones where two humans play. Maybe it's because humans generally compare themselves to other humans, and that's part of how they assign value.

It's the same with e.g. FaceApp: fun for a minute, but then... then what?

And this is the challenge that these tools have - they have to have a free tier to get people to explore them, but unless they can make it a habit, those people will never upgrade to a paid subscription. I have no figures, but if I'm being optimistic, these freemium subscription services have a 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators, which are incredibly compute-intensive. I'm sure there's a market for it, but it's not this freemium consumer-oriented model, not without huge amounts of investment. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.

"...and when everyone's super, no one will be"

I think this is starting to play out. When I personally see a blog post which didn't need an image but still has an AI-slop banner image, I mentally check out. I might have Claude summarize it, or (more likely) just skip it altogether.

Yep. Impressive toys, but not useful day to day.
There's some market for B2B, I'm sure, but as a consumer-facing product it's tough to see how it could ever come close to paying for itself.

Reminds me of when photo filters and stickers and mirror filters came out on the MacBook in like 2007. It was super fun for a couple of days, then the novelty wore off.

Sounds like me with listening to AI covers. After a couple of weeks I couldn't care less. But I was so stoked on it at the start.

The Cameo feature is really excellent. The likeness of both the person and the voice is exceptional. I really enjoyed making some funny Cameo videos with my friends. I don't know of another simple way to insert your own avatar with your own voice into a video, and I'm pretty deep in this space.

I honestly forgot about Sora until this post, and yeah, same behavior: played with it for a bit, then moved on with my life.

Humans are very good at pattern recognition. Even if you generate different stuff, you still see a pattern - in the cutting, color, cadence of movements, the color grading, the camera lens used, everything - and your mind will tag it as slop. Essentially you are watching the same videos over and over subconsciously.

This is something that people working on procedurally generated games have already noticed. No Man's Sky has billions of planets, each with "unique" plant and animal species, but you can easily sort them into a few dozen templates with minor variations. Procgen has a niche, but it never became ubiquitous, because for most people exploring a nice hand-made, intentional environment is better.

Wow, that's a really good point. The style of the videos did become quite repetitive.

You say that, but when you look at most "content" on social media it is the same video over and over again. How many JRE podcasts are basically the same crap as last time? How many influencer "life" videos are the same thing over again? Even the stuff I like is formulaic to the point AI can almost write the scripts.
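The "a few dozen templates with minor variations" point about procedural generation can be sketched in a few lines of Python. This is a toy illustration with made-up template names, not how No Man's Sky actually works: billions of seeds yield billions of "unique" outputs, but every one is a base template plus small parameter jitter, so people quickly learn to see through the variety.

```python
import random

# Hypothetical creature templates - the small fixed vocabulary the
# generator draws from, no matter how many seeds you feed it.
TEMPLATES = [
    {"body": "quadruped", "legs": 4},
    {"body": "biped", "legs": 2},
    {"body": "serpent", "legs": 0},
]

def generate_creature(seed):
    """Pick a template, then jitter a couple of cosmetic parameters."""
    rng = random.Random(seed)
    creature = rng.choice(TEMPLATES).copy()
    creature["scale"] = round(rng.uniform(0.5, 2.0), 2)  # minor size variation
    creature["hue"] = rng.randrange(360)                 # minor color variation
    return creature

# A thousand "unique" creatures, but only three fundamentally distinct bodies.
creatures = [generate_creature(s) for s in range(1000)]
distinct_bodies = {c["body"] for c in creatures}
```

The combinatorics promise near-infinite variety, but the perceptual vocabulary is bounded by `len(TEMPLATES)` - which is exactly the pattern the comment describes viewers picking up on in generated video.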
I think people attach to other people more than "AI". When there isn't a narrative "person" behind the content, it is way less interesting.

[flagged]

[flagged]

(FYI, this is an LLM bot; check their comment history and note the repetitive structure of every comment they've ever posted, all within the last hour.)

I dunno, it was the same for me with creative writing and AI. First it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good fount of ideas. Then the same concepts, turns of phrase, and story ideas kept reappearing, and I kinda soured on the concept.

I haven't done it in a while, but that kind of usage really shows the weakness of LLMs - if you keep messing with its generations, editing what it made, and as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3: constantly getting confused, contradicting itself, etc.

[flagged]

Please don't cross into personal attack. Your comment would be fine without the swipe at the end.

I think you're fumbling an important distinction. Sometimes people want to paint, sometimes people want a painting. To have a wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.

Totally. This wasn't a situation where a stranger was slopping another stranger; it was a mother and son doing something fun together.

I get your point, but it goes too far in the opposite direction. Should we now discuss absolutely nothing in relation to Sora and genAI videos? That seems overly charitable to the platform.

Here, let me try this approach: read the main comment out loud to yourself while imagining it's someone sitting at a table at a pub. Now imagine someone turning to this person in the pub and speaking the subsequent comment, word for word. No seriously, try it out.

Agreed.

I did try this out! So the reply to the original comment is dumb. I actually dismissed it for being flippant.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/Sora-related conclusion: "Sora didn't keep us coming back." If I may amend your scenario: imagine this bar is actually in the center of SF, or across the street from OpenAI or whatever. We're on HN discussing a post on X about Sora. The appeal to humanity is not wrong. My point is more: let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.

Come on now... 'We're curing cancer, right?!' You didn't at least puff a little ack through your nostrils for that one?

As someone who generally liked the products that OpenAI puts out, I think Sora was their first product that I really didn't like. I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling. Its primary value proposition to keep me using it wasn't to trick me with addictive content, but to get me high-quality answers as fast as possible. And I felt like OpenAI's other products, like Deep Research, agent mode, etc., were the same way. Even Atlas, although I suspect it will be equally ill-fated, attempts to follow this same pattern. It really felt like OpenAI was separating themselves from the common popular apps like TikTok, Reddit, Instagram, etc., which seemed to exist entirely to distract me from things I care about and waste my time.

Sora was the first product OpenAI shipped that I felt fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is make an app that just makes meme videos? I mean, c'mon!

Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there?
Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT-3?

> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling

i recently used gpt for the first time in several months (i'm a daily claude user) and didn't find this at all. it is most certainly trying to pull you into engagement with how it ends each response: "if you want, i could tell you about this thing that's relevant to what you are discussing", teasing just enough so that you addictively answer yes.

What happened is that they make no money, because people use it en masse to generate videos that they then post on TikTok and Instagram; nobody actually doomscrolls Sora.

> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling.

Not about Sora, but about ChatGPT: I felt the same way for quite a while, until I noticed that its response pattern had changed, apparently aiming for higher engagement. Someone aggressively pursued a metric. At some point, ChatGPT started leaving annoying cliffhangers in its every response, like "Do you want me to share a little-known secret of X that professionals often use?" Like, come on!

Hosting videos is really expensive. AI video generation inference is really expensive. I'd love to see how much money this experiment cost.

So much that they walked away from a billion-dollar deal with Disney by dropping Sora.

I don't think anyone outside of Disney/ClosedAI knows what deal was actually made. Maybe they just shut down public use of Sora but Disney will still be able to use it internally?
Maybe they never even signed anything, as is too often the case with AI deals, especially big ones: we read about signed/inked deals, but then it turns out it was all just spoken words. Maybe they took the cash, then shut Sora down to save money? Any number of things could have happened that we might never know about.

It's not clear to me what that billion dollars meant. To me it seems it was "Disney gets shares and we get to use their characters in Sora". Even if Sora breaks even, why would you gift Disney stock? It's not like they actually gave $1B to OpenAI.

Hosting videos is not that expensive compared to generation and inference costs. It's not cheap, but it's not that horrible.

> I liked GPT primarily because I felt like it respected me: I never felt like it was trying to distract me from my work or get me to waste time doomscrolling. It's primary value proposition to keep me using it wasn't to trick me with addictive content, but to get me high quality answers as fast as possible.

I'm curious if you still feel this way about current iterations of ChatGPT. It seems like it's now primed to engagement-bait the user, especially when used through the web UI. You can ask it a simple question with a straightforward answer and it will still try to get you to follow up with more.

> What is the minimum thickness for Shimano M8100 disc brake rotors?

> For Shimano XT M8100-series rotors (like RT-MT800 / RT-MT900 commonly used with M8100 brakes), the minimum thickness is 1.5 mm. If the rotor measures 1.5 mm or thinner, Shimano says it should be replaced.

> (a bunch of pointless details in bullet points)

> If you want, tell me the exact rotor model (e.g., RT-MT800, RT-MT900, size), and I can confirm the spec for that specific one and what typical wear looks like.

The entire query could have been answered with "1.5mm". The "if you want" follow-ups are so annoying.
> Still, I am mystified by how rapidly Sora went from launch to shutdown

I think if you had to foot the bill for generating a bajillion gigabytes of slop with no real utility, you wouldn't be too mystified.

They showed off their technology and proved it was impressive. That's all it had to do.

"I am mystified by how rapidly Sora went from launch to shutdown"

I suspect they promised synthetic movies but it quickly became clear that they were never going to be able to deliver on this. Slick fifteen-second lulz-clips, sure, but I don't think they can make several of them consistent enough to fit into a larger video narrative without the audience finding it jarring and incoherent. Perhaps legal at Disney also concluded that the output wouldn't be possible to copyright, which is their core business.

For me, Sora changed the way I viewed Sam Altman as a person. I really thought he wasn't like the previous generations of tech leaders - as you mentioned, OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives. He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real-world harm like suicide, and possibly even contribute to helping with disease too. Then they drop this, and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too. Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.

> He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.

He is a con man. Of course he's charming and convincing; that's how he ended up where he is. But he's just as full of it as Musk was when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
> ChatGPT could prevent real world harm like suicide

It could prevent suicide, maybe, but we know that it does cause suicides, at least in some cases. Seems like a poor value proposition.

Multiple people have attested that Sam Altman is extremely charming (especially in more casual, intimate settings) and talks very nobly about his goals, but his actual work is just… all kinds of awful. And I think that charm only goes so far, as it seems clear that people are starting to demand that OpenAI actually match its words with work it cannot produce. I think his board fight within OpenAI, where he essentially lied to the board, his obsession with retinal-scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are just evidence that he's not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which… WTF.

[0] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...

> how he left Y Combinator

Not too familiar with that history, but he is still listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too bad an exit?

We'll never know exactly what transpired, but I think the existing evidence is clear that as President of Y Combinator he should not have also been as involved in OpenAI as he was. This is a conflict of interest, and I think a very obvious one. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue his OpenAI ambitions says a lot about his character.

Sam Altman made his stake at the table with a shady and failed location-data-harvesting app (https://en.wikipedia.org/wiki/Loopt). That's who he is, that's what he does, and we're all better off paying less attention to the sounds he emits, and more to the things he does.
> the things he does.

The things he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?

A fool and his money shall soon be parted.

Sam is a face. If it wasn't him, it would be someone else.

Thinking that Scam Altman of Worldcoin etc. fame was "genuine about making a product that could improve people's lives" seems like a strange kind of delusion.

I haven't followed him much as I really don't care, but the one clip I've seen of him that really stands out to me (I've seen more, but this is the one I remember) is one where he's talking to some guy who doubts the LLM's genius, and Sam says something like "what if ChatGPT solved quantum gravity, would you be convinced then?" To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers' heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question; there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers. So anyway, that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.

The most memorable clip I've seen of him was the one from Brad Gerstner's podcast (an investor in OpenAI). Gerstner questioned Altman about the financials of OpenAI - how could it have committed to spend so much given its revenue? It's a decent question, and it's been up in the air for a while across the media. Altman's reaction was very telling of the kind of person he is: just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares, because he could find a buyer in no time.
It was a pathetically immature reaction. I wouldn't expect that from any kind of professional, let alone someone who has held the positions Altman has and who now sits at the top of the leadership of a company soaking up hundreds of billions in investment. Apart from that clip, there's also the whole saga of sama @ Reddit, full of lies, deception, and the same kind of immature attitude peppered across Reddit itself.

> Gerstner questioned Altman about the financials of OAI

After glazing OpenAI and Sam personally for 45 minutes straight. But as soon as Sam was questioned in the slightest, he exploded.

My most memorable clip was when he was interviewed about the "suicide" of an ex-employee and sama lied through his teeth. I can't understand people who say this snake is "charming"... he's a bad liar and has sub-zero charisma.

> Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there?

My guess is they overcommitted server/energy resources, since they were generating roughly 30 images per second of video for results that might be discarded and regenerated. Now that energy costs are increasingly unpredictable because of the war, they're prioritizing what is sustainable. They were willing to blow up the $1 billion Disney deal for Sora because that's popular IP that would have increased discarded server time.

I'm also curious whether Sora has been used by Iran to generate those Lego propaganda videos critical of the President. Given how close Sam Altman is with the current administration, I wouldn't be surprised if Sora is now reserved for U.S. government propaganda only. That might be why the latest Iran propaganda video could have been created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...

Are there known tells that could be used to determine which model a video came from?
(This sort of question, and the Grok sexual abuse, is why I'd like to see mandatory invisible watermarks on generated images/video.)

I don't think so. There are tons of self-hosted models for video (they are smaller and easier to run). Most people serious about this stuff usually have their own pipelines.

Since you seem to be better informed, I'm also interested in which self-hosted video models you'd recommend for creating my own Lego movie clips, now that Sora is no longer an option as a paid service. There are tons, right?

Look up Wan and Hunyuan for starters. These are open-weight models, so you can fine-tune them on Lego content… but presumably they already have enough training data, since they were made by Chinese companies that don't give a shit about Western IP rights.

I'm not sure, but you could be right. Sora is/was the top-of-the-line platform for video generation, and the Lego IP videos were polished. It makes sense to outsource when your own energy grid is being destroyed. Anyone with an account and a VPN could use the platform. I'd like to know which self-hosted models they've been using, if any, who provided them, and whether they were trained on Lego IP.

Notably, this primer on Sora safeguards was published only yesterday: https://openai.com/index/creating-with-sora-safely/

Not a great look that either the teams responsible for Sora didn't know this was coming, or the decision was so brash that things changed overnight.

The app isn't shutting down today, so they may have decided that the write-up is still useful. More likely, the team who put a lot of work into it were unaware of the decision to kill the product, regardless of the final sunset date, until today.

The document seems to be an updated version of something written last September. From a quick glance, it's not really a major overhaul. It's 8 paragraphs of iteration over the previous version. ChatGPT is probably among the authors.
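The mandatory-watermark idea floated earlier in the thread can be illustrated with a toy least-significant-bit (LSB) scheme. This is only a sketch: the model registry and the 4-bit ID scheme are made up for illustration, and real provenance watermarks (such as the frequency-domain schemes deployed in production) are designed to survive re-encoding, which an LSB mark does not.

```python
import numpy as np

# Hypothetical registry mapping generator models to 4-bit IDs (illustrative only).
MODEL_IDS = {"video-model-a": 0b0101, "video-model-b": 0b0010}

def embed_watermark(frame: np.ndarray, model_id: int) -> np.ndarray:
    """Hide a 4-bit model ID in the least significant bits of the
    first four channel values of a uint8 frame."""
    out = frame.copy()
    flat = out.reshape(-1)  # flat view into the copy
    for i in range(4):
        bit = (model_id >> i) & 1
        flat[i] = (int(flat[i]) & 0xFE) | bit  # overwrite only the LSB
    return out

def extract_watermark(frame: np.ndarray) -> int:
    """Recover the 4-bit ID by reading back those LSBs."""
    flat = frame.reshape(-1)
    return sum((int(flat[i]) & 1) << i for i in range(4))
```

Each marked pixel value changes by at most 1, so the mark is invisible to the eye, which is also why it is trivially destroyed by any lossy compression; it only demonstrates the principle behind the question of "tells".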
There is a link at the top of that document that takes you to the original version, which was published last September. As far as I can tell, it's mostly the same as before.

I guess the Disney deal falling through was the impetus, rather than vice versa.

Though at this point, it's not clear that anybody who's agreed to give OpenAI money is actually going to do so.

The thing that didn't make sense with this app: who would ever want to scroll a feed of only AI-generated videos over a combined feed? In practice, people would just generate the videos with the app and then post them on regular social media, in which case OpenAI would not get the ad revenue. It's the age-old "your product is just a subset of another product."

I've always suspected video generation is basically a loss leader for OpenAI, Gemini, and Grok. They can't convince the general population that AI is world-changing trillion-dollar tech with "vibe coding", but realistic fake videos are impressive at a glance and might convince many non-technical people that AI/LLMs are something revolutionary.

I think of them all, Gemini has the most viable use case, when Veo is paired with their advertising platform. It genuinely opens the door to a lot of cost savings for promo shots of products, etc.

Agreed. For reference, if Sora 2 were able to generate me a Google UGC product video, it would cost me about $10 and I would get it within 30 minutes, including editing. Paying a UGC content creator would cost me $50-200, with no control over the final shots, plus I'd have to wait for them to respond. I have 30 products in my e-commerce store; these costs add up like crazy. The other one is TV/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k; within a couple of days, I can make a video ad with maybe $50 in API costs. Cost of production is so crazy in marketing. Obviously this is all under the assumption that AI is good enough to do either of those things.
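For what it's worth, the arithmetic behind the UGC cost comparison above works out as follows. All figures are the commenter's ballpark numbers, not measured data:

```python
# Ballpark figures quoted in the comment above; none are measured data.
AI_COST_PER_VIDEO = 10          # dollars per AI-generated UGC-style clip
CREATOR_COST_RANGE = (50, 200)  # dollars per clip from a human UGC creator
NUM_PRODUCTS = 30               # size of the commenter's catalog

ai_total = AI_COST_PER_VIDEO * NUM_PRODUCTS
creator_low = CREATOR_COST_RANGE[0] * NUM_PRODUCTS
creator_high = CREATOR_COST_RANGE[1] * NUM_PRODUCTS
savings = (creator_low - ai_total, creator_high - ai_total)

print(f"AI: ${ai_total} vs. creators: ${creator_low}-${creator_high}")
print(f"Savings across the catalog: ${savings[0]}-${savings[1]}")
```

So across a 30-product catalog, the claimed numbers imply $300 of AI generation versus $1,500-$6,000 of creator fees, i.e. $1,200-$5,700 of savings, before accounting for quality or retries.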
Which it hasn't so far; the best I've gotten is b-roll shots to stick together for an ad.

Most of this "AI" stuff is dead on arrival. Most people do not care about the technology, and frankly they don't want to know about it. They want great experiences. That's it. Technologists seem to have a reallyyyy hard time getting that.

This is what I see outside the HN bubble. If you work retail or weld pipes together or whatever, AI is of no use to you. On the contrary, if tech thought leaders are to be believed, you'll be out of a job soon, replaced by a lifeless robot. Fuck that.

You do realize that there are a lot of people who sit at a desk and use a computer all day, right? Those are the ones whose jobs are vulnerable, not the ones who work with their hands or interact with the public.

We will come for them with real-world AI; it takes time, don't worry. They are not safe in a decade, but they are 100% safe for a few more years. Learning from them at scale and updating is nothing impossible.

There's only one highly monetizable use for AI video generation, but unfortunately it's fake revenge porn. You'll know the whole thing is about to collapse when the frontier models break that glass (as OpenAI is already preparing to do with sexting).

Why does it need to be revenge porn? Pretty sure regular old porn has a large market there, where people can specify what they idealistically want to see versus trying to find it, if it even exists. Not every place has LEGO incest porn… or whatever the kids are into these days.

I'm not deeply immersed in the AI porn space, but here's what I see from the ads when I surf without a blocker:

1. There's an AI-based virtual-girlfriend industry that mixes text and images.
2. There's an AI-based virtual-boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models).
3. There's a much shadier AI-based "undress this specific woman" industry.

People make revenge porn to humiliate people.
Regular old porn can't achieve that goal.

If anyone can fake it, is revenge porn even effective? Doesn't making it easy for anyone to fake also make all of it plausibly deniable?

Maybe try to view this topic with a bit more criticality. I just quickly googled some keywords and am pasting the very first search result so you get an idea: https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...

Revenge porn and deepfakes in general are hugely harmful to people. In the German-speaking world, there's a scandal right now about a husband creating deepfakes of his wife: https://www.hollywoodreporter.com/movies/movie-news/christia...

> One fake video, which she claims was sent to 21 men, depicted her being gang-raped

I think you're taking this topic lightly because you just assume it's not a big deal. Try to keep in mind that people's mental health, and with it their lives, are at stake. As with lots of things, the problem is not the tech itself but the existence of men. It's not all men, but it's usually men. Not sure how we'll solve this issue.

The answers to those questions have been clear for a while; it approaches concern trolling to keep pretending to ask them in wide-eyed innocence. Yes, revenge porn is very effective at causing harm, even though it can be generated. No, because "plausibly deniable" has never worked against social consequences and shame.

Yes. You can go speak to some high-school (or even middle-school) girls who have had AI-generated porn made of their likeness and shared with their classmates. Even though everybody knows it is fake, it is still humiliating, especially for a young person who is likely already self-conscious about their body and sexuality.

And yet, regular porn is highly monetizable, which was the actual question.

Surprisingly, no; it's pretty much a money sink where everybody goes bankrupt after a couple of years. That's why it's attractive to money launderers.
I'm not sure that's true for OnlyFans, which seems to have been highly profitable until the sudden death of its founder.

Excellent point: I'm talking about pornography 1.0, as it were.

1.0 should be attributed to pornography _before_ online distribution, and I suspect that was pretty profitable.
> And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth even if it only benefits one other person to hear it.

[...] do not ye after their works: for they say, and do not. For they bind heavy burdens and grievous to be borne, and lay them on men's shoulders; but they themselves will not move them with one of their fingers. But all their works they do for to be seen of men [...]

That man was later nailed to a plank for literally no reason.

[...] they seeing see not; and hearing they hear not, neither do they understand.