Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation
wsj.com | 189 points by thm 17 hours ago
https://archive.is/j1XTl
The problem is that the conversation gets sidetracked on AGI while ignoring the very real harms that are already happening - like AI "companions" addicting and harming kids, deepfakes, IP theft, etc. I also worry about malicious AI used to socially engineer and attack people at scale. We have an aging population that is already constantly getting scammed by really amateur attacks; what happens when the AI can perfectly emulate your grandkid's voice?

They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.

They are more likely trying to race towards wildly overinflated government contracts, because they aren't going to profit how they're currently operating without some of that funny money.

Isn't that a bit like saying: storage is a commodity and thus profit margins will be/should be low? All major cloud providers have high profit margins, in the range of 30-40%.

Storage doesn't require the same capex/upfront investment to get that margin. How much does it cost to train a cutting-edge LLM? Those costs need to be factored into the margin from inferencing.

Buying hard drives and slotting them in also has capex associated with it, but far less in total, I'd guess.

Yes, which is why the companies that develop the models aren't cost viable. (Google and others who can subsidize it at a loss are obviously excepted.) Where is the return on the model development costs if anybody can host a roughly equivalent model for the same price and completely bypass the model development cost? Your point is in line with the entire bear thesis on these companies. For any use cases which are analytical/backend oriented and don't scale 1:1 with the number of users (of which there are a lot), you can already run a close-to-cutting-edge model on a few thousand dollars of hardware.
I do this at home already.

The other nightmare for these companies is that any competitor can use their state-of-the-art model for training another model, as some Chinese models are suspected to do. I personally think it's only fair, since those companies in the first place trained on a ton of data and nobody agreed to it. But it shows that training frontier models has really low returns on investment.

Open source models are still a year or so behind the SotA models released in the last few months. The price to performance is definitely in favor of open source models, however. DeepMind is actively using Google's LLMs on groundbreaking research. Anthropic is focused on security for businesses. For consumers it's still a better deal to pay for a subscription than to invest a few grand in a personal LLM machine. There will be a time in the future where diminishing returns shortens this gap significantly, but I'm sure top LLM researchers are planning for this and will do whatever they can to keep their firms alive beyond the cost of scaling.

Definitely. I am not suggesting these companies can't pivot or monetize elsewhere, but the return on developing a marginally better model in-house does not really justify the cost at this stage. But to your point: developing research, drugs, security audits or any other kind of services are all monetization of the application of the model, not monetization of the development of new models. Put more simply, say you develop the best LLM in the world, one that's 15% better than peers on release, at a cost of $5B. What is that same model/asset worth a year later when it performs at 85% of the latest LLM? Already any 2023 and perhaps even 2024 vintage model is dead in the water and close to zero value. What is a best-in-class model built in 2025 going to be worth in 2026? The asset is effectively 100% depreciated within a single year. (Though I'm open to the idea that the results from past training runs can be reused for future models.
This would certainly change the math.)

This is slightly more nuanced, since the AI portion is not making money. It's their side hustle.

Another way to win is through exclusive access to high-quality training data. Training data quality and quantity represent an upper bound on LLM performance. That's why the frontier model developers are investing some of their "war chests" in purchasing exclusive rights to data locked up behind corporate firewalls, and even hiring human subject matter experts to create custom proprietary training data in certain strategic domains.

The bottleneck for commoditization is hardware. The manufacture of the required hardware is led by TSMC, with Samsung a close second. The tooling required for manufacture is centralized with ASML and several other smaller players like Zeiss, and the design of the product centers around Nvidia, though there are players like AMD who are attempting to catch up. It is a complex supply chain, but each section of the chain is held by only a few companies. Hopefully this is enough competition to accelerate the development of computational technologies that can run and train these LLMs at home. I give it a decade or more.

Yeah, but we can self-host them.

At this point, it's more about infrastructure and compute power to meet demand, and Google won because it has many business models, massive cashflow, TPUs, and the infrastructure to keep building on what it already has, which would take new companies ~25 years to map out compute and data centers and have a viable, tangible infrastructure, all while trying to figure out profits. I'm not sure how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet, where agents can be hijacked and made to do things, sound pretty scary. It's a race towards AGI at this point. Not sure if that can be achieved, as language != consciousness IMO.

>It's a race towards AGI at this point.
>Not sure if that can be achieved as language != consciousness IMO

However, it is arguable that thought is related to consciousness. I'm aware non-linguistic thought exists and is vital to any definition of consciousness, but LLMs technically don't think in words, they think in tokens, so I could imagine this getting closer.

'Think' is one of those words that used to mean something but is now hopelessly vague - in discussions like these it becomes a blunt instrument. IMO LLMs don't 'think' at all - they predict what their model is most likely to say based on previously observed patterns. There is no world model or novelty. They are exceptionally useful idea-adjacency lookup tools. They compress and organize data in a way that makes it shockingly easy to access, but they only 'think' in the way the Dewey decimal system thinks.

>Yeah, but we can self-host them

Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Are they integratable with the tools they use? Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.

Correct. And glad you're aware of the challenges with running them. I'm not saying the options are favorable for everybody, I'm saying the options are there if it becomes locked in to 1-3 companies.

What profit margins?

It is unclear. Every day I seem to read contradictory headlines about whether or not inference is profitable. If inference has significant profitability and you're the only game in town, you could do really well. But without regulation, as a commodity, the margin on inference approaches zero.
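The margin arithmetic being debated in this thread can be made concrete with a toy calculation. Every number below is hypothetical except the shapes quoted in the thread (a ~$5B frontier training cost, a ~30% gross margin on inference, and the claim of ~100% model depreciation within a year); the point is the shape of the math, not the values.

```python
# Toy model of frontier-LLM economics. All inputs are hypothetical
# except the figures quoted in the thread: ~$5B training cost,
# ~30% gross margin on serving, ~1 year of model relevance.
training_cost = 5e9        # hypothetical cost to train a frontier model
useful_life_years = 1.0    # thread's claim: ~100% depreciated in a year
inference_revenue = 10e9   # hypothetical annual inference revenue
gross_margin = 0.30        # margin on serving, before training amortization

gross_profit = inference_revenue * gross_margin
amortization = training_cost / useful_life_years
net = gross_profit - amortization

print(f"gross profit on inference: ${gross_profit / 1e9:.1f}B")
print(f"training amortization:     ${amortization / 1e9:.1f}B")
print(f"net:                       ${net / 1e9:+.1f}B")
```

Even doubling the assumed useful life to two years only brings this toy model to a small positive number, which is the bear thesis in a nutshell: the margin has to cover a fast-depreciating training asset, not just the serving hardware.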
None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.

Probably it's "operationally profitable" when ignoring capex, depreciation, dilution and other expenses required to stay current. Of course, that means it's unprofitable in practice/GAAP terms. You'd have to have a pretty big margin on inference to make up for the model development costs alone. A 30% margin on inference for a GPU that will last ~7 years will not cut it.

There are profit margins on inference, from what I understand. However, the hefty training costs obviously make it a money-losing operation.

The "few approved providers" model is what they have been fighting against since the Biden admin.

The only way to win is to commoditize your complement (IMO).

That's a good line, but it only works if market forces don't commoditize you first. Blithely saying "commoditize your complement" is a bit like saying "draw the rest of the owl." Free models given away by social media companies (because they want people to generate content) and hardware companies (because they want people to buy GPUs, or whatever replaces them). Can the current subscription models compete with free? It's just a prediction - it could well be wrong.

Every so often we see titans of some sort amassing large amounts of money to influence regulations, laws, and society in general. This is another in that tradition. The lesson is that what we need to regulate is not "tech" or "AI" but wealth, so that no one can amass a "war chest" to do anything.

Only "multi-million"? Someone once told me about being a new journalist on the city beat. They said something like: I wasn't surprised to find that bribery was going on; I was just surprised the bribes were so small.

Someone always makes this kind of comment, and I've always found it pretty middle-brow.
I think there's a classic comment that the USA spends more on potato chips than on lobbying. I think the best explanation is that there's a pushing-on-a-rope phenomenon, where there's still a limit on the amount of money the American political corruption system can absorb. We have long had robust laws that prevent people outright paying for political or regulatory outcomes (which used to be much stronger under honest services). We also had robust laws limiting the amount of campaign contributions. As that system has been torn down, we've seen the amount of money flowing in increase.

> there's still a limit on the amount of money the American political corruption system can absorb.

Wikipedia tells me [1] that to win a seat in the House of Representatives (one of 435) costs on average $2.79 million, while a seat in the Senate (one of 100) costs $26.53 million. So the political system can absorb $3.8 billion in 'donations' from 'supporters'. That might sound small compared to Nvidia's $4.3 trillion market cap, but to a lot of folks $3.8 billion is serious money. And that's just spending by the winners - often there will be a loser who spent almost as much.

[1] https://en.wikipedia.org/wiki/Campaign_finance_in_the_United...

The thing is, lobbying isn't literally a bribe. Lobbying involves all sorts of expensive activities, like compiling reports, doing research, etc., in addition to the more sketchy semi-bribe stuff. All that adds up. How many FTE lobbyists is a few million dollars? It just doesn't seem like all that much if they are trying to run a hard-core lobbying campaign. The tech industry lobbyists should be able to eat their own dog food by using AI to do all the research and report writing now (half joking).

That calls to mind this NYT article about the small-time grifts that Eric Adams and associates were up to, including a small acting role in Godfather of Harlem in exchange for canceling a planned bike lane. https://www.nytimes.com/2025/08/22/nyregion/new-york-city-co...
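The $3.8 billion figure can be checked directly from the two per-seat averages quoted above (this is the commenter's own back-of-envelope, not a per-election-cycle figure):

```python
# Average cost of winning a seat, per the Wikipedia figures cited above.
house_seats, house_cost = 435, 2.79e6
senate_seats, senate_cost = 100, 26.53e6

total = house_seats * house_cost + senate_seats * senate_cost
print(f"${total / 1e9:.2f}B")  # about $3.87B, i.e. the ~$3.8B in the comment
```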
AI regulation should wait until after the crash. That way AI can be regulated for what it does, and not the fever dream pushed by marketers.

At that point nobody will care, though. People pushing for regulation (not uniquely) want power - those who can write the regulation will be in a position to exert a lot of power over a lot of people/companies, making it an attractive thing to push for.

They need to be more worried about creating a viable economic model for the present AI craze. Right now there's no clear path to making any of the present insanity a profitable endeavor. Yes, Nvidia is killing it, but with money pumped in from highly upside-down sources. Things will regulate themselves pretty quickly when the financial music stops.

Nvidia's biggest mistake is investing the money it makes selling shovels into the prospecting firms. If not for that, they'd be fine. Maybe there's enough of a "money multiplier" to make it worth their while. Then again, possibly more likely, their entry could spook other investors.

Do you mean that they need to find better ways to create value by using AI, or that they need better ways to extract value from end users of AI? I'd argue that "value creation" is already at a decent position considering generative AI and the use case as "interactive search engine" alone. Regarding "value extraction": advertising should always be an option here, just like it was for radio, television and online content in general in the past. Preventing smaller entities (or even private persons) from just doing their own thing and making their own models seems like the biggest difficulty long term to me (from the perspective of the "rent-seeking" tech giant).

I don't disagree that AI is of "value." The issue at the moment is that the whole thing is being kept alive by hype and circular financing. There's not anywhere near enough money entering from outside (i.e. consumers and businesses buying this stuff) to remotely support the amount of money being spent. Not even close.
Not even "we just need to scale more." It's presently one big spectacular burning pile of cash with no obvious way forward other than throwing more cash on the burning pile.

> I'd argue that "value creation" is already at a decent position considering generative AI and the usecase as "interactive search engine" alone. Regarding "value extraction": Advertising should always be an option here, just like it was for radio, television and online content in general in the past.

Not at the actual price it's going to cost, though. The cost of an "interactive search" (LLM) vs a "traditional search" (Google) is dramatically higher. People tolerate ads to pay Google for the service, but imagine how many ads ChatGPT would need, or how much it would have to cost, to compensate for an e.g. 10x difference. Last time I read about this a few months ago, ChatGPT was losing money on its paid tier because the people paying for it were using it a lot. It's more likely that ChatGPT will just be spamming ads sprinkled into the responses (like you ask for a headphone comparison, and it gives you the sponsored brand, from a sponsored vendor, with an affiliate link), and hope that's enough.

> Not at the actual price it's going to cost though.

But we don't know that price point yet; current prices for all this are inflated because of the gold-rush situation, and there are lots of ways to trim marginal costs. At worst, high long-term un-optimizable costs are going to decrease use/adoption a bit, but I don't even think that is going to happen. Just compare the situation with video hosting: that was not profitable at first, but hardware (and bandwidth) got predictably cheaper, technology more optimized and monetization more effective, and now it's a good chunk of Google's total revenue. You could have made the same arguments about video hosting in 2005 (too expensive, nobody pays for this, where's the revenue), but this would have led to extremely bad business decisions in hindsight.
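The 10x claim can be turned into a toy per-query calculation. All of the numbers below are hypothetical (none come from the thread or any published source); they only illustrate the shape of both sides of the argument: how much ad revenue per answer must grow, and how much the "one AI answer replaces several searches" effect claws back.

```python
# Hypothetical per-query economics; figures chosen for illustration only.
search_cost = 0.002     # hypothetical cost to serve one traditional search
ad_revenue = 0.03       # hypothetical ad revenue per traditional search
cost_multiple = 10      # the thread's claimed LLM-vs-search cost ratio
searches_replaced = 5   # counter-claim: one AI answer replaces ~5 searches

llm_cost = search_cost * cost_multiple
old_margin = ad_revenue - search_cost
# Ad revenue one AI answer must earn to preserve the old per-query margin:
breakeven = llm_cost + old_margin
# If one answer absorbs the ad value of several searches, compare against
# several queries' worth of revenue instead:
bundled_margin = searches_replaced * ad_revenue - llm_cost

print(f"breakeven ad revenue per AI answer: ${breakeven:.3f}")
print(f"margin if one answer replaces {searches_replaced} searches: ${bundled_margin:.3f}")
```

Under these made-up inputs the breakeven is modest, and the bundling effect more than covers the higher serving cost; with a 30x multiple and less bundling, the same arithmetic flips negative, which is why both sides can cite "the numbers."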
Not to mention, most arguments about the costs of AI inference are plain inane. AI search being 10x more expensive than a Google query? That's just a silly, meaningless number - especially considering that a good AI response easily stops the user from making 5+ search queries to get the same results, and an AI query itself can easily issue the equivalent of 10-20 search queries plus spend compute analyzing their results.

The "well, just pay for it with ads" concept is financially flawed. The cost to serve all this up is 30+x more expensive than traditional search. There's no way the "online ad" space will suddenly start paying 30x more for ads. There's simply not enough advertising dollars out there to pay for it all.

You might be thinking of old models like banner ads or keyword results at the top of search, and not when you ask ChatGPT the best way to clean up something and it suggests Dawn™ Dish Soap!

The music is just getting started. The way it is going, AI will be inevitable. Companies are CONVINCED it's adopt AI or die, whether it is effective or not. It's already starting to replace Google searching for many people. This is why Google (and other big tech firms) started investing in it immediately. All they need to do is start adding in sponsored results (and the ability to purchase keywords), and AI becomes insanely profitable.

This is crazy to me given how inaccurate Google's AI summaries are. They've basically just added a chunk of lies to the top of every search page that I have to scroll past.

Not according to Google's latest revenue and profit numbers; even Apple hinted they aren't seeing less revenue from Google searches.

The race is to be the first to make a self-improving model (and have the infrastructure it will demand). This is a winner-takes-all game, one that stands a real chance of being the last winner-takes-all game humans will ever play.
Given that, the only two choices are either to throw everything you can at becoming the winner, or to sit out and hope no one wins. The labs know that substantial losses will be had; they aren't investing in this to get a return, they are investing in it to be the winner. The losers will all be financially obliterated (and whoever sat out will be irrelevant). I doubt they are sweating too hard, though, because it seems overwhelmingly likely that most people would pay >$75/mo for LLM inference (similar to cell phone costs), and at that rate, without going hard on training, the models are absolute money printers.

There is zero evidence that the current approach will ever lead to a self-improving model, or that current GPU/TPU infrastructure is even capable of running self-improving models.

I believe that the right regulation makes a difference, but I honestly don't know what that looks like for AI. LLMs are so easy to build/use, and that trend is accelerating. The idea of regulating AI is quickly becoming like the idea of regulating hammers. They are ubiquitous general-purpose tools, and legislation specifically about hammers would be deeply problematic for, hopefully, obvious reasons. Honest question here: what is practical AND effective? Specifically, what problems can clearly be solved, and by what kinds of regulations?

The most sane version of regulation IMO is the (already passed) EU AI Act. It's less about control of AI itself, more about controlling inputs/outputs. Tell users when they're interacting with an AI, mark/disclaim AI-generated content, don't use AI in high-risk scenarios, etc. Along the lines of "we don't regulate hammers, but we regulate you hitting people with a hammer".

Why should a user care whether the entity they're interacting with meets some arbitrary political definition of "AI"? Does it matter whether an article that I'm reading was written by AI or by a monkey randomly banging on a keyboard?
Regulations seem totally pointless, just another excuse to shovel taxpayer money to a bunch of bureaucrats with fancy degrees who are incapable of finding real jobs.

> Does it matter whether an article that I'm reading was written by AI

Of course it does. At least to me, an AI disclaimer would make me immediately close the tab. Given that companies are pushing entire product lines[1] to combat this problem, I'd say I'm not alone :)

This is silly. Regulation exists to help balance out the power disparity between consumers and corporations that sell AI. The EU wrote a good AI law; it's mainly focused on access to goods and services and the impact that AI can have in those domains. Among other things, it almost entirely bans surveillance pricing. It makes companies liable if an AI discriminates on their behalf. It also restricts the use of facial recognition. This is regulation's role: to equalize the power disparity by prohibiting companies that do business within the EU from engaging in these predatory practices.

Archive: https://archive.is/j1XTl

I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations, or key ideas for approaching how we should think about AI regulation? What we should be doing is surfacing well-defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending that people's opinions on this topic are relevant; it's just a matter of pumping in enough money and flooding the zone. Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas.
But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic.

Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a magical communist utopia, but nobody has ever pressed them for details. If they really believe this, then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like the Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.

> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.
The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.

> This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

They already do this[1]. Why should there be an exception carved out for AI-type jobs?

------------------------------

[1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.

How are tariffs supposed to stop domestically produced AI from replacing workers?

> How are tariffs supposed to stop domestically produced AI from replacing workers?

I didn't say they would. I said that politicians already artificially preserve jobs, and asked, quite legitimately I feel, why they should make an exception for AI.

> Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.

I think that would be Singapore, as far as import tariffs go? Not much starvation there! Do you mean taxes? Or excise duties, or...?

It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI, using all sorts of arguments related to religion, morality, fears about mass unemployment, all sorts of doom scenarios, etc. However, this is a minority with not a lot of lobbying power, ultimately. And the louder they are, and the less of this stuff actually materializes, the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad on this as it may seem to some. And the quality of the debate remains very low as well. Most people barely understand the issues.
And that includes many journalists who are still mostly hung up on the whole "hallucinations can be funny" thing. There are a lot of confused people spouting nonsense on this topic. There are special interest groups with lobbying power: media companies with intellectual properties, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful. And there are the AI companies, which are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending, so they are getting relatively little pushback from politicians. Political Washington and California run on obscene amounts of lobbying money, and the AI companies can provide a lot of it.

A vocal minority led to the French Revolution, the Bolshevik Revolution, the Nazi party and the modern climate change movement. Vocal minorities can be powerful.

> "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

Artists are not primarily in the 1%, though, and it's not only patents that are at stake in IP theft.

Do the artists who are not in the 1% actually benefit from IP, or does it hinder them from building new art based on other art? It seems to me that IP only benefits the top players.

Can you give me an example of the situation you are picturing? I simply can't see what you mean by artists being hindered by IP: artists try to create original work, and derivative work from other IP is usually reinterpreted enough to fall under fair use. I can't picture a situation where artists could be hindered in their creations by IP owned by others.

Sampling in music is now paid for; it used to be free, as one example.
Sampling is still done. I'm a hobbyist music producer, and friends with many professionals. They have to clear the samples and pay royalties, and they get royalties from sampled tracks. It's more cumbersome while being fairer, and it hasn't stopped the practice at all. As a hobbyist I do it all the time, while my professional friends clear their samples before earning money on their tracks.

Agreed. There are some pretty cool platforms from the last 5 years to streamline this process, too.

"Stop the laundering of responsibility/liability" - the risk that you can run someone over with a software-controlled car and it's not a crime "because AI", whereas a human doing the same thing would be in jail. Image detection leading to false arrests, etc. It's harder to sue because the immediate party can say "it wasn't us, we bought this software product and it did the bad thing!" I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.

> "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

So politicians are supposed to create "non-bullshit" jobs out of thin air? The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice-sounding words?

At this point, if an LLM can do your job, it was already bullshit. But in the future, when they can do non-bullshit jobs, then you can go get another one, just like every other person out of the billions who has had their job made obsolete by technology. It's not that hard.

If large swaths of people lose their jobs to AI, have no job prospects due to the presence of AI, and can't afford their next meal in the here and now, that is a recipe for civil unrest.

If...
But most likely it will be like technology replacing all the many jobs it has replaced over the last 100 years, and those people will move into other jobs. If it is different this time, then it requires a different response, but that isn't needed until we know it actually is different.

In those past times of technological change, it was reasonably obvious where the puck was headed. However, I feel like that has been changing over the past decade or two. I have met countless young people who have been willing and able to pick up a new skill to make a living. By and large, that has either turned out to be going into tech or going into gig work. AI is threatening both of those. It is not obvious to me what comes after. Frankly, these days if someone younger comes to me asking for career advice, I honestly wouldn't know what to tell them.

You give way too much credit to the US electorate. Right now vast swaths of the country are worshipping a billionaire and supporting policies that are actively harming them, because the politicians claim to hate the same people they hate and/or quote scripture.

Hunger knows no political party.

Seeing the lives of people in red states who continue to struggle and still vote for politicians that pass policies that hurt them, I disagree. How many farmers right now are suffering between the current tariff policies and immigration policies and still professing support for Trump? The very people that unions and higher minimum wages would help the most oppose them, because they support the very politicians who favor their employers getting rich over them. If you take solace in "god will provide" as long as you give the church 10% of your income, you aren't looking at things logically, as long as the politicians can quote scripture.

> But in the future when they can do non bullshit jobs, then you can go get another one just like every other person out of the billions who has had their job made obsolete by technology. It's not that hard.
This was the argument made by the capitalists after they had jailed and murdered most of the people in the Luddite movement, before there was employment regulation. They ignored what the Luddites were protesting for and suggested it was about people who just didn't understand how the new industrial economy worked. Don't they know that they can get jobs elsewhere and we, as a society, can be more productive for it? The problem is that this was tone-deaf. There were no labor regulations yet, and the Luddites were smashing looms because that form of violence was the only leverage they had to ask for: elimination of child labor, social support that wasn't just government workhouses (i.e. indentured servitude), and labor laws that protected workers. These people weren't asking everyone to make cloth by hand forever because they liked making cloth by hand and thought it should stay that way. In modern times, I think what many people are concerned about - with companies getting hot for throwing labor out into the streets when it's no longer profitable to keep it - is that there is once more a lack of social supports in place to make sure those people's basic needs are met. ... and that's just one of the economic and social impacts of this technology.

It's even simpler than that, IMHO. Yes, there are always new jobs to replace the one you've lost to automation. But no, those new jobs are not for you, and not for your children. Someone else will be doing them - you will be dealing with the fallout of having your life upended, suddenly facing deep poverty. You can re-skill, but you'll be competing for starter positions and starter salaries with people who are just entering the workforce, much younger than you, with no dependents or health issues. The technology may have benefited everyone in the long run, but in immediate terms, sudden shifts like these ruin people's lives and destroy the futures of their descendants.
They do create bullshit jobs in finance by propping up the system when it's about to collapse from the consequences of their own actions though. Not that I believe they should allow the financial system to collapse without intervention, but the interventions during recent crises have been done to save corporations that should have been extinguished, instead of the common people who were affected by their consequences. Which I believe is what's lacking in the whole discussion: politicians shouldn't be trying to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality, but there needs to be some off-ramp and direct help for people who will suffer from the change in landscape, without going through the bullshit of helping companies in the hope they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone they will do it. The government is supposed to be helping people as a collective.

>There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

There's a ton of other points intersecting with regulation, either directly related to AI or made significantly more relevant by it. Just off the top of my head:

- information processing: Is there private data AI should never be able to learn from? We restrict collection, but it might be unclear whether model training counts as storage.

- related to the former, what kind of dystopian practices should we ban? AI can probably create much deeper profiles inferring information from users than our already worrying tech, even without storing sensitive data. If it can use conversations to deduce I'm at risk of a shorter lifespan, can the owners communicate that data to insurance companies?

- healthcare/social damage: what are the long-term effects of people having an always-available yes-man, a substitute for social interaction, a cheating tool, etc.?
should some people be kept from access? (minors, mentally ill, whatever). Should access, on the other hand, become a basic right if it realistically makes a left-behind person unable to compete with others who have it?

- national security: Is a country's economy becoming reliant on a service offered somewhere else? Worse even, is this fact draining skills from the population that might not be easily recoverable when needed?

- energy/resources impact: Are we ready for an enormous increase in usage of energy and/or certain goods? Should we limit usage until we can meet the demand without struggle?

- consumer protections: Many companies just offer 'flat' usage, freely being able to change the model behind the scenes for a worse one when needed, or even adapt user limits to their server load. Which of these are fair business practices?

- economy risks: What is the maximum risk we can take of the economy being made dependent on services that aren't yet profitable? Are there any steps that need to be taken to protect us from the potential bust if costs can't be kept up with?

- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?

- enabling crime: can an army of AI hackers disrupt entire countries? How is this handled?

- impact on job creation: If AIs can practically DDOS job offer forms, how is this handled to keep access fair? Same for a million other places that are subjected to AI spam.

Your point "It's on politicians to help people adapt to a new economic reality" brings a few:

- Should we tax AI-using companies? If they produce the same while employing fewer people, tax extraction suffers and the non-taxed money does not make it back to the people. How do we compensate? And how do we remake
- How should we handle entire professions being put to pasture at once? Lost employment is a general problem if it affects a large enough number of people.
- how should the push for intellectual work be rethought if it becomes extremely cheap relative to manual work? Is the way we train our population in need of change?

You might have strong opinions on most of these issues, but there are clearly A LOT of important debates that aren't being addressed.

Your list of evidence-free vibe complaints perfectly exemplifies the reasons why regulations should be approached carefully with the advice of experts, or not at all.

I'm not sure what you mean by evidence-free here. Debates about public regulation should not be started by evidence-backed conclusions; rather, they are what pushes research and discussion in the first place. Perhaps the conclusion on AI's impact on mental health is "hey, multiple high-quality studies show that the impact is actually positive, let's allow it and in fact consider it as a potential treatment path". That's perfectly fine. What is not fine is not considering the topic at all until it's too late for preventive action. We don't need to wait for a building to burn before we consider whether we need fire extinguishers there. My list is not made of complaints at all; it's just a few of the ways in which we suspect AI can be disruptive, which are then probably worth examining.

Evidence-free? Did you even skim OP's list?

Healthcare/Social damage: we already have peer-reviewed studies on the potentially negative impacts of LLMs on mental health: https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/ . We also have numerous stories of people committing suicide after "falling in love" with an LLM or being nudged to do so by one.

Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets, and even to coal power plants being turned back on?

Those are just the ironclad ones; you can make very good data privacy and national security arguments quite easily as well.
> Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets, and even coal power plants being turned back on?

Yes, if you want to be taken seriously, then your claims about this should be based in evidence and contextualized amid the overall energy market.

Exactly. The energy/resources line is by far the silliest of the anti-AI arguments being regurgitated by people. Electricity is fungible. Before decrying that LLMs are using it to provide the world (probably) more utility on the net per watt than your own work output (which segues into an actual problem of labor as a source of personal and social worth), contrast it with what we'd otherwise be doing with that same electricity - e.g. more sportsball streams in higher definitions, more cryptocurrency shams, more Juiceros and other borderline-fraudulent startups in physical space (cheap energy means cheaper manufacturing, which means materials become more like bits, and it's easier to pull the same crap in the real world as companies now pull in the virtual one). Point being, if you want to judge the use of electricity on AI, judge it in the context of the whole human condition - of everything else we'd otherwise be using it on.

> "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they’ll outcompete you anyway. So even with regulations the jobs aren't actually saved. The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality.
But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.

This presupposes the existence of said jobs, which is a whopper of an assumption that conveniently shifts blame onto the most vulnerable. Of course, that's probably the point. This will work even worse than "if everyone goes to college, good jobs will appear for everyone."

The good (or bad) thing about humans is they always want more than what they have. AI seems like a nice tool that may solve some problems for people, but in the very near future customers will demand more than what AI can do, and companies will need to hire people who can deliver more, until those jobs, eventually like all jobs, are automated away. We see this happen every 50 years or so in society. Just have a conversation with people your grandparents' age and you'll see they've gone through the same thing several times.

The last 50 years in the USA (and elsewhere) have been an absolute disaster for labor: the economy as a whole grew, the capital share grew even more, and the labor share shrank (unless you use a deflator rigged to show the opposite, but a rigged deflator can't hide the ratios). This contrasts with the 50 years prior, where we largely grew and fell together, proving that K-shaped economies are a policy choice, not an inevitability. A Roosevelt economy can still work for most people when the "job creators" stop creating jobs. A Reagan economy cannot.

> The real solution is for people to upskill and learn new abilities

AI is being touted as extremely intelligent and, thus, capable of taking over almost any white collar job. What would I upskill to?

> AI is being touted as extremely intelligent and, thus, capable of taking over almost any white collar job.

The marketing buzz is not the same thing as reality. Upskill to something AI is bad at. There is plenty to choose from in the present.

Upskill? No, that’s not the point.
You’ll be down-leveled (potentially several levels) economically and held there. Consider some poorly paid servant work. It will be sold to you as “noble work” or something along those lines to entice/slander you.

Oh I know, I’m just sowing doubt in the OP’s claims. Note the lack of response to this. AI is not a value-neutral tech.

The jobs that would exist if only we gave the "job creators" just one more tax cut, I'm sure.

> The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.

But why do I have to? Why should your life be dictated by the market and the corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to the ever-faster-changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, not the further reduction of people to their economic value.

Because the world, sadly, doesn't revolve around just one individual. We are a society where other individuals have different goals and needs, and when those are met by the development of a new product offering, it shifts how people act and where they spend their money. If enough people shift, then it affects jobs.

> If enough people shift then it affects jobs.

Yes, but again, the goal of automation should be to reduce the need for people to hold jobs to secure their livelihood and enable a dignified life. However, what we are seeing in the Western Hemisphere is that per capita productivity is rising while the middle class is eroding and capital is accumulated by a select few in obscene amounts. 'Upskilling' does not happen out of personal motivation, but rather to meet the demands of the market so that one does not live in poverty.
The idea of ‘upskilling’ to serve the market is also absurd because, in times of ever-accelerating technological development, there is no guarantee that the skills you learn today will still be relevant tomorrow. Yesterday it was “learn to code”, but now many people who followed this mantra find themselves in precarious situations because they cannot find a job or are forced into the gig economy. So what do you do with people who couldn't foresee the future, or who are simply too old for the market?

> But why do I have to?

Because you enjoy eating? Whatever you think society should be, the fact is we live in one where you have to exchange labour for money. What ought to be and what is are unrelated to each other.

It's interesting how we feel this way about white collar jobs, but when a coal mine closes nobody cares.

> If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage

Regulating AI doesn't mean blocking it. The EU AI Act regulates AI without blocking it, just imposing restrictions on data usage and decision making (if it's making life-or-death decisions, you have to be able to reliably explain how and why it makes those decisions, and it needs to be deterministic - no UnitedHealthcare bullshit hiding behind an "algorithm" refusing healthcare).

There are several concrete proposals to regulate AI, either proposed or passed. The most recent prominent example of a passed law is California SB53, whose summary you can read here: https://carnegieendowment.org/emissary/2025/10/california-sb...

You should ignore literally everything Musk says. He is incredibly unintelligent relative to his status. Musk wants extreme law and order and will beat down any protests. His X account is full of posts that want to fill up prisons. This is the highlight so far: https://xcancel.com/elonmusk/status/1992599328897294496#m Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI.
The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.

Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care or health care. That doesn't quite align with UBI, unless he envisions the AI companies directly giving the UBI to people (when did that ever happen?)

It's possible that the interests of the richest man in the world don't align with the interests of the majority, or society as a whole. Of course he only wants the government to do what benefits him.

Oh, so he's ready to give up the billions the government funnels to SpaceX? Alright, let's do it. I'm sure that "smallest government possible" involves cancelling all subsidies to EV car companies and tax credits to EV customers. What a wanker.

Like every other self-serving rich “Libertarian,” they want a small government when it stands to get in their way, and a large one when they want their lifestyle subsidized by government contracts.

"subsidized by government contracts"

Subsidized implies they are getting free money for doing nothing. It's a business transaction. I wouldn't call being a federal worker being subsidized by the government either.

I mean, it depends what kind of subsidy we're talking about.

On contracts: SpaceX builds rockets for the government, fair enough, in a vacuum. Though I would ask why we're paying a private corporation to recycle NASA designs we wouldn't fund via NASA, rather than just having NASA or the Air Force do it.

On welfare: Corporations like Walmart benefit incredibly from the tattered remnants of America's social safety net, because if it didn't both exist and demand that people work to earn the benefits, nobody in their right mind would work for places like Walmart, because they wouldn't get paid enough to live.
If nothing else, they would all die of starvation, which of course I don't want, but Walmart is also benefiting from that, albeit indirectly.

Misc: artificially low taxes, the ability for corporations to shelter revenue overseas to avoid taxes, temporary stays on property taxes to attract businesses to a given area, lax environmental regulations in some areas, and lots of other examples of all the little ways private industry gets money from the government it shouldn't have. Most of these not only don't "give something back" but detract from society or the larger world.

And to emphasize, I'm not even arguing for or against here. I'm just saying Elon Musk doesn't want a small government, nor a large one. He wants a government he can puppet. As long as it benefits him and does not constrain him, he doesn't give a shit what else it does.

A short list of libertarian principles I'd bet a LOT of money Elon does not endorse:

- Abolition of Intellectual Property: Hardcore libertarians argue patents and copyrights are government-enforced monopolies that stifle innovation. Musk’s companies rely heavily on IP protections—Tesla’s battery tech, SpaceX’s designs, Neuralink’s research. Without IP law, his competitive moat collapses.

- No Government Subsidies: Libertarian principle: the market should stand on its own, no handouts. Musk’s empire thrives on subsidies: Tesla leaned on EV tax credits, SpaceX lives off NASA and DoD contracts, SolarCity was propped up by state incentives.

- Minimal Regulation: Libertarians want deregulation across the board. Musk benefits from regulation: environmental rules push consumers toward EVs, carbon credits generate revenue, and zoning laws often bend to his lobbying.

- Free Markets Without Favoritism is a libertarian ideal: no special treatment, no cronyism. Musk actively lobbies for policies that tilt markets in his favor, from energy credits to space launch contracts. That is not free competition, it is government-backed advantage.
- Flat, Transparent Taxation: Libertarians often push for simple, low, flat taxes with no loopholes. Musk’s companies exploit tax shelters, property tax abatements, and complex accounting maneuvers to minimize liability. That is the opposite of transparent.

> Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.

This would be a 19th-century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.

> most of the population [..] can vote.

I mean, this is a solvable problem...

Algorithmic Accountability. Not just for AI, but also social media, advertising, voting systems, etc. Algorithm Impact Assessments need to become mandatory.

Sounds like a great way to make jobs for a bunch of talkers and parasites.

We don’t know what kind of insecure systems we’re dealing with here, and there’s a pervasive problem of incestuous dependencies in a lot of AI tech stacks, which might lead to some instability or security risks. Adversarial attacks against LLMs are just too easy. It makes sense to let states experiment and find out what works and doesn’t, both as a social experiment and a technological one.

Are we seeing lobbying for liability exemptions for AI errors? That's probably the biggest practical concern on the consumer side.

The headline says that as if a few million is a lot of money to spend on lobbying.

I like how building up millions of dollars to bribe elected officials is reported on in such neutral terms. Modern times. I wouldn't be surprised if you learn about it in business school.

Every generation has its own copyright war. First file sharing, then blockchains, now LLMs. As long as digital computers are copy-on-write, this debate shall continue. It will only get solved once we have viable quantum computers.

Oh wow!
I think I found part of the problem. I replied earlier about algorithmic accountability, and the need for Algorithm Impact Assessments, and I got a snarky reply and downvoted like I've never seen before. I guess accountability hits a nerve. So I'll just say...

Algorithm Impact Assessments

Algorithm Impact Assessments

Algorithm Impact Assessments

I downvoted you because this is a stupid idea which wouldn't do anything for accountability. There's no need for algorithm impact assessments. I don't want my tax dollars wasted on more bureaucratic garbage. It would end up being totally politicized and subjective because there's no consensus on impacts.

This is why I’ve refused to buy into the argument from these ghouls that AI would make the world a better place, and their occasional lip-service of requesting AI regulation for “human safety”: their own actions paint a dystopian world of mass surveillance, even heavier labor exploitation, the return of company scrip and stores, and the wholesale neglect of human well-being, all while blocking the very regulation they claim to want and/or need to succeed safely. If these people genuinely believed in the good of AI, they wouldn’t be blocking meaningful regulation of it.

> their clamoring for AI regulation for “human safety”

> they wouldn’t be blocking regulation of it

Which is it? Do they want regulation or not?

The answer is, in fact, they do want regulation. They want to define the terms of the regulations to gain a competitive advantage.

That’s quite literally the point I was making. They’re lying to everyone while demanding protective backstops for their highly speculative investment. None of these companies, investors, or executives are making AI that’s actually going to improve humanity. They never, ever were, and people need to stop taking them at their word that they are.

But are they really the ones in control?
It's not the tech titans, it's Capitalism itself building the war chest to ensure its embodiment and transfer into its next host - machines. We are just its temporary vehicles.

> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”

Yes, these decisions are being made by flesh-and-blood humans at the top of a social pyramid. Nick Land's deranged (and often racist) word-salad sci-fi fantasies tend to obfuscate that.

If robots turn on their creators and wipe out humanity, then whatever remains wouldn't be a class society or a market economy of humans any more, hence no longer the social system known as capitalism by any common definition.

If there is more than one AI remaining, they will have some sort of an economy between them.

I mean, that could be drone swarms blasting each other to bits for control of what remains of the earth's charred surface, though. That wouldn't be capitalism any more than dinosaurs eating each other. I don't see post-human AI selling goods to consumers and prices being set through competition.

>We are just its temporary vehicles.

> “This is because what appears to humanity as the history of capitalism is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”

I see your “roko’s basilisk is real” and counter with “slenderman locked it in the backrooms and it got sucked up by goatse” in this creepypasta-is-real conversation.

I for one welcome our new AI overlords. (disclaimer: I don't actually, I'm just memeing. I don't think we'll get AI overlords unless someone actively puts AI in charge and in control of both people (= people following directions from AI, which already happens, e.g. ChatGPT making suggestions), military hardware, and the entire chain of command in between.)
Literally no one on earth is trying to make an AI overlord that’s an AI. There’s like a handful of dudes that think that if they can shove their stupid AI into enough shit then they can call themselves AI overlords.

As a reminder, Princeton did a study and found that public support for a bill has almost zero impact on whether or not that bill passes [1]. This government is bought and paid for by the ultra-wealthy and large corporations with a system of legalized corruption that began long before the disastrous Citizens United decision [2], but that decision put things into overdrive. When this bubble bursts, as I firmly believe it will, it's going to get much worse, because Congress will launch into action... by bailing out those that invested billions into AI without any prospect of ever recouping that money.

[1]: https://act.represent.us/sign/the-problem-tmp

[2]: https://www.brennancenter.org/our-work/research-reports/citi...

Oh, they aren't conspiring against democratically made decisions about AI, instead they are "amassing war chests to fight AI regulation". How submissively worded, but that's expected when they have a grip on all major communication channels.

Fighting regulation and taxation of the big tech megacorps is why the leaders and big investors / owners of these companies are looking the other way and being friendly with an administration that is increasingly supremacist and now even discussing denaturalization of legal immigrants (taking away citizenship). It’s despicable cowardice.

God forbid we protect people from the theft machine.

There are a lot of problems with AI that need some carefully thought-out regulation, but infringing on rights granted by IP law still isn't theft.

It's theft. But not all IP theft, or theft in general, is morally equivalent. A poor person stealing a loaf of bread or pirating a movie they couldn't afford is just. A corrupt elite stealing poor farmers' food or stealing content from small struggling creators is not.
>pirating a movie they couldn't afford is just

I wish this argument would die. It's so comically false, and is just used to allow people to pave over their cognitive dissonance with the real misfortunes of a small minority. I am a millennial and rode the wave of piracy as much as the next 2006 computer nerd. It was never, ever, about not being able to afford these things, and always about how much you could get for free. For every one person who genuinely couldn't afford a movie, there were at least 1000 who just wanted it free.

Speak for yourself. For many more it's about being unwilling to support the development of tech that strips users of their ability to control the devices that they ostensibly own. I happily pay for my media when there's a way to do so,
without simultaneously supporting the emplacement of telescreens everywhere you look.

> For every one person who genuinely couldn't afford a movie, there were at least 1000 who just wanted it free.

You have this backwards. There are way more poor people who can't afford things than there are people who can afford whatever they want.

Genuinely cannot afford is different from can't afford. Genuinely cannot afford means you don't have the $15 to buy the movie after paying for necessities. Cannot afford tends to mean "I bought a 72" OLED last week so no way I'm spending another $1400 on a movie collection".

How many people can afford to pay cash for a 72" OLED? If you have to use credit to "afford" such things, then you can't actually afford them.

When you steal a loaf of bread, somebody's loaf of bread is missing. That's worlds apart from making an unauthorized copy of something.

Ask yourself: who owns the IP you're defending? It's not struggling artists, it's corporations and billionaires. Stricter IP laws won't slow down closed-source models with armies of lawyers. They'll just kill open-source alternatives.

Under copyright laws, if HN's T's & C's didn't override it, anything I write and have written on HN is my IP. And the AI data hoarders used it to train their stuff. Let's meet in the middle: only allow AI data hoarders to train their stuff on your content if the model is open source. I can stand behind that.

Calling an HN comment “intellectual property” is like calling a table saw in your garage “capital”. There are specific regulatory contexts where it might be somewhat accurate, but it’s so different from the normal case that none of our normal intuitions about it apply. For example, copyright makes it illegal to take an entire book and republish it with minor tweaks. But for something short like an HN comment this doesn’t apply; copyright always permits you to copy someone’s ideas, even when that requires using many of the same words.
People seem, either intentionally or unintentionally (largely from being taught by the intentional ones), not to know what training an AI involves. I think most people believe that AI training means copying vast troves of data onto ChatGPT hard drives for the model to actively reference.

How do you expect open source alternatives to exist when they cannot enforce how you use their IP? Open source licenses exist and are enforced under IP law. This is part of the reason why AI companies have been pushing hard for IP reform: they want to decimate IP laws for thee, but not for me.

I never advocated "stricter IP laws". I would however point out the contradiction between current IP laws being enforced against kids using BitTorrent while unenforced against billionaires and their AI ventures, despite them committing IP theft on a far grander scale.

Agreed. Regulate AI? Sure, though I have zero faith politicians will do it competently. But more IP protection? Hard pass. I'd rather abolish patents.

I think one of the key issues is that most of these discussions are happening at too high an abstraction level. Could you give some specific examples of AI regulations that you think would be good? If we actually start elevating and refining key talking points that define the direction in which we want things to go, they will actually have a chance to spread.

Speaking of IP, I'd like to see some major copyright reform. Maybe bring the duration down to the original 14 years, and expand fair use. When copyright lasts so long, one of the key components of cultural evolution and iteration is severely hampered and slowed down. The rate at which culture evolves is going to continue accelerating, and we need our laws to catch up and adapt.

> Could you give some specific examples of AI regulations that you think would be good?
Sure, I can give you some examples:

- deceiving someone into thinking they're talking to a human should be a felony (prison time, no exceptions for corporations)

- ban government/law-enforcement use of AI for surveillance, predictive policing or automated sentencing

- no closed-source AI allowed in any public institution (schools, hospitals, courts...)

- no selling or renting paid AI products to anyone under 16 (free tools only)

> - deceiving someone into thinking they're talking to a human

This is gonna be as enforceable as the CAN-SPAM Act. (i.e. you will get a few big cases, but it's nothing compared to the overall situation.) How do you prove it in court?
Do we need to record all private conversations?

If you think spam is bad now, imagine if trillion-dollar corporations could do it. Just because something isn't perfect doesn't mean it doesn't help.

I like where you're going. How about we just ban closed source software of any kind from public institutions?

> Could you give some specific examples of AI regulations that you think would be good?

AI companies need to be held liable for the outputs of their models. Giving bad medical advice, buggy code, etc. should be something they can be sued for.

90% of the time I'm pro anything that causes a problem for the big corporations, but buggy code? C'mon. It's a pile of numbers. People need to take some responsibility for the extent to which they act on its outputs. Suing OpenAI for bugs in the code is like suing a palm reader for a wrong prediction. You knew what you were getting into when you initiated the relationship.

Here is my (hot take) proposal for regulation:

1) *All major players open source their unobfuscated training data.*

a) The evidence so far shows that every major AI company engaged in intentional and historically unprecedented copyright violation to obtain their training data.

b) LLMs have now poisoned future data for any new players. This is a massive negative externality, and we shouldn't accept this externality as a moat locking out future players from competition.

2) *Levy a 20% royalty on all future genAI revenue to authors and artists who appear in the dataset, and exempt genAI from future copyright violations.*

a) The current copyright model is bad for both authors and AI companies. It's hard for authors to collect from violations, and it's expensive and tedious for AI companies to comply with innumerable individual copyrights. Simplify the regime for everyone, and properly reward the people whose work is the foundation of these models.
b) The specifics can be worked out, but, among other things, the royalty should be proportional to the token count of a work, not just the number of works.

The government is far more dangerous than anything that you want it to regulate.

Corporations and individuals with more capital and power than medium-sized states are more dangerous than my tiny state and local governments, where I actually personally know some of my representatives and have taken part in choosing them.

What is so novel about LLMs (I assume this is the form of AI being discussed) that they require regulation? It's a dataset, an algorithm, and some UI. Almost all the problems brought on by the scale-up are just supply/demand type things. Every problem people point at with AI is also a problem on some scale with computer software in general, so I'm wary of any regulation (and don't kid yourself thinking it would be for the people) bleeding over.

Some proposed regs would cover uses of AI outside LLMs, some of which tech folks might call "machine learning" these days to distinguish them from LLMs. Using algorithms to provide personalized pricing would be an example, where a landlord, retailer, or airline would use an ML service trained on your personal data and aggregated purchase history to decide how much to charge you for a short-term rental, a Nintendo Switch, or a plane ticket. Basically, instant underwriting at scale for every single purchase. Just got a new job with a raise? Your next vacation will cost you 26% more for the same experience.

This fundamentally doesn't work unless there is collusion involved, which we already have laws against.

Let's say you're shopping for something that is about $80. You have Prime, so you pull it up on Amazon and see you can get it for $78.74. You throw it into Google and see Target lists it for $76.18. You usually order from Amazon; are you going to order from Target this one time to save $2.56? What if it was $3.56? What if it was $5.00? What if it was $7.86?
What if it was $20.14?

Or let's say you need a flight. You usually fly American so you check there first. You've had Gold there for the last few years, and you're close now. You could go look up other airlines' prices, and maybe you do as a quick gut check. American costs more, but not a lot more. Exactly how much more is it worth to you to fly American and hit your status? What if you just got a raise? What if you just moved? Or what if you just got laid off?

What is the exact price delta that would get you to change a purchase habit? How does that change from purchase to purchase? How does it change depending on the other circumstances in your life? As a concept: there are price differences that don't matter to people, and those vary, sometimes wildly. Meanwhile, to a large company, adding even one percentage point to their margin, on average, could mean tens to hundreds of millions of dollars of additional profit that year. It could mean managers hitting targets and getting bonuses paid out.

None of this is inherent to the functions of AI, though. Trying to regulate it from that angle will just cause battle lines to form in weird places for the sake of not being considered AI. Or else an extension of regulation beyond AI, which sounds pretty dystopian to me but is sort of my overall point.

Why doesn't it work? It's not obvious to me that a company with straightforward pricing would necessarily outcompete the algorithmic price discrimination one. They'd get somewhat more business from comparison shoppers who the algorithm feels can pay a lot, but lose business from people who the algorithm feels can pay less, and make less profit on everyone else.

Because then it's not distinguishable from price differences that already exist. Where are the damages? Who is being harmed? As a consumer you can always overpay for stuff. Amazon could start using its knowledge of spending habits TODAY to do this, without AI.

I don't really understand the question.
The harm is that many of us would prefer not to face algorithmic price discrimination. I agree that many kinds of abusive price discrimination are possible without AI, and that it's hard to distinguish them from noncontroversial practices like coupons. But why does that mean we can't or shouldn't regulate the AI-enabled version?

I guess my point is that regulating "algorithmic price discrimination" is orthogonal to "regulating AI". Why do something indirectly that could more easily be done directly?

We solved regulations, everyone: a gun is just some metal, bombs are just some chemicals mixed together, we don't need regulations for this stuff!

It was a genuine question. Do you have anything besides ridicule to offer the discussion?
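The personalized-pricing mechanism debated above can be sketched as a toy model. All names and numbers here are hypothetical (the dollar deltas are borrowed from the thread's own examples): the seller quotes each customer the base price plus some fraction of a predicted "indifference delta", the premium a model guesses that customer won't comparison-shop over.

```python
# Toy sketch of algorithmic personalized pricing (hypothetical numbers).
# Each customer has a model-predicted "indifference delta": the premium
# the seller believes won't push them to a competitor.

def personalized_price(base_price: float, predicted_delta: float,
                       capture_rate: float = 0.8) -> float:
    """Quote the base price plus a fraction of the predicted tolerance."""
    return round(base_price + capture_rate * predicted_delta, 2)

# Hypothetical deltas for five customers, base price $80.
deltas = [2.56, 3.56, 5.00, 7.86, 20.14]
base = 80.00

quotes = [personalized_price(base, d) for d in deltas]
uniform_revenue = base * len(deltas)
personalized_revenue = sum(quotes)

print(quotes)
print(f"Extra margin: ${personalized_revenue - uniform_revenue:.2f} "
      f"({(personalized_revenue / uniform_revenue - 1) * 100:.1f}%)")
```

Even with most customers paying only a couple of dollars extra, the aggregate revenue in this toy example rises several percent, which is the "one percentage point of margin at scale" point made above.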
conradev - 3 hours ago
> How much does it cost to train a cutting edge LLM? Those costs need to be factored into the margin from inferencing.

They don't, though! I can buy hardware off of the shelf, host open source models on it, and then charge for inference.
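The disagreement here is quantitative, and a back-of-the-envelope sketch makes it concrete. All figures below are assumptions for illustration, not real lab economics: a frontier lab must amortize training spend into its inference price, while an open-weights host prices at serving cost alone.

```python
# Back-of-the-envelope sketch (all numbers are assumptions): what
# training-cost recovery adds to inference pricing, versus a host
# who only serves an open-weights model and paid nothing to train it.

training_cost = 1_000_000_000          # assumed $1B to train a frontier model
tokens_served = 1_000_000_000_000_000  # assumed 10^15 tokens over the model's life

# Surcharge per million tokens needed just to amortize training.
surcharge_per_mtok = training_cost / tokens_served * 1_000_000
print(f"Amortization surcharge: ${surcharge_per_mtok:.2f} per million tokens")

serving_cost_per_mtok = 0.50  # assumed hardware + power cost per million tokens
margin = 0.30                 # assumed 30% target margin

open_host_price = serving_cost_per_mtok * (1 + margin)
lab_price = (serving_cost_per_mtok + surcharge_per_mtok) * (1 + margin)
print(f"Open-weights host: ${open_host_price:.2f}/Mtok "
      f"vs frontier lab: ${lab_price:.2f}/Mtok")
```

Under these made-up numbers the lab must charge roughly three times the open-weights host for equivalent inference, which is exactly the commoditization pressure the thread describes; change the assumed token volume and the gap shrinks or grows accordingly.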