Microsoft and OpenAI end their exclusive and revenue-sharing deal
bloomberg.com
885 points by helsinkiandrew 21 hours ago
Gift Article: https://www.bloomberg.com/news/articles/2026-04-27/microsoft...
https://openai.com/index/next-phase-of-microsoft-partnership...
https://x.com/ajassy/status/2048806022253609115
Opinions are my own. I think the biggest winner of this might be Google. Virtually all the frontier AI labs use TPU. The only one that doesn't use TPU is OpenAI, due to the exclusive deal with Microsoft. Given the newly launched Gen 8 TPU this month, it's likely OpenAI will contemplate using TPU too.

Many labs use TPUs, but not exclusively. Most labs need more compute than they can get, and if there's TPU capacity, they'll adapt their systems to be able to run partially on TPUs. Even Google doesn't only use TPUs.

Google is in a different position to others in that they're the only frontier lab with a cloud infra business. It obviously makes sense to sell GPUs on cloud infra as people want to rent them. In that respect Google buys a ton of GPUs to rent out. What's unclear to me is how much Google uses GPUs for their own stuff. Yes, Gemini runs on GPUs now, so that Google can sell Gemini on-prem boxes (recent release announced last week), but is any training or inference for Gemini really happening on GPUs? This is unclear to me. I'd have guessed not, given that I thought TPUs were much cheaper to operate, but maybe I'm wrong. Caveat: I work at Google, but not on anything to do with this. I'm only going on what's in the press for this stuff.

Why is AMD not more popular then, if labs are so flexible about giving up CUDA?

People are trying, especially for inference. For training, it's just too high risk to tank your training, I think. TPUs are at least dogfooded by Google DeepMind; no team AFAIK has gotten the AMD stack to train well.

Interesting. Why? My current mental model is that AMD chips are just a bit behind, so, less efficient, but no biggie. Do labs even use CUDA?

This is somewhat out of date (Dec 2024), but gives you some idea of how far behind AMD was then: https://newsletter.semianalysis.com/p/mi300x-vs-h100-vs-h200... Pull quotes:

> AMD's software experience is riddled with bugs, rendering out-of-the-box training with AMD impossible.
> We were hopeful that AMD could emerge as a strong competitor to NVIDIA in training workloads, but, as of today, this is unfortunately not the case. The CUDA moat has yet to be crossed by AMD due to AMD's weaker-than-expected software Quality Assurance (QA) culture and its challenging out-of-the-box experience. [snip]

> The only reason we have been able to get AMD performance within 75% of H100/H200 performance is because we have been supported by multiple teams at AMD in fixing numerous AMD software bugs. To get AMD to a usable state with somewhat reasonable performance, a giant ~60 command Dockerfile that builds dependencies from source, hand crafted by an AMD principal engineer, was specifically provided for us [snip]

> AMD hipBLASLt/rocBLAS's heuristic model picks the wrong algorithm for most shapes out of the box, which is why so much time-consuming tuning is required by the end user.

Etc. The whole thing is worth reading. I'm sure it has improved since then (and will continue to). I hear good things about the Lemonade team (although I think that is mostly inference?). But the NVidia stack has improved too.

That's insane. There should be a big team of people at AMD whose whole job is just to dogfood their stuff for training like this. Speaking of which, Amazon is in the same boat; I'm constantly surprised that Amazon is not treating improving Inferentia/Trainium software as an uber-priority. (I work at Amazon)

> There should be a big team of people at AMD whose whole job is just to dogfood their stuff

If they had this management attitude, they wouldn't have been so far behind so as to need this action in the first place!

I'll just leave this here from 10 years ago:

> "Are we afraid of our competitors? No, we're completely unafraid of our competitors," said Taylor. "For the most part, because—in the case of Nvidia—they don't appear to care that much about VR.
> And in the case of the dollars spent on R&D, they seem to be very happy doing stuff in the car industry, and long may that continue—good luck to them."

https://arstechnica.com/gadgets/2016/04/amd-focusing-on-vr-m...

"Car industry" is linked to the GPU-accelerated self-driving car work, i.e., making neural networks run fast on GPUs: https://arstechnica.com/gadgets/2016/01/nvidia-outs-pascal-g...

Anecdotal, but over several years with an AMD GPU in my desktop I've tried multiple times to do real AI work and given up every time with the AMD stack.

I'm running fine on my AMD 7800 XT 16 GB... Yes, memory is a bit limited, but apart from that I have found that it works great using Vulkan in LM Studio, for example. ROCm works great too; the only issue I have had is that my machine froze a couple of times as it used 100% of the graphics and the OS had nothing left. Since moving to Vulkan I stopped getting these errors, apart from a little UI slowdown when I had 4 models loaded at the same time taking turns. I'm also on an i7 6700 with 32 GB DDR4, so I'm sure that is causing more slowdowns than the graphics card.

Yet another reason to doubt claims that "software is solved". Anthropic did retire an interview take-home assignment involving optimising inference on exotic hardware, because Claude could one-shot a solution, but that was clearly a whiteboard hypothetical instead of a real system with warts, issues and nuance.

I don't know what's a chicken and what's an egg here. But ROCm support is often missing or experimental even in very basic foundational libraries. They need someone else to double down on using their chips and just break the software support out of the limbo.

I'm doing inference on a free MI300X instance from AMD right now. Not sure if the software stack is just old or what, but here's what I've observed:

- Stuck on an old version of vLLM, pre-Transformers 5 support. It lacks MoE support for Qwen3 models. oss-120b is faaaar slower than it should be.
- int8 quantization seems like it's almost supported, but not quite. Speeds drop to a fraction of full-precision speed and the server seems like it intermittently hangs.
- int4 quantization: not supported.
- fp8 quantization: not supported.

Again, maybe AMD is just being lazy with what they've provided, but it's not a great look. Right now the fastest smart model I can run is full-precision Qwen3-32B. With 120 parallel requests (short context) I'm getting PP @ 4500 tokens/sec and TG @ 1300 tokens/sec.

> Do labs even use CUDA?

From the papers I've read and the labs that I have worked in personally, I would say that most scientists developing deep learning solutions use CUDA for GPU acceleration.

AMD GPUs compete, but they lack the interconnect. NVLink performance is a huge deal for training.

What I hear is that getting your network to work on AMD is a huge pain.

Yeah, historically it's been software that's limited AMD here. Not surprised to hear that may still be the issue.

NVidia's biggest edge was really CUDA. And almost by happenstance, Apple. Turns out they have a great platform for inference and torched almost nothing comparatively on Siri. The Apple/Gemini deal is interesting; Google continues to demonstrate their willingness to degrade their experience on Apple to try and force people to switch.

If you do the math (I did), in 2 years, open source models that you can run on a future MacBook Pro will be as capable as the frontier cloud models are today. Memory bandwidth is growing rapidly, as is the die area dedicated to the neural cores. And all the while, we have the silicon getting more power efficient and increasingly dense (as it always does). These hardware improvements are coming along as the open source models improve through research advancements.
And while the cloud models will always be better (because they can make use of as much power as they want to, up in the cloud), what matters to most of us is whether a model can do a meaningful share of knowledge work for us. At the same time, energy consumption to run cloud infrastructure is outpacing the creation of new energy supply, which is a problem not easily solved. I believe scarcity of energy will increasingly drive frontier labs toward power efficiency, which necessarily implies that the Pareto frontier of performance between cloud and local execution will narrow.

An Opus 4.7/GPT-5.5 class model is 5 trillion parameters [1]. To run an 8-bit quantized version of that you need roughly 5 TB of RAM. Today that is around 18 NVidia B300s. That's around $900,000, without including the computers to run them in. It's true that the capability of open source models is improving, but running actual frontier models on your MBP seems a way off.

[1] https://x.com/elonmusk/status/2042123561666855235?s=20 (and Elon has hired enough people out of those labs to have a fair idea)

People had this "why you probably can't run a GPT-4 (or even GPT-3.5) class model on your MBP anytime soon" conversation before. Today's LLMs are able to pack much more capability into fewer parameters compared to 2023. We might still be at the very rudimentary phase of this technology; there are low-hanging efficiency gains to be had left and right. These models consume many orders of magnitude more energy than a human brain, so this all seems like room for improvement. The right question: is there a law in information theory that fundamentally prevents a 70B model of any architecture from being as smart as Opus 4.7?

There is a huge gap between "in two years" and "theoretically possible".

>> People had this "why you probably can't run a GPT-4 (or even GPT-3.5) class model on your MBP anytime soon" conversation before.

Opus and GPT are generic LLMs with knowledge on all sorts of topics.
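As an aside, the 5 TB / 18-GPU estimate above is straightforward napkin math. A quick sketch, where the 288 GB of HBM per B300-class accelerator and the $50k unit price are my assumptions for illustration, not figures from the thread:

```python
# Napkin math for the "5T params -> ~18 GPUs -> ~$900k" estimate.
# 288 GB HBM per B300-class part and $50k/GPU are assumptions.
import math

params = 5e12                  # 5 trillion parameters
bytes_per_param = 1            # 8-bit quantization
weights_gb = params * bytes_per_param / 1e9      # 5,000 GB = 5 TB
hbm_per_gpu_gb = 288                             # assumed per-GPU capacity
gpus = math.ceil(weights_gb / hbm_per_gpu_gb)    # round up to whole GPUs
print(f"{weights_gb/1000:.0f} TB of weights -> {gpus} GPUs")
print(f"~${gpus * 50_000:,} at an assumed $50k/GPU")
```

With those assumptions it lands on 18 GPUs and $900k, which matches the comment's figures.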
For specific use cases you probably don't need all the parameters? Suppose you want to generate code with opencode: what part of the generic LLM is needed and what parts can be removed?

We're already doing that; it's called distillation, and it's how models like DeepSeek are trained.

The OP said "as capable as the frontier cloud models are today", which might assume model improvements that do more with less. Opus 4.7/GPT-5.5 performance might be achievable with a fraction of the parameters.

Exactly. I also feel like being able to choose a model for the use case could be a worthwhile idea. So instead of trying to squeeze all kinds of knowledge into a single model, even if it's MoE, just focus models on use cases. I bet you only need double-digit-billion parameter models for that, with the same or even better performance.

I wish more people were more aware of this. I think so much of the current optimism is based on "it doesn't matter if companies are raising prices since I'm just going to run the model locally". That doesn't fly.

As far as I can tell Minimax M2.7 is better than anything available a year ago, but it runs on an ordinary PC. Will that continue? Not sure, but the trend has continued for the last two years and I don't know of any fundamental limits the models are approaching.

Doing that will only be possible with something like better 3D NAND flash memory; it needs new hardware. People are already trying to bring that to market. I contemplated taking a compiler position in such a company.

I think your own math leads to the conclusion that the public APIs are not serving models of that size. They couldn't afford to.

> An Opus 4.7/GPT-5.5 class model is 5 trillion parameters [1]

You could run it on a cluster of nodes that each do some mix of fetching parameters from disk and caching them in RAM. Use pipeline parallelism to minimize network bandwidth requirements given the huge size.
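For what it's worth, the reason pipeline parallelism fits this cluster idea is that only activations cross the network, never weights. Toy numbers (the node count and hidden size are made-up illustrative values, not from the thread):

```python
# Toy sketch of the pipeline-parallel cluster idea: each node holds a
# contiguous slice of layers; per generated token, only one small
# activation vector hops between nodes, while the 5 TB of 8-bit
# weights stay put in each node's local RAM/disk cache.
params = 5e12                 # 5T parameters at 1 byte each (8-bit)
n_nodes = 8                   # hypothetical cluster size
hidden_size = 16384           # assumed model width
act_bytes = hidden_size * 2   # fp16 activation vector per hop

print(f"{params / n_nodes / 1e9:.0f} GB of weights per node")       # 625 GB
print(f"{act_bytes / 1024:.0f} KB over the network per token hop")  # 32 KB
```

A few hundred GB per node versus a few tens of KB per hop is why the network is not the bottleneck in this setup.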
Then time to first token may be a bit slow, but sustained inference should achieve enough throughput for a single user. That's a costly setup of course, but it doesn't cost $900k.

> You could run it on a cluster of nodes

Not sure this is an MBP either. Not even a cluster of Mac Pros could run a dense 5T parameter model with RDMA, to my knowledge.

I did this calculation a while ago and don't think frontier models are just a few MacBook Pro generations away. Yes, numbers reliably go up in tech in general, but specifically, semiconductors and standards have long lead times and published roadmaps, so we can have high confidence in what we're getting even in 3-4 years in terms of both transistor density and RAM speeds. In mid-2028 we have N2E/N2P with around 15% greater transistor density than today's N3P, and by EOY 2028 we'll likely have A14 with about a 35-40% density improvement. Meanwhile, we'll be on LPDDR6 by that point, which takes M-series Pros from 307 GB/s to ~400 GB/s, and Maxes from 614 GB/s to ~800 GB/s. Model improvements obviously will help out, but on the raw hardware front these aren't in the ballpark for frontier model numbers.

An H100 has 3 TB/s memory bandwidth, FWIW.

What do you need 3 TB/s memory bandwidth for in a single-user context? DeepSeek V4 Pro (the latest near-SOTA model) has about 25 GB worth of active parameters (it uses an FP4 format for most layers), which gives 12 tok/s on a 307 GB/s platform as the current memory bandwidth bottleneck, maybe a bit less than that if you consider KV cache reads. That's not quite great, but it's not terrible either for a pro-quality model. Of course that totally ignores RAM limits, which are the real issue at present: limited RAM forces you to fetch at least some fraction of params from storage, which while relatively fast is nowhere near as fast as RAM, so your real tok/s are far lower (about 2 for a broadly similar model on a top-end M5 Pro laptop).

That's not "math". That's a "wild guess", or baseless extrapolation at best.
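The 12 tok/s figure above is simple bandwidth arithmetic: at decode time, every active parameter must be streamed from memory once per token, so bandwidth divided by active bytes gives an upper bound (KV-cache reads ignored). A sketch:

```python
# Memory-bandwidth bound on decode speed:
#   tok/s <= memory_bandwidth / active_param_bytes
# (KV-cache reads and other overheads would lower this further.)
def max_decode_tok_s(bandwidth_gb_s: float, active_param_gb: float) -> float:
    return bandwidth_gb_s / active_param_gb

# ~25 GB of active FP4 parameters on a 307 GB/s M-series Pro platform:
print(round(max_decode_tok_s(307, 25), 1))  # 12.3 -> matches the ~12 claim
# and on the ~800 GB/s LPDDR6 Max figure from the roadmap comment:
print(round(max_decode_tok_s(800, 25), 1))  # 32.0
```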
My son doubled in size in the first 8 months of his life. At age 12, he will be larger than the Moon.

So long as you don't require deep search grounding, like massive web indexes or document stores, which are hard to reproduce locally. You can do local agentic things that get close or even do better depending on search strategy, but theoretically a massive cloud service with huge data stores at hand should be able to produce better results. In practice, unless you're doing some kind of deep research thing with the cloud, it'll try to optimize mostly for time and get you a good-enough answer rather than spending an hour or two. An hour of cloud searching with huge data stores is not equivalent to an hour of local agentic searching, presumably. I think that problem will improve a little in the coming years as we create optimized data curation, but the information world will keep growing, so the advantage will likely remain with centralized services as long as they offer their complete potential rather than a fraction.

They also degrade their own direct services with little warning or thought put into change management, so, to be fair, Apple may be getting the same quality of service as the rest of us.

I think that's just how Google is, by nature. They don't intentionally degrade their services. They just aren't a customer-centric company. They run on numbers. As a corporation, it doesn't really encourage support and maintenance work either.

Indeed. I'm wondering if Apple's "missing the train" with AI ended up being a blessing for them. Not only in the Google deal, but also there's a lot of people doing interesting stuff locally.

Apple is basically in the same boat as AMD and Intel. They have a weak, raster-focused GPU architecture that doesn't scale to 100B+ inference workloads and especially struggles with large-context prefill. TPUs smoke them on inference, and Nvidia hardware is far-and-away more efficient for training.
This doesn't get talked about enough: the GPU is weak, weak, weak. And anyone who can fix them will go to a serious AI company (for 2-3x the salary).

The GPU is monstrously good. Depending on the workload, the M1-series GPU using 120W could beat an RTX 3090 using 420W. Same with the CPU: Linux compiled faster on an M1 than on the fastest Intel i9 at the time, again using only 25% of the power budget. And the M-series has only gotten better. It is kind of sad Apple neglects helping developers optimize games for the M-series, because iDevices and MacBooks could be the mobile gaming devices.

> the M1 series GPU using 120W could beat an RTX 3090 using 420W

You're cooked if you actually believe this.

I very recently ran the numbers on these GPUs for an upcoming blog post. The token generation performance is bad, but the prefill performance is _really_ bad. For a Qwen 3.6 35B / 3B MoE, 4-bit quant:

- parsing a 4k prompt on an M4 MacBook Air takes 17 seconds before generating a single token;
- on an M4 Max Mac Studio it's faster, at 2.3 seconds;
- on an RTX 5090, it's 142 ms.

The RTX 5090 uses more power than an M4 Max Mac Studio, but it's not 16x more power.

Somehow Apple has always been able to sell their stuff as somehow magic. Remember the megahertz myth? Apple hertzes and Apple bytes are much better than PC hertzes and bytes, because they are made by virgin elves during a full moon.

> Apple hertzes and apple bytes are much better than PC hertzes and bytes because they are made by virgin elves during a full moon.

The thing that Apple has always been excellent at is efficiency: even during the Intel era, MacBooks outclassed their Windows peers. Same CPU, same RAM, same disks, so it definitely wasn't the hardware; it was the software that allowed Apple to pull much more real-world performance out of the same clock cycles and power usage.
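As a side note, the prefill benchmark quoted above works out to roughly these throughputs (taking the "4k prompt" as 4096 tokens, which is my assumption):

```python
# Prefill throughput implied by the benchmark figures quoted above
# (4k-token prompt; times as reported in the comment).
prompt_tokens = 4096
times_s = {
    "M4 MacBook Air": 17.0,
    "M4 Max Mac Studio": 2.3,
    "RTX 5090": 0.142,
}
for name, t in times_s.items():
    print(f"{name}: {prompt_tokens / t:,.0f} prefill tok/s")

# The speedup ratio is where the "not 16x more power" comparison comes from:
print(f"5090 vs M4 Max: {2.3 / 0.142:.1f}x faster")  # ~16.2x
```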
Windows itself, but especially third-party drivers, are disastrous when it comes to code quality, and they are much, much more generic (and thus inefficient) compared to Apple with its very small number of different SKUs. Apple insisted on writing all drivers and IIRC even most of the firmware for embedded modules themselves to achieve that tight control... which was (in addition to the 2010-ish lead-free Soldergate) why they fired NVIDIA from making GPUs for Apple: NV didn't want to give Apple the specs any more to write drivers.

> NV didn't want to give Apple the specs any more to write drivers.

I think that's a valid demand, considering Nvidia's budding commitment to CUDA and other GPGPU paradigms. Apple, backing OpenCL, would have every reason to break Nvidia's code and ship half-baked drivers. They did it with AMD's GPUs later down the line, pretending like Vulkan couldn't be implemented so they could promote Metal. Apple wouldn't have made GeForce more efficient with their own firmware; they would have installed a Sword of Damocles over Nvidia's head.

On Geekbench 5, the M1 hits 483 FPS and the RTX 3090 hits 504 FPS. There are other workloads where the M1 actually beats the 3090. Apple does plenty of hyping, but it's always cute when irrational haters like you put them down. The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.

What Geekbench 5 FPS are you talking about? Geekbench only has OpenCL and Vulkan scores for the 3090 as far as I can tell, and the M1 Ultra is less than half the OpenCL score of the 3090. And the M1 Ultra was significantly more expensive. Find or link these workloads you think exist, please.

> The M1 was (well, is) a marvel and absolutely smokes a 3090 in perf per watt.

The GTX 1660 also smokes the 3090 in perf per watt. Being more efficient while being dramatically slower is not exactly an achievement; it's pretty typical power consumption scaling, in fact.
Perf per watt is only meaningful if you're also able to match the perf itself. That's what actually made the M1 CPU notable. M-series GPUs (not just the M1, but even the latest) haven't managed to match or even come close to the perf, so being more efficient is not really any different than, say, Nvidia, AMD, or Intel mobile GPU offerings. Nice for laptops, insignificant otherwise.

Apples and limes. The context of this thread isn't consumer chips, but Apple's analog to an H/B200. The GPUs are bottom-barrel for compute-focused industries. It is mobile-grade hardware that arguably can't even scale to prior Mac Pro workloads.

> The GPU is monstrously good. Depending on the workload, the M1 series GPU using 120W could beat an RTX 3090 using 420W.

You're just listing the max TDP of both chips. If you limit a 3090 to 120W, it would still run laps around an M1 Max in several workloads, despite being an 8nm GPU versus a 5nm one.

> It is kind of sad Apple neglects helping developers optimize games for the M-series

Apple directly advocated for ports like Death Stranding, Cyberpunk 2077 and Resident Evil internally. Advocacy and optimization are not the issue; Apple's obsession with reinventing the wheel with Metal is what puts the Steam Deck ahead.

Edit (response to matthewmacleod):

> Bold of them to reinvent something that hadn't been invented yet.

Vulkan was not the first open graphics API, as most Mac developers will happily inform you.

> Vulkan was not the first open graphics API, as most Mac developers will happily inform you.

OpenGL had become too unmanageable, which is why devs moved to DirectX. Unless you meant a different one?

> The GPUs are bottom-barrel for compute-focused industries. It is mobile-grade hardware that arguably can't even scale to prior Mac Pro workloads.

Surprised Apple didn't create a TPU-like architecture. Another misstep from John Giannandrea.

I'm confused how anyone ever thought the NPU would be a good idea.
The GPU is almost always underutilized on Mac and could do the brunt of the work for inference if it embraced GPGPU principles from the start. Creating a dedicated hardware block to alleviate a theoretical congestion issue is... bewildering. That goes for most NPUs I've seen. Apple had the technology to scale down a GPGPU-focused architecture just like Nvidia did. They had the money to take that risk, and had the chip design chops to take a serious stab at it. On paper, they could have even extended it to iPhone-level edge silicon, similar to what Nvidia did with the Jetson and Tegra SoCs.

I think they built the NPU with whatever models they needed to run on the iPhone in mind, versus trying to build a general-purpose chip, and then got lucky it was also useful for LLMs. (Like "I want to do object detection for cutting people into stickers on device without blowing a hole in the battery, make me a chip for that".)

I'm not sure even Apple thought that, given that they don't officially provide access to ANE internals under macOS (barring unsupported hacks). But if that were fixed, it could then be useful for improving the power efficiency of prefill, where the CPU/GPU hardware is quite weak (especially prior to the M5 Neural Accelerators).

> Apple's obsession over reinventing the wheel with Metal

Bold of them to reinvent something that hadn't been invented yet.

Apple is in a much better boat than AMD or Intel. They have a gigantic war chest and can just snap up whoever looks like a leader coming out of the bubble burst.

It's becoming increasingly clear that there is no moat on models. The winners will be the ones who have existing products and ecosystems they can tie AI into. You will pay Adobe for credits because that will be the only AI that works in Photoshop; you will pay Microsoft because only theirs will work on your Microsoft cloud apps. OpenAI has nothing. Their tech will rapidly be devalued by free models the moment they stop lighting stacks of cash on fire.
I kind of agree with you at this point. When ChatGPT was rapidly gaining popularity, I thought that they would eventually replace search (esp. for shopping), which would have given them a huge ad revenue. Maybe they could have even tried social networking, e.g., to help you sort out the huge flow of information that today's social networks are and get to the important/rewarding/whatever posts. But now ChatGPT is kind of getting commoditized. I would even dare say that Gemini feels a bit better to me now, so the search route for ChatGPT is clearly gone.

OpenAI is handling 15% of US traffic.

> OpenAI is handling 15% of US traffic.

The parent post was arguing that they can do this now because they are lighting stacks of cash on fire. And once they stop doing that, their LLM lead will be gone in a hurry. They appear to not have a moat, like other more established players do.

I wish Google would launch Mac Mini-like devices running their consumer-grade TPUs for local inference. I get that they don't want it to eat into their GCP margins, but it would still get them into consumer desktops that Pixelbooks could never penetrate (Chromebooks don't count and may likely become obsolete soon due to MacBook Neo). I had written a blog post on the same a few days back, if anyone's interested in reading (hardly a 5-minute read): Can Google Win the AI Hardware Race Through TPUs?

Hello, your link says "~20 min read", which seems to be the case!

I guess I myself have read it too many times by now, so in my mind it was just a 5-minute read when I made this comment... sorry.

Don't forget Elon; I am sure this news will come up in the upcoming OpenAI vs Elon Musk trial starting soon! I can't wait to hear all the discovery from this trial.

> Microsoft will no longer pay a revenue share to OpenAI.
> Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI's technology progress, at the same percentage but subject to a total cap.

How is this helping OpenAI?

OpenAI uses GCP. I don't know if they use TPUs. https://www.reuters.com/business/retail-consumer/openai-taps...

> The only one that doesn't use TPU is OpenAI

For inference? This is from July 2025: OpenAI tests Google TPUs amid rising inference cost concerns, https://www.networkworld.com/article/4015386/openai-tests-go... / https://archive.vn/zhKc4

> ... due to the exclusive deal with Microsoft

This exclusivity went away in Oct 2025 (except for 'API' workloads).

[flagged]

Some on this forum will be working for companies with conflicts of interest on the topic, and if an employee's words were construed to be the opinions of the company, that could be bad for that person.

I was once almost fired for saying a little too much in an HN comment about pentesting. Being dragged into an office and given a dressing-down for posting was quite traumatic. The central issue (or so they claimed) was that people might misconstrue my comment as representing the company I was at. So yeah, I don't understand why people are making fun of this. It's serious. On the other hand, they were so uptight that I'm not sure "opinions are my own" would have prevented it. But it would have been at least some defense.

> On the other hand, they were so uptight that I'm not sure "opinions are my own" would have prevented it.

In my experience it didn't matter at all; they considered "you work for us, it's known you work for us, therefore your opinions reflect on us". Absolute nonsense, they don't pay me for 24 hours of the day. I told them where they can stick it (politely) and got a new job.

Good on you. I'm happy to hear you got out of that kind of environment. It's soul-draining. Also a relief to hear that other people had to deal with this nonsense.
I was afraid the reaction would be "there's no way that happened," since at the time I could hardly believe it either.

Opinions are my employer's, and they are also bastards.

Bold and silly of you to even reveal where you work, tbh.

> Whose else would they be? Their employer's?

They may work at a related company, and are required to say this.

At this point that phase is an attempt at status signaling.

it's hilarious though. it's like people are LARPing a Fortune company CEO when they're giving their hot takes on social media. reminds me of Trump ending his wild takes on social media with "thank you for your attention to this matter" - so out of place, it makes it really funny

*typo

> it's like people are LARPing a Fortune company CEO when they're giving their hot takes on social media

At least in large tech companies, they have mandatory social media training where they explicitly tell employees to use phrases like "my views are my own" to keep it clear whether they're speaking on behalf of their employer or not.

If their name is on the post or their company is listed in their profile. The person above has neither, as far as I can tell. Why would they be speaking on behalf of their employer? That is what would need a disclaimer, not the common case. Besides, he can put it one time in his profile, not over and over again in every comment like he does. There is no expectation that some random employee is a spokesperson for Google on tech message board comment threads. It's just a way to brag.

> Why would they be speaking on behalf of their employers?

Disclaimers aren't there for folks who are thinking and acting rationally. They are there for people who are thinking irrationally and/or manipulatively. There are (relatively speaking) a lot of these people. They can chew up a lot of time and resources over what amounts to nothing.
Disclaimers like this can give a legal department the upper hand in cases like this. A few simple examples:

- There is a person I know who didn't renew the contract of one of their reports. Pretty straightforward thing. The person whose contract was not renewed has been contesting this legally for over 10 years. The outcome is guaranteed to go against the person complaining, but they have time and money, so they tax the legal team of their former employer.

- There is a mid-sized organization that had a small legal team that had its plate full with regular business stuff. Despite settlements having NDAs, word got out that fairly light claims of sexual harassment and/or EEO complaints would yield relatively easy five-figure payments. Those complaints exploded, and some of the complaints were comical. For example, one manager represented a stance for the department to the C-suite that was 180 degrees opposite of what the group of three managers had agreed to prior. Lots of political capital and lots of time had to be used to clean up that mess. That person's manager was accused of sex discrimination and age discrimination simply for asking the person why they did that (in a professional way, I might add). That person got a settlement, moved to a different department, and was effectively protected from administrative actions due to it being considered retaliation.

Sounds like the company in the latter example really screwed up, but how does that connect to disclaimers? Is it just an example of malicious behavior?

i've worked in two different large tech companies. when i give my hot takes pseudonymously on social media, these phrases would be nothing but a LARP. i don't put my real name here, nor do i put my professional commitments in my profile, and neither does this guy.

Exactly. There is no scenario where we should expect some random anon to be speaking for Google. When that is the case, a disclaimer is warranted, not the common case of speaking for oneself.
He can write it once in his profile if he's so worried about it, not every other comment like he does. It's just inflated self-importance.

You seem smart and knowledgeable. Maybe you should reach out to the lawyers at these companies, and then they can change the policy!

No, I think it's made up; there is no policy, and the lawyers couldn't care less. It's just something people do to massage their own ego.
https://blogs.microsoft.com/blog/2025/10/28/the-next-chapter... / https://archive.vn/1eF0V

OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI's compute provider.