Sam Altman may control our future – can he be trusted?

newyorker.com

1567 points by adrianhon a day ago


ronanfarrow - a day ago

Ronan Farrow here. Andrew Marantz and I spent 18 months on this investigation. Happy to answer questions about the reporting.

laylower - 3 hours ago

Reading this makes me even happier to pay for Anthropic.

Amodei and his sister saw through the behavior and called it out.

" “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals."

rupi - 7 hours ago

Ronan Farrow, the writer of this article, made a comment in this thread that is buried among all the comments: "As is always the case with incredibly precise and rigorously fact-checked reporting like this, where every word is chosen carefully (the initial closing meeting for this one was nearly eight hours long, with full deliberation about each sentence), there is more out there on that subject than is explicitly on the page."

I saw that before I read the article and it made me read the article in a very different way than I normally do. As I was reading, I found myself thinking, "Why is it worded that way? What else is the writer trying to say, or not say?"

It made reading this a lot more interactive than I normally associate with passive reading. Great job, Ronan!

4ggr0 - 3 minutes ago

> while Y.C. took a six- or seven-per-cent cut

shamefully have to admit that my monkey-brain smirked because of an accidental 67-meme in a serious article.

strgrd - 10 minutes ago

I remember reading these direct quotes from SA in the New Yorker in 2016 and thinking, yeah, this guy is just miserable:

> “Well, I like racing cars. I have five, including two McLarens and an old Tesla. I like flying rented planes all over California. Oh, and one odd one—I prep for survival. My problem is that when my friends get drunk they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources. I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

> "If you believe that all human lives are equally valuable, and you also believe that 99.5 per cent of lives will take place in the future, we should spend all our time thinking about the future. But I do care much more about my family and friends.”

> "The thing most people get wrong is that if labor costs go to zero... The cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”

> "...we’re going to have unlimited wealth and a huge amount of job displacement, so basic income really makes sense. Plus, the stipend will free up that one person in a million who can create the next Apple.”

andrewrn - 10 hours ago

“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

You can subtly see residue of this frustration in Dalton and Michael’s videos when Sam Altman comes up. It’s only thinly veiled that Sam was a snake while at YC.

arionhardison - 15 hours ago

Hi @ronanfarrow — I have only had one interaction with Sam Altman in person, and I was advised to keep it to myself. I know this crowd may not care, but Altman is absolutely terrified of Black people — not in any contextual sense, but in a visceral, instinctive way. For someone who, as you put it, "controls our future," this should matter.

FYI: I am by far not the only one to have experienced this and it 100% impacts hiring and other decisions at OpenAI.

jablongo - 9 hours ago

For me, the attempted productization of Sora was conclusive proof that 1) OAI was overcapitalized and desperate for revenue 2) safety didn't matter to them much 3) improving the world didn't matter much either.

At one point you mentioned an interaction with OpenAI staff where you were looking to interview AI Safety researchers. You were rebuffed b/c "existential safety isn't a thing". Does this mean that you could find no evidence of an AI Safety team at OAI after Jan Leike left? If you look at job postings it does seem like they have significant safety staff...

stavros - 13 hours ago

I found it very interesting that Altman et al were worried that AI will become supremely intelligent and China will make a supervirus or some AI drones or whatnot, but not a single person was worried about destroying all jobs because we wouldn't need humans any more.

Or maybe they were not so much "worried" but "hopeful" that they'd amass literally all the wealth in the world.

kmfrk - 21 hours ago

Gobsmacking details about Altman's time as Y Combinator president, in case anyone's wondering.

Fantastic reporting.

thrwaway55 - 9 hours ago

We need only ask the dead. Aaron Swartz knew what Altman is. The answer to the topic is no.

mt18 - 12 minutes ago

Altman's character is almost irrelevant next to how frictionless it is for a handful of people to set defaults for millions.

neonate - 15 hours ago

https://archive.ph/hOYMn

krackers - 13 hours ago

[1] is also good to read as a follow-up, and compare the personalities

https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai...

swingboy - 15 hours ago

It's really interesting reading about how these folks view LLMs. Yeah, they're transformative, but I don't know that we're going to be eating ramen in a Neo-Tokyo street bar anytime soon. So much "A.G.I" mentioned in the article.

vlovich123 - 8 hours ago

> Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Ronan interesting writing as always. I’m curious if the role of the media as a pawn of the rich and powerful to sway perception and build narratives concerns you, especially given your personal experiences with this and the reporting you’ve done. Are there reforms you think reporters and/or news organizations should adopt to make sure access doesn’t become direct or indirect manipulation and how do you fight against that in your own reporting?

snakeboy - 5 hours ago

I usually use free archived versions to read mainstream journalism pieces. Seeing this convinced me to subscribe. I've always loved The New Yorker, and am happy to support serious longform journalism (and I know that Ronan is one of the best).

However, it's a shame that the only way to subscribe to the print version is to pay $260 upfront for the yearly subscription. Meanwhile the digital version is $1/week ($52 upfront) for one year, or even just $10 for one month.

ainch - 14 hours ago

Great piece. And a good excuse to read up on the use of the diaeresis in English (e.g., coördination, reëlection) to distinguish repeated vowels - I hadn't seen the New Yorker's usage before.

bkummel - 4 hours ago

Without having read the article, reacting on the headline: no single person should be allowed to control our future. Democracy is a thing in large parts of the world, and we should try very hard to keep that functioning and even improve it.

morleytj - 14 hours ago

Wow, this is an incredibly detailed piece. Really in depth reporting and the kind of detailed investigation we need more of on important topics like this.

> "Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence."

This is a very small detail, but an instinctive grimace crosses my face at the thought of these sort of Marvel references and I'm not entirely sure why.

adrianhon - a day ago

Archive link: https://archive.is/2026.04.06-100412/https://www.newyorker.c...

just_once - a day ago

Amazing that this article and an actual comment from Ronan Farrow is this far down the list while...Scientists Figured Out How Eels Reproduce (2022) has 6 times the points.

hansmayer - an hour ago

Beyond the question of should we trust Sam Altman to control our future - why on Earth should we want any single individual to control our future at all?

wk_end - 15 hours ago

This anecdote is so absurd it sounds like satire. This is the guy with the $23M mansion?

> Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied.

throw4847285 - 19 hours ago

A new Ronan Farrow piece is a rare gift (and Marantz is no slouch). Can't wait to read this in the physical magazine when it arrives!

locust101 - 4 hours ago

It’s hard to know what the new information here is. Altman’s history has been reported on exhaustively.

A few people have left OpenAI over the years, citing safety abandonment, the non-profit status change, deception, etc., but there is too much money involved. Here lies the actual rub. A lot of the people involved and named in the article are reprehensible: the Kushners, Saudis, Emiratis, the PayPal mafia, VC folks with god complexes. But as long as they have the money, we have to dance to their tune.

We really really need a way for our society to be more equitable and hold these people responsible.

ambicapter - 14 hours ago

I didn't have the mental energy to read the whole thing, but man, the final paragraph is some really good writing. Way to tie it all together.

HardwareLust - a day ago

Of course he cannot be trusted. Anyone whose motivation is based on greed is by nature untrustworthy.

nerdyadventurer - 7 hours ago

Why would anyone trust him at all? Their tech is used to bomb children, and all of these rich folks care only about their selfish gain.

wolvoleo - 3 hours ago

https://archive.is/Cd0Yl

bootload - 12 hours ago

“By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice”

This statement rings true.

JL, PG has often mentioned, is his test for the "people" integrity aspect of YC / startups. It's not lost on me that both Altman and Thiel, each associated with YC, proved useful only in the short term, highlighting how regular "character" evaluations are required at higher levels of responsibility.

latentframe - 7 hours ago

It’s less about trusting one person and more about the structure. AI is concentrating capital, compute, and talent into a few hands; we’ve seen this before with railroads, oil, and semiconductors. It brings innovation, and also pricing power and political influence.

steve_adams_86 - 14 hours ago

> Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I really want?” Among his answers is “Financially what will take me to $1B.”

I can't imagine having such uninspired thoughts and actually writing them down while in a role of such diverse and worthwhile opportunities. I'd like to ask "how the hell do these people find themselves in these positions", but I think the answer is literally what he wrote in his diary. What a boring answer. We need to filter these people out at every turn, but instead they're elevated to the highest peaks of power.

pharos92 - 14 hours ago

We focus these critiques far too much on the face rather than the underlying mechanics. Just as in politics, we critique the personality or politician while the underlying system architecture evades scrutiny.

Sam Altman clearly has a long history of nefarious activity. But the underlying threat posed by AI to society, the economy, and human freedom persists with or without his presence.

calf - 25 minutes ago

The last quote, to a layperson, may sound completely sinister, but therein lies a deep and open computer science question: AIs really do seem to get their special capabilities from having a degree of freedom to output wrong and false answers. This observation goes all the way back to some of Alan Turing's musings on how an AI might one day be possible. And then there were early theorems related to this e.g. PAC learning. I'd love to know about what's happened since on this aspect, such as the role of noise and randomness, and maybe even hallucinations are a feature-not-bug in a fundamental sense, etc.

6Az4Mj4D - 12 hours ago

I am in my 40s and going to be made redundant this June. In the future, only people who can afford to keep things like Claude and OpenAI, and, most importantly, can create more value using them than others can, will be able to survive. Otherwise, the game is more or less over, and I question what's next for my own future while I learn to use Claude out of FOMO. I cannot trust Sam or the others to have any interest in keeping this tech affordable for common people like me.

nextlevelwizard - 2 hours ago

"If I don't destroy humanity someone far worse will do it" -Sam Altman

cmiles8 - 3 hours ago

It seems unlikely OpenAI can survive long term with Sam at the helm. The challenge is that folks already realized that once, and yet here we are.

ycui1986 - 13 hours ago

He won't. If anything, OpenAI is falling behind recently, and the trend won't change easily. It's like Netscape in the old days.

innocenttop - 19 hours ago

Why is the story so downranked? Do folks at Hacker News have something to do with it?

slg - 15 hours ago

One thing that stands out when reading profiles like this is the number of positive and negative descriptions of the subject that agree. For example, there seems to be little dispute that Altman will happily say something that he knows/believes isn't true, there's just a lot of people who are willing to forgive any lies if the lies are in service of something they themselves agree with.

dmitrygr - 14 hours ago

The number of "Altman doesn’t remember this" or "Altman denies this" is hilarious

keepamovin - 4 hours ago

YC invests in people, not ideas. They have vetted him. They are always right about people. It's probably nothing.

b8 - 9 hours ago

Sam failed upwards.

- 29 minutes ago
[deleted]
bambax - 4 hours ago

> Altman does not recall the exchange.

Altman SAYS he does not recall the exchange. Not the same thing.

netcan - 5 hours ago

My tendency is to believe that the individuals don't matter as much when it comes to the biggest risks. I'm not sure if this is a bias or a theory... but I lean toward some sort of "medium is the message" determinism.

>"He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram."

Before "don't be evil" was a cliche, I think it was a real guiding principle at Google and they built a world class business that way.

Facebook's rival ad platform didn't have search queries to target ads at. Aggressive utilization of user data was the only way they could build an Adwords-scale business. As they pushed this norm, Google followed.

Doomscroll addiction gets a lot of attention because engineers and journalists have children and parents. There are other risks though. Political stability, for example.

By the early 2010s, smartphones were reaching places that previously had almost no modern media, often powered by FB-exclusive data plans. The Arab Spring happened, then ISIS. FB-centric propaganda seemingly played a major role in a major conflict/atrocity in Burma. Coups in Africa were powered by social-media propaganda. Worrying political implications in the West. Unhinged-uncle syndrome. Etc. Social media's risks and implications were more than just an "inconvenience."

At no point did we really see tech companies go into mitigation mode. Even CYA was relatively limited. There was no moment of truth. It was business as usual.

So... I think OpenAI's initial charter was naive. Science fiction almost. It was never going to withstand commercial reality, politics, competition and suchlike. I think these are greater than the individuals involved.

That doesn't mean we should ignore, excuse or otherwise tolerate lack of integrity. But, I don't think it is a way of reducing risk.

Whether the risk is skynet, economic turmoil, politics, psych epidemics or whatever... I don't think the personal integrity of executives is a major factor.

einrealist - 13 hours ago

I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

avaer - 9 hours ago

Who would you trust more: Sam Altman, or a council of 1000 representative AI models?

basyt - 3 hours ago

He doesn't control his own future... ChatGPT implodes in 18 months max, depending on how the Strait of Hormuz play goes...

ergocoder - 14 hours ago

I wonder if Sam might abandon the ship soon. Other co-founders already did.

The main reason is that he gets all the downsides without the upsides. I know $5B is a lot but, for a $700B company, it isn't. If OpenAI were a regular for-profit, he would have been worth >$100B already.

This is probably one of the significant factors why other co-founders left too. It's just a lot of headaches with relatively low reward.

saeranv - 12 hours ago

Greg Brockman honestly sounds like a psychopath:

> In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?

mvkel - 4 hours ago

> Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different.

Isn't this really what everything is about? A pure research non-profit transitioned to a revenue generating enterprise because it had to, and a lot of people don't like that. Does that make it evil?

It's romantic to think that the magic of science and research can stand on its own, but even Ilya has admitted more recently that SSI needs to ship something consumer facing.

Anthropic, the lab that put all of its social capital in the safetyism basket, is having the exact same realization, with Claude Code being a mess of technically reckless vibe coded slop that nevertheless is the cash cow for the company.

Maybe it's time for everyone to realize that for an innovation this big to come to bear, it either needs to be state funded, or privately funded, the latter requiring revenue and a plausible vision of generating ROI.

- a day ago
[deleted]
brap - 7 hours ago

He’s a grown ass man tweeting in all lowercase, that’s all I needed to know.

I could more or less infer the rest from that.

kazinator - an hour ago

I place my trust in Betteridge's Law of Headlines.

ernsheong - 4 hours ago

I bet Satya Nadella is regretting defending Altman now.

pupppet - a day ago

Ask Condé Nast if he can be trusted..

https://www.reddit.com/r/AskReddit/s/VWJVBNzc2u

trakkstar - 6 hours ago

Girls and boys, this is a prime example of a rhetorical question.

383toast - 14 hours ago

if you have to ask if someone can be trusted, they usually can't

- 9 hours ago
[deleted]
sph - 5 hours ago

Excellent article, truly well-researched. As someone close to a pathological liar [1], the idea that such a person could be at the forefront of the creation of an artificial superintelligence confirms all the existential risks of this technology, and how naïve, if not ignorant, the average starry-eyed tech worker and investor is about the whole endeavour. It's easy to believe there is a lot of idealism and a wish for a better world, but the greedy drive for money and power underneath is excellently summarized in Greg Brockman's own thoughts: “So what do I really want? [...] Financially what will take me to $1B.”

Literally, the only hope for humanity is that large language models prove to be a dead-end in ASI research.

---

1: “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.” — I guess now I know of two people with these traits.

the_arun - 12 hours ago

The main animated picture reminded me of the evil king Ravan from the Ramayan, with his ten heads. Not sure whether it was done that way intentionally.

charlescearl - 2 hours ago

The very idea of “trusting” monopoly capitalism.

CyborgUndefined - 6 hours ago

ugh, i don't understand why only altman scares you? what about google, china, and other players?

for me, the answer >>> we need to create our own systems. decentralized agent networks and etc.

if you don't want to depend on one person or one company controlling your AI, build your own infrastructure.

the concentration of power in one/two persons is the problem.

BrenBarn - 9 hours ago

Of course not. No one can be trusted to control our future.

tines - 11 hours ago

Two "insure" typos?

eximius - 6 hours ago

Fuck no! Of course he can't be trusted. We know that. Nobody questions that. We know that about most of the "elites" running the show.

We're just in this shitty pit of despair where people are desperate. It's difficult to campaign for good when you're struggling and capital can jerk people around.

People pursue good for the sake of good at cost to themselves when times are very good or times are very, very bad.

Right now times are only merely very bad.

game_the0ry - 17 hours ago

For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.

Some concepts from the book:

> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is a pattern of a liar.

> Trust your instincts over a person's social role (e.g., doctor, leader, parent)

Check and check.

OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.

isolay - 3 hours ago

As for the titular question, Betteridge's law of headlines applies. The answer is: No, we can't trust Sam Altman.

almostdeadguy - 20 hours ago

Seems this got buried from the front page very quickly

RagnarD - 4 hours ago

No.

lenerdenator - 15 hours ago

If you are asking if a single human can be trusted with such a responsibility, the answer is, by default, no.

pdonis - 13 hours ago

Does the article ever actually answer the title question?

panzi - 12 hours ago

No. Next question.

brandonpollack2 - 12 hours ago

I haven't read it yet. The answer is no.

tw04 - 12 hours ago

I don't even need to read the article to know that he unequivocally can't be trusted. Every action he's taken to this point has shown he will say literally anything to get what he wants.

Rover222 - 5 hours ago

I don’t know, but any time I see an interview of Altman and I look at those eyes, I get creeped out.

Arubis - 13 hours ago

This is unfair to the original article, which is well-researched and worth a read. But the answer to this question is _always_ no. Nobody should have as much power as the oligarch class currently does, inscrutable as that power may be.

KellyCriterion - 16 hours ago

Nah, it will be Dario instead of Sam, I'd say? :-))

cm2012 - 14 hours ago

I don't see anything bad about Altman in this article that can't be explained by the chaos of growing a billion-dollar company in a few years.

cedws - 7 hours ago

Sounds like a snake pit. None of them can be trusted. If we have to rely on companies to self appoint a benevolent ‘AI dictator’ we’re fucked.

The only high profile person in AI I’d consider perhaps worthy of trust is Demis Hassabis.

jesterson - a day ago

Watch Altman's reaction in Tucker Carlson interview to the question about (alleged) murder of OpenAI researcher Suchir Balaji.

The overall response, and particularly the body language, speaks volumes.

- 21 hours ago
[deleted]
shevy-java - 6 hours ago

I don't trust him. He has already made statements that convinced me I don't want to touch anything he controls. In a way it is similar to Meta and co. For some reason, US corporations behave very suspiciously once past a certain threshold size. With Win11 from Microsoft, I always wonder whether there is a not-so-hidden subagenda in place.

jader201 - 15 hours ago

Am I the only one that feels like Claude is clearly winning code generation, and Gemini in general LLM?

I just don’t feel like OpenAI has a legitimate shot at winning any of the AI battles.

Therefore, I feel like “Sam Altman may control our future” is a far stretch.

primer42 - 14 hours ago

"Any headline that ends in a question mark can be answered by the word no."

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

lizhang - 10 hours ago

i think im shadowbanned :(

slibhb - 10 hours ago

It is disconcerting how Altman has used "AI safety" as a marketing tool. The more people imagine the universe turned into paperclips, the more they invest. Obviously Altman doesn't care about safety (I don't either; I'm not an AI-doomer). But he truly does come across as someone incapable of telling the truth. Are you even a liar if honesty is not in the set of possible outcomes?

Still, there's something oddly reassuring here: if you believe "AI safety" is essentially a buzzword (as I do), then this whole affair comes down to people squabbling over money and power. There really is nothing new under the sun.

simoncion - 14 hours ago

Can Sam "The board can fire me, I think that's important." Altman be trusted?

If for no other reason, given what happened when the board fired him... no. I'd say not.

hirako2000 - 5 hours ago

tautology

AbuAssar - 5 hours ago

no

mayhemducks - 12 hours ago

I would really appreciate it if someone in the know could explain to me how a Markov chain with some backpropagation can surpass human cognition. Because right now I call BS.

- 14 hours ago
[deleted]
jrflowers - 11 hours ago

I hope somebody just publishes The Ilya Memos. Sounds like a fun read

o0-0o - 11 hours ago

Hey, Ronan. Did the IPO come up at all in the research or interviews for this article? A yes or no will suffice, and color it if you want. ~_^

zoklet-enjoyer - 13 hours ago

I believe Annie Altman.

lnenad - a day ago

This whole situation goes to show that yesterday's conspiracy theorists are today's realists. What's happening to the USA's leadership, to the country itself, and to its top companies is really scary for the rest of us. If this trend continues, we're all definitely gonna end up in a kleptocracy.

thewileyone - 11 hours ago

[flagged]

jerrygoyal - 5 hours ago

could someone please give a tldr? this was way too long

imagetic - 9 hours ago

No.

therobots927 - a day ago

Excellent work. I’ll have to wait until we get the print version delivered to finish, as I’m not signed into The New Yorker on my phone.

I’ve always been a huge fan of Ronan Farrow’s journalism and willingness to speak truth to power. I think he’s pulling at exactly the right thread here, and it’s very important to counteract Altman’s reputation laundering given that we run a very real risk of him weaseling his way into the taxpayer’s wallet under the current administration.

davidmurdoch - 11 hours ago

"Good luck, have fun, don't die."

wileydragonfly - 11 hours ago

No

y1n0 - 12 hours ago

Betteridge's law of headlines: no

GlibMonkeyDeath - 20 hours ago

Disclaimer: I have no association with any AI company and have never met Altman or any of the other top AI scientists.

The real question is: can anyone be trusted if the fever dreams of super-intelligence come true? Go ahead and replace Sam Altman with someone else - will it make a difference? Any other CEO is going to be under the same overwhelming pressure to make a profit somehow. I think the OpenAI story is messier because it was founded for supposedly altruistic reasons, and then changed.

Methinks many of Altman's detractors protesteth too much. He's doing his job as it is defined (make OpenAI profitable.) Nothing of substance in this article seemed to make him exceptionally "sociopathic" compared to any other tech CEO. It goes with the territory.

What depressed me most is that trillions of dollars are being raised for building what will undoubtedly be used as a weapon. My guess is the ROI on that money is going to be extremely bad for the most part (AI will make some people insanely rich, but it is hard to see how the big investors will get a return.) Could you imagine if the world shared the same vision for energy infrastructure (so we could also stop fighting wars over control of fossil fuels and spewing CO2?) A man can dream...

ProAm - 13 hours ago

Nope, never trust this man. His history proves why you cannot. Pure greed.

Aboutplants - a day ago

Seeing Sam Altman slowly degrade into the realization that he is in fact not as smart as others in this space has been fascinating to watch. He used to speak with enthusiasm and confidence and now he’s like a scared little boy who got in way too deep.

The last person this happened to was Sam Bankman-Fried, as investors and regular folk finally realized he was full of complete shit and could only talk the game for so long until the truth emerged.

guzfip - 18 hours ago

> Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”

lol, do you think these guys have ever been hit? Let alone in the face. They'd probably be less eager to mouth off if they had.

firemelt - 4 hours ago

obviously not

andrewstuart - 8 hours ago

Meh. I’m no particular fan of Altman but there’s nothing in this article particularly surprising or terrible.

The whole AI safety thing has always seemed extreme to me and has turned out to be a storm in a teacup. All those prominent people who used to tell us how AI will end humanity seem to have stopped talking about it.

I get the sense that Altman is not a particularly likeable person, but Bill Gates and Steve Jobs both seem to have scored a 10/10 on their "is this guy a jerk" rating; it's common for tech CEOs.

So, the article and headline are dramatic but not much really there.

I think all the AI safety obsessed people turn out to have been the ones off course.

Cheyana - a day ago

Harvey Dent…

thm - a day ago

Hubris.

nickphx - 14 hours ago

Speak for yourself, he doesn't control my future.

jojobas - 13 hours ago

The guy called out for being a sociopath by a multitude of Silicon Valley CEOs, of all people — sure, we can trust him with our future.

seba_dos1 - a day ago

Looks like Betteridge's law of headlines applies here too.

smcg - 12 hours ago

Rule of Headlines says "no"

josefritzishere - a day ago

Betteridge's law of headlines is an adage that states: "Any headline that ends in a question mark can be answered by the word 'no.'"

sumeno - a day ago

Betteridge strikes again

ambicapter - 15 hours ago

> The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

These sociopaths are so good at giving away nothing. He managed to engender sympathy instead of saying "I'm not gonna talk about anything that happened then".

It's also very weird how many of these people are so deeply linked that they'll drop everything they're doing just to get this guy back in power. Terrifying cabal.

drivingmenuts - a day ago

Short answer: No. Long answer: Hell, no.

selimthegrim - 9 hours ago

Quite frankly, if he went and scrubbed (or had scrubbed) a Facebook thread I got in an argument with him on in 2018 (around the last time someone did an article about him), I can only imagine how obsessive he is about controlling his past and information about it.

danielszlaski - 2 hours ago

[dead]

tylerchilds - 14 hours ago

[dead]

ihsw - 13 hours ago

[dead]

HarHarVeryFunny - 15 hours ago

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

surcap526 - 19 hours ago

[dead]

huflungdung - a day ago

[dead]

giwook - 14 hours ago

tl;dr

No, he cannot.

covercash - a day ago

[flagged]

romeroej - 15 hours ago

Can anybody tho?

- 13 hours ago
[deleted]
neya - a day ago

[flagged]

FpUser - 15 hours ago

>"Sam Altman may control our future"

TLDR but just the heading is already ugly. No single person no matter how nice they're should be able to control our future. Power corrupts, what fucking trust. We are supposed to be democratic society (well looking at what is going on around this is becoming laughable)

asK1ajsh - 14 hours ago

The New Yorker is owned by Condé Nast, as is Reddit. Condé Nast has a deal with OpenAI:

https://www.reuters.com/technology/openai-signs-deal-with-co...

This is a damage control piece, and you see that the most stinging comments here get downvoted.

gchokov - a day ago

He is cooked. Only a matter of time before the whole thing blows up. Once a scammer, always a scammer.

aduty - 14 hours ago

LOL, no.

ahartmetz - a day ago

Well, no, obviously not. Not one bit.

nielsbot - 15 hours ago

No one person controls our future. Stop there.

killbot5000 - 14 hours ago

No. Why is this a question?

LetsGetTechnicl - a day ago

No

catigula - a day ago

1. No.

2. You cannot "control" superintelligent AI.

ekjhgkejhgk - a day ago

No.

aksss - 15 hours ago

"could", "may", "might" - these words do so much heavy lifting in "journalism". Almost always it's an invitation to worry and be miserable.

drob518 - 10 hours ago

[flagged]

bijowo1676 - 14 hours ago

This article is just another typical New Yorker fluff piece that tries to look deep but misses the actual point.

The biggest flaw is that it spends way too much time on high-school level drama and "he-said-she-said" gossip about Sam Altman’s personal life instead of focusing on the actual technical and corporate capture of OpenAI.

The author treats the "nonprofit mission" like some holy quest that was "betrayed," when anyone with a brain in tech saw the Microsoft deal as the moment the original vision died. Instead of a hard-hitting look at how compute-monopolies are actually forming (MSFT AMZN NVDA and circular debt dealing inflating the AI bubble that could crash the economy), we get 5,000 words of hand-wringing over whether Sam is a "nice guy" or a "liar."

Who cares?

The board failed because they had no real leverage against billions of dollars, not because they didn't write enough Slack messages. It's a long-winded way of saying "Silicon Valley has internal politics," which isn't news to anyone here.

ninjahawk1 - 15 hours ago

OpenAI is like #3 or #4 of the AI companies right now in terms of power, and last place in the court of public opinion.

I’d be more concerned about Anthropic both being in the good graces of the public and having access to all of our computers indirectly with Claude Code.

quantified - 14 hours ago

A bit of a feeling of "so what" here. Maybe he's less trustworthy than some. We have people of X trustworthiness running the government, crypto exchanges, a certain space exploration and satellite company, social media companies, and so on. We know their trustworthiness. Isn't the real issue how to cope?

rambambram - 4 hours ago

Any idea how stupid this title sounds!? It's past exaggeration.

aryehof - 6 hours ago

I might expect such a subjective, gossipy exposé of a public official, but this, of a private individual at a commercial company outside the public sector?