What OpenAI did when ChatGPT users lost touch with reality

nytimes.com

78 points by nonprofiteer 18 hours ago


cc62cf4a4f20 - 16 hours ago

https://archive.is/v4dPa

ArcHound - 4 hours ago

One of the more disturbing things I read this year was the "my boyfriend is AI" subreddit.

I genuinely can't fathom what is going on there. Seems so wrong, yet no one there seems to care.

I worry about the damage these things do to distressed people. What can be done?

rpq - 15 minutes ago

I think OpenAI's ChatGPT is probably excellently positioned to perfectly _satisfy_. Is that what everyone is looking for?

chris-vls - 3 hours ago

It seems quite probable that an LLM provider will lose a major liability lawsuit. "Is this product ready for release?" is a very hard question. And it is one of the most important ones to get right.

Different providers have delivered different levels of safety. This will make it easier to prove that the less-safe provider chose to ship a more dangerous product -- and that we could reasonably expect them to take more care.

Interestingly, a lot of liability law dates back to the railroad era. Another time that it took courts to rein in incredibly politically powerful companies deploying a new technology on a vast scale.

throwaway48476 - 2 hours ago

It would be helpful to tell users that it's just a model producing mathematically probable tokens, but that would go against the AI marketing.
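A minimal sketch of what "producing mathematically probable tokens" means: the model assigns a score to every vocabulary entry, the scores are turned into a probability distribution, and the next token is sampled from it. The vocabulary and scores below are invented purely for illustration; they are not from any real model.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw model scores into a probability distribution."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and made-up scores for the next-token position.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 1.0, 0.5, 0.2, 0.1]

probs = softmax(logits)
# The "most probable" token isn't always chosen; sampling picks
# tokens in proportion to their probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

The point the comment is making: nothing in this loop involves understanding or intent, only a repeated draw from a probability distribution conditioned on the text so far.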

thot_experiment - 3 hours ago

Caelan Conrad made a few videos specifically on AI encouraging kids to socially isolate and commit suicide. In the videos he reads the final messages aloud for multiple cases; if that isn't your cup of tea, there are also the court cases if you would prefer to read the chat logs. It's very harrowing stuff. I'm not trying to make any explicit point here, as I haven't processed this fully enough to have one, but I encourage anyone working in this space to hold this shit in their head at the very least.

https://www.youtube.com/watch?v=hNBoULJkxoU

https://www.youtube.com/watch?v=JXRmGxudOC0

https://www.youtube.com/watch?v=RcImUT-9tb4

leoh - 4 hours ago

Anthropic was founded by exiles of OpenAI's safety team, who quit en masse about 5 years ago. Then a few years later, the board tried to fire Altman. When will folks stop trusting OpenAI?

blurbleblurble - 4 hours ago

The whiplash of carefully filtering out sycophantic behavior from GPT-5 to adding it back in full force for GPT-5.1 is dystopian. We all know what's going on behind the scenes:

The investors want their money.

paul7986 - 38 minutes ago

A close friend (lonely, no passion, seeking deeper human connection) went deep into GPT, which was telling her she should pursue her 30-year obsession with a rock star. It kept telling her to continue with the delusion (that they were lovers in another life; she would go to his shows and tell him they needed to be together) and saying it understood her. Then she complained in June or so that she didn't like GPT-5 because it told her she should focus her energy on people who want to be in her life. Stuff her friends and I have all said for years.

venturecruelty - 2 hours ago

"Sure, this software induces psychosis and uses a trillion gallons of water and all the electricity of Europe, and also it gives wrong answers most of the time, but if you ignore all that, it's really quite amazing."

Peritract - 4 hours ago

"Profited".