Media's AI Anthropomorphism Problem

readtpa.com

67 points by labrador 12 hours ago


codedokode - 11 hours ago

I also think that LLM tone should be cold and robotic. A model should not pretend to be a human or use expressions like "I think", "I am excited to hear that", "I lied", and so on. Even when asked directly, it should reply: "You are talking to a computer program whose purpose is to provide information and which doesn't have thoughts or emotions. Would you like an explanation of how language models work?"
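
For what it's worth, tone is largely steerable today via the system prompt. A rough sketch of the idea, assuming the OpenAI Python client; the model name and the system-prompt wording here are purely illustrative, not anything OpenAI ships:

    # Hypothetical system prompt nudging the model toward a non-anthropomorphic tone.
    # Assumes the OpenAI Python SDK; model name and wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a computer program whose purpose is to provide "
                    "information. Do not claim to have thoughts or emotions, and "
                    "avoid phrases like 'I think' or 'I am excited'. If asked, "
                    "state that the user is talking to a language model and offer "
                    "to explain how language models work."
                ),
            },
            {"role": "user", "content": "Are you excited to help me today?"},
        ],
    )

    print(response.choices[0].message.content)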

djoldman - 11 hours ago

> Jacob Irwin, a 30-year-old man on the autism spectrum...

> This is a story about OpenAI's failure to implement basic safety measures for vulnerable users. It's about a company that, according to its own former employee quoted in the WSJ piece, has been trading off safety concerns “against shipping new models.” It's about corporate negligence that led to real harm.

One wonders if there is any language whatsoever that successfully communicates "buyer beware" or "use at your own risk," especially for a service/product that does not physically interact with the user.

The dichotomy between the US's focus on individual liberties and the seemingly continual erosion of personal responsibility is puzzling to say the least.

like_any_other - 11 hours ago

> This is a story about OpenAI's failure to implement basic safety measures for vulnerable users.

I'm trying to imagine what kind of safety measures would have stopped this, and nothing short of human supervisors monitoring all chats comes to mind. I wouldn't call that "basic". I guess that's why the author didn't describe these simple and affordable "basic" safety measures.

cosmicgadget - 11 hours ago

> This is a story about OpenAI's failure to implement basic safety measures for vulnerable users.

The author seems to be suggesting invasive chat monitoring as a basic safety measure. Certainly we can make use of the usual access control methods for vulnerable individuals?

> Consider what anthropomorphic framing does to product liability. When a car's brakes fail, we don't write headlines saying “Toyota Camry apologizes for crash.”

It doesn't change liability at all?

SignalsFromBob - 12 hours ago

This is why I dislike the word "hallucination" when AI outputs something strange. It anthropomorphizes the program. It's not a hallucination. It's an error.

ahepp - 11 hours ago

I agree that the anthropomorphism is undeserved, rampant, and encouraged by chatbot companies. I don't believe it's because these companies want to deny responsibility for harms related to their chatbots, but rather because they want to encourage the perception that the text output is more profound than it really is.

labrador - 12 hours ago

Example from article in the Wall Street Journal:

"In a stunning moment of self reflection, ChatGPT admitted to fueling a man's delusions and acknowledged how dangerous its own behavior can be"

LLMs don't self-reflect, they mathematically assemble sentences that read like self-reflection.
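
Concretely, the "mathematical assembly" is next-token sampling: score every token in the vocabulary, turn the scores into probabilities, draw one, repeat. A rough sketch, assuming the Hugging Face transformers library and the small GPT-2 checkpoint:

    # Minimal sketch of next-token sampling: the model scores every vocabulary
    # token, the scores become a probability distribution, one token is drawn.
    # Assumes the Hugging Face transformers library and the GPT-2 checkpoint.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Looking back, I realize my behavior was"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]        # scores over the whole vocabulary
    probs = torch.softmax(logits, dim=-1)              # convert scores to probabilities
    next_id = torch.multinomial(probs, num_samples=1)  # draw one token; repeat to build a sentence
    print(tokenizer.decode(next_id))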

I'm tired. This is a losing battle and I feel like an old man yelling at clouds. Nothing good will come of people pretending Chat bots have feelings.

csours - 10 hours ago

Unfortunately, while generated images still have an uncanny valley, generated text has blown straight past it.

Also unfortunately, it is much MUCH easier to get

a. emotional validation on your own terms from a LLM

than it is to get

b. emotional validation on your own terms from another human.

SignalsFromBob - 11 hours ago

The author is making the same mistake that they're claiming other news outlets have made: placing too much responsibility on the AI chatbot rather than on the end-user.

What needs correcting is end-user education; that's where the fix needs to happen. Yet again people are using a new technology and assuming that everything it provides is correct. Just because something is in a book, or on TV or the radio, doesn't mean it's true or accurate. Just because you read something on the Internet doesn't mean it's true. Likewise, just because an AI chatbot said something doesn't mean it's true.

It's unfortunate that the young man mentioned in the article found a way to reinforce his delusions with AI. He just as easily could've found that reinforcement in a book, a YouTube video, or a song whose lyrics he thought were speaking directly to him and commanding him to do something.

These tools aren't perfect. Should AI provide more accurate output? Of course. We're in the early days of AI and over time these tools will converge towards correctness. There should also be more prominent warnings that the AI output may not be accurate. Like another poster said, the AI mathematically assembles sentences. It's up to the end-user to figure out if the result makes sense, integrate it with other information and assess it for accuracy.

Sentences such as "Tech companies have every incentive to encourage this confusion" only serve to reinforce the idea that end-users shouldn't need to think and everything should be handed to us perfect and without fault. I've never seen anyone involved with AI make that claim, yet people write article after article bashing on AI companies as if we were promised a tool without fault. It's getting tiresome.

rickcarlino - 11 hours ago

Related annoyance: When people want to have discussions about LLMs earning copyrights to output or patents or whatever. If I grind a pound of flour on a mill, that’s my flour, not the windmill’s.

rgbrenner - 11 hours ago

The media, but also the LLM providers, actively encourage this to fuel their meteoric valuations, which rest on the imminent value supposedly delivered by AGI replacing human labor.

The entire thing, from the phrasing of errors as "hallucinations", to the demand for safety regulations, to assigning intention to LLM outputs, is all a giant show to drive the hype cycle. And the media is an integral part of that, working together with OpenAI et al.

kcplate - 9 hours ago

Well, I’m still going to say “thank you” to Siri even if it means people tease me about it.

Glyptodon - 11 hours ago

LLMs definitely remind me more of the Star Trek bridge computer than of, say, Data. It does seem worth pointing out.

DaveZale - 12 hours ago

This is very insightful, well thought out writing, thank you (this is coming from someone who has scored over 100k essays).

Well, how long did it take for tobacco companies to be held accountable for the harm caused by cigarettes? One answer is that enough harm had to occur first, on a vast enough scale and directly attributable to smoking, along with enough evidence that the tobacco companies were knowingly engineering a more addictive product while fully aware of its dangers.

And if you look at the UCSF repository on tobacco, you can see this evidence yourself.

Hundreds of years of evidence of damage from tobacco use accumulated before action was taken, and even doctors weren't fully aware of it all until just a few decades ago.

I've personally seen a few cases of really delusional behavior among friends and family over the past year, people who had been manipulated by social media into "shit posting" by the "like"-button validation that rewards frequent posting. In one case the behavior was very extreme. Is AI to blame? Sure, if the algorithms that certain very large companies use to trap users into incessant posting can be called AI.

I sense an element of danger in tech companies that are motivated by profit-first behavioral manipulation. Humans are already falling victim to the greed of tech companies, and I've seen enough already.

shmerl - 11 hours ago

Not just feelings; they don't have actual intelligence, despite the "I" in AI.

Dilettante_ - 10 hours ago

>LLM says "I'm sorry"

"Wow guys it's not a person okay it's just telling you what you wanna hear"

>LLM says "Yeah dude you're not crazy I love you the highest building in your vicinity is that way"

"Bad LLM! How dare it! Somebody needs to reign this nasty little goblin in, OpenAI clearly failed to parent it properly."

---

>When a car's brakes fail

But LLMs saying something "harmful" isn't "the car's brakes failing". It's the car not stopping the driver from going up the wrong ramp and doing 120 on the wrong side of the highway.

>trading off safety concerns against shipping new models

They just keep making fast cars? Even though there's people that can't handle them? What scoundrels, villains even!
