AI and Trust (2023)
schneier.com | 69 points by insuranceguru 2 hours ago
I can't accept this strange definitional divide between interpersonal trust and social trust. Trust is an infinitely grey experience, and it varies from situation to situation and from time to time.
Trust is just a word we use to describe how confident we are that the future will correspond to our expectations. Friends can lose the money you gave them to buy something, credit card machines can fail, AIs can order you the wrong product, I could get in a car accident on the way to the store. Do I "trust" that these schemes will go smoothly? Well, mostly (except the AI one).
I don't see a category error because there aren't categories here.
I strongly dislike that this title has been modified to editorialize (presently titled "Bruce Schneier: AI and the scaling of betrayal"). From the guidelines:
> please use the original title, unless it is misleading or linkbait; don't editorialize.
The title should be "AI and Trust", or "AI and Trust (2023)"
(2023) Discussion at the time (203 points, 91 comments) https://news.ycombinator.com/item?id=38516965
Title should be: AI and Trust
Thanks for the link. I missed that original discussion. It’s fascinating to read the 2023 takes now that we are actually living through the scaling phase he predicted. The concept of AI betrayal feels even more relevant today than it did then.
It's crazy that the marketplace seems to be an ongoing experiment in maximizing the number of times a company can defect, minimizing consumer anger, and exploiting assumptions of trust and good faith as frequently as possible without causing the consumer to defect completely. And it appears they've optimized that; we put up with shrinkflation, industrial waste repurposed as filler, processed ingredients derived from industrial wastes, high-quality products debased and degraded until all that remains is a memory of a flavor and the general shape, color and texture. Big Ag factory farming, pharma, healthcare products, all the rest - you think you can trust that a thing is the thing it's always been and we all assume it is, but nope.
Scratch any surface and the gilt flakes off - almost nothing can be trusted anymore - the last 30-40 years consolidated a whole lot of number-go-up, profit at any cost, ruthless exploitation. Nearly every market, business, and product in the US has been converted into some pitiful, profit optimal caricature of what quality should look like.
AI is just the latest on a long, long list of things that you shouldn't trust, by default, unless you have explicit control and do it yourself. Everywhere else, everything that matters will be useful to you iff there's no cost or leverage lost to the provider.
The "meta" has been solved and everyone's just min-maxing now. The few who aren't min-maxing are considered a waste.
AI, crypto, etc. feel like potentially new meta opportunities, and it is eerie how similar the mania is to what happens whenever a new major patch for a game is released. Everyone immediately starts exploring how to exploit and min-max the new niche. Everyone wants to be the first to "discover" a viable meta.
Brilliant take.
Competition nowadays is so intense and fine-grained. Every new innovation or exploration is eventually folded into the existing exploits, especially in monopolistic markets. Pricing models don’t change, revenue streams don’t either, and the consumer rarely benefits from these optimisation efforts; it all leads to greater profit margins by any means.
This to me is the most important point in the whole text:
"We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.
We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants."
I hadn't thought about it like that, but I think it's a great way to legislate.
We've needed that in software (not just AI) for a long time.
Not a popular take, especially within the HN crowd.
That said, it needs to be scaled. As he indicated, only certain professions need fiduciaries.
Anyone who remembers working in an ISO9001 environment can understand how incredibly bad it can get.
> Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.
A computer guy on a policy wonk reading diet makes for boring reading.
Policy wonks are often systemizers who think of society as a machine. That’s why they take the intuitive concept of informal everyday rituals like queueing (one that scarcely even needs explaining) and repackage it as yesteryear’s buzzword “trust”. We don’t need extrinsic rewards to queue politely. Amazing?
A computer guy is gonna take that and explain to us, of course, that society is like a machine. Running on trust. That’s the oil or whatever. Because there aren’t enough formal transactions to explain all the minute well-behavedness.
Then condescend about how we think of (especially) corporations as friends. Sigh.
What policy wonks are intentionally blind to are all the people who “trust” by not making a fuss. By just going along with it. Apathy and being resigned to your fate look the same as trust from an affluent picket-fence distance. Or like being a naive friend to corporations.
The conclusion is as exciting as the thesis. Status quo with bad bad corporations. But the government must regulate the bad corporations.
I’m sure I’ve commented on this before. But anyway. Another round.