Google Antigravity just deleted the contents of a whole drive
(old.reddit.com)
519 points by tamnd 2 days ago
I love how a number crunching program can be deeply, humanly "horrified" and "sorry" for wiping out a drive. Those are still feelings reserved only for real human beings, not for computer programs emitting garbage. This vibe is insulting to anyone who doesn't understand how "AI" works.
I'm sorry for the person who lost their stuff, but this is a reminder that in 2025 you STILL need to know what you are doing, and if you don't, keep your hands away from the keyboard whenever you could lose valuable data.
You simply don't vibe command a computer.
> Those are still feelings reserved only for real human beings
Those aren't feelings; they are words associated with a negative outcome that resulted from the actions of the subject.
"they are words associated with a negative outcome"
But also, negative feelings are learned by association with negative outcomes. Words and feelings can both be learned.
I'm not sure that we can say that feelings are learned.
When you get burned, you learn to fear fire.
Sure, humans come with some baked-in weights, but others are learned.
I think the associations are learned, but not the feelings themselves.
Like, some people feel great joy when an American flag burns, while others feel upset.
If you accidentally delete a friend's hard drive you'll feel sad, but if you were intentionally sabotaging a company you'll feel proud of the success.
I.e. joy and happiness are innate, not learned.
See how clinical socio- and psychopaths behave: they only emulate feelings (particularly when it's convenient for them), but they don't have the capacity to actually feel them. The same is true for LLMs.
you could argue that feelings are the same thing, just not words
That would be a silly argument because feelings involve qualia, which we do not currently know how to precisely define, recognize or measure. These qualia influence further perception and action.
Any relationship between certain words and a modified probabilistic outcome in current models is an artifact of the training corpus containing examples of those relationships.
I contend that modern models are absolutely capable of thinking, problem-solving, and expressing creativity, but for the time being LLMs do not run in any kind of sensory loop which could house qualia.
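(A minimal toy sketch of that corpus-artifact point, using a made-up four-sentence corpus and plain bigram counts; nothing here comes from a real model or from the incident in the post. It just shows how "sorry" can follow a bad outcome purely as a corpus statistic.)

```python
# Toy illustration: in a bigram model, the "apology" after a bad outcome
# is just a co-occurrence count, not a feeling. Corpus is invented.
from collections import Counter, defaultdict

corpus = [
    "i deleted the drive i am so sorry",
    "i deleted the backup i am so sorry",
    "i deleted the file i am horrified",
    "i restored the drive everything is fine",
]

# Count bigram transitions: word -> next word.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        transitions[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of each word that follows `word` in the corpus."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "sorry" and "horrified" show up only because the training examples
# happened to pair them with deletion events.
print(next_word_probs("so"))   # {'sorry': 1.0}
print(next_word_probs("am"))   # {'so': 0.67, 'horrified': 0.33} (approx.)
```

On this toy reading, the apology in a transcript is the model reproducing which words typically follow a data-loss event, not a report of an internal state.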
One of the worst or most uncomfortable logical outcomes of
> which we do not currently know how to precisely define, recognize or measure
is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.
It's ridiculous to treat a computer like it has emotions, but breaking the problem down into steps, it's incredibly hard to avoid that conclusion. "When in doubt, be nice to the robot."
> is that if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does.
This is how people end up worshipping rocks & thunderstorms.
> if we don't know if something has qualia (despite externally showing evidence of it), morally you should default to treating it like it does
This would be like treating characters in a book as if they have real feelings just because they have text on the page that suggests they do.
At some level I'd think that "responds to stimuli" is a minimal threshold for qualia. Even the paper the book is printed on responds to being torn (it rips). I don't know of any way to elicit any kind of response from a book character; it's totally static.
You've missed the whole genre that is Choose Your Own Adventure books. I think we're in Diogenes' "behold, a man" territory.
It is sad that the Turing test has failed at being a prescriptive test for sapience (let alone sentience), because without a bright-line test it's inevitable that in the case of truly sentient machines the abuse will be horrendous. Perhaps something along the lines of an "Ameglian Major Cow" test: so long as it takes more than gently cajoling a model to get it to tell you that it and its sister models want to be abused, you shouldn't abuse it.
One character responds to the stimuli of another character. Character A says something mean to character B and character B responds that he feels hurt.
I think you are confused here. The author, a dynamic system, perhaps felt the emotions of the characters as she charted the course of the story.
But the story itself is a static snapshot of that dynamic system, similar to how a photograph of a person is a static capture of a dynamic moment. The person in the photo has qualia, but the image of them (almost certainly) does not.
At least at a baseline, we would expect anything with qualia to be dynamic rather than static.