Hobbyist Builds AI-Assisted Rifle Robot Using ChatGPT
zmescience.com
55 points by Brajeshwar 7 hours ago
The AI part is a sideshow.
It's not actually needed or useful for that system.
The object tracking is the bit that's actually needed. Converting speech -> text -> vectors is really not that useful.
It's perfectly possible to build a self-contained, offline, reasonably accurate "anti-human" gun turret now. You don't need ChatGPT for that. If you're in a hurry, YOLO will do most of the work for you.
You might need it to raise money from people with too much cash though.
edit: ok so what's actually hard then?
Detecting "enemies" in anything other than a small room is actually quite hard. Sure, you can detect motion, but getting a range on that "enemy" is tricky.
Sure, you can use camera arrays, but that only gives you ~30 meters without extra resolution. Radar works, but then you have to get good at human classification with radar (perfectly possible, but nowhere near as off-the-shelf as camera-based models).
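A back-of-envelope sketch of why ranging a person with a single camera gets hard fast: under a pinhole model, range from apparent size is Z = f·H/h, and a person only spans a handful of pixels at distance. The focal length and person height below are illustrative assumptions, not measured values:

```python
# Rough monocular range estimate from the apparent height of a person,
# using the pinhole model: Z = f * H / h_pixels.
# Both constants are illustrative assumptions, not calibrated values.

ASSUMED_PERSON_HEIGHT_M = 1.7   # assumed real-world height of the target
FOCAL_LENGTH_PX = 1000.0        # assumed camera focal length in pixels

def estimate_range_m(bbox_height_px: float) -> float:
    """Estimate distance to a person from their bounding-box height."""
    return FOCAL_LENGTH_PX * ASSUMED_PERSON_HEIGHT_M / bbox_height_px

# At ~30 m a 1.7 m person spans ~57 px; at ~100 m only ~17 px, so a few
# pixels of detection jitter swamps the estimate — hence the ~30 m limit.
print(round(estimate_range_m(56.7), 1))   # ~30 m
print(round(estimate_range_m(17.0), 1))   # ~100 m
```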
Sure, if you hang around a lot of engineers, it’s not that impressive. Undergrads were building robots in the lab next door at university. Heck, I’ve seen some sick robots built by high school students.
But most people think it’s cool.
It’s entertainment.
But yeah, the AI part is absolutely not interesting at all.
Radar is extremely tricky, it's hard to get raw data out of automotive grade modules and they give really odd processed results outside of a road environment.
An Oak-D long range stereo camera has an 8% error at 300m though. Could also use a 1D lidar.
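A rough sketch of where an error figure like that comes from: stereo depth error grows with the square of range, dZ ≈ Z²·Δd / (f·B). The baseline, focal length, and sub-pixel disparity error below are guessed illustrative values, not actual Oak-D specs:

```python
# Why stereo range error blows up with distance: depth error grows
# quadratically with range, dZ ~ Z^2 * d_disp / (f * B).
# Baseline, focal length and disparity error are assumptions for
# illustration only, not real device specifications.

def stereo_depth_error_m(z_m: float, baseline_m: float = 0.15,
                         focal_px: float = 1400.0,
                         disparity_err_px: float = 0.05) -> float:
    """Approximate depth error of a stereo pair at range z_m."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Error stays sub-meter at close range but reaches tens of meters
# (several percent of range) out at 300 m.
for z in (30, 100, 300):
    err = stereo_depth_error_m(z)
    print(f"{z} m: +/-{err:.1f} m ({100 * err / z:.1f}%)")
```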
I am curious about your 1D lidar suggestion. Would you use it to detect changes in position that approximate the result of human gait?
Well, you have the direction vector toward the detected person from YOLO, right? Sampling that direction with a range finder shouldn't be impossible with a decent pan-tilt. Might be a problem if they're moving fast, I suppose, depending on the distance. A solid-state lidar with beam steering would be better, but those are still rather expensive.
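That aiming step can be sketched as a pixel-to-angle mapping for the pan-tilt mount. The resolution and fields of view below are assumed values, and the linear mapping is only an approximation of the true atan((px - cx)/f) projection:

```python
# Sketch: convert a detection's pixel position (e.g. a YOLO bbox center)
# into pan/tilt angles so a 1D rangefinder on a pan-tilt mount can sample
# that direction. Resolution and FOV below are illustrative assumptions.
# The linear mapping is approximate; a real system would use
# atan((px - cx) / f) with a calibrated focal length.

IMG_W, IMG_H = 1920, 1080        # assumed camera resolution
HFOV_DEG, VFOV_DEG = 70.0, 43.0  # assumed horizontal/vertical FOV

def pixel_to_pan_tilt(px: float, py: float) -> tuple:
    """Map a pixel to (pan, tilt) in degrees; (0, 0) is the optical axis."""
    pan = (px / IMG_W - 0.5) * HFOV_DEG
    tilt = (0.5 - py / IMG_H) * VFOV_DEG   # positive tilt = up
    return pan, tilt

print(pixel_to_pan_tilt(960, 540))   # image center -> (0.0, 0.0)
```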
yeah, they're tuned for automotive stuff, so great for anti-car/truck (kinda)
The hard part is the sensor fusion: blending wide-angle imprecise sensors with precise but narrow-field-of-view sensors. I mean, it's possible to do in open source (I'm doing it currently, and I don't really have a computer vision background). However, I'm only using cameras, as radar is too far removed from my knowledge base to be productive.
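One classic building block for that kind of fusion is inverse-variance weighting (the scalar Kalman update): trust each sensor in proportion to how precise it is. The bearing values and variances here are made-up illustrative numbers:

```python
# Minimal sketch of one fusion step: combine a wide-FOV, imprecise
# bearing estimate with a narrow-FOV, precise one by inverse-variance
# weighting (the scalar Kalman update). All numbers are illustrative.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Fuse two independent estimates of the same quantity."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # always smaller than either input
    return fused, fused_var

# Wide-angle camera says 12 deg (sigma ~2 deg); zoomed camera says
# 10.5 deg (sigma ~0.3 deg). The fused estimate lands near the
# precise sensor, with lower variance than either alone.
bearing, var = fuse(12.0, 2.0 ** 2, 10.5, 0.3 ** 2)
print(round(bearing, 2), round(var, 3))
```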
You could do this same thing with Alexa routines and an API. I feel like it’s just a scam for views, is there an ad in the video?
If he’s serious, getting banned from ChatGPT isn’t much of an issue because a locally running LLM is perfectly capable of the same.
The ChatGPT part was definitely a bit of a lark for views. Check out the rest of his videos though. The mechanical engineering of the turret is awesome and terrifying.
The thing is, ChatGPT can do quite a lot. Using it for many different tasks with minimal adjustments is noteworthy.
> For its part, OpenAI cut off STS 3D from ChatGPT after the videos gained traction, citing internal policies against using “our service to harm yourself or others,” which includes the development or “use of weapons.”
I wonder how effectively they can enforce a ban… there are ways of buying OAI completions where you never even reveal your identity.
Their ultimate concern isn't some prepper who surreptitiously accesses their services for an auto-turret.
They don't want the bad publicity when that goes wrong, of course, but the prevailing concerns are (1) giving allied politicians room to say "they care! see how they try!" as they try to shape regulation into a moat, and (2) preventing competition against their own projects and products for large "defense" institutions.
They're just making sure that this kind of use isn't normalized or presumed -- a much lower standard than eradicating it completely.
> preventing competition against their own projects and products for large "defense" institutions
Large defense contractors build their own systems: some of the best tools for CAD, SCADA, CAM and project management come from companies like Thales and Dassault. I also have it on good authority that major US defense contractors' people are very explicitly banned from interacting with any non-locally-hosted LLM.
Whereas, uncensored LLMs (and the like) with knowledge of firearms and explosives manufacturing are a public-health risk for sure, but not a national-security risk.
That’s a very narrow focus. The democratization of drones, 3D printing and stuff like this is enormously dangerous.
In practice it was a targeted ban, probably because this video got traction.
Analogously, with a lot of grey area stuff, a person can almost surely get away with it ... but not if they post it online and it gets attention.
they can just buy OpenAI credits with crypto and no KYC, and they're immediately unbannable
They're perfectly bannable if OpenAI is able to flag certain prompts and certain client IPs. Payment modes and accounts aren't the only indicators here. The real problem is building such a brittle system that it can be cut off by your SaaS supplier or is rendered useless by an internet outage.
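A toy sketch of the kind of server-side flagging being described, combining prompt-content signals with client signals (IP reputation, request velocity) into one decision. The terms, weights, and thresholds are invented for illustration, not anything OpenAI actually uses:

```python
# Toy abuse flagger: score a request on prompt content plus client
# signals and flag it above a threshold. All terms and thresholds
# here are made up for illustration.

FLAGGED_TERMS = {"servo", "trigger", "fire at", "target the person"}

def risk_score(prompt: str, ip_reputation: float, req_per_min: int) -> float:
    """Toy risk score in [0, 1]; higher means more suspicious.
    ip_reputation is 1.0 for a trusted client, 0.0 for a known-bad one."""
    content = sum(t in prompt.lower() for t in FLAGGED_TERMS) / len(FLAGGED_TERMS)
    velocity = min(req_per_min / 60.0, 1.0)   # saturate at 60 req/min
    return max(content, 1.0 - ip_reputation, velocity)

def should_flag(prompt: str, ip_reputation: float, req_per_min: int) -> bool:
    return risk_score(prompt, ip_reputation, req_per_min) >= 0.5

print(should_flag("what's the weather", 0.9, 5))
print(should_flag("rotate servo and fire at the target the person", 0.9, 5))
```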
there are resellers like OpenRouter, so a ban only sticks if OpenAI has perfect coordination with them.
yes they can flag certain prompts, true
Technically, the robot has been a human thus far (us). We tell them to point the gun here, here, and here, and shoot.
Everyone's concerned about what AI will bring, but it's worth looking at what the past shows: we have been the robots for real.
This type of system is already mass-deployed by the Chinese PLA: https://youtu.be/YOLmVfFHbaY?si=Z2G1ocBLTE6ZRCfT
Modern technologies like AI and robotic weapons are similar to World War I's chemical gases and airplanes. Just as "air superiority" became a phrase in the 1930s, "drone superiority" or "robot superiority" will be a phrase describing modern warfare.
Hopefully politics, sociology, psychology, and economics have evolved enough in the past 100 years that we don't face the losses experienced in the first half of the 20th century.
That is awesome and terrifying.
You need more than just capability: a guided missile counts as a robotic weapon after all, and we've had them since the 1940s/50s.
So what is different in practice here?
Them becoming much cheaper / smaller / more accurate? (See Ukraine vs Russia.)
(When does this compensate for lower payload and speed?)
Meaning also much more accessible to terrorists?
(And the suicide kind, since the government will find you.
But then real suicide terrorists, I would assume, would find it much more appealing to do it by themselves, since that comes with a promise of paradise, virgins, you know the drill...)
Bringing up the obvious that ChatGPT is not required or even useful for building automated weapons systems misses the point.
ChatGPT follows absurd rules.
One might reasonably expect something like "As an AI language model, I can't provide targeting commands to your rifle's servos", but that is not what you get. It complies happily, while refusing to engage in mundane conversation on politics, for example.
Publicity stunt on the guy's part and PR management on OAI's part. Mostly just clickbait. You can make the same thing with open models running on a local device.
How long until someone figures out you can make a smaller version of this with high powered infrared lasers to burn out retinas instantly using face detection, in a way that no one even knows what’s happening? Just walk into an area, see nothing particularly interesting, then never see again.
> How long until
Minus a decade, if not more — I personally had (and then rejected) that specific idea as a world-building element in the novel I've still not finished writing.
(I'm not sure when Robert Miles did an eye tracking laser turret, or even if I dreamed it, but if he did it, it was also ages ago now).
Blinding lasers are prohibited by treaty, similar to chemical weapons, so probably not soon.
This reminds me of the CHiPs episode "Bright Flashes", where robbers temporarily blind patrons and staff so that they can rob stores. It's from 1982, when lasers were the hot new thing and imaginations ran wild. To my knowledge, it never became a thing, because why would it?
Similarly, why would you want to target an individual with an AI laser weapon to blind them? To kill them, sure, like the Israeli assassination of Mohsen Fakhrizadeh, but not to blind them.
Depending on the situation, a laser might be much easier to smuggle / prevent from attracting attention?
But I guess not meaningfully in practice: see the recent "exploding pagers" operation...
so, like, you prolly could, like, tell it to make a song about, like, a day of what its like to be a banging ai assisted rifle robot,double tappin, n stuff
my question though is, are words like cynicism and pessimism usable except in an ironic sense?