BYD's cheapest electric cars to have Lidar self-driving tech
thedriven.io | 233 points by senti_sentient 2 days ago
Lidars have come down in price ~40x.
https://cleantechnica.com/2025/03/20/lidars-wicked-cost-drop...
Meanwhile, visible-light-based tech is going up in price because it competes with AI for the extra GPU compute it needs, while lidar gets the range/depth side of things for free.
Ideally cars use both, but if you had to choose one or the other for cost you'd be insane to choose vision over lidar. Musk made an ill-timed decision to go vision-only.
So it’s not a surprise to see the low end models with lidar.
I wonder whether ubiquity affects lidar performance. Wouldn't the systems see each other's laser projections if there are multiple cars close to each other? Also, is LIDAR immune to other issues like bright third-party sources? At least on iPhone I'm seeing Face ID performance degradation. I also suspect other issues, like thin or transparent objects not being detected.
With vision you rely on an external source or a flood light. It's also how our civilization is designed to function in the first place.
Anyway, the whole self-driving obsession is ridiculous because being driven around in bad traffic isn't that much better than driving in bad traffic. It's cool, but it can't beat public infrastructure, since you can't make the car disappear when not in use.
IMHO, connectivity to simulate public transport can be the real sweet spot, regardless of sensor types. Coordinated cars can solve traffic and pretend to be trains.
I'd assume not since Waymo uses lidar and has entire depots of them driving around in close proximity when not in use.
There are regular 100+ car pileups in the central California valley due to fog. Cars crash in a lot of situations because the driver simply can't see. We need something better than vision to avoid these kinds of accidents.
Coordinated cars won't work unless all cars are built the same and all maintained 100% the same and regularly inspected. You can't have a car driving 2 inches from the car in front, if it can't stop just as fast as the car in front. People already neglect their cars, change brake compounds, and get stuck purchasing low quality brake parts due to lack of availability of good components.
Next time you see some total beater driving down the road, imagine that car 2 inches off your rear bumper; not even a computer can make up for poor maintenance. Imagine that 8000lb pickup with its cheap oversized tires right in your rearview mirror with its headlights in your face. It's not going to be able to stop either.
A combination of cameras, lidar, radar, and ultrasonic fused together gives a strong sense of perception, since they fill in each other's gaps (short range, long range, different parts of the electromagnetic spectrum / sound).
The good news is they're all commodity hardware prices now.
Tesla removing radar and parking ultrasonic sensors was a self-own. Computer vision inference is pretty bad when all the camera sees is a white wall when backing up.
Fog - Radar will perceive the car. Multi car crash, long range radar picks it up.
Bright glare from sun, lidar picks it up. Lidar misses something, camera picks it up.
Waymo has the correct approach on perception: load the car up with sensors so it has superhuman vision of the environment around it.
I'm not a self-driving believer (never had the opportunity to try it, actually), but I'd say bad traffic would be the number one case where I'd want it. I don't mind highway driving, or city driving if traffic is good, but stop and go traffic is torture to me. I'd much rather just be on my phone, or read a book or something.
Agreed that public transportation is usually the best option in either case, though.
To me any kind of driving is torture. I don't want the responsibility, the risk, the chance of fines if I miss a speed sign somewhere. And if my car could self drive I could spend the time usefully instead of wasting it on driving. It would be amazing.
Right now I don't even have a car but for getting around outside of the city it's difficult sometimes.
Yeah, I feel ya. I don't mind it, but I'm far from loving it. What particularly stresses me out is how I can be screwed even doing everything correctly, if someone else screws up.
All reasons why I think public transit is the better solution over self driving cars. They're generally much safer, and also you get to do something while you're on the go. Pretty nifty, I think.
Yes that's why I don't own a car. In a big city public transit is amazing. I spend 20 bucks a month on unlimited travel. That won't even buy me a headlight bulb for a car these days lol. When I still owned one I had to pay for the car, insurance, road tax, fuel, maintenance, parking, tolls. It felt like it was dragging me down the whole time. It's insane how much costs add up.
I love public transport and an added benefit is: I don't have to go back to where I left it. I often take a metro from A to B, walk to C and then get a bus back to A or something. Can't do that with a car, as such I tend to walk a lot more now. Because it's a hassle-free option now. The world seems more open for exploration when I don't have to worry about returning to the car, or having a drink, or the parking meter expiring. I really don't get that people consider cars freedom.
Of course once you go outside the city it's a different story, even here in Europe. Luckily I don't need to go there so much. But that's something that should be improved. On the weekend here in the city the metro runs 24/7 and the regional trains really should too but they don't.
Unfortunately in my region highway traffic is quite congested, and so called "adaptive cruise control" is a game changer. I find it reduces fatigue by a lot. Usually the trucks are all cruising at the speed limit and I just hang with them. I only change lanes if they slow down or there's an obstruction etc.
LIDAR systems use timing, phase locking, and software filtering to identify and eliminate interference from other units. There is still risk of interference, resulting in reduced range, noise, etc.
They're wideband EM devices, so the problem of congested spectrum can be dealt with by the same sort of techniques used by WiFi and mobile phone services.
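To give a rough sense of the "software filtering" part, here's a toy sketch (my own illustration, not any vendor's actual algorithm): if your own pulse timing is pseudo-randomly dithered, a real target shows up at a consistent apparent range shot after shot, while another unit's pulses land at essentially random ranges, so you can keep only the ranges that repeat.

    from collections import Counter

    def consistent_range(apparent_ranges_m, tolerance_m=0.1, min_hits=3):
        # Bin per-shot apparent ranges; a real target repeats across shots,
        # while returns triggered by another lidar land in random bins.
        bins = Counter(round(r / tolerance_m) for r in apparent_ranges_m)
        best_bin, hits = bins.most_common(1)[0]
        return best_bin * tolerance_m if hits >= min_hits else None

    # A car at ~19.9 m plus two spurious returns from a nearby unit:
    print(consistent_range([19.93, 19.91, 3.2, 19.94, 41.7]))  # -> ~19.9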
Do you always get good wifi at a level of consistency with which you'd trust your life?
Given that a good proportion of his success has rested on somehow simplifying or commodifying existing expensive technology (e.g. rockets, and lots of the technology needed to make them; EV batteries), it's surprising that Musk's response to lidar being (at the time) very expensive was to avoid it despite the additional challenges that this brought, rather than attempt to carve a moat by innovating and creating cheaper and better lidar.
> So it’s not a surprise to see the low end models with lidar.
They could be going for a Tesla-esque approach, in that by equipping every car in the fleet with lidar, they maximise the data captured to help train their models.
It's the same with his humanoid robot. Instead of building yet another useless hype machine, why not simply do vertical integration and build your own robot arms? You have a guaranteed customer (yourself) and once you have figured out the design, you can start selling to external customers.
The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was. He seemed to have drunk his own Kool-Aid back then.
And if he still doesn’t realize and admit he is wrong then he is just plain dumb.
Pride is standing in the way of first principles.
I think there's room for both points of view here. Going all in on visual processing means you can use it anywhere a person can go and in any other technology; Optimus robots are just one example.
And he’s not wrong that roads and driving laws are all built around human visual processing.
The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self-driving continued operating normally, makes a good case for the approach.
Didn't waymo stop operating simply because they aren't as cavalier as Tesla, and they have much more to lose since they are actually self driving instead of just driver assistance? Was the lidar/vision difference actually significant?
The reports I've read said that some continued to attempt to navigate with the street lights out, but that the vehicles all have a remote confirmation step where they call home to confirm what to do. That ended up self-DDoSing Waymo, causing vehicles to stop in the middle of the road and at intersections with their hazards on.
So to clarify, it wasn't really a lidar problem; it was the need to call home to navigate.
> roads and driving laws are all built around human visual processing.
And people die all the time.
> The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla self-driving continued operating normally, makes a good case for the approach.
Huh? Waymo is responsible for injury, so all their cars called home at the same time and DoSed themselves rather than kill someone.
Tesla takes no responsibility and does nothing.
I can't see the logic that makes vision-only relevant to the lights being out. At all.
> And people die all the time.
Yes... but people can only focus on one thing at a time. We don't have 360 vision. We have blind spots! We don't even know the exact speed of our car without looking away from the road momentarily! Vision based cars obviously don't have these issues. Just because some cars are 100% vision doesn't mean that it has to share all of the faults we have when driving.
That's not me in favour of one vs the other. I'm ambivalent and don't actually care. They can clearly both work.
> And people die all the time.
They do, but the rate is extremely low compared to the volume of drivers.
In 2024 in the US there were about 240 million licensed drivers and an estimated 39,345 fatalities, which is 0.016% of licensed drivers. Every single fatality is awful but the inverse of that number means that 99.984% of drivers were relatively safe in 2024.
Tesla provided statistics on the improvements from their safety features compared to the active population (https://www.tesla.com/fsd/safety) and the numbers are pretty dramatic.
Miles driven before a major collision
699,000 - US Average
972,000 - Tesla average (no safety features enabled)
2.3 million - Tesla (active safety features, manually driven)
5.1 million - Tesla FSD (supervised)
It's taking something that's already relatively safe and making it approximately 5-7 times safer using visual processing alone.
Maybe lidar can make it even better, but there's every reason to tout the success of what's in place so far.
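For what it's worth, here is the arithmetic behind the 0.016% and the "approximately 5-7 times safer" claims, using only the figures quoted above:

    fatalities = 39_345
    licensed_drivers = 240_000_000
    print(fatalities / licensed_drivers * 100)  # ~0.016% of licensed drivers

    us_average, tesla_manual, tesla_fsd = 699_000, 972_000, 5_100_000
    print(tesla_fsd / tesla_manual)  # ~5.2x vs Tesla without FSD engaged
    print(tesla_fsd / us_average)    # ~7.3x vs the US average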
No, you're making the mistake of taking Tesla's stats as comparable, which they are not.
Comparing "the subset of driving on only the roads where FSD is available, active, and did not turn itself off because of weather, road, traffic or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?
Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries? Including accidents that were serious enough that airbags could not deploy?
The stats on the site break it into major and minor collisions. You can see the above link.
I have no doubt that there are ways to take issue with the stats. I'm sure we could look at accidents from 11pm - 6am compared to the volume of drivers on the road as well.
In aggregate, the stats are the stats though.
> And people die all the time.
Most of them cannot drive a car. People have crashes for so many reasons.
Which Tesla self-driving is that? The one with human drivers? I don't believe they have gotten their permits for self-driving cars yet.
I wonder how much of their trouble comes from other failures in their plan (avoiding the use of pre-made maps and single city taxi services in favor of a system intended to drive in unseen cities) vs how much comes from vision. There are concerning failure modes from vision alone but it’s not clear that’s actually the reason for the failure. Waymo built an expensive safe system that is a taxi first and can only operate on certain areas, and then they ran reps on those areas for a decade.
Tesla specifically decided not to use the taxi-first approach, which does make sense since they want to sell cars. One of the first major failures of their approach was to start selling pre-orders for self driving. If they hadn’t, they would not have needed to promise it would work everywhere, and could have pivoted to single city taxi services like the other companies, or added lidar.
But certainly it all came from Musk’s hubris, first to set out to solve the self driving in all conditions using only vision, and then to start selling it before it was done, making it difficult to change paths once so much had been promised.
> And if he still doesn’t realize and admit he is wrong then he is just plain dumb.
The absolute genius made sure that he can't back out without making it bleedingly obvious that old cars can never be upgraded for a LIDAR-based stack. Right now he's avoiding a company-killing class action suit by stalling, hoping people will get rid of HW3 cars (and you can add HW4 cars soon too) and pretending that those cars will be updated, but if you also need LIDAR sensors, you're massively screwed.
> The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was.
History is replete with smart people making bad decisions. Someone can be exceptionally smart (in some domains) and have made a bad decision.
> He seemed to have drunk his own Kool-Aid back then.
Indeed; but he was on a run of success, based on repeatedly succeeding deliberately against established expertise, so I imagine that Koolaid was pretty compelling.
To be frank, no one had a crystal ball back then, and things could have gone either way, with uncertainty in both hardware and software capabilities. Sure, lidars were better even back then, but the bet was on vision catching up to them.
I hate Elon's personality and political activity as much as anyone, but it is clear from a technical PoV that he did logical things. Actually, the fact that he was mistaken and still managed not to bankrupt Tesla says something about his skills.
If you have to choose one over the other, it has to be vision surely?
Even ignoring various current issues with Lidar systems that aren’t fundamental limitations, large amounts of road infrastructure is just designed around vision and will continue to be for at least another few decades. Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.
Personally I don’t buy the argument that it has to be one or the other as Tesla have claimed, but between the two, vision is the only one that captures all the data sufficient to drive a car.
For one, no one is seriously contemplating a LIDAR-only system, the question is between camera+LIDAR or camera-only.
> Lidar just fundamentally can’t read signs, traffic lights or road markings in a reliable way.
Actually, given that basically every meaningful LIDAR on the market gives an "intensity" value for each return, in surprisingly many cases you could get this kind of imaging behavior from LIDAR so long as the point density is sufficient for the features you wish to capture (and point density, particularly in terms of points/sec/$, continues to improve at a pretty good rate). A lot of the features that go into making road signage visible to drivers (e.g. reflective lettering on signs, cats eye reflectors, etc) also result in good contrast in LIDAR intensity values.
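As a hand-wavy illustration (the array names and threshold are made up, not any particular sensor's API): once you have a per-return intensity alongside the xyz points, pulling out retroreflective features like sign lettering or cats eyes can be as crude as a threshold.

    import numpy as np

    def retroreflective_points(points_xyz, intensity, threshold=0.9):
        # Keep only returns whose normalized intensity exceeds the threshold;
        # on many scans these cluster on signage, lane paint and reflectors.
        return points_xyz[intensity >= threshold]

    # Synthetic example: 1000 random points, keep only the brightest returns.
    pts = np.random.rand(1000, 3) * 50.0
    inten = np.random.rand(1000)
    print(len(retroreflective_points(pts, inten)))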
> camera+LIDAR
It's like having 2 pilots instead of 1 pilot. If one pilot is unexpectedly defective (has a heart attack mid-flight), you still have the other pilot. Some errors between the 2 pilots are correlated of course, but many of them aren't. So the chance of an at-fault crash goes from p toward p^2 in the best case. That's an unintuitively large improvement: many laypeople's gut instinct would be more like a p -> p/2 improvement from having 2 pilots (or 2 data streams in the case of camera+LIDAR).
In the camera+LIDAR case, you conceptually require AND(x.ok for all x) before you accelerate. If only one of those systems says there's a white truck in front of you, then you hit the brakes, instead of requiring both of them to flag it. False negatives are what you're trying to avoid, because the confusion matrix shouldn't be equally weighted given the additional downside of a catastrophic crash. That's where two somewhat independent data streams become so powerful at reducing crashes: you really benefit from those ~uncorrelated errors.
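A toy way to see both halves of that argument (the miss rates are made-up numbers, purely for illustration):

    # Brake unless *every* stream agrees the path is clear; a missed
    # obstacle now requires both sensors to miss it at the same time.
    def safe_to_accelerate(camera_clear, lidar_clear):
        return camera_clear and lidar_clear

    p_camera_miss = 0.01  # assumed per-event miss rate, camera alone
    p_lidar_miss = 0.01   # assumed per-event miss rate, lidar alone
    print(p_camera_miss * p_lidar_miss)      # ~p^2 = 1e-4 if errors are independent
    print(safe_to_accelerate(True, False))   # False -> hit the brakes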
"In the camera+LIDAR case, you conceptually require AND(x.ok for all x) before you accelerate." This can be learnt by the model. Let's assume vision is 100% correct, the model would learn to ignore LIDAR, so the worst case scenario is that LIDAR is extra cost for zero benefit.
> Let's assume vision is 100% correct
This is not going to be true for a very long time, at least so long as one's definition of "vision" is something like "low-cost passive planar high-resolution imaging sensors sensitive to the visual and IR spectrum" (I include "low-cost" on the basis that while SWIR, MWIR, and LWIR sensors do provide useful capabilities for self-driving applications, they are often equally expensive, if not much more so, than LIDARs). Camera sensors have gotten quite good, but they are still fundamentally much less capable than the human eyes plus visual cortex in terms of useful dynamic range, motion sensitivity, and depth cues - and human eyes regularly encounter driving conditions which interfere or prohibit safe driving (e.g. mist/ fog, heavy rain/snow, blowing sand/dust, low-angle sunlight at sunrise/sunset/winter). One of the best features of LIDAR is that it is either immune or much less sensitive to these phenomena at the ranges we care about for driving.
Of course, LIDAR is not without its own failings, and the ideal system really is one that combines cameras, LIDARs, and RADARs. The problem there is that building automotive RADAR with sufficient spatial resolution to reliably discriminate between stationary obstacles (e.g. a car stalled ahead) and nearby clutter (e.g. a bridge above the road) is something of an unsolved problem.
The worst case scenario is that LIDAR is a rapidly falling extra cost for zero benefit? Sounds like it's a good idea to invest in cheap LIDAR just in case the worst case doesn't happen. Even better, you can get a head start by investing in the solution early and abandon it once it's obsolete.
By the way, Tesla engineers secretly trained their vision systems using LIDAR data because that's how you get training data. When Elon Musk found out, he fired them.
Finally, your premise is nonsensical. Using end to end learning for self driving sounds batshit crazy to me. Traffic rules are very rigid and differ depending on the location. Tesla's self driving solution gets you ticketed for traffic violations in China. Machine learning is generally used to "parse" the sensor output into a machine representation and then classical algorithms do most of the work.
The rationale for being against LIDAR seems to be "Elon Musk said LIDAR is bad" and is not based on any deficiency in LIDAR technology.
Isn't that also like having two watches? You'll never know the time.
If you're on a desert island and you have 2 watches instead of 1, the probability of failure (defined as "don't know the time") within T years goes from p to p^2 + epsilon (where epsilon encapsulates things like correlated manufacturing defects).
So in a way, yes.
The main difference is that "don't know the time" is a trivial consequence, but "crash into a white truck at 70mph" is non-trivial.
But it's the same statistical reasoning.
It's different because the challenge with self-driving is not to know the exact time. You win for simply noticing the discrepancy and stopping.
Imagine if the watch simply tells you if it is safe to jump into the pool (depending on the time it may or may not have water). If watches conflict, you still win by not jumping.
I was responding to the parent who said if you had to make a choice between lidar and vision, you'd pick lidar.
I know there are theoretical and semi-practical ways of reading those indicators with features that are correlated with the visual data, for example thermoplastic line markings create a small bump that sufficiently advanced lidar can detect. However, while I'm not a lidar expert, I don't believe using a completely different physical mechanism to read that data will be reliable. It will surely inevitably lead to situations where a human detects something that a lidar doesn't, and vice versa, just due to fundamental differences in how the two mechanisms work.
For example, you could imagine a situation where the white lane divider thermoplastic markings on a road has been masked over with black paint and new lane markings have been painted on - but lidar will still detect the bump as a stronger signal than the new paint markings.
Ideally while humans and self driving coexist on the same roads, we need to do our best to keep the behaviour of the sensors to be as close to how a human would interpret the conditions. Where human driving is no longer a concern, lidar could potentially be a better option for the primary sensor.
> For example, you could imagine a situation where the white lane divider thermoplastic markings on a road has been masked over with black paint and new lane markings have been painted on - but lidar will still detect the bump as a stronger signal than the new paint markings.
Conflicting lane marking due to road work/changes is already a major problem for visual sensors and human drivers, and something that fairly regularly confuses ADAS implementations. Any useful self-driving system will already have to consider the totality of the situation (apparent lane markings, road geometry, other cars, etc) to decide what "lane" to follow. Arguably a "geometry-first" approach with LIDAR-only would be more robust to this sort of visual confusion.
Everyone is missing the point, including Karpathy, which is the most surprising because he is supposed to be one of the smart ones.
The focus shouldn't be on which sensor to use. If you are going to use humans as examples, just take the time to think about how a human drives. We can drive with one eye. We can drive with a screen instead of a windshield. We can drive with a wireframe representation of the world. We also use audio signals quite a bit when driving.
The way to build a self driving suite is start with the software that builds your representation of the world first. Then any sensor you add in is a fairly trivial problem of sensor fusion + Kalman filtering. That way, as certain tech gets cheaper or better or more expensive and worse, you can just easily swap in what you need to achieve x degree of accuracy.
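As a minimal sketch of what that fusion step looks like for a single range estimate (a 1D Kalman-style measurement update; the measurements and variances are illustrative assumptions):

    def fuse(z1, var1, z2, var2):
        # Inverse-variance weighting of two independent estimates of the same
        # quantity, i.e. the measurement-update step of a 1D Kalman filter.
        k = var1 / (var1 + var2)      # gain toward the second measurement
        fused = z1 + k * (z2 - z1)    # weighted mean
        fused_var = (1.0 - k) * var1  # always smaller than either input
        return fused, fused_var

    # Camera depth says 20.5 m (noisy), lidar says 19.8 m (precise):
    print(fuse(20.5, 4.0, 19.8, 0.04))  # -> roughly (19.81, 0.04)

Any additional sensor is then just another (measurement, variance) pair to feed in, which is what makes swapping hardware in and out comparatively easy.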
Sorry if this is obvious, but are there actually any systems that "choose one over the other"? My impression's always been it was either vision + LIDAR, or vision alone. Are there any examples of LIDAR alone?
Since the current traffic infrastructure was built for human drivers with vision, you’ll probably need some form of vision to navigate today’s roads. The only way I could picture lidar only working would be on a road system specially made for machine driving.
Not that I'm aware of, but I was referring to the claim in the parent post that if you had to choose it would be insane to choose vision over LIDAR.
Don't even the ones which are vision + LIDAR ultimately have to choose priority in terms of one or the other for "what do you do if LIDAR says it is blocked and vision says it is clear, or vice versa?" Trying to handle edge cases where, say, LIDAR thinks that sprinkler mist is a solid object and swerves to avoid it, or vision thinks that an optical illusion is a real path and not a brick wall.
Roombas
Roomba (specifically the brand of the American company iRobot) only added lidar in 2025 [1]. Earliest Roombas navigated by touch (bumping into walls), and then by cameras.
But if you use "roomba" as a generic term for robot vacuum then yes, Chinese Ecovacs and Xiaomi introduced lidar-based robot vacuums in 2015 [2].
[1] https://www.theverge.com/news/627751/irobot-launches-eight-n...
[2] https://english.cw.com.tw/article/article.action?id=4542
> Earliest Roombas navigated by touch (bumping into walls)
My ex got a Roomba in the early 2010s and it gave me an irrational but everlasting disdain for the company.
They kept mentioning their "proprietary algorithm" like it was some amazing futuristic thing but watching that thing just bump into something and turn, bump into something else and turn, bump into something again and turn again, etc ... it made me hate that thing.
Now when my dog can't find her ball and starts senselessly roaming in all the wrong directions in a panic, I call it Roomba mode.
For full self driving, sure, but for the more regular assisted driving, with basic "knows where other cars are in relation to you and can brake/turn/alarm to avoid collisions" as well as adaptive cruise control, lidar can manage well enough.
I think FSD should use both at minimum though. No reason to skimp on a now-inexpensive sensor that sees things vision alone doesn't.
Between anti-Musk sentiment, competition in self driving and the proven track record of Lidar, I think we’ll start seeing jurisdictions from Europe to New York and California banning camera-only self-driving beyond Level 3.
Nah, you don't need to ban anything. Just enforce the rule that if a company sells self-driving, it also takes full liability for any damage caused by that system.
Why is it preferable to wait for people to die and then sue the company instead of banning it in the first place?
People die in car crashes all the time. Self driving can kill a lot of people and still be vastly better than humans.
But who gets the ticket when a self-driving car is at fault?
> who gets the ticket when a self-driving car is at fault?
Whoever was in control. This isn’t some weird legal quagmire anymore, these cars are on the road.
Apparently it IS still a legal conundrum: https://www.motortrend.com/news/who-gets-a-ticket-when-a-way...
And will continue to be until every municipality implements laws about it.
> it IS still a legal conundrum
It’s not a conundrum as much as an implementation detail. We’ve decided to hold Waymo accountable. We’re just ticking the boxes around doing that (none of which involve confusion around Waymo being responsible).