See 31:00 - 33:00. https://www.youtube.com/live/mIK1Y8ssXnU?si=fNx6k-MSNB18JNiD
From Rivian's autonomy day.
I'm curious how Tesla can actually get to level 5 once you've seen this simple demo.
I don't think Tesla will get to level 5 with cameras only. I know this is a boomer take, but I have no fucking use for truly autonomous driving until it's been proven tech for years. Basically, if my car still has a steering wheel, I'm not interested in totally looking away from the road while I ride in it.
I have zero desire to rely on a system to drive me if I'm the one responsible if it messes up. If I have to constantly be aware and take over if needed, then I may as well just drive myself.
In fact I would argue that really good FSD systems that still require intervention are more dangerous than bad ones since they'll lull you into trusting them until that time you shouldn't have.
Exactly. I would never trust FSD until the liability of an accident would fall on the car and not on me. Imagine getting in an accident that's your responsibility because you decided to let a shitty clanker drive your car while you scroll on your phone or whatever you were doing.
I would never trust FSD until the probability of an accident is low enough for my risk tolerance. Some human drivers don’t meet that threshold for me, even. Until I can, say, take a nap on my morning commute like I can with a bus or train, I’ll operate my car myself.
For FSD I believe this will require dedicated FSD-only infrastructure. Perhaps some day existing HOV and express lanes will become FSD lanes, and FSD lanes get extended into surface streets, and they all have sensors, transmitters, markers and whatever else is needed to give an FSD car full and confident information about its surroundings.
This is the most reasonable view into the future I’ve seen
The Mercedes system actually already will take liability...
Isn't that only on a few specific highways and below a certain speed though? It says a lot about their confidence in the system that they will take responsibility in those situations but it is a far cry from taking responsibility for what the cars do on the vast majority of roads they will be on.
Correct, but honestly, I wouldn't want a system like this to even attempt it if they weren't that confident.
They also have special lighting on some models to indicate it's operating autonomously.
And it certainly seems like a much more pragmatic approach, once the system has been proven to be faultless they can start opening it up...
I would never trust FSD from any Elon Musk company.
I won’t ride his rocket either.
That's already a part of defining levels of self driving. At level 4/5 manufacturers will be liable for accidents.
Mercedes has already demonstrated this; they will be liable if the car has an accident while their hands-free, eyes-off level 3 system is active.
I would also assume the liability part would be dependent on the maintenance of the vehicle? Like if the owner didn't follow the service schedule, presumably to a tee, the automaker could deny being held liable there?
Afaik, and IANAL, they'd have to prove the lack of maintenance caused the accident. Like not getting your cabin air filter replaced at whatever miles would have no bearing on self driving, but brakes failing from age and use would.
Great, until I have something with that kind of guarantee I’m not interested.
I agree. In fact, I think that car manufacturers should have to prove (on the road, over years) that a vehicle is impossible for a human driver to crash before FSD is deemed safe to be on the streets with the general public.
What Tesla is allowed to do is no different than giving its humanoid robot a gun to protect the home, having it accidentally shoot a neighbor, and the government letting it happen over and over again. I bet that would be a different story, even though the car can take many lives with one miscalculation. In my opinion.
"I have zero desire to rely on a system to drive me if I'm the one responsible if it messes up."
1000%. If they're not willing to put their money where their mouth is, then I'm not interested.
Right now I view these things like adaptive cruise control.
Google made the same decision years ago. Full, actual self driving - not a scam by that name - is best done without a wheel and pedals. Humans can't actually properly operate a car when 99.9% of the time it drives itself but occasionally needs immediate input for safety; people simply zone out, nap, watch tv, etc in that scenario.
Your argument is backwards and here’s why.
When you are driving by yourself, you are responsible for keeping the car in the lane, keeping an eye on other vehicles, keeping an eye on obstacles that aren’t vehicles.
If you view FSD or blue cruise or whatever as an additional “set of eyes” then your safety should improve. Now, if you hand over control and take a nap, that’s a different story.
It's not backwards at all though. The point I'm making is that any FSD system that I can't just hand things over to isn't saving me any effort, because if I have to be paying enough attention to catch it when something goes wrong, I may as well just be driving. I'm not talking about advanced driving aids; I explicitly mean things where I tell it where to go and it "drives".
Then you clearly haven’t used it. It is nothing but an advanced driving aid. It is incredibly useful.
I have, I don’t like them.
I found it useful for long stretches of empty highway where I'm confident there won't be anything suddenly jumping into my way and it's easy enough for my car's basic assistance system to manage.
Same for me, but I pretty much only use it when I'm dealing with a drink or snack on a long trip. Still keeping my eyes on the road. Just want use of both hands.
This would be my only use case for it too. I drive a lot for work, and when I've been in the car five hours in a day already, the last thing I want to do is stop for 15-20 minutes to eat dinner. If it makes it easier to eat while I'm driving, that's all I want. Although to be honest, adaptive cruise control and lane keep on my old AMG worked just fine for that.
I miss my work Sprinter with both of those. Now I have a shitty Maverick with no radar /ACC, and no lane keep.
You can do what you want for yourself, but I don't think anyone is actually recommending that anyone look away completely while driving with these systems yet. We should be looking at these systems and their effect on aggregate car crash data. There are about 40,000 fatalities per year in the US from car crashes. If widespread adoption of even these flawed systems reduces that number, then I think most people should be in favor of them.
You think you may not have use for them, but if the radar cruise control and self braking of the car behind you prevents a distracted person from rear ending you, then you are a beneficiary of this tech in some form. The problem is you would never even know that almost happened. The other person would realize their mistake, but you may have no idea that such a system prevented a crash you were the potential victim of.
That person should be paying attention, but people are flawed. Some make mistakes from obvious and selfish behavior like texting while driving. Others make honest mistakes, like getting momentarily distracted by music, or a conversation with their passenger, or just not reacting in time.
If the tech of the car behind you protects you from a crash, do you really care that it is not actually perfect? It just has to be good enough in decent conditions to lower statistical crash rates. They start there, and then progress. ABS isn't technically perfect, is it? It just has to be better on average at limit braking than an average driver to be useful. Technically, a driver should never need ABS if they are vigilant and have adequate following distance to react, but we know that isn't going to happen across the board with all drivers. So, I think these systems will likely improve safety and are therefore valuable. If you are a good driver and never get distracted for even a moment, then I tip my hat to you. I still think you would be safer with the tech in your car, even if you drive in such a way that it only ever engages a few times to step in and help you. It would just be one more layer of security.
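To put a number on what "thinking statistically" means here, a rough Python sketch with completely made-up figures (not real fleet data, just the kind of comparison that matters):

```python
# Toy comparison of fatal-crash rates per mile for an "assisted" fleet vs the
# general driving population. All numbers are invented for illustration.
import math

def crash_rate_z(crashes_a, miles_a, crashes_b, miles_b):
    """z-statistic for the difference between two crash rates (crashes per mile)."""
    rate_a, rate_b = crashes_a / miles_a, crashes_b / miles_b
    # Pooled rate under the null hypothesis that both groups crash at the same rate.
    pooled = (crashes_a + crashes_b) / (miles_a + miles_b)
    se = math.sqrt(pooled / miles_a + pooled / miles_b)
    return (rate_a - rate_b) / se

# Hypothetical: humans at ~1.2 fatalities per 100M miles over 100B miles,
# an assisted fleet at ~0.96 per 100M miles over 5B logged miles (20% better).
z = crash_rate_z(48, 5e9, 1200, 1e11)
print(f"z = {z:.2f}")  # about -1.5: suggestive, but not yet conclusive at the 95% level
```

The point being: individual crashes tell you nothing either way; you need enough miles for the difference to rise above the noise.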
"If widespread adoption of this tech reduces that number"
You wrote a lot of paragraphs based on an "if" that is just assumed and not at all proven
Tesla's terrible autopilot keeps causing accidents and killing people, so the notion that it's inherently more safe is incredibly flawed
As technology develops and costs drop, it eventually goes from being exploratory tech, to standard, and then eventually regulated.
Rear view cameras, as an example.
"exploratory tech"
Full Self Driving is a retail product that's been on sale for a decade
Yes. An L2 system that is actively and continuously improved. It has gone from basic adaptive cruise control to end-to-end operation, in a market that still hasn't delivered a complete autonomy package in a consumer vehicle.
It’s very much on the emergent side. You can change the term if it bugs you, the point is still the same. This is how technology goes from an idea to a standard feature.
Tesla is not the only system around. I think radar cruise control counts as a basic form of self driving, and I bet it does improve safety overall.
Tesla's own data does show that it is statistically safer. You have to believe their data, but what other data do we really have? Again, you are not thinking statistically over large populations and miles. A few deadly Tesla crashes are anecdotes, not statistics. You need to account for all the safe miles they have logged with the system and run the statistics to see if crash performance has improved.
Conflating ADAS with self driving is a mistake. The core problem with "self driving" is that it is sold on the idea that you don't have to drive, except you're still the driver and the safety fallback. That system fails and kills people.
Tesla is one of the most dishonest companies in existence
They lie constantly about everything
There's a reason nobody but them has ever reviewed their data, and it's because it doesn't stand up. If their cars are 10,000 times safer than human drivers, why are they making the humans take the liability when they crash? Because they're lying. If they believed it they'd own the liability
I'm not talking about just Tesla, but sure, they probably are fudging. Let's toss them out. I have been talking about the class of technology as a whole. Perhaps there is some confusion because of "FSD" kind of being a brand name that Tesla created. My apologies if I am unclear. Google's Waymo service has published some crash statistics. They literally have cars operating with no one behind the steering wheel right now!
The NHTSA literally defines radar cruise as level 1 self driving. It is a driver assist. I absolutely get to include that in any conversation about such technologies reducing crash probabilities. Again, Mazda, Toyota, Honda, Ford, and all the rest keep the liability on you with their level 1 systems. Do they not trust them? Again, these systems do not have to be 10,000x better. That is your standard, and it is a bad one. These systems just need to be a level of improvement that is statistically significant/noticeable AT ALL to be worth it.
All of these companies (even Tesla) have stated that the driver needs to remain in control of the vehicle for emergencies. No one (except maybe Waymo) is encouraging drivers to look away as the parent comment implied. You can argue that Tesla is using deceptive marketing to imply that, but most others are not (in my opinion). To act like there is no use or benefit to such technology between having nothing and completely solved autonomous driving seems unrealistic.
"That system fails and kills people." The current system fails and kills people. People fail and kill people. The question isn't whether this happens. The question is whether it happens more or less frequently in comparison. You want to crap on Tesla, go ahead. You are safer in you 1920s Ford Model T because they guy behind you has radar cruise which will almost definitely lower his risk of rear ending you. You don't have to use the technology for it to benefit you. So hold out all you want. As more cars come with these features as standard, the safer you become even if you don't participate actively.
I agree for my regular driving, but on long road trips I would appreciate the shit out of some autonomous driving.
You can use and benefit from FSD without handing complete control of the car over. Your inputs still override the computer’s inputs.
I completely disagree. I'm the kind to buy a comma.ai for every car I own that supports it.
I don't want camera only. What's the point of it all if they are going to rely on the same senses as us. I want LIDAR and all sorts of other sensors.
I have FSD v14 right now as a trial and I have to say, it's really REALLY good. I drive like 40K miles a year and it really does eliminate driving fatigue and is very safe. FSD doesn't mean you should be doing anything other than paying attention to the road. I certainly am not going to trust it implicitly, but it makes driving the long hauls WAY fucking better.
Self driving is more useful for preventing crashes from irresponsible people that drive distracted or impaired, which I think is a very good thing for society. For the average responsible driver it’s more of a luxury feature that I would use occasionally if I was tired or something, and it’s not something I would pay a ton of money for.
As someone that owns a car with FSD, I use it frequently, and I think as more people experience it they will come around to the idea more and more. It’s very useful if you’re driving in an unfamiliar area, peak hour traffic, or on the highway/motorway. Any time driving becomes a chore or stressful is where it really shines.
I'm not sure I'd trust it until all or most cars can be networked on FSD, which won't be for a very, very long time. All it takes is one bad driver in a clapped out 2006 Nissan Altima (or whatever, just totally picked a car at random) acting unpredictably to result in an accident. At least for now, human drivers are better at responding to unpredictable situations and misleading road conditions.
I do like some of the safety features in newer cars that help detect hazards and alert the driver (I drive rentals for work and my partner has a 2021 Bronco Sport that I drive sometimes), so I'm all for stuff that helps prevent accidents, think it's just a ways off for FSD.
I agree, as of now anything beyond adaptive cruise control is a little too “beta test with my life” for me.
If you are truly a boomer, you of all people should be interested in it, because one day soon you won't be able to drive as well as it can because of your aging reflexes. Also, imagine if you have a stroke or some medical issue: instead of relying on a family member to drive you to doctor's appointments, you can just have the autonomous car do it.
Yeah, unless I'm on a long drive in the middle of buttfuck nowhere with zero traffic, I'd prefer to be attentive and in control.
Honestly it’s a good take, and this is coming from a millennial software developer that works on some pretty cutting edge stuff for a big tech company.
I just placed an order earlier this week for an R1S and no way I’ll be using the autonomous features for at least a year or two.
Also will never update the software within a week of release unless it’s a critical fix.
Yeah, that's definitely a boomer take. Even in its current form, FSD takes a fair amount of the mental load and stress out of commuting.
I think autonomous driving will always have problems. There are many unique scenarios where only a human can think of how to react, and a machine won't know what to do. In SF there was a Waymo that killed a cat because it was sitting under the car right as it was about to take off. Nearby pedestrians could've warned a human driver about the cat, but they can't do that with a machine. Granted, in that accident it's something a human driver could've easily done as well.
That's the whole point of AI techniques though, to be able to react to things it hasn't seen before. It's only going to get better at doing that over time.
Manual driving has even more problems. And fixating on a single cat is absolutely bizarre when 40,000 Americans are killed every year in traffic accidents.
This is a big claim to be made based on marketing statements mostly from one of the most dishonest car companies in the entire industry
So you think that they're covering up hundreds of accidents and deaths?
https://www.tesladeaths.com/
65 and counting that have been crowdsourced since the company only releases slanted data that makes them look good
I'm talking about Waymo, Tesla does not have FSD and those deaths are almost always due to idiot drivers.
Even that 65 is probably lower than the average though.
I wholeheartedly disagree. I still have to watch out for phantom braking and FSD suddenly veering off course. If anything it actually significantly increases my stress.
This is only true if you don't use it as instructed.
You're the failsafe. That means you need to be alert, ready to react in an instant, and paying attention to everything in your surroundings at all times. Most people I know don't consider sitting with hover hands on the wheel, at rapt attention, waiting for it to mess up, to be relaxing.
Because in real life, nobody uses it that way. They sit there doing the bare minimum to defeat the attention monitoring systems while they let the car drive itself. Because hey, checking out and letting the car drive itself? That can be really relaxing if you erroneously trust the car to drive itself, even though *you* are driving, *you* are responsible and *you* are the failsafe.
The tech is only beneficial if people abuse it.
You don’t need to be hyper alert. Watch the road, be ready to react, just like normal. But you don’t need to be looking for gaps to merge, or worrying about missing your exit, or any of the other little tasks that cause mental load. Most people I know who have used it a lot say the same thing.
FSD will ONLY be worth trusting and work properly when it's in all of the cars, "talking" to each other via the same protocol.
Wrong, Waymo already has a far better track record than the vast majority of human drivers.
In fact, it doesn't matter where you get the data from. What's important is to use it all in one system.
I agree that one system would certainly be the best case scenario, but even a variety of FSD systems is almost certainly going to be safer than manual driving.
The problem with this data set is that Waymo has broad control over what environments the cars get used in. Can I take a Waymo on an icy mountain highway, after dark, up to a ski resort?
Their data is interesting, but it can't be compared to the average driver, or the vast majority of drivers, because their dataset is intentionally restrictive. That's a good thing from a business perspective, but from a data viewpoint, it's not apples to apples with humans.
Sure you can, you can compare Waymo to the drivers in the same area. And even if they aren't as good at driving around snowy mountains, that doesn't mean that they never will be. Hell, that might never even be a use case, and it doesn't have to be to save tens of thousands of lives every year.
Environment is not just geographic. It's also situational. Waymo can just not deploy their cars on icy days or in foggy conditions. Most humans don't have the luxury of just staying home when the weather is bad. The fact that Waymo has broad access to driver safety data means they can just avoid the times and places and weather events that correlate with a higher risk of damage to life and property.
I don't doubt that self driving vehicles are safer than the average driver, but I recognize that Waymo is intentionally presenting data that makes their product look safer and more advanced than it actually is.
I'm sure that Waymo plays up their own abilities, but even a marginal improvement over real life drivers can end up saving tons of lives. I just don't get why people are so opposed to this tech.
Car enthusiasts are just like this. Lots of car enthusiasts seriously believe that they can brake better than an ABS because they saw a video one time where a race car driver anticipating braking on a closed course was able to beat ABS stopping distance.
Being better than the average idiot doesn't make it better than an attentive, competent driver.
It doesn't need to be to save tens of thousands of lives.
According to who? And why are we comparing them to "The majority of human drivers" and not similar drivers in similar vehicles in similar conditions?
Oh word, the Waymo is safer than a drunk guy in a 2003 Altima that's missing a wheel, in a snowstorm? Wow.
According to the data. And we're not just talking about one specific condition, we're talking about millions of hours of drive time.
We're talking about a company's marketing materials which compare their brand new cars with full safety suites to the "average driver" which means a 14 year old car
Yes, Waymo has determined after examining themselves that their cars operating in extremely limited conditions are safer than the "average human driver"
That's marketing, not research
There's a reason none of these companies have let an objective scientific study try to prove if they're actually safer or not
Because it's marketing
Weird that you place so much importance on the age of the car. It's like you trust the technology to prevent accidents, but when it's part of a FSD suite you seem to think that it does the opposite.
Its data is independently verified.
Yes, it's not operating in tons of places yet, because of attitudes like yours.
Data gathering is the same as an objective scientific study. You just don't like what the data says, so you disregard it.
https://waymo.com/safety/impact/
safer than the vast, vast majority of humans, not just a drunk 2003 Altima driver
This is marketing material, not research
Company Selling Self Driving Cars Says Their Self Driving Cars Are The Safest Thing Ever
yeah, what else would they say?
Is there anything to suggest their data is incorrect? Their reporting is required to be public
When they let independent 3rd party researchers actually vet their data for people to review then great
Until then it's just them saying they're awesome.
Off the top of my head comparing Brand New 2020+ Model Cars with "Average drivers" means you're comparing incredibly different vehicles in a comparison that is supposed to talk about how safe the driving is. Are they safer than other equivalent vehicles driving equivalent routes? We need people putting actual controls and vetting on the data rather than companies who have a vested interest in making themselves look good.
They are required to disclose every crash to the NHTSA; you can vet it yourself.
I guess we can go ahead and take their word for it, billion dollar corporations have no incentive to lie or fudge the numbers at all.
V2V and V2I are really going to be game changers, but they require so much trust in the fact that there aren't "dark" cars out there.
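To make the trust problem concrete, here's a toy sketch. The field names are hypothetical and this is nothing like the actual V2V standards (e.g. SAE J2735), which use certificate-based signing rather than a shared key; the point is just that your car shouldn't act on a broadcast unless it's authenticated and fresh.

```python
# Toy V2V trust check: only act on broadcasts that are signed and recent.
# Hypothetical message format; real V2V uses per-vehicle certificates, not a shared key.
import hmac, hashlib, json, time
from dataclasses import dataclass, asdict

SHARED_KEY = b"demo-only"  # stand-in for a proper PKI

@dataclass
class SafetyBroadcast:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float
    heading_deg: float
    sent_at: float

def sign(msg: SafetyBroadcast) -> bytes:
    payload = json.dumps(asdict(msg), sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def trustworthy(msg: SafetyBroadcast, signature: bytes, max_age_s: float = 1.0) -> bool:
    """Reject spoofed or replayed broadcasts before the planner ever sees them."""
    fresh = abs(time.time() - msg.sent_at) <= max_age_s
    return fresh and hmac.compare_digest(signature, sign(msg))

msg = SafetyBroadcast("veh-42", 40.7128, -74.0060, 13.4, 90.0, time.time())
print(trustworthy(msg, sign(msg)))     # True: authentic and fresh
print(trustworthy(msg, b"\x00" * 32))  # False: forged signature
```

And none of this helps with a "dark" car that simply isn't broadcasting, which is why the car still has to be able to see everything itself.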
I don't know if I'd want my car trusting what other cars are telling it rather than observing what the other cars are actually doing. What happens if you have a malicious actor?
Other cars with the same FSD are far more trustworthy than any driver.
They would do both.
I could think of one company that would do this to other manufacturers for the "lulz", making it "memeworthy" and naming it after something utterly cringe
This plus some sort of government adoption of standards for roads, feedback systems, etc that’ll basically make the cars talk to each other as well as the roads, lights, infrastructure talk to them.
Then we can link them up and make a ticketing system that gets funded and maintained fully by the government
It'll be worth trusting when it's legal to do it while intoxicated.
I agree. I work in aerospace. With aircraft we use all the sensors we can get as more available information enables us to reduce risks and improve precision and accuracy in how we control the vehicle. Cars must be cheaper, but electronic sensors get really inexpensive when we go to industrial scales.
Autonomous driving should be far safer than human controlled driving before it becomes the norm. Otherwise we will be right to never trust it.
Things like broader wavelength than just visible light in cameras, many cameras, LIDAR, high resolution GPS, inertial sensors, and whatever else we can come up with should all be technologies we use in controlling automated cars.
While I agree we need more sensors, we need to take care with methods that emit energy into an area to understand it. Currently LIDAR, and even stronger higher-resolution RADAR, don't mingle with living beings that much, and certainly not at the scale that would come from having them on every vehicle. If they can fry a camera, then how can we prove saturation does not affect living beings?
Vision can work provided the software does not outrun what it can detect. Now, people might not accept self-driving vehicles that are hyper-cautious, but people also always believe they won't be the next victim of some other person's poor decisions.
The "cameras alone" argument only makes sense if you are a massive cheapskate.
I mean I wouldn't mind cheaper cars...
Best we can do is higher profits
It's not really about being cheap, it's about the Tesla way and believing they can outdesign conventional technology. Full disclosure, I am nowhere near as smart or knowledgeable as the people actually working on Tesla FSD, but I did dabble in SLAM projects, and in every application I'm aware of they have some form of lidar, sonar, radar, etc. Cameras are just Tesla believing they can do it all through software and machine learning, plus it makes for a cool-sounding marketing talking point when you say you're doing it with less hardware, fewer sensors, etc. Consider the way they deal with rain sensing: I believe Tesla also uses the cameras, whereas rain sensors usually use infrared and measure light diffraction.
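For a sense of what those extra sensors buy you, a bare-bones 1D sketch (invented numbers, nothing to do with any real stack): fuse two noisy range estimates weighted by how much you trust each, and the combined estimate is tighter than either sensor alone.

```python
# Toy 1D sensor fusion: combine a camera depth estimate with a radar range,
# weighted by their variances. Numbers are invented for illustration only.

def fuse(range_cam_m, var_cam, range_radar_m, var_radar):
    """Inverse-variance weighted fusion of two independent range estimates."""
    w_cam, w_radar = 1.0 / var_cam, 1.0 / var_radar
    fused = (w_cam * range_cam_m + w_radar * range_radar_m) / (w_cam + w_radar)
    fused_var = 1.0 / (w_cam + w_radar)
    return fused, fused_var

# Camera depth is noisy at long range (variance 25 m^2); radar is tight (1 m^2).
est, var = fuse(range_cam_m=82.0, var_cam=25.0, range_radar_m=78.5, var_radar=1.0)
print(f"fused: {est:.1f} m, variance {var:.2f}")  # ~78.6 m, variance ~0.96
```

The fused variance comes out below even the radar's own, which is the whole appeal of redundant sensing.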
It all comes down to cost/benefit. Of course if LIDAR was $1 then yes, but it won't be $1 - it will be perhaps thousands in parts and incremental labor - and there will be a mass of people that refuse to spend a few thousand more. Cameras are (already) extraordinarily cheap and easy to integrate into a vehicle - you're talking a few hundred bucks all in. If you can get vision only FSD that is 95% safer than most humans and reduces fatal crashes by half for that amount, you take the win, and you push the safer, more expensive stuff towards wealthy buyers until critical mass is there to drive costs down to the point they are irrelevant.
People forget that new cars are $50K now, and people are waiting 10-15 years to get new cars, if they ever do. That's hundreds of millions of people going without the latest safety features entirely because the collective cost of all of them is just too much.
The module Rivian is using is a couple hundred bucks at most. Not thousands.
ITT: People who have never used FSD v14 circle-jerking about how they don't need or want automated driving and how cameras don't work, based on a Mark Rober video that tested Autopilot, not FSD. Also, Musk bad (this one actually tho).
Edit: This is a presentation by their AI lead talking about how FSD 14 works, how it is being trained, and how they intend to get to full autonomy. It's quite a watch, but it makes clear how far ahead Tesla is of everyone except Waymo when it comes to accessible autonomy. https://youtu.be/c2hL8tcqsz0?si=imP3AEE4quDSsh-S
I wish Rivian the best of luck; competition in the autonomy space is the best thing we can ask for. Hopefully they can get the R2 to market with a full sensor suite and deliver on their vision for autonomous driving.
14.2 is working really well right now
Tesla's system is fine--good even. But stop labeling it fucking Full Self Driving when it's not. Regards, someone who loves the technology and looks forward to it progressing.
edit: also yes, Musk indeed bad. Musk sucks.
Just reddit being reddit. I'm not even defending tesla, I'm just sick of that collective "everything bad" mindset.
V14 is amazing. I’ve never had to take over on v14 and it now auto parks and backs out of my driveway.
& Rivian's current demos attempt to run stop lights, don't detect some pedestrians, have disengagement, all on pre-mapped demos etc.
Doesn't really matter if the theoretical ceiling is higher if you can't make use of it, even if Tesla's ceiling is lower, they are well making use of it, better than anyone else short of Waymo & maybe mercedes depending on priorities
I used the old version in my Y when I had one during Tesla's free demo months. It worked OK but around the time I got a "bad road conditions detected, max speed reduced" warning I started to feel like it was not quite "Full" self driving.
Does v14 work better in bad conditions?
And having actual experience, yeah it's not something I want enough to pay for in my car. Especially as a subscription. Basic L2 lane centering with adaptive cruise is enough for me, hell I do road trips in the Wrangler with just regular cruise.
Did I miss something? Is FSD v14 level 5 autonomy all of a sudden?
Did I miss something? Is any other automaker offering the level of supervised ADAS that FSD V14 provides?
I wouldn't call FSD V14 unsupervised ready yet, but it's at the point where you can put a destination in and it will back out of your garage, drive with zero intervention and automatically park at your destination.
Not what level 5 autonomy is. That’s all possible in ideal conditions at only level 2. Tesla is just willing to let the public test features they aren’t willing to be liable for
Yes. For one example, Mercedes offers Level 3 autonomy (in certain conditions). Tesla "FSD" is still Level 2.
People keep saying this but don't realize Mercedes's "Level 3" system is a joke: it isn't enabled on most roads and has very specific criteria that make it unavailable in most driving scenarios. It's nothing more than glorified traffic-jam assist, something FSD can handle flawlessly at the moment.
FSD works on all roads, even dirt roads. While Tesla will not take liability, I would rather have a system that I can use on all roads and up to 85mph rather than a handful of roads at less than 40mph.
Teslas "level 2" FSD is significantly more usable and accessible than Mercedes's "level 3". I would rather have a system that is 90% autonomous 90% of the time rather than a system that is 100% autonomous 5% of the time.
Ok but when are they going to make this leap? This is foundational to get to real autonomy.
When it's ready. They are taking the approach of getting the whole stack to autonomy instead of a single task such as freeway traffic jams to level 3. There are definitely parts of tesla's FSD stack that could be level 3 already. In V14 they added condition based attention monitoring where if outside conditions are favorable, FSD is more forgiving with driver attention which allows limited phone use. If driver attention is not provided when requested, FSD will now pull the vehicle over safely. They have the groundwork in place to bring sections of the stack up to level 3, they just have not gone through the regulatory path yet.
I'm honestly fine with their approach with regard to this, I'm getting access to the most advanced driver assistance system for $100 a month that handles most of my driving to the point I can type a destination into my nav and hit go and the car backs out my garage, drives all the way to the destination without intervention and parks. If I have to intervene rarely, that is fine with me because most of the time I'm just chilling and watching the car do the driving. I'm fine with being liable as I am the operator and it's making driving easier for me. I would imagine unsupervised FSD would come at a much higher cost due to the liability and risk that Tesla would be taking on.
The only company that has meaningfully solved autonomy is waymo and that is within geofenced areas and not on freeways (yet). Their tech stack is impressive, but I don't have access to it in a vehicle that I can purchase.
Right, but as I said this is foundational. It's making the leap to say "we believe we have figured out autonomy to the point that we do not need human intervention" from "we're almost there."
Again, it's exciting stuff and my biggest problem is really only please Tesla call it something else. Too many dumdums out there with more money than brains are turning on "Full Self Driving" and reading a book or doomscrolling their phones.
Yeah they seriously do need to rename it. In its current state it should be treated as an extremely advanced ADAS system. They tried by slapping (supervised) on the end of it.
It's Level 3 on certain mapped highways in California and a couple of highways in Nevada. The roadways it allows Level 3 on all need to be 3D mapped and scanned; that will never happen for every road in the US. Tesla FSD will be Level 3 on any road in any jurisdiction they're legally allowed in within 6 months to probably a year at most, because it fundamentally works completely differently. I don't own a Tesla because I just don't like the styling of them, and I was a massive skeptic of Tesla's FSD until a month or two ago. I've followed its progress, but it seemed like it was still years away at best.
I was in a friend's car with the newest software recently, and it is ridiculous. It honestly feels like 90% of people in this thread just aren't aware of what Tesla FSD is capable of now. That being said, Tesla should be held financially accountable for the fact that it's not going to ever work on their older cars when it was promised.
This has been promised for years.
I'll grant you that I haven't experienced the latest software update, and maybe it's fantastic. But Tesla putting its money where their mouth is with liability for accidents is where the rubber meets the road.
And edit: why not?
The demo was built to show improvements when adding radar and lidar to camera systems, so of course it's going to be obvious that it's better.
I fully agree that radar and lidar are important for self-driving cars, but I also think that you can build a self-driving car with only cameras. With great software, cameras offer similar abilities to human eyes, so there is no reason why it can't work. Radar and lidar are currently used to supplement data to account for inadequate software.
However, even with great software, radar and lidar can be used to build better self-driving cars because they add senses that humans don't have. Human vision isn't perfect, and there are lots of places where driving would be safer if we could see better. Fog, snow or rain storms, wet roads with bright reflections, or oncoming lights occluding our sight.
The goal shouldn't be to make self-driving cars as good as humans, but to make them as good as possible.
The purpose of the demo was to show how additional sensors can see more than cameras in conditions with limited visibility. Since cars sometimes drive in those conditions it's super important if they can "see" further by using data from other sensors. There are physical limitations to visual optics that result in more variables for the system to deal with.
How does "good" software defeat the physics of light photons being obstructed from hitting an electronic sensor.... Or a lack of light photons preventing the electronic sensor from detecting an object?
The point of the demo was to show that the additional sensors provide the software with better information for self-driving in less-than-optimal conditions, since cars are driven in all conditions. It was also to show how they make self-driving as good as possible.
I never said it did, only that good software can safely drive in those conditions, just like a human can. Good software can make better decisions than bad software based only on visual information.
Yes, which is why I said additional sensors can make it safer than a human driver. The ideas that vision-only driving can be safe and that additional sensors are safer aren't mutually exclusive.
I agree, but initially, if you can make a product that is as good as or better than humans, then that is surely an improvement, especially when you consider it never gets drunk, tired, or distracted. Of course we shouldn't settle for that, and I doubt any company would, because there will always be competition.
The key thing people keep missing about why a camera-based system can work is simple: the computer knows how far it can resolve objects and will not outrun its vision. Human drivers take that risk far too often.
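Back-of-the-envelope of what "not outrunning its vision" means, with purely illustrative numbers (half a second of system latency, a comfortable 6 m/s² of braking):

```python
# Fastest speed at which you can still stop within your reliable detection range.
# Assumes 0.5 s of latency and 6 m/s^2 braking; purely illustrative numbers.
import math

def max_speed_mps(detection_range_m, latency_s=0.5, decel_mps2=6.0):
    """Largest v such that v*latency + v^2 / (2*decel) <= detection_range."""
    a, b, c = 1.0 / (2.0 * decel_mps2), latency_s, -detection_range_m
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

for rng in (50, 100, 200):  # e.g. heavy rain vs. clear daylight
    print(f"can resolve {rng:3d} m -> max ~{max_speed_mps(rng) * 3.6:.0f} km/h")
```

A system that knows its visibility just dropped to 50 m and slows down accordingly is respecting that math; a human doing 120 km/h in fog is not.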
As for LIDAR, it has many limitations of its own and does not solve all the issues vision has; in fact it has many of the same weather issues, if not worse ones, in some conditions.
I would prefer a higher-definition RADAR to back up vision, and vision would need to be there for LIDAR anyway, which still needs a vision system to interpret what it sees.
What I really want to see is long-term studies on human, insect, and animal health in LIDAR-saturated areas. Throw RADAR into that as well. My concern is that both RADAR and LIDAR are emitters and have generally been kept from saturating areas where living beings are. Not to mention that LIDAR does love to fry cameras.
In theory, the fact that humans can drive with just vision (and the road feedback) means that it should be possible for a machine to do the same.
However, I'd be willing to bet that adding more sensors makes it easier to implement, more robust, more safe, and potentially able to drive in conditions where humans cannot.
When I drive cars, I do more than see things
I hear things, like sirens, and horns, and engines
I feel things, like bumps in the road or vibrations in the car
I smell things, like something in the car that doesn't smell right or something outside the car that might be something I want to avoid
"humans drive with just vision" is a statement so dumb only Elon Musk could have said it
Just to steel man the argument - a human would do a decent job driving a car via remote video control where they could not hear, feel, or smell anything but they can see.
We see this all the time with FPV drones, people are capable of insane acrobatic feats
All of those things besides smell could also be implemented with sensors a Tesla already has, such as accelerometers, microphones, and monitoring the load on the electric motors used for the power steering, all of which is much cheaper than including lidar. There's no sense that a human has that a Tesla doesn't; the software is just not good enough, and likely won't be for some time.
But why limit it to human senses? That’s the real question. If a car can have sensors beyond human senses and therefore perform beyond even a peak human then that’s how it should be.
I agree with you, especially when LIDAR is way cheaper now than when Elon first started making this statement.
Tesla is already using microphones with FSD for identifying emergency vehicles.
The real issue isn't all these things. It's that computers cannot reason. Not as they are or will be in the very near future. They need data to perform their actions.
A computer can make estimates based upon previous datasets and on current sensor data, but it can't act on a "feeling" because it can't make logical deductions. We don't have general AI. So the more data it has, the better it will be. You know what provides more data? More sensors.
I don't know why I see people argue AGAINST more sensors? Is the cost not worth further safety and improvements?
FSD can, to some extent, do "reasoning", or at least communication. For example, on a narrow street with an oncoming car, it's extremely good at either giving way or taking the initiative based on how the oncoming car acts, and in a very human way where the other driver will understand the intent. This is one of the most impressive things about FSD.
That’s the anecdotal evidence, it only means that you’re a good driver. As a whole, people are bad drivers, you can observe that every single day around you.
Counterpoint: When I honk my horn, the vast majority of people hear it which says they are using their ears
FSD will hear it too, but I think it currently only reacts to sirens.
Not just vision; the brain is doing a lot more than just looking at things around it.
A camera can't and will not operate during heavy downpours, snow storms, and other poor-condition events. A lot of what our eyes see is also filled in with "experience" to cover the gaps. We can't see the lanes, but we know based on past experience where they should be.
If only there were a way to give computers the benefit of past experiences…
Doesn’t work like that in real life, pal.
So Waymo currently uses HD maps for navigation. They'll use Lidar to scan locations in very high detail, then use that data along with their real-time data. So they're already doing this.
Tesla doesn’t do this, but we’ve seen FSD operating in heavy rain and snow, and it can do it although there are some issues still to be resolved for sure
That’s exactly how it works in real life, tuts.
Cars already have GPS. There's no technological reason why you couldn't have each car "remember" conditions where it drove, and even send that information back to the manufacturer so that the next car driving that way could benefit from knowledge learned.
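As a toy sketch of what that could look like (all names hypothetical, not how any manufacturer actually does it): key observations by a coarse GPS tile, let cars that have driven a stretch report what they saw, and let the next car query the tile before it gets there.

```python
# Toy shared "road memory": condition reports keyed by coarse GPS tile.
# Hypothetical structure only; not a description of any real fleet-learning system.
from collections import defaultdict

TILE_DEG = 0.001  # roughly 100 m tiles at mid-latitudes; crude but enough for a sketch

def tile(lat, lon):
    return (round(lat / TILE_DEG), round(lon / TILE_DEG))

class RoadMemory:
    def __init__(self):
        self._reports = defaultdict(list)  # tile -> list of condition reports

    def report(self, lat, lon, condition):
        """A car that just drove this tile logs what it saw."""
        self._reports[tile(lat, lon)].append(condition)

    def lookup(self, lat, lon):
        """A car approaching this tile asks what previous cars reported."""
        return list(self._reports[tile(lat, lon)])

memory = RoadMemory()
memory.report(37.7749, -122.4194, "lane lines faded, follow curb edge")
print(memory.lookup(37.77493, -122.41937))  # nearby point, same tile, so the report comes back
```

Crude, but the point is that the prior knowledge doesn't have to live only in a human's memory.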
Tesla FSD already works in heavy downpours so I don’t think you know what you’re talking about.
The front cameras on Teslas are heated and have wipers. Even during a car wash, the side and rear cameras have full visibility; water and soap don't stick to them. You evidently have no experience with them.
If snow is blocking the cameras, how is that any different than a negligent driver not cleaning any of the snow on a car? Snow and rain block lidar too.
Rain affects lidar in the same way, if not worse, since it's an active system, not a passive one: it's waiting for light returns, which scatter in rain and inclement weather. FSD tells you if your cameras are covered in dirt and if you need to clean them.
Same with FSD, you don't need lane lines. It can drive in grass fields and be fine.
Volvo just removed their lidar system for self driving recently. Also DCar Studio in China recently ran a 36 car test comparing ADAS systems. Tesla with vision only came out number 1.
Cars could also remember where they've been before, and what things looked like.
I'm not saying that this is better than also using radar/lidar/GPS/etc., just that it should in theory be possible.
To be fair, when conditions get that extreme, you shouldn't be driving either.
That's nonsense.
FSD does the same thing. It doesn't need lane markers to know where the road is. It can use environmental context.
Certainly, competent driving achieved by a human based on vision input is an existence proof it can be done.
However, replicating it in non human form is a pretty big challenge.
Bumblebee flight has long been an existence proof that hovering flight can be achieved without laminar airflow over a wing section. But we have only recently been able to replicate something similar, nearly 100 years after the first heavier-than-air flight. The feat required a lot of technological developments outside aeronautics, like microelectronics, miniaturization of motors and batteries, micro sensors, etc.
Similarly, vision-only FSD is achievable, but guessing the timing is dicey. I suppose it's up to technology directors at companies like Tesla to decide. But we frequently see such people making errors.
As a whole, the human being is the most unpredictable element in that system. The goal in making FSD safe is to eliminate as many unpredictable elements as possible, and adding more sensors is the easiest thing to do. Linking up as many cars as you can under a common protocol is the fastest multiplier of sensors, of course only if you can somehow "force" car manufacturers to unify on that protocol.
When the problem we're trying to solve is the front bumper of a 7000lb murder missile liquifying a 7 year old child's body on impact, maybe we don't even consider limiting ourselves to what's theoretically possible.
I would be ok with triple redundancy being the bare minimum.
If someone is worried about cost, then get rid of the heated/cooled/massaging seats or the panoramic roof or reduce the size of the battery by 1%, or swap the 22" wheels for 18" wheels.
Hi Elmo! Why do planes not use just cameras then for autopilot? I'm not a Tesla hater btw. I own 2.
The first airplane autopilot used gyroscopes to just keep the plane straight and level.
Nowadays you can put an autopilot in an RC model airplane with just a GPS and a barometer, which is far simpler than trying to use cameras.
Neither of those options would work with a car, which has to follow the road.
Because planes don’t even get flown by pilots on vision alone. They require pilots to be rated to fly based on their instrumentation with curtains over the cockpit windows because there are so many scenarios where you will be expected to see jack shit.
I'm most interested in FSD being available to the people who don't seem to want to drive and would prefer to be on their phone. I myself want to drive but not have those people failing to also drive.
There are more use cases than these 2.
Camera-only isn't good enough even for standard adaptive cruise control. When I drove a BMW without radar ACC, it would constantly turn off in the slightest fog, rain, or snow. I don't care how many cameras there are or how advanced and hi-tech they are; they can't beat systems that use them in combination with radar or lidar or any other -dar.
Tesla's system works fine in rain, snow, fog, and complete darkness.
It's very simple. I won't put my life on the line for an inferior camera-only system just so TSLA can squeeze out extra profit. TSLA may barely squeeze by, but I need a system that comfortably exceeds safety standards, with capability above and beyond what humans are capable of. That's the standard I expect when it comes to autonomous driving. I don't need the suboptimal system TSLA is selling to the public.
Tesla*
When I drive my Tesla, sometimes in bright sunlight it will say 'camera occluded - FSD not available' or something like that - not sure what it would do if you didn't have a steering wheel.
Add an adjustable ND filter to the camera. An added expense but sunglasses for the cameras should be a thing if it's just using that like a human.
ND filters do not fix flaring. Dynamic range isn't the issue.
They have to fix this with better optical coatings that prevent strong flaring.
I’m still not over the fact that none of us consented to be part of a beta test with self-driving cars.
Firmly in the camp that any driver aids should be like autopilot on a plane. If you can’t fly it manually, you’re not using autopilot.
Relying on “self driving” to cover lack of driver training feels like we are in for a wild ride.
Cameras alone are absolutely not enough. Every new form of transportation required supporting technologies and infrastructure. The roads today were built for human eyes. There's nothing creative or innovative about replicating that using cameras only. It's actually pretty dumb trying to teach a car to drive like a human.
Cameras are enough, as evidenced by humans being able to drive with our eyes (which is effectively camera only driving).
The real question is "can combined sensing be better than cameras/eyes" which is an obvious yes. Cameras can work, but why stick with human limitations?
The cameras being put on modern vehicles are not 1:1 equivalent to human eyes, and that's what always gets left out of this argument. Our eyes are attached to our brains, and our brain processing combined with our vision is a very different system than the cameras and computers used in automated driving systems. The most expensive cinema cameras ever made can't compete with the dynamic range and adaptability of human vision to different conditions. There are scenarios where the automation outcompetes humans, but there are many scenarios where humans are far more capable.
Rather than fight for years and years with the limitations of camera-only systems to bridge that gap, it saves heaps of time and money - not to mention saving even more lives - to just add LIDAR. Yes, cameras could one day be enough, but why wait when we already have a better solution?
They definitely can. Modern cameras have enormous dynamic range, and the cameras on Teslas are no exception. They'll see well in the darkness (better than us, especially in regards to colour), and they can see details in highlights and deep shadows at the same time. An oncoming car's headlights reduce contrast for us way more than for the Tesla's cameras.
Also, a Tesla can pay attention in 360° at all times. It's extremely attentive and never gets distracted or tires. In city settings this has some huge benefits for situational awareness.
As a professional who's worked on state of the art sensors with Tier 1 suppliers, I have to wonder why random professions feel the need to chime in with irrelevant points.
You're changing the question. The question was "Is camera sensing enough for full self driving", which the answer is absolutely yes, because humans are capable of self driving and we only have vision sensing (which is what cameras are). If you have a perfect system, a camera is technically enough.
Cool, nobody ever made that claim and it isn't relevant to the question of sensing needs for FSD systems.
Nobody brought up currently. The timeframe is a self-imposed qualifier that isn't relevant to the question of whether or not sensor fusion is necessary for full self driving.
There's an old saying "Any idiot can build a bridge that stands, but it takes an engineer to build a bridge that barely stands" because engineering is a profession built around doing the most with the least. A properly engineered FSD solution COULD only use vision, it won't be the best, but it technically meets the FSD requirement which is the whole conversation.
No, the WHOLE conversation would be: why then, if it's not the best, would you not include further sensors?
Go ahead and read the title of the post big dog.
"why cameras only isn’t enough for FSD" and not "why we should include more types of sensors".
You should read more than you speak, that's why god gave you 2 eyes and 1 mouth.
I meant the whole conversation outside, but including, this post -- this post being a very specific slice of conversation on self-driving vehicles. There's also the whole nature of how discussions on one thing naturally lead to further discussion. Have you ever had a conversation? No?
God failed to give you a brain, sadly. But he certainly gave you a mouth.
I love the videos of Teslas self-driving through a painting of the road like Wile E. Coyote.
Waymo is the only one with something approaching successful self drive. They're only in warm cities (haven't looked where they are recently, but I don't imagine a completely snow covered road is ideal) and they have a ton of stuff protruding from the vehicle: cameras, sensors, and what looks like an AWACS radar on the top.
https://youtu.be/GU16hXSSGKs?si=XC7nM4H4LK8As1L1
https://youtube.com/shorts/c31pR1IMl44?si=eck13zRLeUyOSZ_l
It works great until it doesn't.
Whoosh.
No whoosh. I understood you're using Tesla advertising.
Just a satisfied user. Commutes have become very relaxing over the last few updates.
Watch the video you linked.
Yes. Wile E Coyote.
Enjoy paying to be beta-testers for the world's richest man.
It’s great. For 6 years I’ve watched my car go from managing highway driving to managing my commute in full. Hopefully other companies including rivian advance theirs as well.
Here’s the description of the video you posted:
“This is the final scene from a much longer video. The Tesla Model Y with the latest HW4 computer was able to see the wall and stop, while the older Model Y with HW3 computer was not able to see the wall.”
So, the video you posted is from a video debunking the original claim that Tesla can’t see the fake road painting with just cameras.
The old versions can’t (like the 4-year-old car in the famous video), but the new ones can and will stop.
https://www.thecooldown.com/green-business/tesla-door-handles-investigation-nhtsa/
Quick, let the NHTSA know that they're all good so they can stop the ever-expanding probe of the technology.
What does self driving have anything to do with people not knowing how to use door handles?
Why are you moving goal posts?
You know EVs from Hyundai/Kia/Lexus/BMW have electronic door handles too, right? The reason the NHTSA doesn't care about those is that those cars don't have the sales Teslas have.
We’re talking about the Wile E Coyote test, not door handles.
That video wasn't using FSD; it was using Autopilot. It's a completely different technology.
100% agree
For Tesla, lying has been really effective in the past.
Yet when tested in the real world it couldn't even see red lights.
It had multiple risky, sketchy incidents, with 3 critical interventions where the driver had to take over, in this short video. It couldn't see a red stop light and the driver had to intervene.
4:00 = Was about to run a red light
9:13 = had to disengage for a pedestrian
6:22 = hard braking and almost hits the car in front
There are a bunch of events like this throughout the video. Some of them are edited out with jump cuts, so who knows what the car tried to do.
I've been in a Tesla running FSD, and I've been in a Waymo.
Both appear to give you displays of what they can see. The Waymo gives me total confidence. it's absolutely amazing what it picks up and displays. The timing of things disappearing behind an obstruction and reappearing on the other side is phenomenal. That's something I specifically watched for in the Tesla and it just couldn't do it as well. The Tesla has me convinced it can safely handle a trip on the highway, which is great, but IMO a camera-only system will never exceed Level 4 (and really shouldn't be pushed that far anyway), especially as lidar gets cheaper and cheaper and financial motivation to make that argument is essentially removed.
Annoyingly the "sleek" Tesla FSD visualization doesn't display nearly as much as the car sees internally, and it doesn't update quick enough. FSD used to have a more "raw" visualization many years back which gave a better insight into it.
But just be assured that it sees a lot more than it displays today. Lots of video evidence for this.
I didn't see this posted anywhere but Mark Rober did a good amount of testing of cameras vs radar/lidar.
It plays out exactly as you would expect.
https://youtu.be/IQJL3htsDyQ?si=rA9XC6MZsKbh4UHN
Very misleading though. They used Tesla's old "dumb" lane-keep system instead of FSD, which is the modern and relevant machine-learning one.
Ah interesting. I wasn't aware of that.
I'd be concerned about machine learning tbh.
Seems great on paper, but I would expect a low-level set of deterministic and absolute controls (i.e. radar says big object within parameters == start braking) and a higher level with machine learning for earlier reactions before the lower-level actions kick in.
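For example (thresholds purely invented), the kind of split I'm picturing: a deterministic last-resort rule that always wins, with the learned planner free to act earlier and more smoothly above it.

```python
# Sketch of the layered idea: a deterministic last-resort rule that always wins,
# with the learned planner free to act earlier and more smoothly above it.
# All thresholds are invented for illustration.

HARD_BRAKE_TTC_S = 1.5  # time-to-collision below which the low level always brakes

def time_to_collision(range_m, closing_speed_mps):
    return float("inf") if closing_speed_mps <= 0 else range_m / closing_speed_mps

def low_level_override(radar_range_m, closing_speed_mps):
    """Deterministic rule: big object, closing fast, too close -> brake, no exceptions."""
    return time_to_collision(radar_range_m, closing_speed_mps) < HARD_BRAKE_TTC_S

def choose_command(ml_command, radar_range_m, closing_speed_mps):
    # The learned policy proposes a command; the hard rule can only make it safer.
    if low_level_override(radar_range_m, closing_speed_mps):
        return "full_brake"
    return ml_command  # e.g. "ease_off", "steer_left", "maintain"

print(choose_command("maintain", radar_range_m=12.0, closing_speed_mps=10.0))  # full_brake (TTC 1.2 s)
print(choose_command("ease_off", radar_range_m=60.0, closing_speed_mps=10.0))  # ease_off (TTC 6 s)
```

The learned layer gets to be clever; the bottom layer just has to be provably simple.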
Ultimately I'm just thinking out loud out of curiosity. It's an interesting topic.
I like driving. If you have a car that's fun to drive, it's fun. Sitting in a chair... Not specifically fun.
Level 5 can never happen if rain impedes visibility and the system can’t discern objects
How do you manage?
ITT: people conflating overall autonomous driving with Tesla’s “FSD”
While true, Tesla calling it FSD set them up for that on purpose.
Crazy that people just take marketing terms at face value
To paraphrase George Carlin, think of what average intelligence/media literacy/critical thinking is and then realize half the people are below that.