This November, I rented a Tesla Model Y and drove it for about 150 miles, depending on your personal definition of “driving.” For about 145 of those miles, I let Tesla’s “Full Self Driving (Supervised)” control the Model Y, only intervening to park or, occasionally, for fun. The car handled countless complex traffic situations effortlessly, with only about two safety-related interventions the whole time. It felt like a real self-driving car.
But it isn’t one. I wouldn’t buy it, and I wouldn’t recommend it.
Better Than I Ever Expected
Tesla is one of those companies that frequently embarrasses its doubters.
I know because I’m a doubter and have been one for a while. I was reviewing cars for CNBC while I was in college, and back then I called out Tesla’s 2017 iteration of Autopilot for being over-confident, marketed with a misleading name and still, legally, not autonomous. All of those complaints hold true today, but even I must admit that Tesla has gotten closer to full autonomy than many ever expected in a car you can actually buy.
Early Autopilot was just a classic combination of lane-following and adaptive cruise control. In the eight years since I first reviewed it, Tesla’s flagship driver-assistance system has become “Full Self Driving (Supervised),” gaining the capability to handle basically every form of driving under a watchful human eye, not just divided highways. On the road between those two points lie many lawsuits and fatal accidents—accidents that I’d argue were preventable, had the system been more cautiously deployed—but the end result still astounded me.
It’s expensive—$8,000 upfront for lifetime access or $99 a month (it is now included for free on the Model X and Model S). And because Tesla has not been updating older “hardware 3” vehicles with equally sophisticated software, I’d say “lifetime” is not really true; you get it until Tesla abandons your generation of tech. Even so, it’s hard to complain about the price when no one else is offering a system this capable to consumers. FSD will attempt to handle just about any situation you encounter in urban and suburban environments, and it usually performs well.
I used FSD 13.2.9, which is not even the latest release. But it showed what a refined version of Tesla’s AI-driven software looks like. It looks, in a word, remarkable. FSD made short work of freeway drives, with only one questionable late merger requiring me to step in. In the city, it was cautious around blind intersections and patient at stop signs. It navigated uncertainty extremely well in most circumstances.
Rolling The Dice
The car does the safest thing in most situations, most of the time. Sometimes, however, it will get it way, way wrong. Trouble is, because you don’t know how it really works, you probably won’t see these moments coming. This means it requires constant vigilance, something that untrained drivers facing misleading marketing just aren’t equipped for.
The combination had already led to one fatal crash involving the system by 2017; dozens more have followed, with many plaintiffs suing Tesla and alleging wrongful death. Tesla says its systems are not legally driving, and that owners are responsible for supervising the car at all times.
The facts of each case are different, and the software has certainly matured. But it’s also landed deeper in an uncanny valley: My Tesla Model Y so rarely made a mistake that I started to let my guard down. But when it did make a mistake, I had to act quickly to keep it from hitting a merging car or turning left against a red light (albeit without cross traffic). The big problem is that the car’s mistakes are unpredictable, and it will confidently persevere even when it doesn’t know what’s going on.
This is the challenge: Without understanding how it works, you can’t predict when it won’t. So your vigilance must be constant. And if you’re really engaging with it—thinking through where it may have issues, keeping your hands ready to take over quickly, keeping an eye on your mirrors—well, is that really more relaxing than driving?
For me, trying to predict the errors of a reasonably competent but inherently unpredictable AI is just as stressful as driving. Yet it’s also more boring: I can’t text, I can’t look away and I can’t really daydream. For this reason, driving on FSD often felt a bit easier, but time passed more slowly as I struggled to stay engaged.
The ultimate goal, of course, is to take the driver out of the loop entirely. That’s what Tesla is trying to do with its robotaxi pilot program in Austin, Texas, and that’s the long-term promise Musk has been dangling for years. It seems closer than ever, but still out of reach: For now, you have to sit quietly and watchfully, fending off both unexpected crashes and your own boredom.
An Unsettling Balance
Early versions of Autopilot were more limited, but that also made them mentally easier to deal with. I knew that Autopilot wasn’t really driving, so I used it like a more sophisticated cruise control. There was a clear divide between what it could do, and what it couldn’t.
These days, though, it’s all muddy. FSD is so good in so many situations that you want to relax and trust it. But since you do not and cannot know how it makes decisions, you can’t really trust it enough to check out, especially when the lives of people around you are at stake. You lock in, and wait for mistakes. Credit where it’s due, though: FSD couldn’t be easier to use. Set a destination and the car sets off after a single tap (and a quick confirmation that you’re there and paying attention).
But what if mistakes are rare? I experienced two clear mistakes in 150 miles of driving.
I was locked in, and caught them both before they became problems. But if this is the expectation, think about what we’re really asking Joe Public to manage. Driving 150 miles in and around San Diego took me around five hours, cumulatively. That means I had an intervention-required mistake every 2.5 hours. Try to imagine sitting idle, “supervising” a driver for 2.5 hours, unable to distract yourself at all. By the time the mistake happens, do you really think you’ll be paying attention?
That’s a scary situation: a system trustworthy enough that we let our guard down, but not safe enough to actually be used without supervision.
The Model S P100D I drove back in 2017 had a much less mature version of Tesla’s driver-assistance feature, then only called “Autopilot.” But while Tesla has made unbelievable progress since then, the system’s biggest problem has actually gotten worse.
Here’s how I put it in 2017, writing for CNBC:
“The bad part is that the car refuses to acknowledge its shortcomings. Even when it clearly should, it seems reluctant to tell the driver, “I can’t handle this! Snap out of it and start driving!” Because of this, I never trusted Tesla Autopilot. And I don’t think you should, either.”
That complaint still rings true today: The line between what FSD can and cannot handle is blurrier than ever. In almost all situations, the car will just try its best, and trust that you’ll quickly intervene to avoid disaster.
But while the marketing and real-world limitations of the system are seemingly intentionally fuzzy, the legal position is clear. Tesla repeatedly says that FSD is a driver assistance system, and is never legally driving the car. If the car crashes based on its own inputs, Tesla says you were still “driving,” and it’s still going to be your problem.
So Tesla can call it whatever it wants: Autopilot, Full Self-Driving or Full Self-Driving (Supervised). It still can’t legally drive. Until it can, I’m not interested.
Contact the author: Mack.Hogan@insideevs.com.




