The Future of Cars

We recently had our family car stolen, so I’ve got cars on my mind (that’s also why I’ve been away from the blog for a few weeks). Everything is fine, thankfully, because insurance was functional (though certainly not helpful), but I am thinking about cars, the tech we have in them, and what I feel the future may and should look like. Our car was stolen because we had one of the Kia and Hyundai models that are incredibly vulnerable to theft, which, in the most 2020s outcome possible, has resulted in car thefts becoming a TikTok trend (more on TikTok here). One of the critical vulnerabilities is the lack of a feature that has become common in most cars since the mid-to-late 2010s: a chip inside the original manufacturer’s key that disables the engine when the key is not present. Simple enough.

A lot of the safety features we have in cars have felt like simple evolutions; it’s a wonder we never had them earlier. I’m sure we had the capability to add backup cameras long before they became ubiquitous. Phones have come with their own hands-free features (read: speakerphone) for years, though it does make sense that it took some time for cell phones to really integrate with vehicle technology. Going back even further, safety restraints, crumple zones, and airbags all lagged well behind our ability to build and implement them. Even laws have lagged behind the obvious need for restrictions on driver behavior, because there’s always someone who thinks the next new law is the path to communism.

But even as the US adopts new safety measures, in recent years they have not made US roads much safer, as a chart from the NYT makes clear.

Recently we learned that US pedestrian deaths from motor vehicles reached a 40-year high. There are lots of potential explanations for our poor performance on safety. It’s pretty clear that many of our roads were designed for higher speeds than the limits we post, and I think we can understand the logical misfire of that decision. Our vehicles are too big; the front ends obscure the road so that we can’t see children over the massive grilles on the latest lifted pickups or SUVs. We are too lenient on minor traffic offenses, which can eventually lead to major infractions for some drivers. Nationally, we are hostile to effective public transit, which forces more cars onto the road in some perverse drive towards “independence”. Frankly, people are bad at driving, and our culture is impatient and angry these days. That’s a bad mix on the road. Recently, car theft has rapidly accelerated, and the people driving stolen cars are not exactly driving carefully. It all adds up to a pretty bad situation.

Understandably, there is not much optimism about solving most of these problems, but the last couple, the ones involving human error, do have a potential solution: self-driving/automated cars. Unlike previous motor vehicle safety advances, self-driving has been in development since well before the capability was there; in fact, it still looks like we’re not there. Recent NHTSA studies show that self-driving cars may be up to ten times more lethal than their human-driven counterparts. I’m not sure we’ll ever be there, at least in the near-term and mid-term future, without substantial changes to our core infrastructure.

Driving a motor vehicle, on its face, is a problem best solved by technology. There are only so many movements and rules, and it feels like it should be easy to program. The difficulty comes in coding how the car takes in information, and in the lag between inputs and behavior. In the interim, that means that while self-driving is tested, the human driver must stay alert and be able to change the car’s behavior quickly. This is a critical problem: as many driving instructors can tell you, it is much more difficult to take over for a driver making a poor decision than it is to just drive the car yourself.

This leads to another major problem: as self-driving improves, human intervention will become less effective. Tesla, for example, has a lot of self-reported user data (they have managed to skirt regulations in CA, for example, that require self-driving cars to submit disengagement reports, which is why we’re relying on user-submitted data) indicating that their FSD beta requires about one intervention every ten miles. I’m sure there’s substantial noise around that figure, and it’s almost certainly right-skewed, meaning long highway trips are offsetting the much more frequent interventions happening on short city trips. To ballpark it, this likely means that people driving in the city on FSD are intervening once every couple of minutes, and on the highway once every 10-20 minutes. As those intervals grow to 10-15 minutes in the city or hours on the highway, the human driver is going to become increasingly distracted and unable to engage quickly when intervention is needed. We’re almost certainly due for an increase in the number of accidents involving self-driving cars as self-driving improves. It’s a difficult hurdle to overcome so long as 1) other vehicles with humans as primary operators are on the road and 2) the human driver is the failsafe.
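That ballpark can be sketched in a few lines of Python. The speeds and per-setting intervention rates below are my own assumptions, not Tesla’s figures; they’re chosen only to be roughly consistent with the self-reported ten-mile fleet average.

```python
def minutes_between_interventions(speed_mph: float, miles_per_intervention: float) -> float:
    """Minutes between interventions at a given average speed and intervention rate."""
    return miles_per_intervention / speed_mph * 60

# Assumed: ~1 intervention per mile in city driving at ~25 mph,
# ~1 intervention per 15 miles on the highway at ~65 mph.
city = minutes_between_interventions(25, 1)       # ~2.4 minutes
highway = minutes_between_interventions(65, 15)   # ~13.8 minutes
print(f"city: every {city:.1f} min, highway: every {highway:.1f} min")
```

Under those assumptions you get an intervention every couple of minutes in the city and every quarter hour or so on the highway, which is the skew described above: a long highway stretch quietly inflates the fleet-wide miles-per-intervention average.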

A somewhat similar problem is a form of the classic trolley problem from philosophy. I’ve often heard the trolley problem applied to self-driving vehicles in a lose-lose situation: the vehicle’s programming must decide who will die when the vehicle is forced to choose among a set of dangers, and every possible decision will likely result in a fatality. While this is an interesting thought exercise about how we should design self-driving cars and code their decisions, it is not the most consequential application of the trolley problem to self-driving cars. The decision to put self-driving cars on the road at all puts people at risk who would be highly unlikely to be at risk under normal scenarios with humans driving. We design self-driving vehicles to avoid common human errors, but self-driving cars will open up a whole new set of errors that are unique to tech-centric driving.

Humans know that jaywalkers exist, and can react to them, but a car that wasn’t programmed to correctly identify a human walking in an area where humans are not “legally” allowed to walk failed to even attempt to slow down and killed a woman. Humans also know it’s generally not a good idea to stop in the middle of a freeway; self-driving cars might not. Sensor failures open up a wide range of potential bad outcomes that look nothing like typical car accidents today, even the worst ones. And we may be compounding these failures simply by allowing both human drivers and self-driving vehicles on the road: the interaction between human error and technical error can increase risk. As the NYT chart shows, with all the tech assistance we already have in most vehicles, deaths from motor vehicles have actually increased over the last few years.
I am skeptical that, when faced with these kinds of outcomes, we will accept the tradeoff even if, from a utilitarian perspective, the overall fatality rates come down.

In my mind, the best way to reduce these risks would be to change our infrastructure. Smart roads that can talk to self-driving cars through their own sensors and tech would greatly reduce the possibility of new, tech-specific failures. They would also increase the effectiveness of the driver-assistance technology we already have. But that would require massive infrastructure investment, would conflict with our country’s paranoid concerns about surveillance, and would require coordination across many local, state, and federal agencies. I’m not optimistic about this possibility. It’s probably best to just build high-speed rail and get more people off the roads, but I’ve already discussed our issues with public transit. It’s hard to see where we go from here, and I’m not sure there’s a breaking point where danger on the road gets so bad that any of these solutions becomes viable. What do you think the solution is?
