This past Monday, I accompanied my 18-year-old daughter to her road test. While there, I began reflecting on the amount of emphasis we put on developing good judgement. It struck me that, as we transition into an autonomous driving future, we need to question what we are doing to prepare our machines, and ourselves, for this inevitability. To determine the best path forward, one that will be both successful and publicly accepted, I will explore the catastrophic human error that led to Joshua Brown’s fatal Tesla crash (below), as well as the approaches of other self-driving technologies.
First, the facts:
1. Brown’s fatality was the first in more than 130 million miles driven on Autopilot
2. According to NHTSA, the US average is one death for every 90 million miles driven
3. Autonomous driving has the potential to reduce traffic fatalities, as more than 90% of the 35,000 US traffic fatalities in 2015 were due to human error
4. Despite these safety benefits, Tesla’s Autopilot gives drivers a false sense of security
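The first two figures above can be compared directly once both are expressed in the same unit. A back-of-the-envelope calculation, using only the numbers cited above:

```python
# Back-of-the-envelope comparison of the two fatality rates cited above.
AUTOPILOT_MILES_PER_DEATH = 130e6  # first fatality after 130M Autopilot miles
US_AVG_MILES_PER_DEATH = 90e6      # NHTSA: one death per 90M miles

# Deaths per 100 million vehicle miles traveled (the usual NHTSA unit)
autopilot_rate = 100e6 / AUTOPILOT_MILES_PER_DEATH
us_rate = 100e6 / US_AVG_MILES_PER_DEATH

print(f"Autopilot:  {autopilot_rate:.2f} deaths per 100M miles")  # 0.77
print(f"US average: {us_rate:.2f} deaths per 100M miles")         # 1.11
```

By this crude measure Autopilot looks roughly 30% safer than the US average, though a single data point over 130 million miles is far too little to draw statistical conclusions.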
While points 1-3 above are promising, for autonomous driving to become both ubiquitous and safe, the industry should adopt a “zero accident goal” to combat point 4. Unfortunately, this important objective was not considered before Tesla’s Autopilot deployment; abuses have been well documented for over a year (see YouTube examples), and Tesla’s response to the Brown tragedy, as reported in the Wall Street Journal, has been insufficient. According to police reports, Mr. Brown had a portable DVD player in his car, and eyewitnesses reported that he was watching a movie (Harry Potter) as his car plowed at full speed into a turning semi-trailer truck.
Yet Mr. Brown’s death seems to have been the result of an inherent flaw in the current autonomous driving model. Mobileye NV, a supplier of driverless equipment to Tesla and other auto makers, relies on a camera to detect cars, lane lines and other obstacles. Mobileye said its system isn’t designed to always detect vehicles cutting in front with a left turn, which is what occurred in the fatal Florida crash; the company said its “lateral turn across path” function will be available in 2018. And although Autopilot will usually detect such crossings, that function is hardly foolproof. According to Tesla, Mr. Brown’s death was the result of a system that failed to distinguish a turning 18-wheeler’s white trailer from the bright Florida sky.
Last March, Tesla’s CEO, Elon Musk, told reporters that, in the future, the Autopilot system could make it possible for Tesla vehicles to drive themselves “from parking lot to parking lot.” According to Musk, Tesla was testing Autopilot on a route between Seattle and San Francisco, and the technology was almost capable of handling the entire trip without the driver touching any controls. This statement (along with the branding of Autopilot) undoubtedly evokes a euphoria that has led to overconfidence in, and abuse of, Tesla’s hands-free driving-assist feature.
In numerous interviews with The Wall Street Journal, drivers’ descriptions of Autopilot accidents suggest that the incidents were likely caused by overexcitement for a technology that was oversold in its marketing but under-delivered in execution. A year before the Brown accident, Carl Bennet, a Tesla owner from Virginia, had a similar, but fortunately non-fatal, experience. Bennet enabled Autopilot and then confidently began to read a magazine. Suddenly, he realized that his car was steering him into a parked truck. Reacting quickly, Bennet slammed on the brakes, swerved, and hit the truck. He wasn’t hurt, but his $100,000 electric car was totaled. Tesla Motors sent a sympathetic letter, but also indicated that the crash was Bennet’s fault, citing log files and a copy of his signed owner’s manual acknowledgment that Autopilot “cannot detect all objects and may not brake/decelerate for stationary vehicles.”
Tesla owner’s manuals state that the system “is designed for your driving comfort and convenience and is not a collision warning or avoidance system.” On this front, Elon Musk does urge caution when using Autopilot, even as he has been a very outspoken advocate of the technology. While Tesla has gone on record stating that it has “no reason to believe Autopilot has anything to do with this [Brown’s] accident,” it has been unable to substantiate this claim because it cannot access certain data from the vehicle.
According to an industry analyst, Tesla might decide to temporarily disable Autopilot in a vehicle when the driver consistently fails to respond to a warning light or bell and does not put his hands back on the wheel. A Tesla spokesman said that Tesla plans to update the Autopilot software in the coming months, but won’t make specific changes as a result of the fatal crash.
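The analyst’s suggestion amounts to a simple escalation policy: warn, count ignored warnings, and lock the feature out past a threshold. A minimal sketch of what such logic might look like (the class name, threshold, and callbacks are my illustrative assumptions, not Tesla’s actual implementation):

```python
# Hypothetical sketch of the analyst's proposed policy: escalate warnings,
# then temporarily disable Autopilot if the driver keeps ignoring them.
# The threshold and names are illustrative assumptions, not Tesla's code.

class AutopilotSupervisor:
    MAX_IGNORED_WARNINGS = 3  # hypothetical lockout threshold

    def __init__(self):
        self.ignored_warnings = 0
        self.autopilot_enabled = True

    def on_hands_off_detected(self):
        """Called periodically while no hands are detected on the wheel."""
        self.issue_warning()  # light + chime, per the proposal
        self.ignored_warnings += 1
        if self.ignored_warnings >= self.MAX_IGNORED_WARNINGS:
            self.autopilot_enabled = False  # temporary lockout

    def on_hands_on_wheel(self):
        """Driver responded: reset the escalation counter."""
        self.ignored_warnings = 0

    def issue_warning(self):
        pass  # placeholder for the warning light/bell hardware
```

The key design point is the reset on driver response: only *consecutive* ignored warnings, not a lifetime total, should trigger the lockout.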
Tesla is not the only auto maker racing towards autonomous driving, according to CB Insights. The autonomous market is already crowded with technology companies, Tier-1 suppliers and traditional car manufacturers (see above image). In addition, there are divergent views on the timing of deployment, leading to boisterous debate and a lack of standards. The industry is already seeing greater consolidation and partnership among software companies, manufacturers, and ride-sharing services.
Just this week, Mobileye announced a strategic partnership with Intel and BMW to build “an open-source autonomous driving development.” In order to fend off encroachment from tech hardware/software firms like Google, Didi, Uber and Apple, Mobileye is aiming to set the industry standard by creating a united front of established auto manufacturers. According to Morgan Stanley, which raised Mobileye’s stock rating, “the sooner Mobileye can attach its products to real-time data capture and learning from as many billions of miles across as many OEMs as possible, the more likely its position can be defended for longer.” The report notes that Toyota, the world’s largest and most valuable auto company, is not yet a Mobileye customer, a fact it deems “critical” to Mobileye’s long-term value.
Toyota, which has been wrestling with the same safety questions as Tesla, instead favors rolling out safety technologies before it introduces fully autonomous driving. Last year, the world’s largest carmaker announced the formation of a US $1-billion AI research effort, the Toyota Research Institute, to develop new technologies around the theme of transportation. The vision of its CEO, Gill Pratt, is of “guardian angel” systems that allow humans to drive, but leap in at the last second if an accident seems likely. His aspiration is for vehicles to suffer no more than one fatal accident every trillion miles.
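Pratt’s “guardian angel” concept inverts the Autopilot model: the human drives, and the system overrides only when a collision looks imminent. A toy sketch of that decision rule, using a simple time-to-collision trigger (the threshold, function names, and override behavior are illustrative assumptions, not Toyota’s design):

```python
# Toy illustration of a "guardian angel" control loop: the driver's commands
# pass through unchanged unless a crash appears imminent, at which point the
# system overrides. Threshold and structure are illustrative assumptions.

def time_to_collision(distance_m, closing_speed_mps):
    """Seconds until impact at the current closing speed (inf if opening)."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps

def guardian_angel(driver_throttle, driver_steering,
                   distance_m, closing_speed_mps, ttc_threshold_s=1.5):
    """Return (throttle, steering, overridden) for one control step."""
    if time_to_collision(distance_m, closing_speed_mps) < ttc_threshold_s:
        return 0.0, driver_steering, True  # cut throttle: system intervenes
    return driver_throttle, driver_steering, False  # human stays in control

# Normal driving: 100 m gap closing at 10 m/s -> TTC 10 s, no intervention
print(guardian_angel(0.4, 0.0, 100.0, 10.0))  # (0.4, 0.0, False)
# Imminent crash: 10 m gap closing at 20 m/s -> TTC 0.5 s, system overrides
print(guardian_angel(0.4, 0.0, 10.0, 20.0))   # (0.0, 0.0, True)
```

Note how this is the mirror image of Autopilot: the default path is human control, and automation is the exception rather than the rule.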
As I pack my manual car for the weekend, I am reminded that this debate is still in its infancy. My concern is that Tesla’s rush to bring autonomous driving technology to market could poison the well for a successful long-term strategy. Maybe Gill should give his buddy Elon a call…