The Luddites of the lamestream media are looking to throttle autonomous vehicles in the wake of a fatal crash in May involving a Tesla Model S with the Autopilot function engaged. Bill Vlasic in The New York Times writes that Elon Musk’s “…determination to push limits has hit its most formidable roadblock.” He questions “…how much longer Mr. Musk and Tesla can continue to defy auto industry convention in trying to stay so far ahead of the competition.”
He quotes Joseph Phillippi, president of AutoTrends, as saying, “They’ve always had this attitude of invincibility. But what can they say about a self-driving car that drove straight into a tractor-trailer?”
Well, “they” might say a self-driving car is not perfect but still safer than a manually driven car.
Meanwhile, Mike Spector and Jack Nicas in The Wall Street Journal lament that the NHTSA “…lacks authority to approve or disapprove of [Tesla’s Autopilot] technology or meaningfully slow its deployment.” In what might be a first for the Journal, the reporters welcome government officials’ “…first significant chance to flex regulatory muscle.”
They quote Dean Pomerleau, a Carnegie Mellon University professor who has worked on driverless cars for 25 years and led several NHTSA research programs, as saying, “I think NHTSA is going to want Tesla to turn off Autopilot at least until they learn more.”
According to Tesla, “This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the U.S., there is a fatality every 94 million miles.” (NHTSA statistics show 1.08 fatalities per hundred million miles traveled in 2014.)
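To put the quoted figures on a common scale, here is a minimal sketch (an illustration only, not an official analysis; the function name and the rounding are my own) that converts each mileage figure into fatalities per hundred million miles so the three numbers can be compared directly:

```python
# Rough illustration: convert the mileage figures quoted above into
# fatalities per 100 million miles so they can be compared directly.
def rate_per_100m_miles(fatalities, miles):
    """Fatality rate expressed per 100 million miles traveled."""
    return fatalities / miles * 100e6

autopilot = rate_per_100m_miles(1, 130e6)       # Tesla: 1 fatality in ~130M Autopilot miles
all_us_vehicles = rate_per_100m_miles(1, 94e6)  # Tesla: 1 fatality per 94M miles, all U.S. vehicles
nhtsa_2014 = 1.08                               # NHTSA: fatalities per 100M miles, 2014

print(f"Autopilot:         {autopilot:.2f} per 100M miles")        # ~0.77
print(f"All U.S. vehicles: {all_us_vehicles:.2f} per 100M miles")  # ~1.06
print(f"NHTSA 2014:        {nhtsa_2014:.2f} per 100M miles")
```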
Vlasic in the Times fails to note this response in his article, although he does state ominously, “Tesla did not respond to emails Friday about…any plans for possibly alerting vehicle owners about the dangers of misusing the Autopilot feature.”
Maybe because the company didn’t want to repeat itself. According to a Tesla blog post on Thursday, “It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot ‘is an assist feature that requires you to keep your hands on the steering wheel at all times,’ and that ‘you need to maintain control and responsibility for your vehicle’ while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to ‘Always keep your hands on the wheel. Be prepared to take over at any time.’ The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”
Vlasic’s article suggests to me that the Times wants to continue a feud with Tesla stemming from a February 2013 review of a Tesla Model S. In the comments, Dexter Ford writes, “I understand that Elon Musk and Tesla were wrong to vilify a New York Times reporter who, years ago, reported on his Tesla S that died on a winter trip in the Northeast. But that does not make it right to pillory Musk and Tesla for a single accident….” He adds, “Tesla and other companies are, no doubt, refining their automation systems tonight. Humans, on the other hand, are still making the same fatal mistakes they have been making since the invention of the automobile, and will be making them, apparently, forever.”
Adds commenter Michael Moore, “Blaming Mr. Musk for being a pioneer in creating safer, cleaner cars seems petty and shortsighted. Do we curse Henry Ford every time someone dies in a car accident?” And as Asher B. puts it, “It speaks to our backward nature to think that this is newsworthy. If the Times had a major article every time a non-automated car got [into a fatal accident], it would run out of ink in three days. Please, please, please can we get the humans out from behind the wheel and take our chances with the machines, which have every step of the way been far, far safer?”
There is legitimate debate about appropriate levels of autonomy and whether humans can be expected to safely regain control from autonomous operation when necessary. Spector and Nicas write, “Alphabet has been testing autonomous technology for several years, for instance, and consistently said it believes driverless cars must be fully autonomous to meet its safety standards. The company says semi-autonomous systems that require drivers to sometimes take control of the car can be unsafe because drivers put too much trust in the machine and can’t retake control if needed.”
Full autonomy may ultimately prove to be the safest technology. Nevertheless, experience thus far with Tesla’s Autopilot (albeit with a very limited sample size) suggests cars equipped with it are safer than those without. Spector and Nicas at the Journal quote Brad Templeton, a consultant and former Alphabet driverless-car engineer, as saying Tesla’s approach is for the greater good and that driverless cars “are going to save a lot of lives. And letting customers test vehicles could advance the technology faster.”
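To make the “very limited sample size” caveat concrete: a single fatality is far too few events to pin down a rate. A minimal sketch, assuming the standard exact Poisson confidence interval (the helper name and the choice of a 95% interval are mine), shows how wide the plausible range around Tesla’s one-in-130-million-miles figure really is:

```python
# Minimal sketch: exact (Garwood) 95% Poisson interval for k observed events,
# applied to the single Autopilot fatality in ~130 million miles quoted above.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact two-sided confidence interval for a Poisson count k."""
    lower = chi2.ppf(alpha / 2, 2 * k) / 2 if k > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lower, upper

miles = 130e6
lo, hi = poisson_ci(1)
# Convert the count interval into a rate per 100 million miles.
print(f"95% interval: {lo / miles * 100e6:.2f} to {hi / miles * 100e6:.2f} fatalities per 100M miles")
# Roughly 0.02 to 4.3 -- a range wide enough to include both "much safer" and
# "much riskier" than the ~1.08 per 100M miles NHTSA reports for human drivers.
```

That spread is the quantitative content of the sample-size caveat: the early numbers are encouraging, but they cannot yet settle the question on their own.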
And finally, Maggie Koerth-Baker at FiveThirtyEight reminds us that no technology can be completely safe. Referring to Normal Accidents, a 1984 book by Yale sociologist Charles Perrow that grew out of his work on the President’s Commission on the Accident at Three Mile Island, she writes, “Normal accidents are a part of our relationship with technology. They are going to be a part of our relationship with driverless cars. That doesn’t mean driverless cars are bad. Again, so far statistics show they’re safer than humans. But complex systems will never be safe. You can’t engineer away the risk. And that fact needs to be part of the conversation.”