You’ve likely seen the footage. A Dutch man is driving his Tesla Model S on the highway, except he’s not in the driver’s seat — he’s in the back. At 51 mph, the car is navigating by itself, using intelligent software and hardware to keep within the lines and away from other vehicles, all while the driver is at least ten seconds from being able to retake control in the event of an emergency. To call it foolish is a mild understatement.
Human beings have a problem: we are, by and large, a species of idiots. We constantly push boundaries, test extremes, and ignore safety for the sake of that next hit of adrenaline. The anonymous Tesla owner in that video from last year was proof that, when given something powerful that’s intended for our safety, we’ll find a way to be moronic.
The man’s actions were so foolhardy that Tesla CEO Elon Musk promised swift action. In response to the video and others of drivers doing similarly stupid things with the company’s Autopilot system, Tesla issued an update requiring the driver’s hands to remain on the wheel in order for the car to navigate by itself.
“There have been some fairly crazy videos on YouTube…this is not good,” Musk said late last year. “We will be putting some additional constraints on when Autopilot can be activated, to minimize the possibility of people doing crazy things with it.”
Of course, the likelihood of a crash was minimal. Autonomous systems and other intelligent safety features have been shown to reduce accidents by reacting far faster than a human driver can, stamping on the brakes or swerving to avoid collisions. Just last week, an American driver shared footage of his Model S swerving to avoid a truck that had erroneously pulled into his lane — neither the truck driver nor the Model S driver had seen the other vehicle, but the Tesla sensed the impending crash and acted to avoid it.
It’s why Google’s vision for the future of cars is one where the driver isn’t required at all. Unlike people, computers can’t drink and drive, don’t get tired, and can react at speeds far beyond those of the average Joe. Volvo is getting in on the action, too, with expectations that their autonomous vehicles will be on American streets by 2020. Most major manufacturers are working on some degree of autonomy for their vehicles, be it simple accident avoidance systems that can detect crashes before they happen, or fully integrated systems that can take control of a vehicle on the highway or in stop-start traffic.
It’s why Google is now pressuring the government to catch up with the speed of development, and to implement new rules governing the testing of driverless cars. But there’s one group that isn’t happy with Google’s plans: Consumer Watchdog has lodged a complaint with the National Highway Traffic Safety Administration, asking it to halt any fast-track plans it may have and implement several restrictions that it maintains are vital to ensuring safety for road users.
The crux of Consumer Watchdog’s complaint lies in Google’s own data. In 15 months of testing, with over 400,000 miles driven by autonomous vehicles, the internet search giant reported that the systems in its vehicles failed 272 times. In addition, human drivers felt the need to physically intervene a further 69 times. Likely not helping matters is a video from last month showing one of Google’s Lexus RX self-driving test cars sideswipe a bus at low speed — the first accident in which one of its cars bore some responsibility, Google said. For Consumer Watchdog, once was clearly enough, and it’s demanding that any laws written require a driver to be behind the wheel — and to have a wheel to actually grab, something Google’s self-built prototypes lack.
“What the disengagement reports show is that there are many everyday routine traffic situations with which the self-driving robot cars simply can’t cope,” says Consumer Watchdog’s Privacy Project Director John M. Simpson. “It’s imperative that a human be behind the wheel capable of taking control when necessary. Self-driving robot cars simply aren’t ready to safely manage too many routine traffic situations without human intervention.”
The organization has put ten questions to the NHTSA in an effort to force a decision against Google’s fast-tracking. They include: whether Google will publish a list of the situations in which its cars can’t currently cope, and how the NHTSA will deal with that; what Google plans to do if the computer “goes offline” and the passenger has no wheel or pedals to manipulate; whether Google plans to release its algorithms, including whether a robot driver would prioritize the vehicle’s occupants or pedestrians in an accident; and how Google will prove that its driverless cars are safer than today’s vehicles.
“NHTSA officials have repeatedly said safety is the agency’s top priority,” Simpson continued. “You must not allow your judgment to be swayed by rosy, self-serving statements from companies like Google about the capabilities of their self-driving robot cars.”
For the NHTSA, which has promised to “develop guidance on the safe deployment and operation of autonomous vehicles” by the middle of this year, dealing with Google’s request is a matter of urgency: there are already driverless systems out there, but no regulations in place to deal with them. The NHTSA’s concern, according to AP, is that “people are just going to keep putting stuff out on the road with no guidance on how do we do this the right way.”
And how we keep idiots from climbing into the backseat at 50 mph.