The incident in Arizona might have been a fluke. But the mysteries of AI make it possible that more was going wrong.
The first thing to understand about the fatal incident involving a pedestrian and a self-driving Uber vehicle is the lay of the land, literally.
Primary avenues in metropolitan Phoenix are very wide. The typical right of way for a major “arterial” is 140 feet. That’s about twice the width of busy Fourth Avenue in downtown Seattle.
This is a metro engineered for driving, moving cars quickly and efficiently. Thousands of shade trees were felled over the decades to widen streets.
Signaled intersections where pedestrians can safely cross can be half a mile apart, and even then the size of the streets makes them treacherous. At night, the wide black asphalt swallows up the illumination of streetlights and headlights.
Not surprisingly, all this contributes to Arizona having the highest rate of pedestrian deaths in the nation. And this dire distinction, double the national average, comes with humans behind the wheel.
Uber couldn’t have picked a more challenging place to test self-driving vehicles. But despite experiments in Pittsburgh and San Francisco, Arizona became its principal laboratory, thanks to Gov. Doug Ducey’s promise of an unregulated, “pro-business” environment.
How much these conditions played into the death of Elaine Herzberg, 49, in suburban Tempe remains unknown as the investigation continues. Uber suspended its tests immediately, and Ducey followed with a similar order this week.
We know she was hit by an Uber Volvo XC90 sport-utility vehicle operating in autonomous mode. Neither the sensing technology in control nor the human “safety driver,” meant to take over in an emergency, detected her in time to apply the brakes. The SUV was traveling at least five miles per hour under the 45-mph (!) speed limit.
Not surprisingly in today’s America, battle lines were quickly drawn. The incident was initially downplayed by police and some media reports: The woman was jaywalking, came “out of nowhere” at night, had a criminal record and may have been homeless. Critics of autonomous vehicles were out in force on social media.
But the initial police version, that the woman appeared suddenly, was disproved by a video showing what the vehicle saw. She is clearly visible crossing the street, with several seconds of reaction time available.
As Laura Bliss wrote in CityLab, “The fundamental safety promise of autonomous vehicles, after all, is their ability to automatically detect and brake for people, objects and other vehicles using laser-based LIDAR systems: In darkness and light, they’re supposed to be programmed to drive far more safely than humans.”
Larger issues are being litigated here, too: The ubiquity of artificial intelligence, its reliability, consequences and trajectory, all at a time when Big Tech is under increasing criticism. Our society’s hunger for techno-magic is running up against tough realities, whether Facebook’s role in the 2016 election or the hyped self-driving future.
It might be a situation confined to one company.
According to the New York Times, Uber’s self-driving car program was struggling even before the fatal incident. “The cars were having trouble driving through construction zones and next to tall vehicles like big rigs. Uber’s human drivers had to intervene far more frequently than the drivers of competing autonomous-car projects.”
Among them are Waymo, a subsidiary of Google parent Alphabet, and Cruise, the self-driving car company owned by General Motors.
Uber was having trouble meeting its target of 13 miles before the self-driving car required the intervention of the safety driver. By contrast, Waymo cars can drive 5,600 miles before an intervention.
Dara Khosrowshahi, the former chief executive of Expedia, considered shutting down the experiment when he took over at scandal-plagued Uber this past August. He changed his mind, convinced the company needed self-driving vehicles for its future.
Much will depend on whether investigators find this a rarity that can be corrected, or whether it undermines confidence in self-driving cars. Perfect safety is impossible, but if these cars can’t be safer than human-driven vehicles, they shouldn’t be on the public streets.
An overlooked element is the importance of artificial intelligence to these vehicles. Engineers are making rapid strides in this technology, from military uses to Amazon’s Alexa. The goal is to build neural networks that allow machines to process millions of pieces of data and learn on their own. But much of this highly complex effort is mysterious even to experts. We don’t fully understand even the neural networks of the human brain.
In the case of vehicles, enthusiasts have much riding on the concept: combining a robot with a car to offer a breakthrough in convenience and, with electric vehicles, far lower greenhouse-gas emissions. Competition is international, including major automakers.
The revolution will eventually come, and its costs won’t be limited to accidental deaths. Professional drivers will lose their jobs, perhaps gradually but inevitably.
Don’t look for autonomous vehicles to ease traffic congestion. They are, after all, additional cars on the road. And they’ll be mixed with drivers who won’t give up their steering wheel or can’t afford to do so. They won’t ease the need for transit in urban areas but could ease the “last mile” from station to suburban home.
Before that happens, we need to know what happened to cause the death of Elaine Herzberg.