The company said a Model X was using Autopilot when it hit a concrete divider on March 23. The system, one analyst said, “works fine, until it suddenly doesn’t.”


In fall 2016, Tesla beamed new software over the air to its cars in the United States and elsewhere, adding safeguards to its Autopilot system meant to keep drivers from looking away from the road or taking their hands off the steering wheel for long periods.

The move came in the wake of a crash in Florida in which an Ohio man died when his Model S sedan hit a tractor-trailer while Autopilot was engaged. Federal investigators found that the driver’s hands had been on the steering wheel for only a few seconds in the minute before the crash.

When the upgrades were released, Tesla’s chief executive, Elon Musk, said the new Autopilot system was “really going to be beyond what people expect” and would make the Tesla Model S sedan and the Model X sport-utility vehicle the safest cars on the road “by far.”

Now, however, Tesla’s semiautonomous driving system is coming under new scrutiny after the company disclosed late Friday that a fatal crash March 23 in California occurred while Autopilot was engaged.

The company said the driver, Wei Huang, 38, a software engineer for Apple, had received several visual and audible warnings to put his hands back on the steering wheel but had failed to do so, even though his Model X SUV had the updated version of the software. His hands were not detected on the wheel for the six seconds before the Model X slammed into a concrete divider near the junction of Highways 101 and 85 in Mountain View, and neither Huang nor Autopilot activated the brakes before the crash.

The accident renews questions about Autopilot, a signature feature of Tesla vehicles, and whether the company has gone far enough to ensure that it keeps drivers and passengers safe.

“At the very least, I think there will have to be fundamental changes to Autopilot,” said Mike Ramsey, a Gartner analyst who focuses on self-driving technology. “The system as it is now tricks you into thinking it has more capability than it does. It’s not an autonomous system. It’s not a hands-free system. But that’s how people are using it, and it works fine, until it suddenly doesn’t.”

On Saturday, Tesla declined to comment on the California crash or to make Musk or another executive available for an interview. In its blog post Friday about the crash, the company acknowledged that Autopilot “does not prevent all accidents,” but said the system “makes them much less likely to occur” and “unequivocally makes the world safer.”

For the company, the significance of the crash goes beyond Autopilot. Tesla is already reeling from a barrage of negative news. The value of its stock and bonds has plunged amid increasing concerns about how much cash it is using up and the repeated delays in the production of the Model 3, a battery-powered compact car that Musk is counting on to generate much-needed revenue.

It is also facing an investor lawsuit related to Tesla’s acquisition of SolarCity, a solar-panel maker where Musk was serving as chairman.

Autopilot uses radar and cameras to detect lane markings, other vehicles and objects in the road. It can steer, brake and accelerate automatically with little input from the driver. Tesla readily points out that Autopilot — despite the implications in its name — is only a driver-assistance system and is not intended to pilot cars on its own.

At least three people have now died while driving with Autopilot engaged. In January 2016, a Chinese owner was at the wheel of a Model S when the car crashed into a road sweeper on a highway.

The National Transportation Safety Board is investigating the March 23 crash that killed Huang. Its investigation of the 2016 Florida accident concluded that Autopilot “played a major role” and that the system lacked safeguards to prevent misuse by drivers.