Uber's self-driving cars are back on the roads after the program was temporarily halted following a crash in Tempe, Arizona. Getty

In the wake of a tragic self-driving car accident that took the life of a pedestrian last week in Tempe, Arizona, waves of troubling questions about autonomous vehicle road-testing have broken over this corner of the automotive industry. Have self-driving cars been allowed onto public roadways prematurely? Are stricter government oversight and regulations urgently needed to guarantee safer rollout of driverless cars? And should this closer scrutiny include a demand for autonomous car makers to accumulate the billions of test miles required to validate their vehicles?

The fatal accident involving an Uber autonomous car shocked the autonomous vehicle (AV) industry. On a mild day with a light breeze, the self-driving car’s sensors and computer should have easily read the environment. What went wrong? The sensors might have been confused by the large, complex intersection where it happened, a site where pedestrians would be unexpected, and although there was a crosswalk, the victim wasn’t walking within it. We’ll need to wait for a full investigation to understand what happened and learn if the car was to blame.

Whatever the result, it’s highly likely that new regulations and procedures will emerge. In the U.S., existing regulations for AV road tests are established at the state level and vary greatly from California to Pennsylvania. There’s no consensus on where the lines should be drawn from city to city. San Francisco wants to add another layer of local regulations to the state requirements, including hands-on training for local police, while Pittsburgh has no interest in making more rules. “You can either put up red tape or roll out the red carpet,” Mayor Bill Peduto says. “If you want to be a 21st-century laboratory for technology, you put out the carpet.”

In Arizona, officials have chosen to skip reporting requirements for AV makers testing their cars in the state, unlike some other states.

Lastly, there are no publicly known procedures or standards among industry players for putting autonomous vehicles on the road. Following recent events, perhaps now would be the right time to reconsider this state of affairs.

Simulate Before You Test Drive

The RAND Corporation estimates that it will take 11 billion miles of driving before any autonomous vehicle can match the capabilities of a human driver. The Arizona accident sadly confirms that autonomous vehicles have years of development and validation ahead before they're truly ready for the roads. That's a compelling reason to ensure that all autonomous vehicles undergo exhaustive testing in a risk-free environment: a cloud-based simulation platform built with artificial intelligence, deep learning, and computer vision.

To simulate real-world test driving, a simulation engine creates a realistic virtual automotive environment, reproducing an AV's sensor input in high fidelity and accurately emulating the car's interactions with the real world. Further, the simulation technology provides a hyper-realistic test environment, recreating real cities anywhere in the world with striking accuracy. Once data is collected via the simulated sensors, the platform runs the vehicle's own algorithms to analyze it.
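The render-perceive-step cycle described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration under simplifying assumptions (a one-dimensional world, a single obstacle, a fixed braking threshold); the function names are invented for this sketch and do not come from any real simulation platform:

```python
# Minimal sketch of a sensor-in-the-loop simulation step.
# All names and numbers here are illustrative assumptions.

def render_sensor_frame(world_state):
    """Stand-in for the renderer that produces synthetic camera/LiDAR input."""
    return {"obstacle_distance_m": world_state["pedestrian_x"] - world_state["car_x"]}

def perceive_and_plan(frame):
    """Stand-in for the vehicle's own software under test."""
    return "brake" if frame["obstacle_distance_m"] < 20.0 else "cruise"

def step(world_state, action, dt=0.1):
    """Advance the virtual world by one time slice."""
    speed = 0.0 if action == "brake" else world_state["car_speed_mps"]
    world_state["car_x"] += speed * dt
    return world_state

world = {"car_x": 0.0, "car_speed_mps": 15.0, "pedestrian_x": 30.0}
log = []
for _ in range(20):
    frame = render_sensor_frame(world)   # 1. emulate sensor input
    action = perceive_and_plan(frame)    # 2. run the car's algorithms
    world = step(world, action)          # 3. advance the virtual world
    log.append((round(world["car_x"], 1), action))

print(log[-1])
```

Because every quantity in the loop is under the platform's control, a run like this can be replayed, perturbed, and scored millions of times with no physical risk.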

A simulation environment allows car makers to amass the miles needed for validation within days, shaving years and risks off the process. Simulation is sure to become a key technology in the quest to provide safer validation for autonomous vehicles before they take to the roads in the real world.

Teaching AVs To Drive Safely

The simulation platform does more than test AVs; it teaches them how to drive safely, too. Simulation prepares autonomous vehicles for the unexpected situations they're bound to encounter in the real world. AVs can also learn to make decisions on the fly just as human drivers must (how to deal with a stopped school bus, for instance, or an ambulance flashing its lights), but within the safety of a simulated environment.

In this light, perhaps the most important lesson of the Arizona case is the need to train self-driving cars not to assume that pedestrians will always obey traffic rules, but on the contrary, to expect irrational behavior from them. A simulation platform can train AVs to handle the unexpected in many forms by simulating a range of situations that could happen at any time on public roads. These so-called edge cases could include unpredictable pedestrian behavior as well as reckless driving by other (human) drivers on the road.
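One way a platform could bias its training data toward rule-breaking pedestrians is randomized scenario sampling. The sketch below is a hypothetical illustration; the parameter names, ranges, and probabilities are assumptions made for this example, not any real platform's configuration:

```python
import random

# Illustrative edge-case scenario generator; all parameters are assumptions.
random.seed(7)  # reproducible scenario batch

def sample_pedestrian_scenario():
    """Draw one randomized pedestrian scenario, biased toward rule-breaking."""
    return {
        "crossing_point": random.choice(["crosswalk", "mid-block", "median"]),
        "obeys_signal": random.random() < 0.3,   # most sampled walkers jaywalk
        "walking_speed_mps": round(random.uniform(0.5, 2.5), 2),
        "emerges_from_occlusion": random.choice([True, False]),
    }

batch = [sample_pedestrian_scenario() for _ in range(1000)]
jaywalking = sum(1 for s in batch if not s["obeys_signal"])
print(f"{jaywalking} of {len(batch)} scenarios involve a pedestrian ignoring signals")
```

Skewing the sampling distribution this way means the vehicle under test spends most of its simulated miles on exactly the rare, dangerous cases it would almost never encounter in real-world test driving.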

At a more basic level, an autonomous vehicle validation platform can reveal any safety “bugs” in a self-driving car, and ensure that those bugs will be fixed before they ever make it to the public roads.

Better Than The Real Thing

Real-world road tests could run for years upon years and still fail to show autonomous vehicles all the variables a simulation platform can generate. The variations on everything that affects an AV’s performance are unlimited, including weather, traffic, time of day, and road conditions.

With an intricate collection of simulated sensors (including cameras, radar, and LiDAR) precisely replicating the sensors of a real self-driving vehicle, as well as the vehicle's mechanical behavior, every simulated test mile trains the autonomous vehicle to understand its environment more thoroughly, and therefore to drive far more safely in the physical world.

Expecting The Unexpected

More than half of Americans would rather not step into a driverless car, according to a Pew Research survey. Most of these skeptics doubt that "machines could ever handle the unpredictability inherent in driving; worry about ceding their agency to a process they do not fully understand; or simply view humans as uniquely well-prepared to handle unexpected situations on the road." They're not wrong to be cautious. In fact, we might be facing another decade before autonomous vehicles are not only as safe as human-controlled vehicles, but actually safer.

That’s why new validation procedures like simulation are destined to become an integral part of the onboarding process for bringing autonomous vehicles to the real world. Until the day that self-driving cars bypass the human ability to expect the unexpected, we can call on simulation technology to get autonomous vehicles where they need to be, as quickly as possible.

Danny Atsmon is the CEO of Cognata
