A recent fatal crash involving a fully automated vehicle has again called into question the accountability of the different parties involved in automated vehicle development.
This latest incident involved an Uber Volvo autonomous vehicle, which struck and killed a pedestrian directly in its path on a darkened road. Had this been a conventional vehicle, there would undoubtedly have been an investigation into the lighting conditions, the victim's clothing, the driver's state of alertness, and so on.
But what makes this case important is that the failures involved are precisely the capabilities consumers have been told make autonomous vehicles safer: collision mitigation, pedestrian detection and automatic braking, among others.
It is, of course, raising questions about whether manufacturers are moving too quickly to deploy the technology before all the potentially catastrophic bugs have been worked out.
Ironically, even though autonomous vehicles aren't scheduled for mass production for several years, many of the safety systems they will rely on are already in use on our roads in various capacities: adaptive cruise control, lane-keeping assist, and front collision avoidance with crash mitigation and automatic braking, for example.
The main difference between current and future motoring is human involvement. Today, road regulations require the driver to be in control of the vehicle; autonomous vehicles, by contrast, are being promoted as allowing (even encouraging?) drivers to be distracted, as video footage shows the Uber driver was, because the vehicle's computer-controlled systems will react to emergency situations, reportedly much more quickly than a human could.
Makers such as Subaru already have cars on our roads that use a camera system to detect a stationary or slow-moving object ahead and, if the driver doesn't respond to visual and audible warnings, automatically bring the car to a complete stop from speeds of up to 140 km/h. What even Subaru's EyeSight system can't overcome, though, is physics: vehicles travelling at higher speeds need longer distances to come to a complete stop.
The autonomous Volvo was reportedly travelling at 61 km/h (38 mph), slightly over the posted limit of 35 mph on that stretch of road. Based on video of the dimly lit street, in which the victim appears in the vehicle's headlights only moments before impact, it's unlikely the crash could have been avoided.
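The stopping-distance physics at issue can be sketched with a short calculation. The deceleration rate and reaction times below are illustrative assumptions (a hard stop on dry pavement, a typical attentive driver, a hypothetical sensor latency), not figures from the investigation:

```python
def stopping_distance(speed_kmh, reaction_s, decel=7.0):
    """Distance travelled before a full stop: distance covered during
    the reaction time plus the braking distance v^2 / (2a).

    decel=7.0 m/s^2 is an assumed hard-braking rate on dry pavement.
    """
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel)

# Reported speed (61 km/h) vs. the posted limit (35 mph ~ 56 km/h)
for kmh, label in [(61, "reported"), (35 * 1.60934, "posted limit")]:
    human = stopping_distance(kmh, reaction_s=1.5)    # attentive driver
    machine = stopping_distance(kmh, reaction_s=0.2)  # assumed sensor latency
    print(f"{label}: human ~{human:.0f} m, automated ~{machine:.0f} m")
```

Even with a near-instant reaction, stopping from 61 km/h still takes on the order of 20 m of pure braking distance, which no detection system can shorten.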
But a couple of the autonomous systems should have changed the circumstances. First, road sign detection should have kept the vehicle at, or at least close to, the posted speed limit.
Second, pedestrian detection systems are supposed to detect a person far better than the human eye, relay warnings to the driver and initiate braking. In this case, braking was apparently never initiated before impact. That's worrisome, because the main selling point of autonomous systems is safety: the technology is supposed to outperform human reaction at every stage of a crash scenario (detection, reaction, resolution). In this case, it didn't.
When a system fails at one of the primary things it's promoted to do, namely make roads safer (and, in Volvo's case, work toward zero road fatalities), even this single incident carries a lot of weight in declaring it a massive failure.
Uber has already suspended its autonomous vehicle test program, and the state of Arizona, which had positioned itself as a leader in autonomous vehicle road testing, has now also halted testing on its roads.
The question now becomes “can autonomous driving overcome this tragedy?” Many are hoping it doesn’t.