Cybersecurity firm wary of autonomous systems

Kaspersky says layer of software missing that makes "human" decisions

Published: October 6, 2016, 4:00 PM
Updated: November 21, 2021, 3:23 PM

Volvo Autonomous Driving

An unexpected company is speaking out about the potential dangers of autonomous vehicles. Kaspersky, a world leader in cybersecurity, is cautioning people about the one thing it says is missing in the development of autonomous vehicles — a layer of software that can manage all the systems as well as a human being does.

The company admits the hardware in autonomous vehicles is neither faulty nor overly complex; much of it consists of quite ordinary components similar to those found in very early automobiles. And some of the newer technologies effectively replace "human" traits: embedded cameras that watch the road and other road users, suspensions that adapt to road imperfections to keep the ride smooth, and interconnectivity that finds the fastest route from point A to point B.

But the technology can't yet take in all that information and work out a best-option scenario. It is currently programmed to act in the "ideal" way, which isn't always the least-harmful way.

As an example of that lack of attention to road and traffic conditions, it points to the one documented autonomous-driving fatality earlier this year, in a Tesla. The car failed to detect a truck that had failed to yield the right of way. The truck driver's error led to the crash, but the autonomous system should have detected the obstacle and didn't. The Tesla's driver should also have had the presence of mind to lessen the severity of the impact, if not prevent it altogether, by braking or steering away, but he may have been lulled by the assumption that if anything were to happen, the car would handle it.

To further highlight its point of view, Kaspersky points to no-win scenarios in which, under some collision models, a computer would decide to save the car without considering the safety of pedestrians or even the vehicle's own occupants.

Say, as the company suggests, a child wanders into the path of an oncoming autonomous car. The computer might choose to brake hard, meaning the car would continue on its trajectory and, if it couldn't stop in time, injure or even kill the child. A human driver would in all likelihood choose to avoid the child, even at personal risk: steering away and hitting a tree, for example, might result in frontal airbag deployment and still allow him or her to walk away relatively unscathed.

The other extreme example involves a moose wandering onto the road. The autonomous car might again brake aggressively, which could result in a full impact with the moose and perhaps serious injury to everyone in the vehicle, whereas a human driver in control might slow the car enough to let the moose cross, veer around it, and continue driving.

It's important to note that some of today's more sophisticated systems already discern between different types of objects in their path (cyclists, animals, other cars, and so on), and others can map roadsides to differentiate between fire hydrants and garbage cans, for example. It's therefore conceivable that a computer could be programmed with enough logic to identify the movements of a small child or a large moose in the road and scan for alternatives, steering away or slowing and steering, in order to avoid the collision.
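The kind of decision logic described above can be sketched as a simple harm-minimizing maneuver selector. This is purely illustrative: the maneuvers, obstacle types, and harm scores below are made-up assumptions for the sake of the example, not any manufacturer's or Kaspersky's actual algorithm.

```python
# Hypothetical sketch of a harm-minimizing maneuver selector.
# All object types, maneuvers, and harm weights are invented for illustration.

def estimated_harm(maneuver, obstacle, stopping_distance, gap_to_obstacle):
    """Return a rough harm score (0.0 = no harm) for a candidate maneuver.

    The numeric weights are arbitrary placeholders, not real crash data.
    """
    if maneuver == "brake":
        # Hard braking only avoids harm if the car can stop in time.
        if stopping_distance <= gap_to_obstacle:
            return 0.0
        # Otherwise the harm of the impact depends on what is hit.
        return {"child": 10.0, "moose": 8.0, "car": 6.0}.get(obstacle, 5.0)
    if maneuver == "swerve":
        # Swerving spares the obstacle but risks the occupants (e.g. a tree).
        return 3.0
    if maneuver == "slow_and_steer":
        # Scrubbing speed while steering around: lowest risk if there is room.
        return 1.0
    raise ValueError(f"unknown maneuver: {maneuver}")


def choose_maneuver(obstacle, stopping_distance, gap_to_obstacle):
    """Pick the maneuver with the lowest estimated harm score."""
    options = ["brake", "swerve", "slow_and_steer"]
    return min(options,
               key=lambda m: estimated_harm(m, obstacle,
                                            stopping_distance,
                                            gap_to_obstacle))
```

Under these toy weights, the selector brakes when it can stop short of the obstacle, and otherwise prefers slowing while steering around it, roughly the "human" response the article describes for the moose scenario. The hard part in a real system, of course, is producing trustworthy inputs and harm estimates, which is exactly the missing "human" layer Kaspersky is talking about.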

But the company also points out that some people won’t always make the “humane” decision either.

Kaspersky references a survey of 80,000 people in Russia by Cognitive Technologies, a developer of automated systems for cars. Anybody with an internet connection and YouTube has seen the driving exploits of Russians, but you may be unprepared to learn that, faced with a pedestrian in the road and another vehicle in the oncoming lane, 38% said they would hit the pedestrian. Only 3% would steer toward the oncoming car in the hope that it would also swerve to avoid a collision. The remaining 59% would drive off the road to avoid the pedestrian.

If it were a group of pedestrians, only 26% said they would hit them, while 71% would consider veering off the road. If it were a dog running into the road, however, 55% said they would hit it. And given the option of braking hard and getting rear-ended by the car behind, 40% said they would choose that.

Kaspersky quoted another survey, this time of Americans, which concluded that an autonomous vehicle should prioritize pedestrian lives over those of its own occupants, and the more people in the pedestrian group, the more respondents believed the car should save the pedestrians at the cost of its occupants. About three in four people (76%) said the car should save the pedestrians and injure its passengers, if that's the choice. However, asked whether they would buy an autonomous vehicle that would act in such a manner, just 19% said they would consider it.

The company concludes that, barring the quick development of such artificial intelligence, perhaps the solution is to have pedestrians wear reflective clothing, as many European jurisdictions already require for night-time walks, so vehicle systems can identify them more quickly and take decisive, non-catastrophic action.

We don’t know how you’d go about making moose or deer more reflective at night, though.