
Web Summit addresses issues around self-driving cars

Cars to make decisions like humans?

It would be impossible to ignore the buzz surrounding the Web Summit this week at the RDS in Dublin. One interesting piece of coverage for us was an article on the US tech outlet CNET. The article concerns the development of the self-driving car and considers the insights of Stanford engineering professor Chris Gerdes, who has been examining the complexities of programming self-driving cars to make moral decisions. For instance, in the event of an unavoidable accident, how can a car choose where it will inflict the least damage? Or, as the article puts it:

“But what about deciding which people to kill when an accident is unavoidable?”

One option is to give rules to the vehicle in advance: for example, “don’t hit pedestrians, don’t hit cars, don’t hit other objects”. This seems fine if all cars are programmed this way, but self-driving vehicles will have to share the road with human-driven vehicles for some time, and such rules hardly seem realistic when an impact not of the car’s own making is imminent. Another possibility is to use what is known as “projection of consequences”. This is what humans do when faced with a perilous situation: weighing up the consequences of each possible action before making a decision. On Tuesday at the Web Summit, Gerdes remarked that a self-driving vehicle would have to weigh up questions like these: “Should it hit the person without a helmet? The larger car or the smaller car?”
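To make that idea concrete, here is a minimal sketch in Python of what “projection of consequences” could look like in code. Everything in it, the names, the simulated outcomes, and the harm scores, is a hypothetical illustration, not Gerdes’s or any manufacturer’s actual system: each candidate action is projected forward, its predicted outcome is scored, and the least harmful option wins.

```python
# Minimal sketch of "projection of consequences": simulate each candidate
# action, score the projected outcome, and choose the lowest expected harm.
# All names, outcomes, and numbers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Outcome:
    struck: str                  # "none", "motorcyclist", or "car"
    helmet: bool = False         # only meaningful when a motorcyclist is struck
    struck_mass_kg: float = 0.0  # only meaningful when a car is struck

def project(action: str) -> Outcome:
    """Stand-in for the physics model that projects an action forward in time."""
    simulated = {
        "brake_straight": Outcome("car", struck_mass_kg=2000.0),
        "swerve_left":    Outcome("motorcyclist", helmet=False),
        "swerve_right":   Outcome("motorcyclist", helmet=True),
    }
    return simulated[action]

def harm(outcome: Outcome) -> float:
    """Hypothetical harm score for a projected outcome; lower is better."""
    if outcome.struck == "none":
        return 0.0
    if outcome.struck == "motorcyclist":
        return 100.0 if not outcome.helmet else 80.0
    # A heavier struck vehicle tends to protect its occupants better,
    # so the projected harm falls as its mass rises.
    return 60.0 * (1500.0 / max(outcome.struck_mass_kg, 1.0))

actions = ["brake_straight", "swerve_left", "swerve_right"]
best = min(actions, key=lambda a: harm(project(a)))
print(best)  # the action whose projected consequences score lowest
```

The uncomfortable part is not the minimisation, which is trivial, but the numbers: someone has to decide that a helmeted rider scores 80 and an unhelmeted one 100, which is exactly the moral question Gerdes is raising.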


Another interesting issue is that of self-sacrifice. A human driver would be far less likely to sacrifice his or her own vehicle, whereas a self-driving car could easily be programmed to do so, if that outcome were the least catastrophic available.
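Continuing the same hypothetical scoring, self-sacrifice falls out of the arithmetic naturally: harm to the car and its own occupants is just another number in the comparison, with no instinct for self-preservation to override.

```python
# Hypothetical illustration of programmed self-sacrifice: an option that
# damages only the car itself competes on equal terms with the others,
# so it wins whenever it is the least catastrophic outcome on the table.

candidate_harms = {
    "brake_straight": 45.0,  # projected harm to others (from the sketch above)
    "swerve_right":   80.0,
    "run_off_road":   20.0,  # harms only the vehicle and its own occupants
}

best = min(candidate_harms, key=candidate_harms.get)
print(best)  # -> "run_off_road": sacrifice the car, minimise total harm
```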

Self-driving cars use an array of sensors — lasers, radar, 2D and 3D cameras — to sense what’s around them and make decisions 200 times per second about the best course of action. Eventually, Gerdes believes, it’s likely those decisions will come to resemble those that humans make. That includes breaking laws — crossing a double-yellow line to pass a stopped car, for example, or breaking the speed limit to pass another car as swiftly as possible.

“If we want to actually drive with these cars as other participants in this dance of social traffic, we may need them to behave more like we do, to understand that rules are more like guidelines. That could be a much more significant challenge.”
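As a rough illustration of the cadence described above, 200 decisions per second means a 5 ms budget per cycle to read the sensors, pick an action, and actuate it. The sketch below shows the shape of such a fixed-rate loop; every function in it is a hypothetical stand-in, not any vendor’s actual architecture.

```python
# Rough sketch of a fixed-rate sense-decide-act loop at 200 Hz,
# i.e. a 5 ms budget per decision. All functions are hypothetical stand-ins.

import time

CYCLE_HZ = 200
CYCLE_S = 1.0 / CYCLE_HZ  # 5 ms per decision cycle

def read_sensors() -> dict:
    """Stand-in for fusing laser, radar, and 2D/3D camera data."""
    return {"obstacles": [], "speed_mps": 13.4}

def decide(world: dict) -> str:
    """Stand-in for a planner such as the consequence-projection sketch above."""
    return "hold_lane" if not world["obstacles"] else "brake_straight"

def actuate(action: str) -> None:
    """Stand-in for steering, throttle, and brake commands."""

for _ in range(CYCLE_HZ):  # run one simulated second of driving
    started = time.monotonic()
    actuate(decide(read_sensors()))
    # Sleep off whatever remains of the 5 ms budget to hold the 200 Hz rate.
    remaining = CYCLE_S - (time.monotonic() - started)
    if remaining > 0:
        time.sleep(remaining)
```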