Tuesday, December 29, 2015

The Problem of Consciousness in Self-Driving Cars

Technology Review has an article on why self-driving cars must be programmed to kill, one of their best of 2015, and one that attracted a whole lot of comments. The starting point is the old philosophical dilemma: in a choice between killing one and killing many, which is the right choice? Do you push the fat man onto the railroad tracks to derail a train bearing down on a stopped school bus, or whatever? Does a self-driving car go off the road and over the cliff to avoid killing people in the road, if doing so kills the driver?

It strikes me as a problem only for a self-driving car that is conscious. What do I mean? A computer processes one bit of information at a time; it's sequential. The philosophical dilemma is one of consciousness: because humans are conscious we know, or think we know, things simultaneously: the fat man, the school bus, and the possible different courses of action, all at once.

But how would a computer know those things? Say it's driving a car which rounds a curve on a mountain road. Maybe it knows there's no shoulder on the side, just guard rails, which it will try to avoid. At some point it starts to see something in the road. It starts braking immediately. It doesn't take the time to distinguish between live people and dead rocks; it just does its best to stop, perhaps being willing to hit the guard rail a glancing blow. Presumably its best is a hell of a lot better than a human's: its perception is sharper, its decision-making quicker, its initial speed perhaps slower. I suspect the end result will be better than either of the alternatives posed in the philosophy class.
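To make the point concrete, here's a rough sketch in Python of that "brake first, classify later" policy. Everything in it is hypothetical, invented for illustration: the names, the numbers, the steering heuristic. No real self-driving stack works exactly this way. But it shows how the decision can be made without the car ever "knowing" what it's braking for.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    """Something the sensors see in the road; not yet classified."""
    distance_m: float  # range to the nearest object ahead

def control_step(speed_mps: float,
                 detection: Optional[Detection],
                 max_brake_mps2: float = 8.0) -> dict:
    """One control-loop tick: brake on any detection, classify later."""
    if detection is None:
        return {"brake_mps2": 0.0, "steer": 0.0}

    # Brake immediately and at full force; classification (person vs.
    # rock) can run in parallel but never gates the braking decision.
    # Stopping distance under constant deceleration a is v^2 / (2a).
    stopping_m = speed_mps ** 2 / (2 * max_brake_mps2)

    # If we can't stop in time, shade toward the guard rail so any
    # contact is a glancing blow rather than a head-on hit.
    steer = 0.3 if stopping_m > detection.distance_m else 0.0
    return {"brake_mps2": max_brake_mps2, "steer": steer}

# Example: at 20 m/s (~45 mph) with 8 m/s^2 of braking, stopping
# distance is 400 / 16 = 25 m, so anything spotted closer than 25 m
# also gets the steering nudge toward the rail.
print(control_step(20.0, Detection(distance_m=18.0)))

Note that nothing in this loop weighs one life against many; the only inputs are a range and a speed. The trolley problem never comes up because the machine never holds both horns of the dilemma in view at once.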

The self-driving car is going to be optimized for its capacities, which don't include consciousness.

