We could argue that we don't need self-driving cars because we need to get rid of cars altogether. But that would make Don Quixote look like a reasonable fellow.
So we had better get on with understanding the new technologies and looking for ways to apply principles of sustainability, so that this inevitable technology can help us live in harmony with our world. A video just released by Tesla Motors gives a glimpse of how the self-driving cameras and software "see" the world. With a reported refresh rate of (an amazing) 2100 frames per second, cameras assess other vehicles, pedestrians, road signs and markings, and the vehicle reacts appropriately to the surrounding conditions.
If you are short on time, zip to about 1:20 into the video, where the Tesla comes upon two joggers on the right shoulder. It is hard to imagine a human reacting with such care -- the car slows down enough (with regenerative braking!) that one can see the two joggers pull away from the car for a brief second before the car plots its route past them and proceeds.

Autopilot Full Self-Driving Hardware from Tesla Motors on Vimeo.
But all of this technology raises huge ethical questions, not the least of which is "who is responsible in an accident? The owner of the car or the designer of the self-driving hardware and software?" As the growth of this technology seems to be outstripping the rate at which people are thinking about the consequences, it is good news that Carnegie Mellon University just received a $10 million gift dedicated to studying the ethical issues posed by artificial intelligence.
Carnegie Mellon President Subra Suresh tells NPR that he was privileged to join the mayor of Pittsburgh, where Uber is testing self-driving cars, in an inaugural ride. Suresh points to driverless cars as an example of the issues on the table:
"Take driverless cars. If there's an accident involving a driverless car, what policies do we have in place? What kind of insurance coverage do they have? And who needs to take insurance?"
Where liability for road accidents used to fall on the driver of the car, driverless cars look more like a product liability question. If companies that manufacture cars are on the hook for accident damages, medical costs, and legal fees, how will that change the cost of a car? What does that mean for public infrastructure design? Wouldn't it be ironic if self-driving liability becomes the driver for the end of the personal automobile?
And how will companies program their self-driving software when the car has to make ugly choices, like whether to swerve into a light post to avoid hitting a child in the street? One can imagine that smart technology might reduce the number of such accidents relative to the performance of human drivers, but not that the number will go to zero.
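To make the dilemma concrete, here is a deliberately oversimplified sketch of how such a trade-off might be framed in code. Everything here is hypothetical: the maneuver names, collision probabilities, and harm weights are invented for illustration, and no real autonomous-driving stack reduces ethics to a two-entry lookup like this. It simply shows why the programming question is really a values question -- someone has to pick the numbers.

```python
# Hypothetical illustration of the "ugly choice" problem: a toy cost model
# that picks the maneuver with the lowest expected harm. The options and
# weights below are invented for the example.

def choose_maneuver(options):
    """Return the maneuver minimizing expected harm (probability x severity)."""
    return min(options, key=lambda o: o["p_collision"] * o["harm"])

options = [
    # Staying in lane risks striking the pedestrian (high harm weight).
    {"name": "brake_in_lane",  "p_collision": 0.3, "harm": 10},
    # Swerving into the post likely damages the car and risks the occupant.
    {"name": "swerve_to_post", "p_collision": 0.9, "harm": 2},
]

best = choose_maneuver(options)
print(best["name"])  # -> swerve_to_post (0.9 * 2 = 1.8 beats 0.3 * 10 = 3.0)
```

The unsettling part is not the arithmetic but the inputs: whoever assigns the harm weights is encoding a moral judgment, which is exactly the kind of question the Carnegie Mellon effort is meant to study.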
The Carnegie Mellon grant is but a drop in the bucket of investments needed to stay on top of the rate of technological change. But it's a start, and we look forward to seeing what comes out of the effort.