Google completely pulls the driver from their driverless car, but what ethical can of worms does this open up?
Friday, May 30, 2014, 1:54 PM - Google has introduced a new feature in their driverless car - they've taken away the ability for a person to drive it, removing the steering wheel, gear shift and acceleration and brake pedals. It's a bold move meant to compensate for one of the weakest parts of a car's safety systems - us - but will the car be able to take over for every role we play behind the wheel?
So far, Google's driverless cars have been ordinary, everyday vehicles with added equipment to sense their environment, to process what they see and to control the vehicle as it drives through traffic. If need be, all of that extra equipment could be disengaged and someone inside the vehicle could take over driving. This was a deliberate part of the design, because Google's engineers wanted the vehicle to be able to pass control to whoever was sitting in the traditional driver's seat, should the need arise. However, they apparently hit a snag in that plan.
After testing the system with Google employees on their commutes to and from work, they found that the problem wasn't with any of the technical aspects. There were no clunky mechanical arms to shove out of the way. The computer didn't cause any awkward quirks of steering control as it handed over 'the conn' to the human driver. It was actually that the human driver had spent so much of the commute not driving that suddenly asking them to take over was dangerous.
"We saw stuff that made us a little nervous," said Christopher Urmson, who heads up Google's self-driving car program, according to the NY Times.
Their solution: take the driver out of the equation altogether. This new car, shown in the video below, has two comfortable passenger seats, plenty of leg-room, and absolutely no way for a passenger to control the vehicle, beyond what basically amounts to a 'panic button' in the middle of the centre console.
This certainly feels a lot more like the futuristic self-driving car that science fiction has shown us over the years. It also offers some interesting solutions for the traffic problems we experience on a day-to-day basis, at least if all vehicles were like this and ran on the same system. However, there's one thing that many people have been thinking about lately that scifi has tended to gloss over. When you put the robot car in control, it has to make the decisions in a crisis situation - such as what to do in an impending accident - and will we be able to give the car what it needs to make the right decision?
As Wired pointed out last year, things get a little murky when you put the control in the 'hands' of the computer. In many instances, the car's decision-making will save lives - including many who would have perished in an alternate world where the driverless car wasn't invented. However, based on the decisions it may be forced to make, the car could end up taking the lives of many 'innocents' who would have otherwise lived long lives in that alternate world. For example, it may decide that sacrificing itself along with you and whoever else is in the car is a more appropriate alternative to taking out a bus full of schoolchildren. Is the user manual for the car going to include a legal waiver absolving the company of any wrongdoing in your death? Hopefully, if all vehicles were operating under this same kind of control, we wouldn't have these kinds of dilemmas, but we'll definitely have to plan for this kind of thing now.
It may come down to our own ability to teach a computer the value of human life. It worked for John Connor in Terminator 2, but that's yet another example of scifi glossing over the details. Hopefully we can work it all out soon, since it really does look like a lot of fun to ride around in this car.