In "Polanyi's Paradox and the Shape of Employment Growth," David Autor argues that the Google Car is really a glorified train driving on painstakingly hand-curated tracks. It doesn't drive on roads, it "drives on maps". It works not because it thinks, but because we have finally built an environment where it doesn't have to.
There are implications for MOOCs, where Google Car inventor Sebastian Thrun also found his system only worked in environmentally controlled conditions. What makes the Google Car possible is not great AI, but a built environment heavily adapted to the machine's needs. In education, MOOCs have likewise solved their problems by seeking more predictable, higher-quality environments.
While Kiva Systems provides a particularly clear example, the same principle of environmental control is often operative in unexpected places. Perhaps the least recognized—and most mythologized—is the Google Car. It is sometimes said by computer scientists that the Google car does not drive on roads but rather on maps. This observation conveys the fact that the Google car, unlike a human vehicle operator, cannot pilot on an “unfamiliar” road; it lacks the capability to process, interpret and respond to an environment that has not been pre-processed by its human engineers. Instead, the Google car navigates through the road network primarily by comparing its real-time audio-visual sensor data (collected using LIDAR) against painstakingly hand-curated maps that specify the exact locations of all roads, signals, signage, obstacles, etc. The Google car adapts in real time to obstacles (cars, pedestrians, road hazards) by braking, turning and stopping. But if the car’s software determines that the environment in which it is operating differs from the key static features of its pre-specified map (e.g., an unexpected detour, a police officer directing traffic where a traffic signal is supposed to be), then the car signals for its human operator to take command. Thus, while the Google car appears outwardly to be as adaptive and flexible as a human driver, it is in reality more akin to a train running on invisible tracks. (Source)
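The handoff logic Autor describes can be sketched in miniature: compare what the sensors see against what the curated map says should be there, drive autonomously while they agree, and signal the human operator when static features diverge. This is only an illustrative sketch; the function name, feature representation, and threshold are hypothetical, not Google's actual system.

```python
def drive_step(map_features: set, observed_features: set,
               mismatch_threshold: float = 0.1) -> str:
    """Compare live sensor observations against the pre-curated map.

    Returns "autonomous" while the environment matches the map closely
    enough, and "handoff" when expected static features diverge (e.g.,
    a detour, or an officer directing traffic where a signal should be).
    All names and the threshold are hypothetical illustrations.
    """
    if not map_features:
        # No curated map for this road: the car cannot drive it at all.
        return "handoff"
    missing = map_features - observed_features        # expected but not seen
    mismatch = len(missing) / len(map_features)
    return "handoff" if mismatch > mismatch_threshold else "autonomous"


# Dynamic obstacles (like a pedestrian) don't trigger handoff; the car
# brakes or steers around them. Only divergence in static map features does.
print(drive_step({"lane_line", "stop_sign", "signal"},
                 {"lane_line", "stop_sign", "signal", "pedestrian"}))  # autonomous
print(drive_step({"lane_line", "stop_sign", "signal"},
                 {"lane_line", "detour_barrier"}))                     # handoff
```

Note how the logic never reasons about an unfamiliar road; it only measures deviation from a map a human already built, which is exactly why the "train on invisible tracks" analogy fits.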
Polanyi’s Paradox drives much of our current frustration with AI. See Polanyi’s Paradox
Philippa Foot laid the philosophical groundwork for asking the question: “Should the Google Car ever be programmed to hit people?”
Does the Google Car need to make moral tradeoffs? See License to Kill