The autonomous car is getting lots of press, and generating considerable controversy, for its potential to replace humans behind the wheel. But it remains only potential; the technology is not yet robust enough to build a significant presence on the world’s highways and byways. There is, however, a subset of autonomous vehicles that is robust enough to start making its mark: autonomous transport. This technology can provide insights into potential evolutionary steps for autonomous cars.
Autonomous transport systems work not in the wide-open spaces of the world, but in confined and relatively controlled spaces such as factories and warehouses. The robotic stock delivery systems that Amazon uses are one example. These gadgets carry towers full of supplies around to workstations so that human packers can retrieve items and put them in boxes to fill orders. But the robots require a specially prepared environment: stripes on the floor that are almost, but not quite, invisible. These are the tracks the robots must follow as they move about; they are probably quite distinctive under IR or UV light, letting the robots see them while remaining unobtrusive to human eyes. Such guided movement is a first evolutionary step in autonomous transport.
A second evolutionary step has been taken by Seegrid, which provides transport vehicles with 360° vision. The devices use their vision capability first to map their surroundings and then to navigate based on that internal map. This frees the robots from the need to lay stripes or otherwise modify the environment, allowing them to operate with greater freedom.
These devices are not totally autonomous, however, in that they cannot choose their own path to follow. Instead, they must be “trained” by a human operator who shows the transport where to go and how to get there. A human operator also selects the route to be used when dispatching the device on its assignment. Still, once instructed, the device makes its way along its route using only its vision system for guidance.
This use of a vision system rather than a guide line is much closer to the operation of proposed autonomous cars than that of the Amazon stock robots, and it suggests ways in which autonomous cars might first come into public use. Part of the software system that monitors and manages these autonomous transport devices provides the human users with information on the fleet, showing when devices will arrive at the pre-defined stopping points along their routes. It’s rather like the announcements on a subway platform telling riders which trains are coming and when. In a manufacturing operation, such information allows workers to anticipate their next stock delivery and prepare to receive it.
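At its simplest, that kind of arrival announcement reduces to dividing remaining route distance by average speed. The sketch below illustrates the idea in Python; the class, field names, and numbers are illustrative assumptions, not details of any real fleet-management product.

```python
from dataclasses import dataclass

@dataclass
class Transport:
    name: str
    position_m: float   # distance traveled so far along its route, in meters
    speed_mps: float    # average cruising speed, in meters per second

def eta_seconds(transport: Transport, stop_position_m: float) -> float:
    """Seconds until the transport reaches a stop farther along its route."""
    remaining = stop_position_m - transport.position_m
    if remaining <= 0:
        return 0.0  # already at (or past) the stop
    return remaining / transport.speed_mps

# A transport 30 m short of a stop, moving at 1.5 m/s, arrives in 20 s.
cart = Transport("cart-7", position_m=120.0, speed_mps=1.5)
print(eta_seconds(cart, stop_position_m=150.0))  # → 20.0
```

A real system would refine this with live position updates and congestion along the route, but the display shown to workers is ultimately built from estimates like this one.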
Now fast forward a decade or so to the era of autonomous cars. Such vehicles might first operate in the relatively constrained environment of a downtown area that is off limits (or under restricted access) to manually operated vehicles. The cars would then roam the streets, responding to transportation requests that users initiate via a smartphone app or perhaps a display screen at a pickup station. Directed by a central dispatcher, or maybe by a collective decision made over a mesh network, an empty vehicle would pick up calling passengers and take them to their destination.
Like the Seegrid autonomous transports now in warehouses and factories, these first-deployed autonomous vehicles would be operating in a relatively constrained environment and would have vision systems to help control their operation. They might even have a pre-defined map of their operating area to work from in selecting and following a route. The vision system could provide a safety element by recognizing and reacting to the movement of pedestrians.
Such a restricted version of autonomous cars might have several advantages. First, its technology does not need to be as comprehensive as that of a vehicle intended to operate in the wide-open spaces of today’s roads or to contend with the behavior of the human drivers with which it shares those roads. It would therefore be easier to develop and cheaper to build, and thus more likely to get to market quickly.
Equally important, though, would be user reactions. Such a system would have a use case that is a cross between taxis and subways. Like the subway, transportation arrives, you get on, it takes you somewhere within its operating area, and you get off. But unlike a subway and like a taxi, the pickup and drop-off points are free from restrictions. In both cases, though, riders have already accepted not being in control of the vehicle: folks today are very hesitant about surrendering control of their own car to a computer, but they are used to surrendering it in subway and taxi transport.
True, there is no human operator in this scenario, but you don’t see one in the subway, either, and most folks don’t pay much attention to their taxi driver. So it’s an easy mental shift to learn to ignore the absence of a human operator in the autonomous vehicle, especially if all the other vehicles operating in the area are also robotic. People can become comfortable with such a system more quickly than they will with fully autonomous, free-range cars.
Eventually, though, advances in technology will give autonomous vehicles an ability to see and react to the world around them comparable to that of human operators. And it may occur sooner than we think.
But the emotional element behind the resistance to autonomous cars may be a much harder problem to solve. This is where autonomous transport like the Amazon and Seegrid robots may be showing us the way. Build confidence in and comfort with autonomous devices a step at a time, taking on simple tasks first and adding more complex tasks later. Having autonomous vehicles metaphorically crawl before they walk can speed up both market availability and user acceptance of the technology.