Amazon recently announced Prime Air, its project to begin package delivery via robotic drones. Many questions have already been raised about its feasibility: Will the FAA approve the drones? How will we prevent accidental injury, theft, or hacking? What if you live in Deer Trail, Colorado, where it may soon be legal to hunt drones? There's also a larger question, one that's not unique to drones but that has farther-reaching implications: How will a proliferation of robots change the way we work?
In 1950, Isaac Asimov published his pioneering laws of robotics in his short-story collection I, Robot:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;
- The Zeroth Law, which Asimov added in later works: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
If we assume robots have been engineered to follow these laws precisely, then the laws' simple brilliance answers some of the questions about Prime Air's feasibility while providing context for an examination of how work may change with robots. Drones following Asimov's laws will neither harm humans nor allow themselves to be easily shot down. In other words, they will be trusted to act in humans' best interest. With such trust, however, emerge two psychological phenomena unique to working with robots: anthropomorphization and automation bias.
Anthropomorphization is simply the attribution of human characteristics and emotions to nonhuman objects. It's ubiquitous within children's entertainment, where animals wear human clothes and toasters are considered brave. Anthropomorphization is also ubiquitous when adults work with robots. Robotics manufacturer Kiva Systems has a customer that allows its warehouse workers to adopt and name the oversized Roomba-like machines that tirelessly find and retrieve products to be shipped. Rethink Robotics has developed a collaborative robot that can safely work alongside humans, even receiving training without the need to be programmed. During training it uses a set of eyes displayed on a large LCD screen to convey its thinking as a human trainer slowly guides it through tasks. Researcher Julie Carpenter interviewed Explosive Ordnance Disposal military personnel, who use robots to safely disarm explosives, and discovered that the destruction of a soldier's robot evoked feelings of anger and loss.
As robots become more sophisticated and further integrated into the work environment, anthropomorphization could produce some of the same consequences found when working with humans. For example, the departure or destruction of a robot could leave its human colleagues needing counseling. Interpersonal issues are also still possible. Since robots are, after all, only machines, it would seem that people working with robots wouldn't need strong social skills; in fact, they may need even better social skills to train and instruct robots effectively, since robots lack humans' versatile learning ability.
Robots can, of course, lead to efficiency gains. ABB and Carnegie Mellon University's Robotics Institute have developed Spitfire, a robot that instructs its human teammates. It chooses the most efficient workflow, assigning tasks to itself and the humans based solely on who can accomplish each task faster. The human-robot team completed a complex-frame welding task at one-seventh the cost of a control team. So it would not be surprising to find people who work with these robots following them blindly, assuming their directives will always lead to the safe and efficient completion of tasks. This is, of course, a problem. Research on autopilot within aviation has demonstrated the existence of automation bias, or the tendency to over-rely on and trust in automation. Pilots have been known to ignore signs of deteriorating weather if automation does not detect them, or to fail to verify automated commands. People working collaboratively with directive robots will still need to take control or question the commands they receive when necessary, much like employees with human supervisors sometimes must.
In a work environment full of robots, there will be new issues involving the emotional tolls of anthropomorphization and the potential for automation bias, but because robots emulate humans, some of the same issues found within all-human teams will still remain. People will still need interpersonal skills to work with team members, and we will still need some form of human oversight. My prediction? Humans in the workplace will become more efficient, but not yet obsolete. (Unless Skynet becomes self-aware; then all bets are off.)