News

Human bias burdens bots

Published online 22 November 2019

People cooperate less when they believe they are interacting with a bot.

Sedeer el-Showk

Bots can interact with humans more efficiently if they pretend to be human. Credit: Iyad Rahwan
People interact differently with bots than with other humans, and this bias can sometimes negate the advantage of using a bot. The findings suggest that, in some situations, it may be appropriate to conceal a machine's involvement.

Continuing improvements in artificial intelligence are making it possible for bots to pass as humans. For example, in 2018, Google demonstrated an automated voice assistant that mimicked natural conversation well enough to seem human when making appointments over the phone. The announcement raised ethical concerns, with many arguing that it was deceitful, at best, to hide the fact that someone is talking to a machine.

Alongside the ethical concerns is the question of whether people interact differently with machines, possibly eroding the benefit of using them. To address this, an international team of researchers recruited roughly 700 people to play 50 rounds of the prisoner's dilemma, a game in which players can act selfishly for the chance of a large payoff or cooperate to be sure of a smaller reward. The participants played with either a human or a bot, but half of them were misled about their partner's identity, believing they were paired with a human when they were actually interacting with a bot, or vice versa.
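For readers unfamiliar with the game, the payoff logic can be sketched in a few lines of Python. The values below are purely illustrative, not the ones used in the study; they simply capture the tension between a safe mutual reward and the temptation to defect.

```python
# Illustrative prisoner's dilemma payoffs (hypothetical values, not the study's).
# Each round, both players choose to cooperate ("C") or defect ("D").
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both earn the smaller, safe reward
    ("C", "D"): (0, 5),  # the defector takes the large payoff, the cooperator gets nothing
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def play_round(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return the (player A, player B) payoffs for one round."""
    return PAYOFFS[(choice_a, choice_b)]

# Over repeated rounds, steady mutual cooperation outperforms mutual defection,
# which is why a partner who can elicit cooperation is valuable.
print(play_round("C", "C"))  # (3, 3)
print(play_round("D", "D"))  # (1, 1)
```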

Although bots proved better than humans at coaxing cooperation from their partner, their advantage disappeared when people thought they were playing with a bot. Humans were less likely to cooperate with a partner they believed was a bot, and over the long term this bias was enough to offset the bots’ skills at eliciting cooperation.

Talal Rahwan of New York University Abu Dhabi, who led the study, says the findings “highlight the possible cost society may incur in return for transparency,” though he adds that this may depend on the context. “Identifying applications in which the transparency-efficiency trade-off exists and applications in which it does not is an open research question.”

The researchers also tested whether this bias could be overcome by informing people. They told 190 of the participants that, according to the data, people fare better when they treat a bot as though it were human, but this made no difference: the players still showed the same reduced cooperation with bots.

Brent Mittelstadt, an ethicist at the Oxford Internet Institute who was not involved in the study, says the research raises the difficult question of “to what extent we have a right to know about the inclusion of an automated system in a human or organizational decision-making process. Should a defendant, for example, have an absolute right to know that an automated risk scoring system was used in their trial and taken into account by the judge? Or similarly for a doctor in giving their diagnosis of a patient's condition?”

“We do not like to take a position on the normative question, since we are in the business of science, rather than policy making,” says Rahwan. “However, our work suggests that a blanket policy may not always be optimal for all objectives,” highlighting the need for a discussion about whether and when there needs to be transparency about interacting with machines.

Mittelstadt hopes that, in the future, “we default to a right to know in most situations, or at a minimum in those situations where material or psychological harm could result from the interaction,” adding that “we will need much more research looking at the social, psychological, and ethical effects of interactions with bots, both when they announce themselves as non-human, and when they don't. Interacting with another person, especially face-to-face, can have significant psychological and social benefits that are difficult to quantify, and can easily be lost or ignored in development driven primarily by a pursuit of greater efficiency or saving.”

doi:10.1038/nmiddleeast.2019.153


Ishowo-Oloko, F. et al. Behavioural evidence for a transparency-efficiency tradeoff in human-machine cooperation. Nat. Mach. Intell. 1, 517–521 (2019).