Alex Gillespie and Kevin Corti, researchers at the London School of Economics, have found a novel way to test the realism of AI speech. Inspired by a lesser-known set of experiments by infamous obedience researcher Stanley Milgram, the pair gave volunteers small earpieces that, during conversations with unsuspecting subjects, fed them responses generated by a chatbot. They call these human-machine mash-ups "echoborgs." The researchers wanted to see how we might react in a future where robots look more like humans.

"Most of the time we encounter AI today, it's in a very mechanical interface," Corti said. "This was a way of doing it so that people actually believe they are encountering another person."

Their study found that when participants were unaware that they were speaking to a computer, they assumed that their conversation partner was merely socially awkward. The researchers write, "the vast majority of participants who engaged an echoborg did not sense a robotic interaction." 

Corti and Gillespie ran two other similar studies. In one, they asked subjects to determine whether their conversation partner was an echoborg or merely pretending to be one. In that condition, participants were actually more likely to flag the human echoborgs as robots. The researchers believe this is because we expect a higher level of functioning when we know we may be speaking to a machine. "The bar is higher – people expect more," said Gillespie. This is one of the hurdles that will have to be overcome before robots can pass as our equals.

Cover: Flickr