Facebook kills AI after it creates own language

Wednesday, 02 Aug, 2017

Alice: "Balls have zero to me to me to me to me to me to me to me to me to."

Bob: "You i i i i i everything else."

The exchange went on the same way repeatedly.

Despite initial appearances, there is some logic to the discussion - the way the chatbots keep stressing their own names appears to be part of their negotiations, not simply a glitch in the way the messages are read out.

And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle hard problems more effectively.

Separately, Facebook announced last week that it had developed code that allowed bots to have more sophisticated conversations with either another bot or a human.

"There was no reward to sticking to English language," Dhruv Batra, a Facebook researcher, told Fast Company magazine.

Researchers at Facebook's AI research lab (FAIR) found that two AI chatbots in an experiment had developed their own language - without any human input. One reason the communication gap is significant is that it could theoretically mean machines will be able to write their own languages and lock users out of their own systems. Batra noted that humans, too, sometimes use "shortcuts" that are easily understood when talking to each other, to help get things done quickly.

But when the bots were programmed, they were given no guidelines forbidding them from communicating outside a human language. As a result, humans would have no clue what the robots were actually saying to one another, which could become a problem in the future. It might sound like the plot of a sci-fi movie where robots take over the world, but it's precisely what happened.

That AI shorthand, however, made it hard for human researchers to know what was going on, so Facebook eventually changed the experiment's terms to require bots to speak in regular English.

"I think people who are naysayers and try to drum up these doomsday scenarios - I just, I don't understand it," Facebook chief executive Mark Zuckerberg said in response to such fears. A paper published by the Facebook Artificial Intelligence Research division described the bots' negotiating tactics: they would, for instance, pretend to be very interested in one specific item so that they could later pretend they were making a big sacrifice in giving it up. In a similar development, one online translation tool started using a neural network to translate between some of its most popular languages. There are certainly ethical, not to mention existential, issues around how much freedom is given to machine learning systems.