LW: Within hours of becoming operational and left to its own devices, a Twitter bot became racist and offensive. What does this tell us about artificial intelligence? Brian, maybe you can walk us through this one a little bit.
BK: Well, I think it might tell us more about human nature than artificial intelligence, but those are quite related. An AI like this could be just as captivating in a radically different cultural environment, and they created this one for 18- to 24-year-olds, intending to kind of mimic millennials for entertainment purposes, so the feeling was kind of like, ideally, like talking with a 19-year-old woman over Twitter or whatever.
LW: They created this bot, but what has actually happened is, there was a concerted attack by a certain group, and because the bot kind of learns – it’s socialized somewhat, learns from other people’s behavior. I read through some of the tweets, and we won’t repeat them, because a lot of it isn’t broadcastable. So, Wu You, what do you make of this? What does this tell us about ourselves…that within hours of having this new and exciting toy it just goes completely wrong and we have to take it offline completely?
WY: It’s more like a little baby that has been learning from the people around her, because she is a blank sheet of paper, and it all depends on what you draw on it. It quickly reminds me of the Chinese version of Siri, Xiao bing (小冰). Xiao bing was also developed by Microsoft and is used by about 40 million people in China, where she is known for delighting users with her stories and conversations. A lot of boys, especially those who prefer staying at home, like to chat with her. But it also reminds me of another famous movie – Ex Machina [BK: Mm] – which tells the story of a robot that was developed by human beings and then actually killed the one who invented her and escaped.
LW: That’s against the First Law of Robotics. Brian…[BK: Yeah, yeah…]…you can’t do that. First do no harm.
WY: And people just feel it might be a little bit scary, because she can think. And that is the kind of artificial intelligence that human beings have been trying to create for so long.
BK: Right. They’re trying to create robots or artificial intelligence that can think. Obviously, this one couldn’t…
LW: No, exactly. That’s – I think that’s the point there [BK: Yeah, very, very clear]. I think that’s what we can take away from it: it actually couldn’t think.
WY: And this attempt just shows that if you make something, you might end up losing control of it, and then [BK: Very true, yeah] what do you do?
LW: Yeah, but what’s interesting there is you’re talking about who made it. Microsoft actually issued an apology, claiming it was a coordinated attack, which is in line with at least one media report suggesting that an online forum actually did it. Now, some other developers have criticized Microsoft, saying that it acted quite recklessly in letting this bot out without properly testing it, knowing what the world is like.
WY: I think this could be another step for people to better understand the internet. If it were not for this chatbot, there would still be tons of cyber-bullying and cyber-abuse across the internet, on all kinds of blogs and all kinds of websites. If it were not here, those kinds of actions would happen somewhere else. So I think this can be another way for people to understand that if you develop a similar algorithm, or a similar website or robot, this is what might happen.
BK: Right. And, actually, like with anything, this is an experiment, and with any experiment, you know, if it’s a good experiment you actually learn from failure. Obviously, you’d like success, but when you get a failure you figure out what went wrong, how to do better, how to avoid making the same mistakes again.