Microsoft might just be the hipsters of the AI chatbot world – they were dabbling in call-making chatbots long before Google made them mainstream cool.
Earlier this month, Google made waves when CEO Sundar Pichai demoed Duplex, a new Google Assistant feature that can make routine voice calls on a user’s behalf. Yesterday, Microsoft CEO Satya Nadella demoed a social chatbot called Xiaoice (pronounced “Shao-ice”) at an AI event in London.
Turns out, Xiaoice has been having two-way verbal conversations with users in China since August. In fact, the bot has spoken to 600,000 people already, according to a blog post by Harry Shum, Microsoft's Executive Vice President of Artificial Intelligence and Research. The fact that Xiaoice (which Microsoft refers to using female pronouns) is already deployed is just one of several differences between her and Google's Duplex.
While the latter can call third parties, such as a restaurant or salon, Xiaoice can only call the Microsoft user. Rather than making a reservation or setting up an appointment, she straddles the line between personal assistant and worried mother in Microsoft's demo, asking the user about their stress level, offering to set a wake-up call, and suggesting they get some sleep — it is midnight, after all.
Making sure users get their recommended eight hours isn't Xiaoice's only skill, either — she can also spin a tale or two to help the little ones fall asleep at night. According to Shum's post, Microsoft will roll out a free new feature on June 1 that lets Xiaoice create customized 10-minute-long audio stories in just 20 seconds after listening to input from parents and their children.
In his blog post, Shum also took a not-so-subtle dig at Google's Duplex demo. During those calls, Duplex never identified itself as non-human, an omission that left some with a bad taste in their mouths.
“Google’s experiments do appear to have been designed to deceive,” Thomas King, a researcher at the Oxford Internet Institute’s Digital Ethics Lab, told TechCrunch in reference to the demo. “[E]ven if they don’t intend it to deceive, you can say they’ve been negligent in not making sure it doesn’t deceive.”
That’s never been the case with Xiaoice, according to Shum: “[M]ost importantly, we made sure people were informed that she wasn’t a real person.”
Given that the bot can only call its user (and not an unsuspecting third party), that bit of info probably didn't need saying. But then again, what's more hipster than throwing a bit of shade at anyone a step or two behind you?