Double standard, much?
Double Standard
Google's parent company Alphabet is warning employees about how they should use AI — including its own homegrown chatbot Bard, Reuters reports.
That's despite the company's considerable investments in the tech. According to the report, Alphabet is particularly concerned about employees feeding confidential data into the chatbots — a reasonable concern, though it's striking that Google is warning its own staff about the very tech it's heavily marketing to customers.
Stop the Leaks
Companies worldwide are now looking for ways to protect themselves from employees giving away secrets to chatbots.
Earlier this year, Amazon warned employees not to leak sensitive information to OpenAI's ChatGPT. In April, Samsung employees got into trouble for reportedly leaking sensitive company information to the chatbot as well. Even Apple restricted the use of ChatGPT and other AI-based tools over fears of workers leaking confidential data.
But the cat is already out of the bag. Despite these warnings, a survey found that almost half of professionals were using AI tools like ChatGPT as of February.
Google is already — conveniently — offering expensive chatbot tools to enterprises that it claims won't leak data to any public-facing AI models, as Reuters reports.
But the company clearly still has a lot of convincing to do. It's already facing major headwinds in the global rollout of its Bard chatbot, and was forced to postpone its launch in the European Union this week, according to Politico, after regulators raised privacy concerns.
As AI rattles the world, it's worth remembering: watch what they do, not what they say.
More on Bard: Google Staff Warned Its AI Was a "Pathological Liar" Before They Released It Anyway