How long until the AI starts a podcast?
Google DeepMind researchers have finally found a way to make life coaching even worse: infuse it with generative AI.
According to internal documents obtained by The New York Times, Google and the Google-owned DeepMind AI lab are working with "generative AI to perform at least 21 different types of personal and professional tasks." And among those tasks, apparently, is an effort to use generative AI to build a "life advice" tool. You know, because an inhuman AI model knows everything there is to know about navigating the complexities of mortal human existence.
As the NYT points out, the news of the effort notably comes just months after AI safety experts at Google said, back in December, that users of AI systems could suffer "diminished health and well-being" and a "loss of agency" as a result of taking AI-spun life advice. The Google chatbot Bard, meanwhile, is barred from providing legal, financial, or medical advice to its users.
But now, not even a year later, Google is looking like it's tempted to throw its previous caution — and safety research — to the wind. There's an AI-sized hole in the life coaching market, it seems, and the longtime search giant is hoping to fill it — even though the ethics feel murky at best.
Per the NYT's reporting, researchers at the DeepMind-contracted firm Scale AI have been working to test the project, and have assembled large teams of workers — including more than 100 experts with doctorates in various fields — to do so. The NYT was also privy to an example of the types of questions that Google believes its AI might be helpful with, including a pressing query about destination wedding etiquette. Legitimately challenging stuff, in other words.
Elsewhere, according to the report, Google is also working on a tutoring tool, in addition to a planner of sorts with the ability to create things like budgets, meal plans, and workout guides.
According to the NYT, Google has yet to decide whether it'll deploy any of these tools. And for its part, Google DeepMind defended the ongoing effort, noting that its evaluation process is a "critical step in building safe and helpful technology," and that "at any time," there "are many such evaluations ongoing."
To be fair, it's not terribly surprising to see Google go this route. Almost every major tech company has been working to break through with a truly advanced, comprehensive AI assistant for some time, and we've already automated ourselves halfway into oblivion anyway. Whether it's healthy or not, outsourcing real-life choices to unreal, human-mimicking AI programs feels like the only logical next evolution for our automation-addicted lives.
More on Google AI: Google’s Lead AI Guy Says AI Will Be Amazing, Unless It Kills Us