A senior scientist at Google says there's a "serious problem of censorship" at the company.
Careful Packaging
When Google AI scientists publish work on topics deemed "sensitive," the company subjects their research to extra scrutiny and makes sure that it portrays the technology in a positive light.
Starting this past summer, according to a bombshell NBC News investigation, the company imposed a "sensitive topics" review that seems to be preventing scientists from accurately tackling the potential dangers of emerging technology — especially ones developed by Google and other Alphabet companies.
Under the guise of protecting trade secrets, the report suggests, Google may be more concerned with its public perception than with publishing important, well-executed research.
Public Eye
Maybe that's not surprising — corporations aren't known for their commitment to a free and open debate — but it is eyebrow-raising coming from a company where the slogan used to be "don't be evil."
Google's alleged mishandling of controversial topics in AI — especially its recent ousting of top AI ethicist Timnit Gebru, who had spoken out about issues at the company — has brought it under new scrutiny over the past several weeks. Now, Gebru's colleague Margaret Mitchell, a senior scientist at Google, is speaking up.
"If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship," Mitchell told NBC.
Smile!
Google's new review policy guides scientists to "take great care to strike a positive tone," according to internal correspondence obtained by NBC. Scientists are also told to refrain from mentioning Google products when writing about sensitive topics, distancing their own work from the ethical conundrums of facial recognition, self-driving cars, and other controversial technology.
For example, one paper on recommendation AI, like the system YouTube uses to suggest new videos, originally said the tech can promote "disinformation, discriminatory or otherwise unfair results" and "insufficient diversity of content." The final version, after the review, said it could promote "accurate information, fairness, and diversity of content."
READ MORE: Google told its scientists to 'strike a positive tone' in AI research, documents show [NBC News]
More on Google: Google Ousts Top AI Ethicist