The company got burned when its chatbot turned into a Nazi. Now it's worried about the future.

Bumpy Ride

One day, we might have autonomous cars that drive us to work and robots that prepare our dinners.

But what we have right now are autonomous cars that run red lights and robots that agree to "destroy humans."

Clearly, we have a ways to go before we work out all the kinks in the extremely promising area of artificial intelligence — and Microsoft wants to be sure its investors know the path to an AI-powered future could include a few more bumps.

Bad Reputation

On Aug. 3, 2018, Microsoft filed its annual 10-K form with the U.S. Securities and Exchange Commission (SEC). The filing gives a company the opportunity to tell the public about any business risks that might affect an investor's decision to put money into it.

On Tuesday, Quartz reported that Microsoft had added a new section to its 2018 10-K, one specifically dedicated to the company's AI efforts and the risks inherent in them:

We are building AI into many of our offerings and we expect this element of our business to grow... As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.

If our reputation or our brands are damaged, our business and operating results may be harmed.

Tay Day

Microsoft has good reason to be wary of flawed or biased AI.

In 2016, it found itself at the center of a major scandal when its Tay chatbot began spewing Nazi propaganda online. Then, in February 2018, MIT researchers announced that the company's AI-powered facial recognition system returned a 21 percent error rate for darker-skinned women, compared to an error rate of less than 1 percent for light-skinned men.

Given the remarkable promise of AI, it's highly unlikely a tech company as big as Microsoft would ever consider getting out of the space. So it's doing the next best thing with this 10-K filing: letting investors know it might need to weather a few more storms before reaching the sunny future AI could deliver.

READ MORE: Microsoft Warned Investors That Biased or Flawed AI Could Hurt the Company’s Image [Quartz]

More on Microsoft: Microsoft Announces Tool to Catch Biased AI Because We Keep Making Biased AI
