It’s a nightmarish mix of Big Tech overreach and state authoritarianism.

The Argentinian province of Salta approved the development of a Microsoft algorithm in 2018 that allegedly could determine which low-income "future teens" would be likely to get pregnant, a shocking investigation by Wired reveals.

The algorithm — which Microsoft called "one of the pioneering cases in the use of AI data" — drew on demographic data including age, ethnicity, disability, country of origin, and whether a subject's home had hot water in its bathroom to determine which of the women and girls living in a small Argentinian town were "predestined" for motherhood.

The opaque program, which was celebrated on national television by then-governor Juan Manuel Urtubey, was offered to the province by Microsoft in 2018, at the same time Argentina’s Congress was debating whether to decriminalize abortion, Wired notes.

The magazine's reporting found that the women and girls Microsoft's algorithm identified as would-be teen moms were often already marginalized in various ways: many came from poor backgrounds, migrant families, or indigenous communities.

The algorithm, known as the Technology Platform for Social Intervention, is noteworthy because an American company like Microsoft chose to deploy such a program in a country with a long history of surveillance and population control measures.

That history is reflected in the lack of transparency surrounding the program. For one, the Argentinian government never formally assessed the algorithm's impact on the girls and women it targeted.

Worse yet, according to Wired, the program involved the deployment of “territorial agents” who surveyed those identified by the AI as being predestined for pregnancy, took photos of them, and even recorded their GPS locations.

It's still unclear what the provincial or national governments did with the data, or whether it played any role in the abortion debate.

In 2020, Argentina voted to decriminalize abortion, a historic moment for the South American nation. But the program's very existence should be cause for concern.

The report should serve as a warning about the dangerous intersection of American AI technology and state authoritarianism, and offers a much-needed reminder that, for the time being, we have far less to fear from the algorithms themselves than from the humans behind them.

READ MORE: The Case of the Creepy Algorithm That ‘Predicted’ Teen Pregnancy [Wired]

More on predictive algorithms: Data Scientists Say They’ve Developed Algorithms to Predict the Next Coup Attempt
