This sounds like something out of science fiction.

From predictions of AI treating humans like animals to repeated claims of algorithmic sentience, we thought we'd heard it all this year — until this new research about a purported death-predicting AI model dropped.

In a new paper published in the journal Nature Computational Science, researchers from the Technical University of Denmark (DTU) claim they've devised an AI model that can predict the outcomes of people's lives, including roughly when they're going to kick the bucket.

"We used the model to address the fundamental question: to what extent can we predict events in your future based on conditions and events in your past?" DTU professor and paper author Sune Lehmann said in the school's press release about the research.

Using health and labor data on Denmark's population of about six million people, Lehmann and his team built "life2vec," a so-called "transformer model" that can translate one form of input into a different output via context clues. In the case of life2vec, the initial inputs are things like time and location of birth, education, health status, occupation, and salary, and the outputs supposedly predict everything from "early mortality" to "personality nuances," as the study explains.
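To give a rough sense of that input-to-output pipeline, here is a toy sketch of how life events might be turned into a token sequence a transformer could consume. This is not the authors' actual code, and the event names and vocabulary below are invented for illustration only:

```python
# Illustrative sketch only: a toy tokenizer for "life events,"
# loosely inspired by the idea of treating a life as a sequence.
# Event names and special tokens here are invented, not from life2vec.

def build_vocab(sequences):
    """Assign an integer id to every distinct event token."""
    vocab = {"[PAD]": 0, "[UNK]": 1}
    for seq in sequences:
        for event in seq:
            if event not in vocab:
                vocab[event] = len(vocab)
    return vocab

def encode(seq, vocab):
    """Map a life-event sequence to the token ids a model would see."""
    return [vocab.get(event, vocab["[UNK]"]) for event in seq]

# A made-up "life sentence": birth year, schooling, a diagnosis, a job.
life = ["BORN_1985", "EDU_HIGHSCHOOL", "DIAG_ASTHMA", "JOB_NURSE", "SALARY_Q3"]
vocab = build_vocab([life])
print(encode(life, vocab))  # each event becomes one integer token id
```

In a real system, those token ids would then be fed to a transformer, which learns from millions of such sequences which events tend to follow which; the toy above only covers the encoding step.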

"What's exciting is to consider human life as a long sequence of events," Lehmann said in the press release, "similar to how a sentence in a language consists of a series of words."

Ethical Implications

Although that's a strange way to describe it, the data scientist does make a good point: our lives do have a progressive and linear flow to them that's arguably not dissimilar to that of a sentence, with easily detected beginnings, middles, and ends.

It gets weirder, however, when trying to parse how exactly one would verify whether the model's death predictions are accurate, a question that neither the DTU statement nor the paper seems to address.

What Lehmann does address openly is the ethical problems such a model would raise were it to prove accurate, and in doing so he calls out the ways for-profit entities already make these sorts of soft predictions to profit off of us.

"Similar technologies for predicting life events and human behavior are already used today inside tech companies that, for example, track our behavior on social networks, profile us extremely accurately, and use these profiles to predict our behavior and influence us," he aptly pointed out.

A lot more research would be necessary to figure out whether AI could actually predict human mortality based on data inputs. But if that does come to fruition, we as a species would, as Lehmann suggests, need to decide "where [this] technology is taking us and whether this is a development we want."

More on next-level AI models: OpenAI's Chaos Linked to Super Powerful New AI It Secretly Built
