A team of MIT researchers has found that in many instances, replacing human workers with AI is still more expensive than sticking with the people, a conclusion that flies in the face of current fears over the technology taking our jobs.
As detailed in a new paper, the team examined the cost-effectiveness of 1,000 "visual inspection" tasks across 800 occupations, such as inspecting food to see whether it's gone bad. They found that only 23 percent of the wages paid for those tasks "would be attractive to automate," mainly because of the "large upfront costs of AI systems" — and that's if the automatable tasks could even "be separated from other parts" of the jobs.
That said, they admit, those economics may well change over time.
"Overall, our findings suggest that AI job displacement will be substantial, but also gradual – and therefore there is room for policy and retraining to mitigate unemployment impacts," the team concluded in their paper.
The topic of AI coming for jobs has reached a fever pitch lately, especially with the democratization of powerful tools like OpenAI's ChatGPT and Google's Bard.
While many have warned of the dire consequences major job losses could have in the near future, tech leaders have remained optimistic about such an eventuality, arguing that the lost jobs will be replaced by new kinds of professions.
According to OpenAI CEO Sam Altman, who spoke at last year's Wall Street Journal Tech Live conference, it's an inevitable part of any "technological revolution."
"I'm not afraid of that at all," the billionaire said at the time. "In fact, I think that's good. I think that's the way of progress, and we'll find new and better jobs."
While there appears to be a consensus that AI will one day come for our jobs, when that change will occur remains a hotly debated topic.
In their paper, the MIT team focused on computer vision-assisted tasks, using a bakery worker visually checking ingredients to "ensure they are of sufficient quality" as an example.
Such a job could "theoretically be replaced with a computer vision system by adding a camera and training the system to detect food that has gone bad," the researchers write.
However, installing and operating this kind of system would still be prohibitively expensive, considering it would only take care of the task of checking ingredients, which represents a mere six percent of the employee's work.
But as deployment costs fall through economies of scale, or with the introduction of "AI-as-a-service" platforms, the researchers suggest that the "economics of AI can be made more attractive."
"'Machines will steal our jobs' is a sentiment frequently expressed during times of rapid technological change," the paper reads. "Such anxiety has re-emerged with the creation of large language models (e.g. ChatGPT, Bard, GPT-4) that show considerable skill in tasks where previously only human beings showed proficiency."
And while other experts have concluded that these fears aren't misplaced, many of them fail to "directly consider the technical feasibility or economic viability of AI systems," the MIT researchers argue.
Yet many questions remain. What about jobs that don't involve visual analysis, unlike the bakery example the researchers give? What about jobs that can be augmented with AI instead of being replaced entirely?
As TechCrunch points out, the MIT research was backed by IBM's Watson AI Lab, meaning there may have been a financial interest in downplaying the risks of replacing jobs with AI.
The authors, however, claim that it's simply a matter of creating meaningful regulatory frameworks to prepare for the future.
"For policymakers, our results should reinforce the importance of preparing for AI job automation," MIT research scientist and coauthor Neil Thompson told TechCrunch. "But our results also reveal that this process will take years, or even decades, to unfold and thus that there is time for policy initiatives to be put into place."
More on job automation: Google Reportedly Replacing Some Human Staff With AI