HR departments, though, have expressed their willingness to use software and algorithms to lay off employees.
After Google's parent company Alphabet laid off some 12,000 employees in January, or about six percent of Google's entire workforce, many of the aggrieved former workers began to speculate on why they were chosen, The Washington Post reports. The distribution of the layoffs, after all, seemed random.
In the words of one employee in a Discord chatroom, they wondered if a "mindless algorithm carefully designed not to violate any laws" was responsible for singling out who got cut.
Google has denied using AI, telling WaPo there was "no algorithm involved" in its decision making.
True or not, the employees have ample reason to be suspicious. According to a recent survey cited by the newspaper, 98 percent of human resources leaders at American companies admitted that they will use software and algorithms to "reduce labor costs" this year — despite only half of them being confident that the tech will make unbiased recommendations.
Hiring to Firing
It's the darker flipside of a long-accepted practice. HR departments of big firms often use algorithms to find the "right person" for "the right project," Harvard Business School professor of management practice Joseph Fuller told WaPo.
The tech helps build a database known as a "skills inventory," which comprehensively lists the skills and experience of every employee and helps companies judge whether the workforce's combined capabilities are enough to meet their goals.
"They suddenly are just being used differently, because that's the place where people have... a real... inventory of skills," Fuller said.
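A skills inventory of the kind Fuller describes can be pictured as a simple mapping from employees to their skills, queried for coverage gaps. The sketch below is hypothetical; the employee names, skills, and `skill_coverage` function are illustrative assumptions, not any vendor's actual schema.

```python
# Hypothetical sketch of a "skills inventory": each employee is mapped
# to the set of skills they hold, and the company can check whether
# the combined workforce covers a project's requirements.
employees = {
    "alice": {"python", "sql", "project management"},
    "bob": {"java", "sql"},
    "carol": {"python", "machine learning"},
}

def skill_coverage(employees, required):
    """Return the required skills that nobody on staff currently has."""
    available = set().union(*employees.values())
    return required - available

# "devops" is not held by anyone, so it would be flagged as a gap.
missing = skill_coverage(employees, {"python", "sql", "devops"})
```

The same query run in reverse — which employees hold skills the company no longer needs — is what makes such an inventory usable for layoffs as well as staffing.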
Take, for example, a startup called Gloat: an "AI Talent Marketplace" that uses AI to connect employees to more relevant projects, and vice versa. Gloat vice president Jeff Schwartz told WaPo that he isn't aware of any clients using it to lay off employees, but acknowledged the need for transparency from HR leaders.
Fight or Flight
Employee performance might be the most important factor analyzed by these technologies, but many other metrics are more nebulous, such as "flight risk," which predicts how likely it is for someone to quit the company.
If, for example, a company has a discrimination problem causing non-white workers to leave at higher rates on average, AI software could inadvertently identify non-white workers as a "flight risk" and recommend firing them at a higher rate, Brian Westfall, an analyst at the software review site Capterra, told WaPo.
"You can kind of see where the snowball gets rolling, and all of a sudden, these data points where you don't know how that data was created or how that data was influenced suddenly lead to poor decisions," he added.
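The snowball Westfall describes can be shown in miniature. In this deliberately contrived example (the groups, quit records, and `flight_risk` function are all invented for illustration), a naive model scores "flight risk" as the historical quit rate of an employee's group — so a group that quits more often because of discrimination gets a higher risk score across the board.

```python
# Hypothetical attrition history: (group, whether the employee quit).
# Group B quits more often here by construction, standing in for a
# group pushed out by workplace discrimination.
history = [
    ("A", False), ("A", False), ("A", True),  ("A", False),
    ("B", True),  ("B", True),  ("B", False), ("B", True),
]

def flight_risk(history, group):
    """Naive 'flight risk' score: the group's historical quit rate."""
    outcomes = [quit for g, quit in history if g == group]
    return sum(outcomes) / len(outcomes)

risk_a = flight_risk(history, "A")  # 0.25
risk_b = flight_risk(history, "B")  # 0.75
```

Every member of group B inherits the 0.75 score regardless of their individual behavior, which is exactly the kind of opaque data-driven decision Westfall warns about.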