"The idea is that audio captures more than just what is in text."
Investors and asset managers are using artificial intelligence to analyze the speech of CEOs, hoping to glean insights into their underlying emotional states.
As the Financial Times reports, this kind of analysis could allow them to figure out executives' true emotions and intentions, and possibly predict their next moves.
It's a growing trend: companies like Speech Craft Analytics are developing AI algorithms that pair natural language processing with audio analysis to detect shifts in speech rate, pitch, and volume. These models can also pick up "microtremors" imperceptible to the human ear, as well as hesitations like "ums" and "ahs."
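The article doesn't describe these vendors' actual methods, but the basic acoustic features it mentions are straightforward to compute. As a minimal sketch (all function names and the synthetic "calm" vs. "stressed" tones are illustrative assumptions, not anything from Speech Craft Analytics or Robeco), loudness can be proxied by RMS amplitude and pitch roughly estimated from zero crossings:

```python
import math

SAMPLE_RATE = 16_000  # samples per second, a common rate for speech audio

def make_tone(freq_hz: float, seconds: float, amplitude: float) -> list[float]:
    """Generate a pure sine tone as a stand-in for a voice recording."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def rms_volume(samples: list[float]) -> float:
    """Root-mean-square amplitude: a rough proxy for loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_pitch(samples: list[float]) -> float:
    """Crude pitch estimate (Hz) from zero crossings: a periodic
    signal crosses zero twice per cycle."""
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    duration = len(samples) / SAMPLE_RATE
    return crossings / (2 * duration)

# Hypothetical comparison: a lower, quieter tone vs. a higher, louder one.
calm = make_tone(120.0, 1.0, 0.3)
stressed = make_tone(180.0, 1.0, 0.6)

print(zero_crossing_pitch(calm), zero_crossing_pitch(stressed))
print(rms_volume(calm), rms_volume(stressed))
```

Real systems work on noisy recorded speech and use far more robust pitch trackers and spectral features; this only shows the kind of raw signal the models start from.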
"The idea is that audio captures more than just what is in text," Mike Chen, head researcher at Robeco, an asset management company that recently started adding AI speech analysis to its strategies earlier this year, told the Financial Times. "Even if you have a sophisticated semantic machine, it only captures semantics."
Executives are also catching on that investors are using AI to try to figure out what's going on in their heads, which is having an interesting consequence.
"We found tremendous value from transcripts," Yin Luo, head of quantitative research at Wolfe Research, told the FT. "The problem that has created for us and many others is that overall sentiment is becoming more and more positive... [because] company management knows their messages are being analyzed."
Worse yet, the tech being developed by the likes of Robeco has an Achilles heel. While it's good at tracking a given executive's emotional state over time, the comparison falls apart when a company appoints a new CEO and there's no baseline to measure against.
Then there's the issue of bias, which could be introduced by the developers working on these algorithms.
Execs may also choose to take lessons on projecting a positive outlook through their voice and mannerisms, but that would require a specific skill set that not every CEO possesses.
"Very few of us are good at modulating our voice," David Pope, Speech Craft Analytics chief data scientist, told the FT. "It’s much easier for us to choose our words carefully."
"We’ve learned to do this since we were very young to avoid getting in trouble," he added.