Analyzing Earnings Call Tone with AI
Sean Austin, CEO of Markets EQ, explains why voice analysis is the next frontier in AI-driven market insights.

Hey, it's Matt. Welcome to AI Street Markets, where I highlight AI investing tools.
TONE
Analyzing Tone of Voice on Earnings Calls with AI

“While fundamental narratives explaining the price action abound, the majority of equity investors today don’t buy or sell stocks based on stock specific fundamentals.”
This quote from JPMorgan’s Marko Kolanovic is from 2017 and remains true today.
Financial media typically focuses on narrative when discussing market movements, but most daily trading activity is driven by systematic strategies. I think this is partly because narratives are easier to grasp than black-box trading strategies.
Systematic trading has long relied on text-based sentiment analysis.
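For context, here's a minimal sketch of what that text-based approach looks like: count positive and negative words in a transcript against a finance lexicon. The word lists below are illustrative stand-ins, not the actual Loughran-McDonald dictionary that quant shops typically use.

```python
# Minimal sketch of dictionary-based sentiment scoring on a call transcript.
# The word sets are illustrative stand-ins for a finance lexicon such as
# Loughran-McDonald, not the actual dictionary.
import re

POSITIVE = {"strong", "growth", "improved", "exceeded", "confident"}
NEGATIVE = {"decline", "weak", "impairment", "headwinds", "uncertain"}

def transcript_sentiment(text: str) -> float:
    """Score in [-1, 1]: (positive - negative) / total sentiment words."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

print(transcript_sentiment(
    "We delivered strong growth and exceeded guidance despite headwinds."
))  # 0.5: net positive
```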
Now, with AI, quants can fold tone of voice (how executives deliver information) into their market analysis.
I recently got a demo from Sean Austin, CEO and co-founder of Markets EQ, who showed me how their platform analyzes voice tone in earnings calls to uncover insights that text analysis alone might miss.
Watch the demo on YouTube below or here on Descript, which also includes a transcript.
Austin said the idea stemmed from a conversation with a Goldman quant, who pointed out that while much effort had gone into analyzing language, voice remained an untapped frontier due to its complexity and computing demands.
“If you can do voice, that would be the holy grail for us as a systematic firm,” the quant told him.
When generative AI tools like ChatGPT emerged around 18 months ago, Austin and his co-founder realized they could combine AI with voice data analytics. "It was sort of clear to us there was going to be a world of AI analysts, and then we thought, why not AI voice-enabled analysts or voice-specific analysts?" Austin said.
Adding voice analysis to traditional text-based sentiment models can have a significant impact. "We've seen systematically quite some gains on portfolios. You can think of like 20-25% gain in Sharpe ratios, which is pretty insane when you think about it on top of just language models," Austin noted.
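For a sense of scale: the Sharpe ratio is annualized excess return divided by annualized volatility, so a 20-25% gain would take, say, a 1.0-Sharpe strategy to roughly 1.2-1.25. The arithmetic below uses hypothetical numbers, not Markets EQ results.

```python
# What a ~24% Sharpe gain looks like in round numbers. The daily return
# and volatility figures are hypothetical, not Markets EQ data.
import math

def annualized_sharpe(mean_daily: float, vol_daily: float) -> float:
    # annualized excess return / annualized volatility (252 trading days)
    return (mean_daily * 252) / (vol_daily * math.sqrt(252))

base = annualized_sharpe(0.00063, 0.01)      # text-only signal
boosted = annualized_sharpe(0.00078, 0.01)   # text + voice signal
print(f"text only:    {base:.2f}")           # ~1.00
print(f"text + voice: {boosted:.2f}")        # ~1.24
print(f"gain: {boosted / base - 1:.0%}")     # ~24%
```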

Analyzing voice patterns quarterly
During the demo, Austin showed how Markets EQ can identify trends across multiple earnings calls and detect "emotional peaks" in executives' speech. The platform analyzes metrics like arousal, valence, and dominance – key components of how sentiment is expressed through speech.
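Markets EQ hasn't published its feature pipeline, but as a rough sketch of the acoustic side: arousal tends to track pitch and loudness, so simple proxies can be pulled from call audio with an open-source library like librosa. Valence and dominance generally require trained models on top of features like these, and "call.wav" is a placeholder path.

```python
# Sketch of acoustic proxies for vocal arousal, using librosa. Valence and
# dominance scoring would need trained models on top of features like these;
# "call.wav" is a placeholder for an earnings-call recording.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=16000)

# Fundamental frequency (pitch) per frame via probabilistic YIN;
# unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# RMS energy per frame: louder, more variable delivery often reads as aroused.
rms = librosa.feature.rms(y=y)[0]

print(f"median pitch:    {np.nanmedian(f0):.1f} Hz")
print(f"pitch variation: {np.nanstd(f0):.1f} Hz")
print(f"mean energy:     {rms.mean():.4f}")
```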
Austin pointed to a case where both an NBC report and its AI analysis independently flagged the same emotionally charged moment in an earnings call—when a UnitedHealth executive said, “The health system needs to function better” after the fatal shooting of a colleague last year.
The technology uses a 20-year FactSet dataset to track speaker cadence, analyst coverage, and global trends—enabling AI to detect tonal shifts across earnings calls.
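As a hedged sketch of what that cross-call comparison could look like mechanically: track a speaker's cadence (words per minute) across past quarters and flag a call that deviates sharply from their own baseline. The numbers and threshold here are made up for illustration.

```python
# Sketch: flag a cadence shift when a speaker's latest call deviates from
# their own historical baseline. All values are hypothetical.
import statistics

# Words per minute for one executive across past quarterly calls.
history = [148, 152, 150, 147, 151, 149, 153, 150]
latest = 137  # hypothetical latest call: noticeably slower delivery

mean = statistics.mean(history)   # 150.0
std = statistics.stdev(history)   # 2.0
z = (latest - mean) / std         # -6.5

if abs(z) > 2:  # illustrative threshold
    print(f"cadence shift flagged: {latest} wpm, z = {z:+.1f}")
else:
    print(f"within normal range: z = {z:+.1f}")
```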
While systematic firms were early adopters, Austin believes the biggest impact will be for non-quants, who have traditionally relied on intuition to assess executive confidence.
The platform allows analysts to get a deeper understanding of executives' state of mind and the behavioral components of their communications – something that experienced analysts have been trying to do by ear for years.
"Every analyst I've ever asked who's been on these calls says the same thing – they're trying to hear, literally listen, to see if you sound a little different," Austin said.
PROGRAMMING NOTE
AI Street is off next week and back March 23.
ICYMI
Check out the last few editions* on using AI for investment analysis, creating customized news feeds, and tracking earnings call mentions.
*Not investment advice
How did you like today's newsletter?