
Tucker Balch on Scaling Investment Analysis with AI

Five Minutes with the Emory Professor and former JPMorgan Executive


As a kid, Tucker Balch dreamed of becoming an astronaut. Inspired by NASA’s finest, he plotted a course: become a military pilot, earn a PhD, and ultimately reach space.

After eight years flying F-15s and earning a doctorate in computer science, he secured an interview at NASA—but a minor health issue kept him grounded.

That setback became a launchpad. 

Tucker went on to navigate an unusual career as a robotics researcher, professor, and Wall Street AI innovator, authoring over 100 peer-reviewed papers and earning more than 15,000 citations.

Balch holds multiple AI and financial technology patents and co-founded Lucena Research (now Neuravest), a firm specializing in AI-driven investment solutions.

In 2019, he moved from Georgia Tech to join JPMorgan, where he helped his former postdoctoral advisor Manuela Veloso expand the bank's AI team from four to 110 members, cementing its position as a leader in financial AI.

This summer, Balch returned to academia, joining the Business School at Emory University.

I met Tucker at the International Conference on AI in Finance, the peer-reviewed conference he founded four years ago, which has grown into a great event.

Our conversation explored how AI is transforming investment analysis, from processing vast amounts of data to unlocking insights from alternative sources.

This interview has been edited for length and clarity. 

What made you want to become an astronaut? 

Thirty years ago or so, I was very committed to us getting into space.

I wanted to personally participate in that. After reading many biographies of astronauts, I learned that almost all of them—up until a certain date in NASA's history—had been military pilots. A lot of them also had PhDs, so I decided I would go that route. It's not like I decided to do those things only because I wanted to be an astronaut. 

As a kid, I was always interested in being a fighter pilot, but when I graduated from college, I thought, "That's selfish—to go be a fighter pilot." I felt I should use my computer science knowledge for good in some way.

So my first job out of college was at Lawrence Livermore National Lab on a fusion energy project. It was fascinating, but it ended up getting canceled. That's when I thought, "Maybe I really should go be a fighter pilot after all."

So I left Livermore and went that route. And then once I completed training, I was able to work on my PhD, which I did. I thought I was pretty well aligned to become an astronaut, and I did get the interview. I think personality-wise and interview-wise, I got the job. But it turned out there was one medical red flag that kept me out: I had antibodies against my thyroid, which ended up not mattering at all over the years. But because astronauts might go on long-duration space flights, it was considered a risk.

Turns out that it didn't matter, but I was able to keep my fighter pilot job for eight years. 

How did you move from fighter pilot to finance?

After the Air Force, I was a roboticist at CMU, then a professor at Georgia Tech. I’ve always been motivated to “do good” for society, and over time, I began to realize that AI in finance can be that sort of “good.” Not just making people rich, but helping them figure out how to manage their money and so on.

In 2018, my former postdoc advisor [Manuela Veloso] joined JPMorgan to establish an AI research group there.  I touched base with her and she said, “Hey, I'd like you to join me here.”

I spent six years at JPMorgan contributing to AI research, which I enjoyed immensely. About a year ago, I started thinking about returning to academia—and now I'm here at Emory. So that's my journey. 

How’s it jumping between academia and Wall Street?

There's important, good AI work happening at places like JPMorgan and other banks, but they can't tell anybody about it. So you don't get the synergy of someone half-baking an idea at Morgan Stanley and someone else doing a different part of it at Goldman; they can't get together and make it a better, bigger thing. Among AI groups at financial institutions, AI Research at JPMorgan is a notable exception: they have a remarkable track record of publishing in this area.

Whereas in academia, we're perfectly free to interchange in that way. Although individually in academia, we might not be able to execute and excel like some of the people at the banks, through this exchange of ideas we can. That’s, for instance, how large language models came to be—without academia and open-source contributions, they simply couldn’t exist.

What surprises you about AI's current capabilities?

I think we’re asymptotically approaching a peak of AI capabilities, but we’re nowhere near reaching the peak impact AI can have. Even if it doesn't become more capable, there are so many applications and uses that nobody has tried yet; those are still emerging and underexploited.

How do you see AI changing investment analysis? 

It's still the case that for most analysis tasks people are better than AI, but people are much slower than AI. So there's the capability to scale this up: you can do an 80 percent quality job on a thousand companies, as opposed to a 99 percent job on 10 companies.

If you can have a decently informed opinion on thousands of stocks, you can turn that into a robust investing strategy.

What's more important - the AI algorithm or the data?

The key question is, what is your data? In my experience, the effectiveness and quality of AI in investing strategies is less about the particular algorithm you're using and more about the quality or uniqueness of your data.

Large language models change that landscape in the following sense. When I talk about data, I guess I generically mean numerical data. In other words, some sort of data that you can very easily feed into some sort of trading algorithm. What large language models do is they enable you to turn language into numbers.

Wherever you're getting your written language about a topic, you need to have something that is predictive and perhaps unique, but LLMs allow you to leverage different sources. For instance, if you can listen to the news in Vietnam, translate it in real time, and identify relevant information for specific stocks, you greatly expand your data sources.

People can do that, but it takes time, and you’ve got to pay people to listen and translate. What AI does is enable a broader net. It opens up a lot more sources of data that weren't accessible before.
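To make the “language into numbers” idea concrete, here is a minimal sketch (not from the interview) of how a translated news item might be scored for specific tickers. The ask_llm helper, the prompt wording, the example tickers, and the -1 to +1 scale are all illustrative assumptions rather than anything Balch or JPMorgan describes using.

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM API you use
    (e.g., a chat-completion endpoint that returns text)."""
    raise NotImplementedError("wire up your preferred LLM provider here")

def news_to_signal(article_text: str, tickers: list[str]) -> dict[str, float]:
    """Turn a (possibly non-English) news item into per-ticker scores.

    The model is asked to translate if needed, then rate the expected
    impact on each ticker from -1 (very negative) to +1 (very positive).
    """
    prompt = (
        "Translate the following news item to English if necessary, then "
        f"rate its likely impact on each of these tickers {tickers} on a "
        "scale from -1 (very negative) to 1 (very positive). "
        "Respond with JSON mapping ticker to score.\n\n" + article_text
    )
    raw = ask_llm(prompt)
    scores = json.loads(raw)  # e.g. {"VNM": 0.4, "HPG": -0.2}
    # Keep only the requested tickers; default to 0.0 (no signal) if missing.
    return {t: float(scores.get(t, 0.0)) for t in tickers}
```

The resulting numbers can then be batched across many sources and fed, alongside price data, into whatever ranking or portfolio model you already run.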

You previously started some alternative data companies. How do you think LLMs affect that market? 

Back then, about 12 years ago, a lot of the hard work was in converting this information into some form that was usable for investing. Take cell phone location data as an example: you had to know where the retail stores were and merge that with where the cell phones were to create some sort of indicator for, say, home improvement stores. It was manual. You had to find one data source that told you where the stores were, another data source that told you where the cell phones were, and put all that together.

I think those sorts of problems are going to be a lot easier now. You can make an LLM stick those two things together for you in a reasonable, reliable, automatic way and scale those things up more aggressively. That's another example of being creative in how AI can scale up and amplify and accelerate what we're doing.
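As a rough illustration (not something described in the interview), here is what the “stick those two things together” step might look like once you have store locations and anonymized device pings in hand. The column names, the 200-meter radius, and the daily-count indicator are assumptions for the sketch; in Balch's framing, the LLM's role is to automate the messy assembly and matching of sources like these.

```python
import math
import pandas as pd

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def foot_traffic_indicator(stores: pd.DataFrame, pings: pd.DataFrame,
                           radius_km: float = 0.2) -> pd.DataFrame:
    """Count device pings within radius_km of each store, per day.

    stores: columns [store_id, lat, lon]
    pings:  columns [date, lat, lon]
    Returns daily ping counts per store, a crude proxy for visits.
    """
    rows = []
    for _, s in stores.iterrows():
        nearby = pings[pings.apply(
            lambda p: haversine_km(s.lat, s.lon, p.lat, p.lon) <= radius_km,
            axis=1)]
        counts = nearby.groupby("date").size().rename("visits").reset_index()
        counts["store_id"] = s.store_id
        rows.append(counts)
    return pd.concat(rows, ignore_index=True)
```

The join itself is simple; the hard, manual part Balch describes was finding, cleaning, and aligning the two datasets in the first place, which is exactly the step he suggests an LLM can now help automate and scale.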

What’s been surprising to you about AI? 

The little things. The big change in the last year and a half is that everybody can use AI. 

I'm using it for routine administrative things. For example, you were just at that conference with me. I needed to send an email to a bunch of people who were there, and I had some documents with their email addresses, but I didn't want to pull them out manually, so I simply asked AI to extract the addresses and, boom, done.

IN CASE YOU MISSED IT
Recent Five Minutes with Interviews

  • USC's Matthew Shaffer on using ChatGPT to estimate “core earnings.”

  • Moody’s Sergio Gago on scaling AI at the enterprise level.

  • Ravenpack | Bigdata.com’s Aakarsh Ramchandani on AI and NLPs

  • PhD candidate Alex Kim on executive tone in earnings calls

  • MDOTM’s Peter Zangari, PhD, on AI For Portfolio Management

  • Arta’s Chirag Yagnik on AI-powered wealth management

  • Finster’s Sid Jayakumar on AI agents for Wall Street

Thanks for reading!

Drop me a line if you have story ideas, research, or upcoming conferences to share. [email protected]
