
Five Minutes with Finster CEO Sid Jayakumar

On building AI Agents for Wall Street

INTERVIEW

This week, we interview Sid Jayakumar, CEO and founder of London-based Finster AI and formerly a research engineer at Google DeepMind.

Before joining DeepMind, he turned down a lucrative offer from a quant hedge fund, a decision that left his would-be employer perplexed, since the fund had even offered free housing in London. Sid returned to financial markets by founding Finster last year.

The company is building AI agents to make financial analysis more efficient. Finster AI is currently working across use cases on the buy side and investment banking with firms managing over $800 billion and collectively employing over 1,000 analysts, according to Sid.

The company is backed by Hoxton Ventures, with angel investors including the CEO and CTO of Cohere and senior staff at OpenAI, Meta AI, and DeepMind.

In the interview, we cover:

  • How Finster is tackling the AI "hallucination" problem that's driving finance pros crazy

  • His prediction on how AI will reshape hedge funds (spoiler: it's sooner than you think)

  • Why general-purpose language models fall short for specific financial applications

  • The potential for AI to automate complex financial tasks beyond simple data retrieval

  • And more

This interview has been edited for clarity and length. 

Tell me about yourself and how you got involved with AI and finance

I got lucky being in the right place at the right time. I grew up in India, moved to the UK, and during my undergrad at Cambridge, I got interested in AI just as Google acquired DeepMind. 

I remember this incredible demo they released where an AI agent learned to play Atari games, including “Breakout.” It wasn’t just playing the game; it kept getting better until it discovered a bug in the game. The AI figured out that if it hit the ball at a precise angle, it could clear all the bricks consistently without missing. It was like the agent had developed its own strategy. That video was mind-blowing to me, and even though it’s over 10 years old, it’s still on YouTube, and I remember it clearly.

I spent my first summer at Morgan Stanley doing data science, but after seeing what DeepMind was doing, I was hooked on AI. By chance, I met Demis Hassabis, the founder of DeepMind, at an event, and one thing led to another. I joined DeepMind in 2016 as their first undergrad intern, and I ended up staying for seven years. I was supposed to return to finish my degree, but they offered me a role, and I decided to stay.

Working at DeepMind was incredible. It wasn’t just about being around smart people—it was about bringing together physicists, computer scientists, and neuroscientists, all focused on solving intelligence. 

It felt like we were at the start of something big, even if we didn't fully realize it at the time. The work was very pure in a way: we weren't focused on products yet; it was all about pushing the boundaries of what was possible.

When ChatGPT became an inflection point for AI, I realized we were entering a phase where these models could truly change industries, not just in research, but in real-world applications. For me, that sparked a desire to explore something new outside of Google. I wasn’t set on Finster immediately, but I knew I wanted to be part of how AI would transform knowledge work—similar to how coding and programming have changed. AI isn’t replacing jobs, but it’s fundamentally altering how we approach work. I wanted to be part of that shift.

How’d you get interested in finance? 

Yeah, so my interest in finance actually comes from a pretty unique place. It's kind of funny - I have what I call the "reverse Indian parents stereotype." My parents aren't engineers or doctors, they're actually in finance, but my sister and I ended up as doctors and engineers. So, I grew up with finance being a big part of our dinner table conversations. I was always kind of there in that world, and I found it really interesting from an early age.

I actually almost ended up at a hedge fund a few times too. When I got my offer from DeepMind, I had a parallel offer from a hedge fund that was paying way more money. The Wall Street guys were pretty perplexed when I turned it down for DeepMind. They were like, "We even offer free housing in London!" But I was more excited about the potential of AI at that point. I figured I could always come back to finance later, but I wanted to see what this whole "solving intelligence" thing was about at DeepMind. Looking back, I'm really grateful I made that choice.

What does your company Finster do?

We're building a product that can automate a lot of the grunt work that financial analysts do, but also help with more complex tasks.

One of our key focuses is on solving the hallucination problem in finance. We've developed our own data pipeline and don't rely on what the large language models have been trained on. This allows us to ground our answers in what companies have actually said, and users can quickly verify this.

We're also working on more advanced features. For example, we can take a user's question and break it down into sub-queries, potentially doing multiple things at once to answer complex questions. We've even developed a system that can write entire reports. You could ask for a one-pager about a company's earnings, and our system will break that down into multiple queries, look at 20 different sources, and compile a comprehensive report.
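
The decomposition step Sid describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not Finster's implementation: the sub-queries and source names are hard-coded placeholders where a real system would use a planner model and a retrieval layer.

```python
from dataclasses import dataclass


@dataclass
class SubQuery:
    question: str
    sources: list[str]  # hypothetical source names, e.g. filings or transcripts


def decompose(question: str) -> list[SubQuery]:
    """Split a broad request into narrower, independently answerable sub-queries.

    The splits below are hard-coded for illustration; a production system
    would use a planner model to generate them.
    """
    if "earnings" in question.lower():
        return [
            SubQuery("What revenue and EPS were reported?", ["10-Q", "press release"]),
            SubQuery("How did results compare to prior guidance?", ["prior earnings call"]),
            SubQuery("What guidance was given for next quarter?", ["call transcript"]),
        ]
    return [SubQuery(question, ["general search"])]


def compile_report(question: str) -> str:
    """Answer each sub-query, then stitch the grounded answers into one report."""
    sections = []
    for sq in decompose(question):
        # Placeholder for retrieval + generation grounded in the named sources.
        answer = f"[answer grounded in {', '.join(sq.sources)}]"
        sections.append(f"## {sq.question}\n{answer}")
    return "\n\n".join(sections)
```

Calling `compile_report("Summarize ACME's earnings")` would produce a three-section outline, one section per sub-query, each tagged with the sources it drew from.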

Our goal isn't just to be a chat interface; we're really trying to automate whole pieces of work. We're aiming to be that junior analyst sitting on your laptop who can do the tedious stuff for you, but in a way where you can verify instantly that it's correct, reliable, and up to your standards.

How do you handle accuracy issues?

It's not about completely eliminating hallucinations. It's more about managing them and building user trust. “I don’t know” is a better answer than making something up.

What the user really wants is the ability to audit the answers so they can build trust. It's about transparency, you know? In finance, we're dealing with critical data and decisions. Users need to know where the information is coming from and how reliable it is.

We're giving context. Finster might say, “FactSet's number was 4.3, but the 10-K has 4.2.” Maybe it's been adjusted differently, maybe something's been calculated differently. But we leave that for the user to decide.
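
That kind of side-by-side auditing can be expressed as a simple cross-check: when sources report different values for the same metric, surface the disagreement with citations rather than silently picking one. A toy sketch, where the function and source names are my own illustration rather than Finster's API:

```python
def cross_check(metric: str, readings: dict[str, float], tolerance: float = 0.0) -> str:
    """Compare one metric across sources and surface any disagreement.

    `readings` maps a source name to its reported number (illustrative).
    Returns a user-facing line; the user, not the system, decides which
    value to trust.
    """
    values = list(readings.values())
    if max(values) - min(values) <= tolerance:
        # All sources agree (within tolerance); report a single value.
        _, value = next(iter(readings.items()))
        return f"{metric}: {value} (sources agree)"
    # Disagreement: list every source and its figure so the user can audit.
    detail = "; ".join(f"{src} reports {val}" for src, val in readings.items())
    return f"{metric}: sources disagree ({detail}) - verify before use"
```

For example, `cross_check("EPS", {"FactSet": 4.3, "10-K": 4.2})` flags the mismatch and names both sources instead of averaging or choosing one.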

We've seen a lot of hype around large language models like ChatGPT. How do you see these being applied in finance?

Look, ChatGPT is amazing, and it's probably one of the great consumer product stories. But I don't think that's the final form of what we need in finance. The first phase of large language models faced challenges with hallucinations, trust, and correctness. Over the last 12 months, we've realized that combining them with smaller, specialized models can deliver the necessary automation for specific use cases more effectively if done correctly. 

In finance, we're dealing with critical data and decisions. We need models that are not just generalists, but really understand the nuances of financial analysis, regulatory compliance, and market dynamics. That's why at Finster, we're building a lot of our own IP around these models and getting them fit for purpose for capital-markets use cases. We're not just using off-the-shelf GPT or Claude. We've found that you need to improve both the data and the models' specificity and auditability.

The goal is to create models that consistently outperform large, generalist models for specific financial tasks. It's not about having the biggest model, but having the right model for the job. And in finance, that often means a hybrid system of large and small models and various advances across retrieval, tool use and the like. 
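
One way to picture such a hybrid system is a router that sends cheap, well-defined extraction tasks to a small specialized model and reserves the large generalist for open-ended synthesis. The model names and keyword heuristic below are purely hypothetical; a production router would more likely use a learned classifier:

```python
def route(task: str) -> str:
    """Pick a model tier by task type.

    Keyword matching stands in for a real task classifier; the model
    names are placeholders, not real systems.
    """
    extraction_keywords = ("revenue", "eps", "margin", "headcount")
    if any(k in task.lower() for k in extraction_keywords):
        return "small-finance-extractor"  # cheap, specialized, easy to audit
    return "large-generalist"             # broad reasoning, used sparingly
```

Under this sketch, "Pull Q2 revenue from the 10-Q" goes to the small extractor, while "Draft an investment thesis" falls through to the generalist.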

How do you see AI impacting hedge funds in the coming years?

There's already pressure on the 2-and-20 world to go to 1-and-20 or lower. With AI, I think that will continue. The best performing people in all fields, but especially in finance, will find that AI aligns with their incentives. You could imagine a new, savvy hedge fund saying, "We don't have any juniors. We just have AI assistants and 3 partners. We're really lean, we take very low management fees, but we take our cut of the carry because we back ourselves to generate returns."

The interesting thing is, AI benefits both ends of the spectrum. The large, more "old school" asset managers actually have the most to gain from AI right now. They have the resources to implement it at scale, and they have the most repetitive work that can be automated. But it also opens the door for smaller, tech-savvy firms to compete by operating extremely efficiently.

Say you want to be market neutral but long a particular industry, and you want to figure out who the winners are based on an event you see coming soon. If you can do that research 10x quicker with AI, you can build conviction 10x faster and potentially trade much better. It's not about AI making the decisions, it's about AI doing a lot of the grunt work in validating or disproving your hypotheses.

How soon do you think we'll see these changes in the industry?

Some of this is already happening. From what I hear, there's a lot of money being made with interesting applications of large language model-based technology in quantitative prop trading. But for the broader industry shift we're talking about for the “fundamental” world, I'd say we're looking at the next 5-10 years. It's not an overnight change, but it's coming faster than many people realize.

Thanks for reading!

Drop me a line if you have story ideas, research, or upcoming conferences to share. [email protected]
