How Hedge Funds Are Learning to Trust AI

Hey, it's Matt. Here’s what’s up in AI + Wall Street.
HEDGE FUNDS
AQR Leans Into AI
A helpful way to think about the current AI boom is that it flips how we’ve traditionally done science on its head.
Old way:
Come up with a hypothesis
Test it against data
With AI:
Start with data
Let the machine find patterns
Work backwards to figure out what it’s telling you.
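The inversion can be shown with a toy example. A minimal, stdlib-only sketch on invented data: the "old way" tests one pre-specified signal; the "new way" lets the data rank every candidate, then you work backwards to interpret the winner. All signal names and figures here are made up for illustration.

```python
import random

random.seed(0)

# Synthetic data: 500 observations, three candidate signals.
# Only "signal_b" actually drives next-period returns.
n = 500
data = []
for _ in range(n):
    a, b, c = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
    ret = 0.8 * b + random.gauss(0, 0.5)  # hidden relationship
    data.append({"signal_a": a, "signal_b": b, "signal_c": c, "ret": ret})

def corr(xs, ys):
    """Pearson correlation, stdlib only."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

rets = [row["ret"] for row in data]

# Old way: test one hypothesis ("signal_a predicts returns").
hypothesis_score = corr([row["signal_a"] for row in data], rets)

# New way: score every candidate, then interpret whatever the data surfaces.
scores = {name: corr([row[name] for row in data], rets)
          for name in ("signal_a", "signal_b", "signal_c")}
best = max(scores, key=lambda k: abs(scores[k]))
print(best)
```

The pattern-finding step here is just a correlation ranking; real systems use far richer models, but the workflow — data first, interpretation second — is the same.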
This has become possible thanks to breakthroughs in computational power and model design.
Take self-driving cars.
Instead of programming for every possible scenario, you feed the system tons of data and it learns how to respond. This is a simplification, but conceptually that's what's happening (and why self-driving cars have improved so much in recent years).

Now think about how a hedge fund might use this tech. We’re starting to see some public commentary on how they are thinking about it.
Cliff Asness, co-founder of the $128 billion quant fund AQR, said in December the firm is now starting to embrace the technology and that for certain investing decisions AI is “annoyingly better” than him.
Asness is very much from the old school of factor investing, earning his PhD under Eugene Fama, architect of the efficient markets hypothesis.
Bloomberg reported this week that the fund is:
Raising outside capital for two machine-learning-focused funds: Turing Equities and Turing Macro
Using machine learning to power about 20% of the signals in its $3.4 billion Apex multi-strategy fund
Seeing early performance impact: Apex returned 9% in Q1, outpacing many peers (though April returns have been negative amid volatility)
The quote below stood out to me. It’s from Bryan Kelly, head of machine learning at AQR:
“I would call it bad science — and certainly bad asset management — if you discarded reliably predictive signals just because you didn’t understand why they worked.”
I think this is a crucial point the industry is wrestling with. How do you fit this new paradigm with the old?
RELATED LINKS
AQR published “A New Paradigm in Active Equity” in February, discussing emerging tech + investing.
Research: AI Asset Pricing Models

FORECASTS
Boosted.ai’s CEO on Where AI Goes Next
Today’s AI delivers answers based on massive datasets and computing power — built for the “general” user.
But the AI of tomorrow will learn how you think, using your data to deliver the right answers before you even ask.
This sci-fi scenario isn’t so distant, according to Joshua Pantony, CEO of Boosted.ai, an AI investor platform used by investment teams managing more than $5 trillion.
Pantony says this is moving from a Google-like experience, where you prompt the system for what you need, to something closer to YouTube or Spotify, where the system already knows your preferences and surfaces what’s useful before you go looking.
While still in college in 2009, he co-founded a Siri competitor that ran on 50 million devices and was later acquired by Microsoft. His bet now with Boosted.ai is that the next wave of AI in finance won’t just respond to prompts — it will know which questions matter, and back up its answers with code you can verify.
On building a macro view from micro signals
"One of the interesting new unlocks is the ability to fundamentally build your macro picture from micro perspectives... you can use the system, this ability to analyze huge amounts of unstructured data, turn it into a structured format, and then operate on it.
One macro fund used it to scan every stock with credit exposure, analyzing delinquency and charge-off rates globally. That used to take dozens of hours — now it’s one prompt.”
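The micro-to-macro idea reduces to: parse unstructured filings into structured records, then aggregate. A minimal sketch of the aggregation step, with invented companies and figures standing in for what an LLM pipeline might emit:

```python
from collections import defaultdict

# Hypothetical structured records, as a parsing pipeline might emit after
# reading each lender's filings (names and figures are invented).
records = [
    {"company": "LenderA", "region": "US", "delinquency_rate": 0.031},
    {"company": "LenderB", "region": "US", "delinquency_rate": 0.027},
    {"company": "LenderC", "region": "EU", "delinquency_rate": 0.019},
    {"company": "LenderD", "region": "EU", "delinquency_rate": 0.023},
]

def macro_view(rows):
    """Average micro-level delinquency rates into a per-region signal."""
    by_region = defaultdict(list)
    for row in rows:
        by_region[row["region"]].append(row["delinquency_rate"])
    return {region: sum(v) / len(v) for region, v in by_region.items()}

print(macro_view(records))
```

The hard part in practice is the unstructured-to-structured step; once the records exist, the macro picture is a simple group-by.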
On how AI systems work at Boosted.ai
“You create a team of little AI workers. You teach them how to do a task, and they do it continuously. Each worker builds a work log — essentially a code-based audit trail — so you can see exactly what it’s doing and adjust the logic.”
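A toy version of that worker-plus-work-log pattern, with an invented API (this is not Boosted.ai's actual implementation): each worker runs one editable task repeatedly and records every run in an inspectable audit trail.

```python
from datetime import datetime, timezone

class Worker:
    """A recurring task runner that keeps an audit trail, loosely in the
    spirit Pantony describes. The API here is invented for illustration."""

    def __init__(self, name, task):
        self.name = name
        self.task = task          # a plain function: the editable "logic"
        self.work_log = []        # audit trail of every run

    def run(self, payload):
        result = self.task(payload)
        self.work_log.append({
            "worker": self.name,
            "input": payload,
            "output": result,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return result

# Teach a worker one task: flag filings that mention credit exposure.
flagger = Worker("credit-flagger",
                 lambda text: "credit" in text.lower())
flagger.run("Quarterly report notes rising credit charge-offs")
flagger.run("Routine proxy statement")
print([entry["output"] for entry in flagger.work_log])
```

Because the logic is a plain function, adjusting it means editing code you can read, and the log shows exactly what was decided and when.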
On how code can serve as an auditor for AI
“The biggest challenge in finance is the lack of determinism… If it did it this way today, you want it to do it the exact same way tomorrow. The way to get trust is to make the system produce verifiable output — and the easiest way to do that is to get it to generate code.”
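A sketch of why generated code buys determinism: rather than trusting a model's free-text answer, you have it emit the calculation as code, then execute that code yourself. The snippet string below is a stand-in for model output (the figures are invented); running it twice gives identical results.

```python
# Stand-in for model-generated output: the calculation as code, not prose.
generated_code = """
def revenue_growth(prior, current):
    return (current - prior) / prior

answer = revenue_growth(100.0, 112.0)
"""

namespace = {}
exec(generated_code, namespace)   # the verifiable artifact
first = namespace["answer"]

# Re-running the exact same code tomorrow yields the exact same result.
namespace2 = {}
exec(generated_code, namespace2)
assert namespace2["answer"] == first
print(first)
```

The code is the audit trail: anyone can read the formula, re-run it, and get the same number, which is the trust property Kelly's and Pantony's comments both circle around.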
On broader productivity and workflow gains
“The cost of producing software is going to drop by 80 to 90 percent. That’s going to cause massive disruption to incumbents... Right now, users are automating maybe 20-30% of their work. By year-end, I think that number could hit 100% productivity gains.”

ICYMI
Matt’s Note: The piece below was originally published on Sunday, April 20 in the AI Street Markets edition. I’m reposting it here after getting an outsized response to the LinkedIn post promoting it.
Also, in the next Markets edition, which will be out on May 4, I’ll be focusing on how to use DataMule for financial analysis. I’m not a coder, but I’ll be doing some "vibecoding" as the kids say. If you wanna follow along, you can sign up for the Sunday edition here.
Making SEC Data Cheap
John Friedman wants his financial data platform, DataMule, to make SEC data dirt cheap.
Friedman, who used to work for MIT economist Simon Johnson, is building the ambitious open-source project with no staff, no pitch deck and no business model.
"I just want it to exist," he tells me.
The idea started when one of his PhD classmates at UCLA couldn’t pursue a research project because the dataset they needed cost $35,000.

So Friedman built a 1-million-row executive officers dataset from 20 years of SEC filings using just $5 of Google Cloud credits.
His approach blends traditional coding with AI.
“I write algorithmic parsers because they are very fast and cheap,” he told me. “Then I feed the parsed data into an LLM if I need context.”
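A minimal sketch of that "parser first, LLM second" split: a regex pulls structured officer appointments out of filing text cheaply, and only ambiguous leftovers would be escalated to an LLM. The pattern and sample text are invented for illustration, not Friedman's actual parsers.

```python
import re

# Cheap algorithmic parser: extract officer appointments with a regex.
PATTERN = re.compile(
    r"appointed (?P<name>[A-Z][a-z]+ [A-Z][a-z]+) as (?P<title>Chief \w+ Officer)"
)

def parse_officers(text):
    """Return a structured record for every appointment the regex catches."""
    return [m.groupdict() for m in PATTERN.finditer(text)]

sample = ("On March 3, the board appointed Jane Doe as Chief Financial Officer. "
          "It also appointed John Smith as Chief Operating Officer.")
print(parse_officers(sample))
```

A regex pass over millions of filings costs effectively nothing; an LLM pass over the same corpus would not, which is the economics behind the $5 dataset.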
Using DataMule’s SEC archive, you can download every 10-K since 1993 for about $2. For comparison, the SEC’s API is about $350 for a similar pull. Friedman thinks he can cut costs another 1000x by next year.
“Once data is cheap,” Friedman says, “I want to build an analytics layer to answer questions like: What companies are affected by tariffs on Canada?”
Unlike many companies pushing AI as the front-end product, Friedman’s stack treats LLMs like a secondary tool—useful, but not essential. He’s more interested in making the data accessible and reproducible first. The analytics layer, he says, comes second.
That’s the idea behind Indicators, a kind of minimum viable product for building structured economic signals from unstructured filings. Right now, it’s basic. But Friedman says he’s planning a full rewrite that will incorporate a wider range of form types beyond just 10-Ks.
He’s also working on LLM-based search for regulatory filings. The plan: build a simple tool where a query like executive departures citing strategic disagreements triggers an LLM to generate keywords and search a NoSQL database of 8-K Item 5.02s. Estimated cost per query? Roughly $0.00001.
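The planned flow reduces to two cheap steps: expand the query into keywords (the only LLM call), then run a plain keyword match against the database. A toy sketch with the LLM step stubbed out and invented filing snippets standing in for the 8-K Item 5.02 store:

```python
def expand_query(query):
    """Stand-in for the LLM keyword-generation step."""
    canned = {
        "executive departures citing strategic disagreements":
            ["resigned", "disagreement", "strategic direction"],
    }
    return canned.get(query, query.lower().split())

def search(filings, keywords):
    """Return filings matching any keyword (a NoSQL query stand-in)."""
    return [f for f in filings
            if any(k in f["text"].lower() for k in keywords)]

filings = [  # invented examples of 8-K Item 5.02 snippets
    {"id": 1, "text": "The CFO resigned, citing a disagreement over strategic direction."},
    {"id": 2, "text": "The board appointed a new independent director."},
]

hits = search(filings, expand_query(
    "executive departures citing strategic disagreements"))
print([f["id"] for f in hits])
```

Since the LLM only generates a handful of keywords per query rather than reading the filings themselves, a per-query cost in fractions of a cent is plausible.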
Friedman isn’t looking to raise money at the moment. But he’s open to sponsorships or credits, particularly for LLM APIs and database infrastructure.
His users today are mostly PhDs, retired engineers, hedge fund quants, and technical researchers.
If you’re sitting on any unused API credits, feel free to reach out to John via LinkedIn or I’m happy to make an introduction.

WHAT ELSE I’M READING
Nvidia releases platform for open-source AI agents (WSJ)
Banks must fight deepfakes with better AI, Barr says (Banking Dive)
AI pressures 100+ software firms, study finds (Business Insider) $
Moody’s builds 35 AI agents with tasks, personalities, and data access (South China Morning Post)
Seven Goldman Sachs executives discuss AI’s impact at the bank (Business Insider) $
Flash Boys Emerge From Shadows to Reorder Stock Trading (Bloomberg) $

How did you like today's newsletter?