DeepSeek Upends AI Investment Narrative

Hey, it's Matt. Here’s the latest in AI + Wall Street with expert interviews, news, and in-depth analysis.
DEEPSEEK
DeepSeek Upends AI Investment Narrative
A Chinese AI model, DeepSeek, rocked markets this week.
DeepSeek is a suite of Large Language Models developed by a Chinese quantitative hedge fund, High-Flyer, which initially focused on applying AI to financial markets. In 2023, the firm spun off DeepSeek as an independent research group dedicated to developing artificial general intelligence.
The company’s latest model, R1, has upended the AI investment narrative—showing that breakthroughs in large language models don’t require billions in computing power.
DeepSeek claims R1 was trained without Nvidia’s most advanced chips and at a cost of about $6 million — a fraction of what the most advanced models cost, while delivering similar functionality.
Some experts have questioned that figure, noting that it leaves out the trial and error it took to get there. But that's beside the point: R1 is cheaper, and it performs well.
Alex Hoffmann, co-founder and CEO of Marvin Labs, has been testing the new model and says its quality is on par with OpenAI’s o1 at a much lower cost.
“We’ve only had positive experiences so far,” Hoffmann tells me.
This economical performance shocked markets on Monday, even though DeepSeek had released R1 a week earlier. Nvidia's stock plunged, wiping out almost $600 billion in market value, the largest single-day loss for any company.
The emergence of a cost-effective, open-source AI model could boost AI adoption on Wall Street, according to Pete Harris, founder of consultancy Lighthouse Partners.
“It should be good news for Wall Street firms and institutional investors, which are always concerned about costs, and which have long been comfortable using open-source software—even for mission-critical applications.”
“Sputnik Moment”?
Venture capitalist Marc Andreessen called the release of R1 ‘AI’s Sputnik moment,’ referring to the 1957 launch of the Soviet satellite Sputnik, which sparked the U.S.-Soviet space race.
Not everyone agrees.
"This isn't a Sputnik moment," Tucker Balch, former JPMorgan AI executive and now Emory University professor, tells me. "In that case, Russia could do something we couldn't do at all. In this case, the Chinese company is able to do it cheaper, not better."
He added, “OpenAI still owns the high ground, as in the best models. But DeepSeek is undermining OpenAI’s ‘commodity product’ at the low end. DeepSeek is still relying on older Nvidia chips, so I don’t see it as such a disaster for them either.”
Hardware Constraints, Software Discoveries
Amid all this, the U.S. government has restricted sales of Nvidia’s most advanced chips to China to curb their use by the Chinese military.
What’s striking is that DeepSeek achieved this breakthrough using less advanced chips, showing how hardware constraints can lead to software discoveries.
Mark Zuckerberg discussed this yesterday on Meta’s earnings call:
I think there's a number of novel things that [DeepSeek] did that I think we're still digesting. And there are a number of things that they have advances that we will hope to implement in our systems. And that's part of the nature of how this works, whether it's a Chinese competitor or not. I kind of expect that every new company that has an advance -- that has a launch is going to have some new advances that the rest of the field learns from. And that's sort of how the technology industry goes. I don't know -- it's probably too early to really have a strong opinion on what this means for the trajectory around infrastructure and CapEx and things like that. There are a bunch of trends that are happening here all at once.
I bolded the last line: “There are a bunch of trends that are happening here all at once.” There’s way too much happening here! It’s hard to keep up!
I’ve interviewed close to 100 folks in the last few months about AI + finance and I usually ask, “What’s been the most surprising thing for you over the last year or so?” And a pretty consistent answer is the pace of advancement.
It’s been pretty shocking to me that experts, some with decades of experience, are sort of stunned at AI’s pace of advancement.
It’s like an architect building their dream home, and by the time they move in, it's already dated.
PODCAST
We just recorded a fantastic episode of Alpha Intelligence Podcast focusing on DeepSeek's new model. My co-host Francesco Fabozzi and I had the pleasure of speaking with Tharsis Souza, a former senior VP at Two Sigma, Columbia lecturer, and author of a new O'Reilly book on LLMs, called Taming LLMs.
The conversation was so compelling that we've fast-tracked editing, putting our previously scheduled episode on hold. Look for it in the next day or so - I'll share it on LinkedIn once it’s out and in next week's newsletter.
AI NEWS
New Trading Market for Computing Power Launches

A new venture called Compute Exchange launched yesterday, aiming to transform how computing power is bought and sold. Their vision: create a commodity market for processing capacity (known in the tech industry as "compute"), similar to how oil is traded today.
The exchange plans to conduct auctions where data center operators and cloud providers can sell processing time on high-demand chips like Nvidia's H100 and H200, with AI companies bidding for capacity.
Donald Wilson Jr., co-founder of Compute Exchange and founder of high-frequency trading firm DRW Holdings, believes the market potential is massive: "The total dollars spent on compute will, over the next 10 years, exceed total dollars spent on oil. Obviously oil is the largest commodity right now, so I believe it will be displaced by compute."
But unlike oil, which has standardized benchmarks, computing power varies based on chip type, data center setup and software stack. Switching from one compute provider to another brings technical headaches, making it harder to trade than traditional commodities. (WSJ)
The timing of this new market is particularly relevant following this week's DeepSeek news. While researching compute pricing myself, I've found that establishing a clear "going rate" remains difficult. However, this complexity doesn't preclude the market from becoming more liquid over time.
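To make that pricing point concrete, here's a minimal sketch of why two quotes for the "same" GPU-hour aren't directly comparable. Every number below is a made-up placeholder, not market data; the idea is simply that the effective price depends on the chip, the data center setup and the software stack, not the sticker price alone.

```python
# Hypothetical illustration: why a single "going rate" for compute is hard to pin down.
# All prices and throughput multipliers are made-up placeholders, not market data.

quotes = [
    # (provider, chip, quoted $ per GPU-hour, relative throughput vs. a baseline setup)
    ("ProviderA", "H100", 2.90, 1.00),
    ("ProviderB", "H100", 2.40, 0.90),  # same chip, weaker networking/storage setup
    ("ProviderC", "H200", 3.60, 1.25),
    ("ProviderD", "A100", 1.40, 0.45),
]

for provider, chip, price, rel_throughput in quotes:
    # Normalize each quote to "dollars per baseline-equivalent GPU-hour",
    # so offers on different hardware can be compared on one axis.
    effective = price / rel_throughput
    print(f"{provider:9s} {chip:5s} quoted ${price:.2f}/hr -> ${effective:.2f}/hr effective")
```

The cheapest sticker price isn't necessarily the cheapest effective price, which is part of what a standardized exchange would have to sort out.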
BANKING
Banks Modernizing Legacy Code With AI
Banks can now modernize decades-old COBOL code with AI at ~99% accuracy, says Accenture's Michael Abbott
Goldman Hires Amazon AI Expert as Engineering Chief
Goldman Sachs has hired Daniel Marcu, former Amazon VP of AI services, as its global head of artificial intelligence engineering and science. Earlier this month, Goldman rolled out an AI assistant to thousands of employees while expanding its artificial intelligence capabilities across the firm. (Yahoo)
How Lazard is Using AI Platform Rogo
Lazard is using a bespoke version of Rogo, which sits inside its firewall, across its investment banking division and is getting ready to roll Rogo out for its asset management business. It also uses a version of ChatGPT called Lazard GPT. (Puck) $
MARKETS
In the latest AI Street Markets, where I break down AI Investing tools, I got a demo of an AI-agent driven investing platform called ScalarField.io. The company’s founder, Amandeep Singh, PhD, has experience building trading technology and showed me how it can answer open-ended questions like this:
Which small cap companies are attracting a growing number of institutional buyers and what could be sparking their interest?
In the demo video, which I've sped up 3x, you'll see how ScalarField uses agents for different tasks, guides my prompt toward more specificity and suggests follow-up analysis.
This approach mitigates hallucinations since you can verify the underlying code with ScalarField's pro access.
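To illustrate that last point, here's a minimal sketch, not ScalarField's actual implementation, of why a code-backed answer is easier to audit than a free-text one. The data, column names and screening logic below are all hypothetical.

```python
# Minimal sketch of the "verifiable code" idea: the answer ships with the query that
# produced it, so a reader can rerun the query instead of trusting a free-text summary.
# Not ScalarField's actual implementation; data and column names are hypothetical.
import pandas as pd

# Toy institutional-ownership data.
holdings = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "holders_q3": [120, 45, 80],
    "holders_q4": [150, 44, 95],
})

def growing_institutional_interest(df: pd.DataFrame) -> pd.DataFrame:
    """The 'underlying code' a user would inspect: names with more holders in Q4 than Q3."""
    return df[df["holders_q4"] > df["holders_q3"]]

answer = growing_institutional_interest(holdings)
print(answer)
# Because the screen is a few lines of inspectable code, the claim can be checked,
# which is what limits the room for hallucinated conclusions.
```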