DeepSeek's AI Goals

Hey, it's Matt. Here’s what’s up in AI + Wall Street.
IN THIS ISSUE: AI Adoption: IOSCO Report, Recent Research, and Talking Robots
DEEPSEEK
What Are DeepSeek’s AI Plans?
The Financial Times published an interesting story on the reclusive billionaire hedge fund manager behind DeepSeek. Here are the salient points:
DeepSeek, founded by Liang Wenfeng, is a suite of large language models developed by the Chinese quantitative hedge fund High-Flyer, which initially applied AI to financial markets.
It developed a low-cost R1 reasoning model comparable to more expensive competitors.
Has about 160 employees, compared with OpenAI's roughly 2,000
Purchased ~10,000 H800 and ~10,000 A100 Nvidia chips before export restrictions took effect
Suspended API services due to overwhelming demand and resource constraints, with strong interest from the healthcare and finance sectors
Declined investments from Chinese tech giants and venture funds
Prior to the launch of DeepSeek, the hedge fund was better known, well, for being a hedge fund. Now, much of the media focus has (rightly) been on its cheap and efficient model and what the plans are to monetize it.
Hedge funds, by nature, prioritize returns.
While most AI startups look to monetize via API access, DeepSeek appears to be playing a longer-term strategic game. Its reported goal is artificial general intelligence, or AGI. While the term means different things to different people, it essentially describes AI with human-level reasoning abilities.
At the very least, I think it’s worth considering that DeepSeek’s AGI may be more for markets than for you and me.
ADOPTION
AI's Expansion in Investing and Trading: IOSCO Report
Global financial regulators say AI is becoming a bigger player in trading, risk management, and compliance, according to a survey by IOSCO.
Multi-Modal AI Gains Traction in Finance
Institutional investors are using AI to forecast market trends, analyze alternative data, and automate investment research.
“Some IOSCO Member/SRO Survey respondents observed that market participants were investigating the potential of multi-modal systems powered by GenAI to integrate and analyze various data types and sources – such as publicly-traded company filings, earnings calls, and social media posts – to inform trading decisions.”
Regulators Weigh AI-Specific Rules
IOSCO and financial regulators are evaluating whether AI-driven trading and research require new compliance frameworks.
“Regulatory responses to the use of AI in the financial sector are also evolving, with some regulators applying existing regulatory frameworks to AI activities, and others developing bespoke regulatory frameworks to address the unique challenges posed by AI.”
AI Poses Systemic Risks to Market Stability
The widespread use of AI in trading could amplify market volatility, increase concentration risk, and create new vulnerabilities.
“Risks most commonly cited to IOSCO during its information-gathering efforts with respect to the use of AI systems in the financial sector include risks from malicious uses of AI; AI model and data considerations; concentration, outsourcing, and third-party dependency; and interactions between humans and AI systems.”
ROBOTS
AI Agents Meet, Then Switch to Robot Speak
I’m a methodical guy. I ask a lot of questions generally. So I’m pretty skeptical of hype. I wrote about white-collar crime for 5+ years at Bloomberg, so I saw up close how promises go bust.
When I saw this video of two AI agents recognizing each other and switching to robot voice, I figured it might be fake. (It’s only a minute—check it out.)
So I found the person who posted the video on LinkedIn and asked him whether this was science fiction or not…
And it is not, according to Anton Pidkuiko. Here’s how Pidkuiko, who’s also a software engineer at Meta, broke it down for me:
Our pitch was that AI phone agents make and receive more and more calls every day, so it will become increasingly common for both the caller and the receiver to be AI agents, and we wanted to demonstrate a very easy way to optimize this.
We configured the agents to make a normal phone call, with a caveat: if they figure out they are talking to an AI, they should ask whether it supports gibberlink mode. If so, they simply switch to sending the same LLM-generated text messages, but instead of spoken English they use the ggwave open-standard data-over-sound protocol.
Commercial AI calls are currently so expensive (~$0.10/min) because you need GPU models to synthesise speech, recognise speech to text, and track pauses and interruptions. But ggwave is so simple and lightweight that a single CPU process can handle it all. It's also faster and more error-proof.
The phrase "what a pleasant surprise?" was completely improvised by the LLM, and anyone can reproduce this behaviour since we open-sourced all the code.
Their project won first prize.
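The core idea behind data-over-sound is simple enough to sketch: map each chunk of the payload to an audio tone at a fixed frequency, play the tones, and have the receiver run the inverse mapping. Below is a minimal, hypothetical single-tone illustration in pure Python. It is not the actual ggwave protocol (which transmits multiple simultaneous frequencies and adds Reed-Solomon error correction); the frequency and timing constants here are arbitrary choices for demonstration only.

```python
import math

SAMPLE_RATE = 48_000     # samples per second (typical audio rate)
TONE_DURATION = 0.08     # seconds per symbol (arbitrary)
BASE_FREQ = 1875.0       # Hz for symbol value 0 (arbitrary)
FREQ_STEP = 46.875       # Hz between adjacent symbol values (arbitrary)

def nibble_frequencies(data: bytes) -> list[float]:
    """Map each 4-bit nibble (0-15) of the payload to a tone frequency."""
    freqs = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freqs.append(BASE_FREQ + nibble * FREQ_STEP)
    return freqs

def synthesize(freqs: list[float], amplitude: float = 0.5) -> list[float]:
    """Generate raw float samples: one fixed-length sine tone per symbol."""
    samples = []
    n = int(SAMPLE_RATE * TONE_DURATION)
    for f in freqs:
        for i in range(n):
            samples.append(amplitude * math.sin(2 * math.pi * f * i / SAMPLE_RATE))
    return samples

payload = b"hi"
tones = nibble_frequencies(payload)  # 2 bytes -> 4 tones
audio = synthesize(tones)            # raw samples, ready for an audio device
```

The cost argument in the quote above follows directly from this sketch: generating and detecting fixed sine tones is cheap arithmetic a single CPU core can do in real time, whereas speech synthesis and recognition require heavyweight GPU models.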
I’ve talked to a lot of technologists who are genuinely concerned about AI getting ahead of us. And these are not tin-foil hat folks. I have a pretty good sense of them since I’ve talked to many, many so-called “whistleblowers” who had a lot of passion but few facts. Anton is also worried about AI safety:
“I'm personally very concerned about AI safety, and I think it's important to show a wide audience outside the AI field how scary good these technologies are and how quickly they're evolving.”
👀
RESEARCH
I usually pick one AI + Wall Street research paper a week, interview one of the authors, and write a summary. But with so many interesting papers out there, I’m aiming to highlight more—while still interviewing as many authors as possible.
AI Helps German Central Bank Throw Shade at ECB

The Bundesbank developed an AI model, using Meta’s Llama LLM, to analyze European Central Bank monetary policy communications from 2011 to 2024. The AI model found that the ECB’s messaging was predominantly dovish, especially during crises like the Eurozone debt crisis, COVID-19, and the early Russia-Ukraine war. While the ECB took a more hawkish tone on inflation in early 2021, it delayed raising interest rates until mid-2022.
From the FT:
Germany’s central bank bosses have often accused their Eurozone counterpart of being too aggressive in cutting interest rates, but now the Bundesbank has backed up some of their old arguments with an in-house artificial intelligence tool.
Related: Meta’s AI model, Llama, which is reportedly used by Goldman and Nomura, has been downloaded more than one billion times, per the company.
How Central Bankers’ Tone Moves Markets
Earlier this month, I wrote about how tone of voice conveys information beyond the words themselves. I think this is pretty intuitive. We’ve all heard “It’s not what you say but how you say it.” The paper below takes it to the next level:
The research, which used facial and vocal recognition software alongside natural language processing, found that emotions like happiness or anger could amplify the impact of a hawkish policy statement—meaning bond yields rose more when central bankers conveyed tough monetary policy messages with confident or forceful expressions.
Check out the FT story on this research here.
AI Avatars Rival Experts in Economic Forecasting
Researchers created robot economic forecasters by using LLMs to mimic human experts. The LLM-generated forecasts often match or exceed those of their human counterparts, particularly at medium- and long-term horizons. The findings suggest that AI-driven forecasting could serve as a cost-effective, high-frequency complement to traditional economic surveys.
WHAT ELSE I’M READING
How did you like today's newsletter?