
Five Minutes with Aveni's Joseph Twigg

Instead of relying on general-purpose AI systems, like those from OpenAI, the finance industry is moving towards smaller, specialized AI tools.

Think of general-purpose LLMs as large libraries holding thousands of books, and smaller domain-specific language models as specialized bookstores focused on a single subject. Where LLMs offer breadth and versatility at the cost of precision, domain-specific models trade general knowledge for deep expertise in their niche.

That pitch helped Joseph Twigg, CEO and co-founder of Aveni, secure £11 million in Series A funding last month.

The Edinburgh-based company, founded in 2018, is developing a finance-specific language model, FinLLM, in partnership with Lloyds and Nationwide, two of the U.K.'s largest financial institutions. The model aims to set the standard for transparent, responsible, and ethical adoption of generative AI across U.K. financial services.

The 43-year-old executive shares his thoughts on AI adoption, the challenges of implementing AI in financial services, and the future of domain-specific language models.

This interview has been edited for clarity and length.

How’d you get started in AI? 

I spent about 15 years in asset management, eventually running strategy and business management for Standard Life Investments. My job involved launching international businesses and disposing of underperforming ones. This gave me a lot of relevant experience for starting a business.

During my Executive MBA at the University of Edinburgh, I met my co-founder (Dr. Lexi Birch). She was finishing a 10 million euro project with the BBC, automating media monitoring using speech-to-text, text-to-speech, and natural language processing. I was blown away by the platform and immediately saw use cases in financial services, especially for improving client processes. I wrote our business plan as my MBA dissertation, and we decided to go for it.

Why develop a finance-focused LLM? 

The first phase of large language models like ChatGPT faced challenges with hallucinations, trust, and correctness. Over the last 12 months, we've realized that smaller, specialized models can deliver the necessary automation for specific use cases more effectively.

FinLLM is our own large language model, specifically designed for financial services. It's important to note that we're not building on existing models like GPT or Claude. Instead, our starting point is the open-source datasets that have been used to train other foundation models, and we aim to grow from about a billion parameters to around 70 billion.

It combines industry data with client data from partners, heavily tuned towards prioritized use cases within U.K. financial services. It has all the typical capabilities of large language models, such as intent detection and summarization, but with a heavy financial services orientation.
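Twigg doesn't describe Aveni's training stack, but for readers curious what this kind of domain adaptation can look like in practice, here is a minimal sketch using Hugging Face Transformers: a roughly 1-billion-parameter open-source model fine-tuned on a financial-services text corpus. The base model name and the corpus file are illustrative assumptions, not Aveni's actual setup.

```python
# Minimal sketch of domain-adaptive fine-tuning with Hugging Face Transformers.
# The base model and corpus file are placeholders, not Aveni's actual pipeline.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "EleutherAI/pythia-1b"                 # ~1B-parameter open-source starting point
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token     # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpus of financial-services text (filings, call notes, policy docs).
corpus = load_dataset("text", data_files={"train": "finance_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finllm-sketch",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In a production setting this continued pre-training step would typically be followed by supervised fine-tuning and evaluation on the specific use cases being targeted.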

How does it perform?

This specialization allows us to outperform large, generalist models for specific financial use cases. We consistently score higher on evaluation test sets, providing more accurate and relevant results. Our model can outperform GPT or Gemini across the financial use cases we've chosen, which is what has attracted partners like Lloyds and Nationwide. 

What are the challenges? 

While the potential of AI in financial services is enormous, productionizing these solutions has proven more challenging than many expected. It's not just about having a powerful model; it's about implementing it in a way that adheres to regulatory requirements, maintains consistency across thousands of users, and adapts to the specific needs of financial institutions.

The competition in this space is diverse and intense. We're not just competing with other startups, but also with internal teams at major financial institutions and big tech companies. However, our unique position at the intersection of AI and financial services, combined with our years of experience delivering AI solutions in this sector, sets us apart. We understand not just the technology, but the specific processes, limitations, and use cases in finance that we're building for. 

Who's the target audience?

We're targeting financial advisors, wealth managers, mortgage protection specialists, and other financial services professionals who deal with complex, time-consuming, and high-value tasks. With FinLLM, we aim to automate about 90% of their post-client meeting administrative work.

We're also focusing on large financial institutions like Lloyds and Nationwide, which have a vast array of use cases that can benefit from our technology. The key is that we're not just offering a productivity tool, but a solution that can automate complex, regulated processes while maintaining compliance, which is crucial for any financial services provider.

What's your timeline look like? 

We're looking at a series of partner model tests before Christmas. These models will be in use, continually collecting and cleansing data, and being refined. It's not a one-and-done process, but a continuous cycle of learning and improvement.

We expect Lloyds and Nationwide to be testing models by the end of this year. We'll likely be running proof of concepts and pilots with FinLLM across their businesses in Q1 next year.

What has surprised you most about the developments in the AI space?

Two things stand out:

First, the leap in capabilities from the pre-GPT models to GPT was truly shocking. The difference between the smaller pre-GPT models like BERT that we were using before and the capabilities of GPT models was unbelievable.

Second, the pace of innovation is staggering. We're now able to achieve similar capabilities with much smaller models by training them differently. The fact that we can spend just three months training a model and achieve performance parity with models that took years and hundreds of millions of dollars to develop is remarkable.

Thanks for reading!

If you’re new here, AI Street is a weekly newsletter tracking how AI is changing Wall Street. Drop me a line if you have story ideas, research, or upcoming conferences to share. [email protected] Sign up here:
