Limbic: AI research, LLMs & the impact on mental healthcare

Mental healthcare
Dr Ross Harper, Founder and CEO of the Healthtech company, Limbic, explores AI research, mental healthcare and Large Language Models (LLMs)

The recent call for a moratorium on AI development by several prominent individuals, including Steve Wozniak and Elon Musk, has sparked heated debate on the risks of Large Language Models (LLMs). While many advocate caution, we must also guard against complacency and lazily maintaining the status quo, says Dr Ross Harper, Founder and CEO of the Healthtech company, Limbic.

“LLMs pose a remarkable opportunity to deliver social impact in mental healthcare. It is our responsibility to safely forge a path forward,” says Harper. “Fear is a natural response to the unknown, and when it comes to LLMs, there are a lot of unknowns. The dawn of generative AI in healthcare represents a technological revolution. And as with all technological revolutions, it has been met with scepticism and fear.

“This is hardly unprecedented: in the late 19th century, electricity was seen as dangerous and unpredictable, with the New York Times stating that ‘with every advance in electrical science, we are approaching more closely the brink of destruction to our race’. Similarly, as the internet came to life, concerns over criminal activity surged. ‘The Internet is a thief's paradise’ said a 1995 Wired article.”

Opinions change and it’s fun to reminisce, says Harper, but these criticisms were not wrong. Electricity is dangerous, and the dark web is home to criminals.

“Highlighting risk is necessary. It helps us identify where safeguards are required. But it is wrong to allow fear to dominate the conversation. Exercising caution is not the same as halting progress.”

How LLMs pose risks for mental healthcare

All new technologies pose risks. Technology provides us with tools; the risk lies in how we use these tools. 

“At Limbic, we are keeping a particularly close eye on the following: bias in the models, inappropriate recommendations, clinical inaccuracies, privacy concerns, and unexpected effects of reduced human interaction. Currently, LLMs are very good at drafting plausible and imaginative content but are less suited to applications such as diagnostics, where trust in the accuracy of the output is paramount.

“To illustrate this, we compared OpenAI's GPT-4 against Limbic's own UKCA-marked symptom classifier; we observed a large and significant performance gap when it came to correctly identifying patient mental health issues. LLMs are dazzling, but they're not yet ready to take on clinical responsibility.”
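The article does not describe Limbic's evaluation methodology, but at its simplest, a comparison like this comes down to measuring how often each model's predicted symptom category matches a clinician-assigned label on the same test cases. A minimal sketch, with all labels, predictions, and numbers invented purely for illustration:

```python
# Illustrative only: compare two symptom classifiers against the same
# clinician-labelled test cases. All data here is invented.
gold    = ["anxiety", "depression", "anxiety", "ptsd", "depression"]
model_a = ["anxiety", "depression", "anxiety", "ptsd", "anxiety"]     # specialised classifier
model_b = ["anxiety", "anxiety", "depression", "ptsd", "anxiety"]     # general-purpose LLM

def accuracy(preds, labels):
    """Fraction of cases where the prediction matches the clinician label."""
    return sum(p == g for p, g in zip(preds, labels)) / len(labels)

print(f"specialised classifier: {accuracy(model_a, gold):.0%}")  # 80%
print(f"general-purpose LLM:    {accuracy(model_b, gold):.0%}")  # 40%
```

A real clinical evaluation would go well beyond headline accuracy, looking at per-condition sensitivity and specificity and statistical significance, but the principle is the same: ground truth comes from clinicians, and models are scored against it.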

Dr Ross Harper is Founder and CEO of the Healthtech company, Limbic

Opportunities for LLMs in mental healthcare

As the founder and CEO of Limbic, Harper feels privileged to be at the birth of generative AI in healthcare. 

“Despite the unknowns, one thing is certain: with perseverance and a shared commitment to safety and innovation, the future looks bright for LLMs in mental healthcare and there’s a lot to be excited about,” he says. “LLMs have the potential to supercharge clinical workflows, especially in the area of note-taking, assisting clinicians in generating accurate and concise clinical notes and freeing them up to focus on other critical aspects of patient care.”

Equally, LLMs can play a key role in helping to unlock patient data. Mental healthcare is characterised by enormous volumes of patient data, often in the form of unstructured text, and this presents a significant challenge for extracting valuable insights. LLMs can analyse this unstructured text to identify key insights and liberate knowledge to drive care delivery efficiencies and systematic changes that benefit all patients.

“Similarly, LLMs could augment traditional machine learning models, analysing large volumes of patient data to find connections that might otherwise be overlooked. As the most widely used mental health triage chatbot in the NHS, Limbic has already trained a classifier to identify symptoms of mental illness across 170,000 patients; it recently became the first and only mental health chatbot to be regulated as a Class IIa medical device in the UK. We’re proud to be helping one in four mental health services in the NHS, and I’m excited at the prospect of creating further value by exploring the combination of our regulated AI system with LLMs.”

Finally, there’s the opportunity to build a therapeutic alliance. Despite marketing rhetoric, most mental health chatbots aren't very good at chatting. Instead, they use design tricks to mask hard-coded rules - a far cry from AI or anything recognisable as intelligence. 

“The result is an unsatisfying patient experience, undermining the therapeutic alliance between the patient and clinician. LLMs can support mental healthcare to move beyond today’s one-dimensional chatbots and drive patient engagement during treatment.”


The risks of pausing mental health research

Mental illness is one of the most severe and urgent health issues of our time. It affects over a billion people worldwide and is the leading cause of disability. 

“If we pause AI research, we're delaying innovations that could help vulnerable people right now,” says Harper. “We’ve been given a chance to help people at unimaginable scale. Inaction carries a high cost, and I believe it is our moral imperative to find a responsible path forward.”

Of course, the future of AI in healthcare will not be decided solely by researchers and entrepreneurs. It is ultimately the role of governments and regulators to establish guardrails and set the boundaries within which society can advance. 

“Indeed, when it comes to technological innovation, the key to balancing patient safety and clinical outcomes rests on equally innovative regulation and policy. This can only be informed by expert knowledge, gained through rigorous research. A pause in research risks undermining our understanding of the very technology we seek to guide.”

Technology can be scary, and there are certainly risks to consider. But courage lies in innovation. 

“LLMs are an era-defining technology that can play a key role in unlocking mental healthcare challenges. It is our responsibility now to come together as a community to guide innovation to solve society’s biggest problems - not reject it.”
