Theoretical physicist Stefano Goria is the co-founder and CTO of mental health technology start-up thymia. His expertise lies broadly in explainable AI and reinforcement learning, skills he honed during his time as a quant at Citibank and J.P. Morgan. He has published a number of journal papers on the use of multimodal AI to identify biomarkers for depression. Now, he’s harnessing this experience to help fuel thymia’s mission: to build an objective tool for assessing and monitoring health conditions, such as depression, using AI.
Here, he tells us more.
What led you to this industry?
“I first met my co-founder, Dr Emilia Molimpakis, through the Entrepreneur First start-up accelerator. Despite never having met before, and coming from very different professional backgrounds (Emilia is an expert neuroscientist), we quickly found a synergy in the work we were each doing. We shared a keen interest in using our skills to make the world a better place, and we were both passionate about mental health and the way existing care provision was letting patients down.
“It wasn’t until later that we decided to join forces. After years working as a quant I was determined to use the skills I’d gained to contribute to something that could make a real difference. I’d always had a keen interest in how the mind works, and had always believed that this understanding was vital for building truly impactful, ethical AI.
“Emilia had pitched the idea for thymia after witnessing her close friend experience a devastating mental health crisis, and I was instantly taken by the concept. I knew I had the right technical skills to help make it a reality, so we pooled our expertise and set about building the solution.”
What is thymia?
“At thymia we’re building video game-style tools, powered by AI, to help provide a more objective means of diagnosing and monitoring mental health conditions, including depression, Alzheimer’s, Parkinson’s and ADHD. Our tools are being designed to support the work of clinicians, handing them the data and insights to spot the signs of mental illness which may be invisible to the human eye.
“Based on neuropsychology, the games guide patients through a range of activities which allow the AI to track factors including voice, eye movement and reaction times. Analysing these helps detect the invisible “biomarkers” for conditions such as depression. A faster, more accurate diagnosis can be made, the right treatment can be identified sooner, and over time, data can be used to help monitor the severity of a patient’s condition and their progress in recovery.”
Tell us about how ethical AI can supercharge mental healthcare.
“Ethical AI holds a huge amount of potential when it comes to transforming mental healthcare. It is key to unlocking the ability to perform more objective diagnosis, treatment and monitoring. Using ethically trained AI, we can empower clinicians to observe invisible “biomarkers” in the same way that a blood test or ECG might be used to detect and monitor physical conditions. This not only promises to make mental health diagnosis and treatment more accurate, but could also help improve parity of esteem with physical health. This is absolutely crucial for levelling the playing field in terms of access to care and support, and ending the stigma surrounding mental illness.”
Tell us more broadly about the use of AI to support mental health diagnoses.
“The use of AI adds a more objective tool to clinicians’ mental health assessment toolkit. Currently, much mental health diagnosis rests on conversational measures, such as questionnaires in which patients must rate their emotions or mood on a scale of 1-10, and a clinician’s own powers of observation. These are inherently subjective and can only ever offer a snapshot of how the patient is feeling in that particular moment. By adding AI into the equation, doctors can corroborate this information with data from concrete physical and behavioural cues which might otherwise have been missed during the consultation. This can help them more accurately determine the cause of a patient’s distress. A diagnosis can be made, and a relevant treatment plan arranged, much sooner.”
How can thymia help healthcare professionals?
“By providing clinicians with a wealth of previously unattainable objective information on a patient’s signs and symptoms, AI can help them spot mental illness sooner, and more accurately monitor the condition over time. This is especially important given the current constraints on clinical capacity.
“Healthcare services across the globe are facing significant staffing shortages, while patient demand is on the rise. As a result, doctors are often only able to see their patients for brief, infrequent appointments. Without a way of capturing data and information on how the patient is progressing outside of these appointments, doctors are left with a piecemeal, incomplete picture of their condition and recovery.
“By providing these doctors with access to AI-driven tools, which can accurately capture and analyse changes in behaviour, mood and speech patterns, thymia is helping them more consistently and accurately measure how well their patient is doing. This means when they are able to see the patient in person, they can have a more informed, targeted discussion based on how the patient has really been doing since the last appointment. Interventions can also be made much sooner where necessary, to prevent the patient’s condition from worsening.”
What opportunities are there for AI to improve how mental healthcare is delivered?
“The objectivity that AI can bring to mental healthcare has a huge role to play in destigmatising and improving access by bringing it onto a more equal footing with physical healthcare. This is particularly important for ensuring that patients feel comfortable seeking support sooner, so that earlier intervention can be made. This can help move mental healthcare towards a more preventative, rather than reactive, approach. As a result, this will help reduce the pressure on acute mental health services, which are currently struggling to meet patient demand due to a lack of available beds.”
What are the risks AI could pose in mental health settings if used irresponsibly?
“In whatever context AI is used, it must be used ethically. If it’s not, it can open the floodgates to damaging bias and discrimination. This is particularly important within mental healthcare, where a huge amount of discrepancy exists between various patient demographics. For example, research shows that black people are four times more likely to be sectioned under the Mental Health Act than white people.
“Meanwhile, social and economic inequalities can significantly increase risk factors for certain mental health conditions. Any AI being used to help deliver mental healthcare must therefore be trained on a diverse data set, one which is representative of all groups who may seek mental health support and does not have these existing biases baked in. This is crucial to ensure that diagnosis and treatment are tailored to adequately spot the signs of illness, which can present differently in different people, and provide support in a way that is conducive to recovery for all.”
What do the next 12 months hold for you and thymia?
“We’ve got an incredibly exciting 12 months ahead. Currently, we’re focusing on rolling out the thymia tool across a number of different global mental health settings. We’re also working on developing the tool to help diagnose and monitor an increasing number of conditions, including Parkinson’s, Alzheimer’s, ADHD and fatigue. To ensure these are built to optimise the care received by patients, we’re working closely with our clinical partners around the world to develop our technology in a safe, ethical and effective way.”