Healthcare is more human when we design AI around people
Artificial intelligence has the potential to transform healthcare by providing better patient care, predicting disease, and addressing healthcare disparities. According to Will Reese, Omnichannel Marketing Leader at Evoke, AI in healthcare is already starting to reshape the industry.
“Using AI to identify patients at risk of diabetes or heart disease, for example, has been shown to reduce medical errors and improve patient health outcomes,” he says. “AI has also been used in surgery, radiology, medical devices, and even cancer treatment algorithms.”
AI can transform the healthcare sector
The practical application of AI within the healthcare ecosystem has accelerated. There are more than 500 actively recruiting clinical trials exploring AI applications in the diagnosis, identification, and management of disease, and 521 FDA-approved machine learning or AI-related medical interventions.
“AI can be a powerful tool for helping to address health equity issues and support the goal of realising the health potential of all patients. AI can help provide the context for practitioners and systems to address and efficiently manage patients with greater precision and empathy for their social circumstances and lived experiences,” says Reese. “Opportunities for AI to humanise the healthcare experience include:
- Proactively identify geographic and system-specific inequities and associated social determinants and suggest relevant support services
- Detect unconscious biases that may exist in existing patient diagnosis and management patterns
- Organise transitions of care more effectively between community and academic institutions
- Coordinate access to holistic care (clinical, diagnostic, behavioural, psychological) and close the system navigation gaps that exist within people of colour, immigrant, rural, and LGBTQIA+ communities
- Recognise disease management and outcomes disparities within clinical trials.”
For Reese, this expansion of AI must be balanced against the human realities it is attempting to address.
“Healthcare disparities exist across race, economic status, geography, sexual orientation, gender identity, and age,” he explains. “COVID-19 has further widened the access and mortality divide for many under-served populations. AI-driven healthcare systems could exacerbate existing disparities between rich and poor communities by making access more difficult for those who live in rural areas or who don't have reliable access to technology. AI tools are often built upon historical healthcare data that reflects systemic race and gender identity gaps.”
Recent studies have shown bias inherent in technologies such as pulse oximeters and a widely used kidney disease algorithm. Additionally, barriers remain to provider adoption of, and trust in, AI solutions.
“Gaps exist in our ability to explain how algorithms are developed and to communicate their reliability across patient types. This means we need to make sure we're working toward solutions that address the human side head-on.
“AI reflects our human cognition and consequently can include systemic and individual biases and blind spots. You must be thoughtful, equitable, and inclusive in the pursuit of AI solutions.”
Will Reese, Omnichannel Marketing Leader at Evoke, a business unit at Inizio MarComms
How to reduce bias in your AI projects
Immerse in experience:
“AI solutions cannot be created in an artificial setting and viewed solely as a statistical challenge. Ground your solution teams in the real-world healthcare experience and insights of your target populations at a localised level. This deeper understanding can identify data gaps, focus efforts on the greatest unmet needs, and instil empathy from the beginning of design efforts.”
Examine the algorithm history:
“When embarking on an AI effort, even long-accepted algorithms and medical technologies should be freshly examined for bias and traced back to their origins. Early and transparent identification of potential biases can help to mitigate future issues with provider adoption and trust. Document and socialise these algorithm and data-set histories to spark further research and innovation.”
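Re-examining a long-accepted algorithm can begin with a simple quantitative check. The sketch below is a minimal illustration, not Reese's method: it uses hypothetical column names (race, outcome, prediction) and made-up data to compare how often an existing risk algorithm flags patients, and how many true cases it catches, across demographic subgroups. Large gaps between groups are a cue to trace the algorithm back to its origins and training data.

```python
# Minimal sketch of a subgroup audit for an existing risk algorithm.
# Column names and data are hypothetical placeholders.
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare flag rate and sensitivity (true positive rate) across subgroups."""
    rows = []
    for group, g in df.groupby(group_col):
        positives = g[g["outcome"] == 1]
        rows.append({
            group_col: group,
            "n": len(g),
            "flag_rate": g["prediction"].mean(),           # how often the algorithm flags this group
            "sensitivity": positives["prediction"].mean(),  # share of true cases it catches
        })
    return pd.DataFrame(rows)

# Hypothetical historical outputs from a long-accepted risk algorithm.
history = pd.DataFrame({
    "race":       ["A", "A", "B", "B", "A", "B", "A", "B"],
    "outcome":    [1,   0,   1,   0,   1,   1,   0,   1],
    "prediction": [1,   0,   0,   0,   1,   0,   0,   1],
})

print(subgroup_performance(history, "race"))
```

Documenting the results of checks like this alongside the algorithm's provenance is one way to make the history transparent for the providers who are asked to trust it.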
Create diverse & inclusive teams:
“Build your solution team to include diverse community representatives with diverse skill sets,” says Reese. “Blending humanity-focused skill sets with technology and data skill sets helps to design human-centred experiences delivered through technology. That means bringing in ethicists, social scientists, user experience designers, and communications experts. Promoting inclusive STEM-based education opportunities can help to grow talent diversity within the data and technology fields.”
Diversify your data sets:
“Many current healthcare data sets have little or incomplete information on under-served patient populations, causing significant issues if used as the basis for training AI,” says Reese. “Evaluate and document the origins, demographics, and diversity of your data sets. An innovative organisation focused on improving ethics in AI and reducing data bias is the Data Nutrition Project.”
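Evaluating a data set's demographics can start with a simple representation audit. The sketch below is a hypothetical illustration in the spirit of this advice (it is not a Data Nutrition Project tool): the categories and reference shares are placeholders, and the idea is to compare each group's share of a training set against a reference population and flag under-represented groups before the data is used to train AI.

```python
# Minimal sketch of a data-set representation audit.
# Categories, counts, and reference shares are hypothetical placeholders.
import pandas as pd

# Hypothetical training data set with a self-reported ethnicity field.
dataset = pd.DataFrame({"ethnicity": ["White"] * 70 + ["Black"] * 10 +
                                      ["Hispanic"] * 12 + ["Asian"] * 8})

# Hypothetical reference shares (e.g. census or service-area figures).
reference = pd.Series({"White": 0.60, "Black": 0.18, "Hispanic": 0.16, "Asian": 0.06})

observed = dataset["ethnicity"].value_counts(normalize=True)
audit = pd.DataFrame({"dataset_share": observed, "reference_share": reference})
audit["gap"] = audit["dataset_share"] - audit["reference_share"]

print(audit.sort_values("gap"))  # the most negative gaps flag under-represented groups
```

Recording this kind of audit alongside the data set's origins gives future teams a documented starting point rather than an unexamined assumption.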
Partner with the communities you serve:
“Community trust is required to close data diversity gaps and evolve the available healthcare data sets to reflect today’s patients,” says Reese. “Diversify and innovate your partnerships and build relationships with community and advocacy organisations. Immersing in their community insights, co-sponsoring new data-gathering initiatives, and innovating algorithms through data science challenges are all valuable ways to partner.”
“Ethical and inclusive applications of AI can unlock healthcare potential,” concludes Reese. “This requires new ways of working and partnerships across healthcare, technology, life science, and community organisations that lead with a consistent focus on health equity.”