Netskope Threat Labs: Healthcare Facing Data Security Risks

According to Netskope, 43% of healthcare workers are using personal generative AI accounts at work. Credit: McKinsey
Netskope Threat Labs’ report reveals healthcare faces mounting data security risks from unmanaged AI tools and personal cloud application usage

Netskope Threat Labs has released its annual healthcare threat report, revealing critical security challenges facing the sector over the past 13 months.

The analysis highlights how the rapid adoption of cloud services and AI tools is creating significant data security vulnerabilities for healthcare organisations, particularly concerning regulated patient information.

The growing use of generative AI applications among healthcare staff has emerged as a primary concern for data protection. 

According to the report, regulated data such as patient records and medical information accounted for 89% of all data policy violations involving generative AI usage. 

This is substantially higher than the cross-industry average of 31%, suggesting that healthcare organisations face unique challenges in managing sensitive information within AI tools.


Monitoring challenges

A significant complicating factor is the continued use of personal generative AI accounts in workplace settings. 

While this behaviour has declined over the past 13 months, 43% of healthcare workers are still using personal generative AI accounts at work.

These personal accounts present monitoring difficulties, as security teams often lack the capability to properly track potential data leaks through these unmanaged services. 

According to Netskope Threat Labs, this creates blind spots in data governance frameworks.

Healthcare organisations appear to be responding to this challenge by accelerating the deployment of company-approved generative AI applications. 

The proportion of workers using generative AI applications managed by their organisation increased from 18% to 67% over the same period, outpacing the cross-industry average, which rose from 26% to 62%.

This shift suggests that organisations are prioritising controlled AI environments to mitigate security risks. 

The trend indicates a growing recognition that managed AI tools provide better oversight of sensitive data handling.

Ray Canzanese, Director of Netskope Threat Labs, says: "While building defences against external threats is essential for healthcare organisations that have historically been prime targets for cybercriminals, addressing internal risk is equally important, especially in such a highly-regulated industry and a context of fast-paced cloud and AI adoption. 


“Our report shows that those that operate without security guardrails governing cloud and AI usage are very likely to suffer regulated patient and clinical data leaks, and potentially high regulatory penalties. 

“Deploying company-approved applications that meet employees' demands for convenience and productivity, along with relevant security tools that offer full visibility and control over usage and data movements, should be a high priority for healthcare organisations to strike a balance between modernisation and security."

AI integration

The report indicates that healthcare organisations are increasingly exploring AI's potential to streamline operations, with deployment and usage of internal AI tools accelerating. 

These tools require bespoke security guardrails, and even when generative AI applications or AI agents are deployed internally, they often need to connect to cloud-hosted models for processing via dedicated APIs.

According to the analysis, monitoring API traffic can help measure the extent of on-premises AI deployments. 

In healthcare, almost two-thirds of organisations are detecting API traffic to OpenAI (63%) and AssemblyAI (62%), whilst more than a third (36%) are detecting traffic to Anthropic.

This heavy reliance on API-based integrations could underscore the growing role of embedded AI services in clinical, administrative and operational systems. 

The integration of AI into healthcare could cause data security risks. Credit: Getty Images

According to Netskope Threat Labs, these connections represent potential data exposure points that require careful monitoring.

The increasing complexity of AI integrations means that security teams must develop new capabilities to track and control data flows through these API channels.

Personal cloud applications pose data risks

The use of personal cloud applications in workplace settings is creating additional data security challenges. 

Workers might inadvertently or intentionally upload sensitive data to personal accounts, with regulated data accounting for 82% of data policy violations related to personal cloud applications.

Over the past year, 56% of healthcare organisations that deployed mitigation policies blocked users from uploading files to personal Google Drive accounts. 

Google Gmail (39%) and OneDrive (30%) followed, illustrating the exposure risks posed by popular personal cloud applications.

Attackers continue to exploit the trust employees place in cloud applications. 

According to the report, Azure Static Web Apps (8.2%), GitHub (8%) and Microsoft OneDrive (6.3%) were the platforms most frequently exploited by attackers for malware distribution, with the percentages reflecting the share of organisations detecting employees attempting to download malware from each application.
