NTT DATA: How AI Innovation Can Help Healthcare & Finance

AI is moving from experimentation to enterprise-scale deployment, with healthcare and finance among the regulated industries leading the charge while navigating the complexities of trust, compliance and innovation.
David Fearne, Vice President of AI at NTT DATA, is an industry voice who bridges the gap between technical innovation, regulation and ethics.
Working across healthcare, financial services, the public sector and other highly regulated industries, David helps organisations operationalise AI in a way that is commercially viable, resilient and socially responsible.
His focus is on turning high-level principles – like accountability, transparency and fairness – into practical system design and governance decisions that hold up under real-world scrutiny.
Global technology and consulting powerhouse NTT DATA has deep roots in regulated sectors such as banking, insurance and healthcare.
For David and his team, responsible AI is not a restraint on innovation but an enabler of trust and adoption.
By embedding governance, explainability and human oversight into every stage of AI system design and deployment, the company helps clients meet not only the letter of emerging regulations like the EU AI Act but also their spirit – making ethical AI a competitive advantage rather than a compliance burden.
In this Q&A, David shares how financial institutions can scale AI responsibly – turning governance into a driver of innovation, not a barrier.
How can banks balance the drive for AI-powered innovation with the need for strong ethical governance?
The key is to stop treating innovation and governance as competing forces.
In banking, the most successful AI programmes recognise that ethical governance is what enables innovation to scale, rather than what slows it down.
This balance starts with design intent. Banks need to be clear about what decisions AI is allowed to influence, where and why it must defer to humans, as well as how risk tolerance varies by use case.
Not all AI systems require the same level of explainability or control, and treating them as if they do leads to unnecessary friction.
Practically, this means embedding governance into the delivery lifecycle. Model selection, data provenance, evaluation criteria and escalation thresholds should be defined upfront.
Continuous evaluation is just as important as pre-deployment testing. When governance is operationalised in this way, teams can move faster with confidence because they understand the boundaries they are operating within, and regulators gain greater visibility into how decisions are made.
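As a rough illustration of what "defining governance upfront" can look like in practice, here is a minimal sketch of a per-use-case policy expressed in code: approved models, data sources, evaluation criteria and escalation thresholds agreed before delivery begins. The policy fields, use cases and thresholds are hypothetical examples, not an NTT DATA framework or any bank's real configuration.

```python
# A minimal sketch of governance-as-configuration; all names and values are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class UseCasePolicy:
    """Governance boundaries agreed upfront for a single AI use case."""
    use_case: str
    approved_models: tuple[str, ...]          # model selection
    approved_data_sources: tuple[str, ...]    # data provenance
    min_eval_score: float                     # evaluation criterion
    escalate_below_confidence: float          # escalation threshold
    human_review_required: bool


# Illustrative policies -- real values would come from risk and compliance teams.
POLICIES = {
    "credit_decision_support": UseCasePolicy(
        use_case="credit_decision_support",
        approved_models=("scorecard_v3", "llm_summariser_v1"),
        approved_data_sources=("core_banking", "bureau_feed"),
        min_eval_score=0.92,
        escalate_below_confidence=0.75,
        human_review_required=True,
    ),
    "branch_faq_assistant": UseCasePolicy(
        use_case="branch_faq_assistant",
        approved_models=("llm_assistant_v2",),
        approved_data_sources=("public_product_docs",),
        min_eval_score=0.80,
        escalate_below_confidence=0.50,
        human_review_required=False,
    ),
}


def requires_escalation(policy: UseCasePolicy, confidence: float) -> bool:
    """Apply the escalation threshold defined upfront for this use case."""
    return policy.human_review_required or confidence < policy.escalate_below_confidence


if __name__ == "__main__":
    policy = POLICIES["credit_decision_support"]
    print(requires_escalation(policy, confidence=0.81))  # True: human review is mandatory here
```

The point of the sketch is that higher-risk use cases carry tighter thresholds and mandatory human review, while lower-risk ones are governed more lightly – the differentiation by use case described above.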
What are the most significant risks financial institutions face when adopting AI at scale? How can they mitigate them responsibly?
One of the biggest risks is not model failure, but organisational overconfidence.
Many institutions assume that once an AI system performs well in a pilot, it will behave predictably at scale. In reality, scale introduces complexity, edge cases and behavioural drift that are often underestimated.
Another major risk is opacity. When decision-making becomes too difficult to explain, accountability becomes blurred, particularly in customer-facing or credit-related decisions.
This is compounded when AI is layered on top of fragmented legacy systems.
Responsible mitigation starts with clear system boundaries. Banks need to define what AI can and cannot do and ensure those constraints are enforced technically, not just documented in policy.
Robust evaluation frameworks, audit logs and escalation mechanisms are essential.
Finally, human oversight must be meaningful. Humans should not simply rubber-stamp AI outputs; they should be equipped to challenge and override them, with those interventions feeding back into the AI as part of an ongoing feedback loop.
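To make the idea of boundaries "enforced technically, not just documented in policy" concrete, the following is a minimal sketch of a guardrail that checks each action against an allow-list, writes an audit record for every decision and captures human overrides for later evaluation. The action names and confidence threshold are assumptions made purely for the example.

```python
# A minimal sketch of boundaries enforced in code with an audit trail and a human
# feedback loop; the action names and threshold are illustrative assumptions only.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

ALLOWED_ACTIONS = {"summarise_case", "draft_response", "flag_for_review"}  # what the AI may do
BLOCKED_ACTIONS = {"approve_credit", "close_account"}                      # reserved for humans


def enforce_and_log(action: str, confidence: float) -> str:
    """Enforce system boundaries technically, not just in policy, and record every decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "confidence": confidence,
    }
    if action in BLOCKED_ACTIONS or action not in ALLOWED_ACTIONS:
        record["outcome"] = "rejected_out_of_bounds"
    elif confidence < 0.75:  # illustrative escalation threshold
        record["outcome"] = "escalated_low_confidence"
    else:
        record["outcome"] = "executed"
    audit_log.info(json.dumps(record))
    return "escalate_to_human" if record["outcome"] != "executed" else "execute"


def record_human_override(case_id: str, ai_output: str, human_decision: str) -> None:
    """Capture challenges and overrides so they can feed future evaluation and tuning."""
    audit_log.info(json.dumps({
        "override": {"case_id": case_id, "ai_output": ai_output, "human_decision": human_decision}
    }))


if __name__ == "__main__":
    print(enforce_and_log("approve_credit", confidence=0.95))   # escalate_to_human
    record_human_override("case-001", "flag_for_review", "no_action_needed")
```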
What role should explainability and accountability play in the design of AI systems for financial services?
Explainability and accountability should be treated as core architectural requirements, not optional features. In financial services, it is not enough for a system to be accurate.
Institutions must be able to explain how decisions were reached, who is responsible for them and under what conditions they can be challenged.
Explainability does not mean every model must be fully interpretable in a mathematical sense. It means the system provides appropriate explanations for the appropriate audience and context, whether that is a regulator, a customer or an internal risk team.
Accountability requires clear ownership. AI systems do not make decisions in isolation; people and organisations do. That accountability must be traceable through system design, from data inputs to model behaviour and final outcomes.
Making explainability and accountability functional requirements of an AI system's design will reduce regulatory friction, improve internal trust and make AI systems safer and more resilient over time.
How can the banking sector use AI to enhance trust rather than undermine it, especially in customer-facing applications?
Trust is built when customers feel AI is working with them, not acting on them.
In customer-facing banking applications, AI should be used to improve clarity, consistency and responsiveness, rather than to obscure decision-making.
This starts with transparency. Customers must understand when AI is involved, what it is being used for and how they can challenge or appeal outcomes.
Simple, well-designed explanations go a long way in demystifying automated decisions.
AI can also enhance trust by reducing friction.
Faster resolution times, more personalised support and proactive identification of issues all improve customer experience when implemented responsibly.
Crucially, banks must avoid over-automation in emotionally sensitive, vulnerable or high-impact scenarios. Maintaining human access at the right moments reinforces trust and signals respect for customer agency.
When AI is framed as an assistant to both customers and staff, rather than a replacement for human judgement, it strengthens relationships instead of weakening them.
What lessons can financial institutions learn from other regulated industries when it comes to AI oversight and governance?
Healthcare and aviation offer particularly relevant lessons.
Both industries operate under strict regulatory oversight, yet continue to innovate by clearly defining acceptable risk, system boundaries and escalation protocols.
One key lesson is the importance of continuous evaluation. In healthcare, systems are monitored throughout their lifecycle, not just approved once and left unchecked. The same mindset should apply to AI in banking.
Another lesson is role clarity.
In regulated industries, responsibility is explicitly assigned, even when automation is involved. This avoids ambiguity when something goes wrong.
Finally, these industries understand that trust is earned through consistency.
Governance frameworks are applied predictably, not selectively, which builds confidence among regulators, practitioners and the public.
Banks that adopt a similar discipline, focusing on operational governance rather than purely theoretical controls, will be better positioned to deploy AI safely and at scale.
How is NTT DATA helping banking clients operationalise responsible AI principles within complex, legacy technology environments?
Most banks are not starting from a blank slate – and neither are our solutions. We focus on integrating responsible AI principles into existing architectures, workflows and control frameworks rather than forcing wholesale transformation.
This often involves creating intermediary layers, such as evaluation services, audit pipelines and decision orchestration components, that sit alongside legacy systems. These layers provide transparency, monitoring and control without disrupting core platforms.
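A simplified sketch of what such an intermediary layer might look like is below: a thin orchestration function that wraps a legacy scoring call, adds an AI-generated summary, records an audit trail and routes sensitive cases to human review. The legacy_score function, field names and routing rules are hypothetical placeholders, not a description of any client system.

```python
# A minimal sketch of a decision-orchestration layer alongside a legacy system;
# legacy_score, the fields and the routing rules are hypothetical placeholders.
from dataclasses import dataclass


def legacy_score(application: dict) -> float:
    """Stand-in for an existing core-banking scoring call that remains untouched."""
    return 0.68 if application.get("arrears") else 0.91


@dataclass
class OrchestrationResult:
    score: float
    ai_summary: str
    route: str            # "auto" or "human_review"
    audit_trail: list


def orchestrate(application: dict) -> OrchestrationResult:
    """Add evaluation, audit and routing around the legacy call without changing it."""
    trail = []
    score = legacy_score(application)
    trail.append({"step": "legacy_score", "value": score})

    # Hypothetical AI assistance layered on top: summarise, never decide.
    ai_summary = f"Applicant score {score:.2f}; fields reviewed: {sorted(application)}"
    trail.append({"step": "ai_summary", "value": ai_summary})

    # Decision orchestration: route low scores or flagged cases to a human.
    route = "human_review" if score < 0.8 or application.get("vulnerable_customer") else "auto"
    trail.append({"step": "routing", "value": route})

    return OrchestrationResult(score=score, ai_summary=ai_summary, route=route, audit_trail=trail)


if __name__ == "__main__":
    result = orchestrate({"arrears": False, "vulnerable_customer": True})
    print(result.route)  # human_review
```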
We also work closely with risk, compliance and technology teams to align AI delivery with existing governance processes. That includes mapping AI behaviour to regulatory expectations and internal policies in a way that is testable and repeatable.
Importantly, we prioritise skills transfer. Responsible AI cannot be outsourced indefinitely.
Our goal is to leave clients with the capability to govern, adapt and improve their AI systems long after initial deployment.
Looking ahead, what governance models do you think will define the next generation of AI in global banking?
The next generation of AI governance in banking will be adaptive and continuous rather than static and point-in-time.
Fixed rulebooks will not keep pace with rapidly evolving models and use cases, so we will see a shift towards continuous oversight frameworks.
These models will combine technical controls, such as real-time monitoring and automated evaluation, with organisational accountability, ensuring humans remain clearly responsible for outcomes. Governance will be embedded into systems, not layered on top.
We will also see greater differentiation by use case.
High-impact decisions will carry stricter controls, while lower-risk applications will be governed more lightly, enabling faster innovation without compromising safety.
Finally, governance will become more transparent.
Banks that can clearly demonstrate how their AI systems behave, learn and are corrected over time will earn greater trust from regulators and customers alike, turning responsible AI into a competitive advantage rather than a compliance burden.