Validation of AI: healthcare's new challenge

AI has the potential to fundamentally transform healthcare and life sciences—but only if we succeed in building the infrastructure of validation, compliance, and trust that the technology requires. Regulations are evolving rapidly, and those players who already understand how compliance can be a strategic asset rather than an administrative burden will define the next generation of healthcare innovation.

Introducing artificial intelligence into healthcare and life sciences is no longer a question of if, but how. The technology is already part of everyday clinical practice: algorithms that analyze X-rays, systems that predict the risk of readmission, AI that helps researchers design drug molecules.

But as pilot projects emerge, one question becomes increasingly clear: How can we ensure that AI solutions meet the same rigorous standards of quality, safety, and traceability as the rest of the healthcare system?

This is where compliance and validation come into play, as both enablers and bottlenecks: they are crucial for building trust and safety, but they also often determine how quickly innovation actually reaches the patient.

The regulatory playing field is changing

Traditionally, validation in life sciences has been built on well-established principles and standards such as GxP, GAMP 5, ALCOA+, and ISO. These frameworks are designed for stable systems where software and processes change slowly and under strict control. But the introduction of AI fundamentally challenges this paradigm.

Algorithms can change when trained on new data, lose performance if the underlying data changes (known as model drift), and produce different results depending on how the training data is composed. This places entirely new demands on how we understand, monitor, and validate AI systems, and raises questions that current regulations can only partially answer.
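To make model drift concrete: one simple way to detect it is to compare the statistics of incoming data against the training data and flag large shifts. The sketch below is a minimal, illustrative example (the feature values, threshold, and function names are invented for demonstration; real drift monitoring would use richer statistical tests across many features):

```python
import statistics

def drift_score(train_values, live_values):
    """Standardized shift of the live mean relative to the training distribution."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

def check_drift(train_values, live_values, threshold=0.5):
    """Flag drift when the live mean shifts more than `threshold` training SDs."""
    return drift_score(train_values, live_values) > threshold

# Example: training data centered around 100, live data shifted toward 108
train = [98, 101, 100, 99, 102, 100, 97, 103]
live = [107, 109, 108, 106, 110]
print(check_drift(train, live))  # True: a clear mean shift is flagged as drift
```

Even a basic check like this illustrates the regulatory point: drift is measurable, so monitoring for it can be made a documented, auditable process rather than an afterthought.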

The EU's AI Act is a new player on the scene

In December 2023, the EU reached political agreement on the AI Act, the world's first comprehensive AI legislation, which was formally adopted in 2024. For healthcare, this means that most clinical AI applications are classified as "high-risk AI." The requirements include:

  • Risk management and documentation
  • Transparency and traceability
  • Data quality and representativeness
  • Human review of decisions
  • Robustness and cybersecurity

This means that AI solutions in healthcare must not only meet the requirements of medical device regulations (MDR, IVDR), but also the additional requirements set out in the AI Regulation, with a particular focus on the unique risks and challenges inherent in AI technology.

FDA initiative: from CSV to CSA

Changes are also taking place in the US. A clear example in medical technology is the FDA's Computer Software Assurance (CSA) initiative, which replaces part of the old Computer System Validation (CSV) paradigm. The focus is shifting from "documentation of everything" to a more risk-based, pragmatic approach.

For AI, this means that you may need to:

  • Validate performance and robustness rather than just demonstrating processes
  • Ensure continuous monitoring rather than one-time validation
  • Show explainability, not just results

This shift better matches the nature of AI, but it also requires organizations to build new competencies, not only in data analysis and machine learning, but also in broader areas such as data governance and data management, risk management, ethics, regulatory interpretation, and human-AI interaction. The combination of technical understanding and regulatory expertise will be crucial to developing, implementing, and managing AI systems in a safe and legally compliant manner.

Three specific challenges with AI validation

1. Models that change over time

A traditional QMS assumes that software and systems are static until a change is made. But AI models can be continuously retrained, sometimes automatically. How do you validate something that never reaches a final, stable state but changes dynamically over time?

Possible solutions:

  • Define clear change control processes for model training as well.
  • Use "freezing" of models for clinical use, while training takes place in a controlled test environment.
  • Implement robust model monitoring (performance monitoring) that can trigger revalidation.
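The monitoring idea in the last bullet can be sketched in a few lines: track performance over a rolling window and raise a flag when it falls below an agreed threshold. The class below is a simplified illustration (the window size, threshold, and class name are hypothetical, not regulatory values):

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window performance monitor that flags when revalidation is needed."""

    def __init__(self, window=100, threshold=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_revalidation(self) -> bool:
        # Only judge once the window is full, to avoid noise from small samples
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.accuracy() < self.threshold

monitor = PerformanceMonitor(window=10, threshold=0.9)
for correct in [True] * 8 + [False] * 2:   # 80% accuracy over a full window
    monitor.record(correct)
print(monitor.needs_revalidation())        # True: performance fell below threshold
```

In practice such a flag would feed into the change control process from the first bullet: a triggered revalidation follows the same documented procedure as any other change to the system.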

2. Data quality and bias

AI relies on data, but patient data is often fragmented and unevenly distributed. If a model is trained on data from a limited patient group, it may produce poorer results for other groups, which can have serious consequences for equal healthcare.

Regulatory requirements for data diversity and representativeness are key here. Organizations must be able to demonstrate not only how much data has been used, but also how well it reflects reality.

3. Explainable AI

One of the biggest challenges is that AI often functions as a "black box"; the model may give the right answer, but it is difficult to understand why. In healthcare, this is problematic: doctors, regulators, and patients need to be able to understand the reasoning behind a decision.

This is driving an increasing focus on explainable AI, technologies that make it possible to visualize the factors that influenced the model's decisions. Implementing this is not only a technical challenge, but also a regulatory and ethical necessity.

When regulations become a force for innovation

It is easy to view regulatory compliance as an obstacle to AI in healthcare, as regulations are complex, requirements are high, and the process is often slow. However, when handled correctly, this work can become a driver for innovation:

  • By incorporating validation and risk management into AI projects at an early stage, the risk of costly rework later on is reduced.
  • Robust traceability and documentation enable companies to obtain regulatory approval more quickly.
  • By working proactively with ethics and transparency, trust can be built among both healthcare staff and patients.

It's about moving from "we do compliance because we have to" to "we use compliance as a quality assurance and competitive advantage."

How organizations can refine their working methods

For organizations that already have established compliance structures, the next step is not about building something new, but rather deepening the integration between AI, quality, and regulatory governance. The goal is to create a system that is both flexible and robust enough to handle the dynamics of AI, without compromising traceability, security, or data integrity.

1. Educate the organization in AI and regulatory understanding

The biggest bottleneck is often not technology, but expertise. Regulatory specialists need to understand the basics of model training, data management, and the limitations of AI, while data analysts and developers need insight into GxP, ISO 13485, and the AI Regulation. Interdisciplinary training programs, joint workshops, and rotational roles between QA and data science can significantly increase an organization's AI maturity.

2. Work in a risk-based and differentiated manner

Not all AI applications require the same level of validation. Develop a differentiated risk model that classifies systems based on their impact on patient safety, data integrity, and the criticality of decision support. Such a risk matrix makes it possible to allocate resources proportionally while demonstrating to regulatory authorities that the organization applies a systematic, risk-based approach.
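A risk matrix of this kind can be expressed as a simple classification rule. The sketch below is purely illustrative (the 1–3 scoring scale, the thresholds, and the tier names are invented assumptions; a real model would be defined by the organization's quality and regulatory functions):

```python
def classify_ai_system(patient_safety: int, data_integrity: int,
                       decision_criticality: int) -> str:
    """Map three risk dimensions (each rated 1=low to 3=high) to a validation tier."""
    score = patient_safety + data_integrity + decision_criticality
    if patient_safety == 3 or score >= 7:
        return "high"    # full validation, continuous monitoring, human oversight
    if score >= 5:
        return "medium"  # targeted validation of critical functions
    return "low"         # lightweight assurance, periodic review

print(classify_ai_system(3, 2, 2))  # "high": direct patient-safety impact dominates
print(classify_ai_system(1, 2, 1))  # "low"
```

Note the design choice: a maximum patient-safety rating forces the "high" tier regardless of total score, so no aggregation can dilute a direct safety impact.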

3. Build AI control into QMS

Develop the quality management system (QMS) so that it explicitly covers AI-specific processes and roles. This means including procedures for model training, monitoring, versioning, and revalidation, as well as defining the division of responsibilities between development, validation, and operation. A mature QMS should also support continuous improvement based on the model's actual performance in a clinical or operational environment.

4. Introduce MLOps with regulatory approval

MLOps is not just a pipeline for model training, deployment, and monitoring; it is a framework for operationalizing quality, traceability, and compliance. By building traceability, validation points, and risk management into the MLOps flow, the organization can achieve both efficiency and regulatory predictability. When GxP principles are integrated into the development cycle, quality work becomes a natural part of the production chain rather than a separate step at the end.
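As a concrete illustration of traceability built into the MLOps flow, each model release can be tied to a record linking the model version, a cryptographic fingerprint of its training data, and the validation outcome. The function below is a minimal sketch (the field names and versioning scheme are assumptions, not a standard):

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, training_data: bytes,
                 validation_passed: bool) -> dict:
    """Create a traceability record linking a model release to its data and validation."""
    return {
        "model_version": model_version,
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "validation_passed": validation_passed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = audit_record("1.4.2", b"patient-cohort-extract", validation_passed=True)
print(json.dumps(record, indent=2))
```

Because the data hash changes whenever the training set changes, such records give auditors an unambiguous link between a deployed model, the data it was trained on, and the validation evidence, which is exactly the kind of traceability GxP principles ask for.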

Example: AI in clinical trials

Clinical trials are an area where AI can make a huge difference, from patient recruitment to real-time analysis of study data. But this is also where regulatory challenges become particularly apparent:

  • How can you ensure that patient data is protected when training AI models?
  • How is AI's role in study design documented so that authorities will accept the results?
  • How do you deal with the fact that models can perform differently in different geographical populations?

Organizations that successfully answer these questions can both streamline trials and increase the chances of regulatory approval.

Conclusion


Validating AI is no longer about meeting requirements; it is about creating a living process for quality, risk awareness, and transparency that evolves alongside technology.

For organizations that want to move from pilot to real patient benefit, the question is no longer whether to take the step, but how. And that step begins with dialogue between technology, quality, regulation, and leadership. Let's continue that dialogue where it is most useful to you.
