Artificial Intelligence in Life Science

- A growing force for innovation & compliance

Artificial intelligence (AI) has recently come to play a central role in life sciences, serving not only as a powerful tool to drive industrial efficiency but also as an engine for innovation. However, the increased use of AI also raises questions around security, best practices and regulatory compliance. Organizations in the life sciences industry face the challenge of navigating this complex landscape of risk management requirements and responsible use of AI.

A historical overview and a look ahead

 

The use of AI in life sciences is not new. Machine learning and deep learning algorithms have long been used in research and development and industrial applications. However, with the rapid growth of generative AI, the landscape for AI has changed dramatically, which also means that the regulatory framework needs to adapt. Although technology has outpaced regulations, guidelines and standards are already in place to promote best practices and address risks and safety concerns within organizations.

 

European Union AI Regulation - AI Act

 

A significant step forward was taken when the European Parliament adopted the AI Act, the EU's AI regulation, on March 13, 2024. This regulation is based on a risk-based approach where AI systems are classified into different risk categories depending on their impact on society. With clear prohibitions on systems that pose unacceptable risks and strict requirements for high-risk applications, the AI Act aims to establish a framework for the responsible use of AI.

The risk categories of the AI Act are as follows:

  • Unacceptable risk: Applications and systems in this category are considered a clear threat to people's safety, livelihoods and rights. Examples include systems that use real-time remote biometric identification in publicly accessible spaces, or that exploit the vulnerabilities of specific groups of people in a harmful way. These are prohibited under the AI Act.

 

  • High risk: Among others, this includes AI systems used in critical infrastructure, education or employment. It also applies to AI-based medical devices, or AI-based components of medical devices, covered by the MDR and IVDR. For these systems, there are additional requirements for risk management, data governance, documentation, archiving, transparency, human oversight and cybersecurity.

 

  • Limited risk: This applies to AI systems that interact with natural persons, such as AI-based chatbots. These have transparency requirements where it should be clear to users that they are interacting with an AI system.

 

  • Minimal or no risk: This is an unregulated category that includes, for example, spam filters.
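The four-tier scheme above can be sketched as a toy classifier. Everything here (the `RiskTier` enum, the `triage` function and its parameters) is a hypothetical simplification for illustration only, not legal guidance; actual classification under the AI Act requires legal analysis of the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative labels only)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated"

def triage(uses_realtime_remote_biometric_id: bool,
           is_mdr_ivdr_medical_device: bool,
           interacts_with_natural_persons: bool) -> RiskTier:
    """Hypothetical triage of an AI system into a risk tier.

    Checks the tiers from most to least restrictive, mirroring the
    examples in the text: biometric identification -> prohibited,
    MDR/IVDR-covered devices -> high risk, chatbots -> limited risk.
    """
    if uses_realtime_remote_biometric_id:
        return RiskTier.UNACCEPTABLE
    if is_mdr_ivdr_medical_device:
        return RiskTier.HIGH
    if interacts_with_natural_persons:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# An AI-based medical device component lands in the high-risk tier:
print(triage(False, True, False).name)  # HIGH
```

The ordering of the checks matters: a system is assigned to the most restrictive tier that applies to it.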

 

Specific rules apply to AI systems classified as general-purpose AI, i.e. systems with broad application possibilities, such as ChatGPT. Providers of these systems will have to comply with specific risk management and documentation requirements for monitoring and compliance control of the system. The regulation is expected to enter into force in May/June 2024, after which there will be a phased implementation: after 6 months, systems posing unacceptable risk will be banned, and for general-purpose AI the obligations will apply after 12 months. For systems falling under the MDR or IVDR, the AI Act will apply after 36 months. The European Commission has created a European AI Office to ensure that the AI Act is implemented and to provide guidance for compliance.
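The phased timeline can be made concrete with a little date arithmetic. The entry-into-force date below is an assumption for illustration only (the text gives only "May/June 2024"), and `add_months` is a simplified helper that clamps each milestone to the first of the month.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day clamped to 1 for simplicity)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, 1)

# Assumed entry-into-force date, for illustration only.
entry_into_force = date(2024, 6, 1)

milestones = {
    "prohibitions on unacceptable-risk systems": add_months(entry_into_force, 6),
    "obligations for general-purpose AI": add_months(entry_into_force, 12),
    "AI Act applies to MDR/IVDR-covered systems": add_months(entry_into_force, 36),
}
for name, when in milestones.items():
    print(f"{when.isoformat()}: {name}")
```

Under this assumed start date the prohibitions would bite around the end of 2024, the general-purpose AI obligations in mid-2025, and the MDR/IVDR deadline in mid-2027.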

 

"Don't jump on the AI train without first considering documentation and risk management. Consider it a reminder of the importance of balancing excitement with safety when introducing AI. Are you up to date with the latest EU and FDA guidelines? You need to ensure that your company protects sensitive data while following best practices for AI use.

AI in Life Science is more than just a trend; it is a well-established approach to research and development. Authorities such as the FDA and the EMA are striving not only to promote innovation but also to ensure high quality and data integrity. Are you ready to embrace this technology? Then a plethora of opportunities, and also some challenges, await you along the way." - Tanya Al-Khafaf, Consultant, Plantvision Compliance.

 

The emergence of the EMA Big Data Steering Group & the US Executive Order

 

The initiative of the European Medicines Agency (EMA) with the Big Data Steering Group marks a significant milestone in the use of advanced technologies in the life science sector.

In addition to the AI Act on the EU side, the EMA has set up this group to guide development and promote the responsible use of AI, demonstrating its commitment to moving the industry forward in a sustainable and ethical way. The group has published a timetable from 2023 to 2028 that includes guidelines and workshops; already this year, it will begin developing guidelines specifically for drug development and AI. This five-year timeline shows a thoughtful, long-term approach to integrating big data analytics and AI technologies effectively.

The EMA's focus on developing guidelines specifically for drug development and AI from the outset is of particular importance. It underlines the need to adapt the technology to the unique requirements and challenges of the industry. By formulating these guidelines, EMA can help create a coherent and transparent framework for the use of AI in medical research and development.

The discussion on responsible AI under this initiative is also of key importance. It is not only important that the technology is effective, but also that it is used ethically and fairly. Issues such as privacy, fairness and social impact need to be taken into account when designing policies and frameworks for the use of AI in life sciences. By focusing on these aspects, EMA can help ensure that the technology benefits society as a whole and not just certain stakeholders.

It is encouraging to see the EMA taking the lead in promoting the responsible use of AI in life sciences. This demonstrates their commitment to fostering innovation while taking into account the broader implications of the technology's use. With the Big Data Steering Group, the EMA is positioning itself as a pioneer in the industry and setting the standard for how AI should be integrated in a sustainable and ethical way.

 

Executive order for the responsible use of AI

 

In the fall of 2023, President Biden issued an executive order focused on the safe development and use of artificial intelligence (AI). This order marks a significant strategic direction for the United States' approach to AI and demonstrates the growing awareness of the need to regulate and oversee its use. By emphasizing the responsible use of AI and calling on federal agencies to create guidelines and standards, the Biden administration is taking a proactive step to address the potential risks associated with AI use.

 

The FDA's role in future regulation

 

The Food and Drug Administration (FDA) is the US agency responsible for food and pharmaceuticals. The FDA has not yet implemented a framework for the use of artificial intelligence (AI), but in 2023 it published two discussion papers exploring the use of AI/ML in drug development and manufacturing. These papers summarize how AI and machine learning can be applied at different stages of drug development and in clinical trials. As a step towards future regulations and guidance, the FDA has actively sought feedback from stakeholders. The goal is to identify the mechanisms needed to manage the data used to generate AI models in the pharmaceutical industry and to establish best-practice principles.

A significant observation is the parallel work underway in both the EU and the US to address the challenges and opportunities of AI in the life science sector. The EU is a little further along with the AI Act; while the US FDA does not yet have an established framework for AI use, its discussion documents and stakeholder engagement indicate a desire to promote the responsible and effective use of this technology in the pharmaceutical industry.

 

ISO 42001 - The standard for AI Management Systems

 

ISO 42001 introduces a new standard for the management of artificial intelligence (AI), defining so-called AI Management Systems (AIMS). The standard establishes a framework for the documentation required within organizations using AI and is relevant to all organizations that develop, deliver or use AI systems. By providing guidelines for establishing, implementing, maintaining and improving AI management systems, ISO 42001 aims to promote a more structured and effective use of AI.

A key point highlighted in ISO 42001 is the importance of having a clear purpose for the use of AI within an organization and understanding the needs of the organization to justify the use of the AI systems. The standard requires organizations to establish AI objectives to define the purpose of the use of AI and adopt an AI policy to create a framework for the use of these systems.

Risk management is a key aspect emphasized in ISO 42001, with a focus on identifying and managing both internal and external risks and their potential impact on society. This reflects the increasing awareness of the need to integrate responsible and socially conscious aspects into the AI strategy of organizations.

Another interesting observation is that ISO 42001, despite its similarities to ISO 9001 in structure and design, has a greater focus on society and its impact. This indicates a shift in priority towards a more societal approach to AI management systems, reflecting the increasing importance of addressing ethical and societal issues in the use of this technology.

The documentation requirements and deviation management of an AI management system are similar to those of a quality management system, which underlines the need to integrate these systems to ensure consistent and effective management of the organization's processes. It is the responsibility of management to ensure that an AIMS is in place and that the right skills are available to manage the AI systems responsibly.

 

A final reflection

 

In an era of exponential growth for AI and increased demand for the integration of AI systems and generative AI, issues around privacy, bias and security arise. Organizations now, more than ever, need a clear AI management, documentation and risk management framework to guide them through the implementation of AI. ISO 42001 represents a groundbreaking standard that enables the responsible and sustainable use of AI in life sciences and other industries.

 

Did you know that...?

 

- The AI Act is expected to come into force in May/June 2024 and will have a phased implementation.

- The FDA has requested feedback from stakeholders to inform future regulations and guidance on the use of AI in drug development.

- ISO 42001 is similar in structure and design to ISO 9001, but with a greater focus on social impact and risk management.

- Future regulation around AI is expected to be a combination of national and international standards and regulations, with the aim of fostering innovation while ensuring societal and individual well-being.

 

Tips!

Here you can read more about how we helped a company that specializes in developing products which present medical information based on tissue images using AI and machine learning.
