AI, GMP, GXP

Pharmaceutical-grade use of generative AI: regulations, limits and concrete implementation approaches

Generative artificial intelligence (AI) and large language models (LLMs) have long since become part of everyday working life – including in the pharmaceutical industry. However, especially in regulated environments, the question is not whether, but how AI can be used safely, sensibly and in compliance with regulations. Between efficiency potentials, the EU AI Act, GMP requirements and the draft of Annex 22, uncertainty prevails for many companies.

In the Experts Talk “Pharmaceutical-grade use of generative AI”, we demonstrated in a practical way which regulatory framework conditions apply, where the real risks lie and which AI applications can already be used in a compliant and validatable manner today. This article summarizes the most important content of the webinar.


Why generative AI is a critical topic in the pharmaceutical environment

The relevance of AI in the pharmaceutical environment is undisputed. Studies and market analyses show that there is great potential for AI-supported applications, particularly in quality control/manufacturing. At the same time, current figures highlight a serious risk: a large proportion of employees are already using AI tools today, often without approval, without training and without clear rules.

This phenomenon is known as Shadow AI. It occurs whenever employees use generic AI tools without the company being aware of or controlling their use. The consequences range from data protection problems and compliance risks to breaches of the EU AI Act, in particular its AI literacy obligation.

What regulations apply to the use of LLMs in the pharmaceutical industry?

A central topic is the classification of current and upcoming regulations. The decisive factor here is that not every AI system is subject to the same requirements. The context of use determines the regulatory depth.

The EU AI Act

The EU AI Act applies across all sectors and affects all AI applications in companies, from office chatbots to decision support in quality assurance. This is particularly relevant for pharmaceutical companies:

  • Mandatory AI literacy (employee training)
  • Classification of AI applications as high-risk
  • Mandatory human oversight for high-risk systems

The pharmaceutical industry clearly falls into the high-risk category, so automated decisions without a human in the loop are not permitted.

GMP frameworks

Irrespective of Annex 22, existing GMP regulations already apply, among others:

  • ICH Q9 – Quality risk management
  • Annex 15 – Qualification and validation
  • Annex 11 – Computerised systems
  • GAMP 5 (second edition), with explicit AI references

These already form the framework for a risk-based assessment and validation of AI systems.

Annex 22 (Draft)

The draft of Annex 22 specifies, for the first time, the regulatory expectations for AI in the GMP environment. Particularly relevant:

  • No generative AI for critical GMP applications
  • Static models only (no automatic retraining)
  • Deterministic results (same input → same output)
  • Requirements for explainability (XAI) – no black box systems

The focus is on applications with a direct impact on patient safety, product quality or data integrity.
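The determinism expectation (same input → same output) can be made concrete with a small sketch. Everything here is illustrative: the stub stands in for a frozen, static model, and `verify_determinism` shows the kind of check a validation protocol might include.

```python
import hashlib

def static_model(prompt: str) -> str:
    # Hypothetical stand-in for a frozen, static model: the same
    # input always yields the same output (no sampling, no retraining).
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    return f"response-{digest[:8]}"

def verify_determinism(model, prompt: str, runs: int = 3) -> bool:
    """Re-run the same prompt and confirm identical outputs,
    as the Annex 22 draft expects for critical GMP applications."""
    outputs = {model(prompt) for _ in range(runs)}
    return len(outputs) == 1
```

A generative LLM with non-zero sampling temperature would fail this check, which is precisely why the draft excludes it from critical applications.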

What does this mean in concrete terms? AI use cases in the GMP environment

Despite clear restrictions, there is still a wide range of permissible, validatable and economically viable applications.

Compliant use cases:

The practical use cases include, among others:

  • Support with document design
  • Classification and extraction of information (e.g. deviations)
  • Research in existing documents and gap identification
  • Hyper-individualized training for employees
  • Data aggregation and trend or cluster analyses
  • Identification of recurring deviations

These applications are supportive, not decisive – and can be operated in a validatable manner with clear governance.
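The "supportive, not decisive" principle can be illustrated with a deliberately simple classifier sketch. The category names and keyword rules below are purely illustrative assumptions; in practice an LLM might propose the category, and a human would always confirm the result.

```python
def classify_deviation(description: str) -> str:
    # Toy keyword-based classification of a deviation description.
    # The AI (or rule set) only *proposes* a category; the final
    # decision remains with a trained human reviewer.
    rules = {
        "equipment": ("pump", "valve", "sensor", "calibration"),
        "documentation": ("record", "signature", "entry", "logbook"),
        "environment": ("temperature", "humidity", "particle"),
    }
    text = description.lower()
    for category, keywords in rules.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "unclassified"
```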

Critical applications with high risk:

The following are not permitted, or only permitted to a very limited extent:

  • Automatic batch release
  • Real-time decisions without human control
  • Automatic CAPA generation
  • Fully automated incident descriptions

The risk of hallucinations, wrong decisions and regulatory violations is particularly high here.

Human-in-the-Loop, Intended Use & Performance Monitoring

Human-in-the-loop (HITL) means that AI systems support employees, but the decision always remains with the human. This principle is required by both the EU AI Act and the draft of Annex 22.

At the same time, practical experience has shown that human-in-the-loop alone is not enough. The long-term use of AI can influence the decision-making behavior of employees. If AI suggestions are perceived as reliable over a longer period of time, there is a risk that decisions will increasingly be confirmed uncritically.

Additional measures are therefore required:

  • Clearly defined intended use: It must be clearly defined what the AI may and may not be used for. As generative AI can often do more than originally planned, any use outside the defined intended use must be consciously checked.
  • Monitoring the interaction between humans and AI: In addition to the technical function of AI, it is important to monitor how employees deal with AI suggestions and whether decisions continue to be made actively and critically.
  • Performance validation and version control: The performance of the AI must be checked over time – especially in the event of changes to processes, regulations or data. At the same time, it must be possible to trace which system or model version was in use at what time.
  • Structured data management: Training, test and productively used data must be clearly separated, documented and traceable in order to ensure the quality and validation of the AI application.

These points were identified in the webinar as key prerequisites for using generative AI in a GMP environment in a controlled, traceable and compliant manner.
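One way to support the monitoring and version-control points above is a simple audit-trail record per AI-assisted decision. This is a minimal sketch; the field names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AiDecisionRecord:
    # Illustrative audit-trail entry for one AI-assisted decision.
    model_version: str   # which system/model version was in use
    prompt_summary: str  # what the AI was asked
    ai_suggestion: str   # what the AI proposed
    human_decision: str  # what the human actually decided
    reviewer: str        # who made the final call (human-in-the-loop)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def overridden(self) -> bool:
        """True when the reviewer deviated from the AI suggestion --
        a signal worth trending: if overrides drop to zero over time,
        decisions may be getting confirmed uncritically."""
        return self.human_decision != self.ai_suggestion
```

Trending the override rate over time gives exactly the human/AI interaction monitoring described above.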

Practical example: Generative AI with MyGPT from Leftshift One

In the second part of the Experts Talk, Robert Spari from Leftshift One used MyGPT to show how generative AI can be used in a controlled and compliant manner.

MyGPT is an AI platform that:

  • is operated in a protected private cloud environment
  • guarantees that no data is stored for retraining purposes
  • can be integrated into existing systems
  • enables the use of generative AI without data leakage (internal or sensitive data does not leave the controlled system and is not reused for other purposes)

Typical application examples:

  • Structuring unstructured audit notes into formal audit reports
  • Support with scientific texts according to defined formal criteria
  • Use of internal GMP documents using retrieval augmented generation (the AI specifically accesses approved internal documents for queries without training or permanently storing them)
  • Transparent source information for traceability (XAI approach)

Particularly important: the systems are configured to access only approved content rather than hallucinating answers – a decisive factor for GMP compliance. If you have any questions about the tool, please contact Robert Spari: robert.spari@leftshift.one
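The retrieval-augmented generation pattern mentioned above can be sketched in a few lines. This is a toy keyword-overlap retriever, not the MyGPT implementation; real systems typically use embeddings, but the control principle is the same: the model is grounded exclusively in approved documents, and every answer can cite its sources.

```python
def retrieve(query: str, approved_docs: dict[str, str], top_k: int = 2) -> list[str]:
    # Toy retrieval over approved documents only: rank by how many
    # query terms each document contains. The model never sees
    # anything outside this controlled corpus.
    q_terms = set(query.lower().split())
    scored = sorted(
        approved_docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]

def build_prompt(query: str, approved_docs: dict[str, str]) -> str:
    """Ground the prompt in retrieved, approved sources and cite
    their IDs, so every answer is traceable (XAI approach)."""
    hits = retrieve(query, approved_docs)
    context = "\n".join(f"[{doc_id}] {approved_docs[doc_id]}" for doc_id in hits)
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"
```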

Conclusion: Generative AI can be used with clear guidelines

Generative AI is not a no-go for the pharmaceutical industry, but it is not a sure-fire success either. Companies must act today:

  • Actively address Shadow AI
  • Record and evaluate AI use cases in a structured way
  • Ensure AI literacy
  • Consistently implement governance, documentation and human-in-the-loop

Those who act early can use AI as an efficiency and quality lever instead of experiencing it as a compliance risk.

Further questions? Meet us live at the lounges in Karlsruhe

If you would like to delve deeper into the topic of the pharmaceutical use of AI, we look forward to a personal exchange at the Cleanroom and Processes 2026 lounges in Karlsruhe. The trade fair brings together experts from the pharmaceutical, biotechnology, medical technology and related industries and offers space for professional exchange on cleanrooms, processes, technology and regulatory requirements.

Our AI presentation at the LOUNGES 2026

Expected: 24.03.2026 | 11:30 am – 12:00 pm | Room 11
Quality decisions with AI: Annex 22 and EU AI Act

In this presentation, we will show how AI and large language models can be used for quality decisions – without violating regulatory requirements. We will provide insights from real projects, give a practical overview of Annex 22 and the EU AI Act and talk openly about opportunities, limitations and typical hurdles to implementation.

Further lectures from us on 25.03.2026:

  • Annex 1: Big words, small media fill deeds – strategic use of media fill tests for the sustainable improvement of sterile processes
  • Water Wars: Challenges and opportunities of ultrapure water – biofilm risks, system design, standardization and sustainability in ultrapure water treatment


Visit us at stand K6.1 – we look forward to exciting discussions and professional exchange! Free tickets are available with the code EXPERTSLOUNGES26 (registration required). You can book a ticket via the following link: https://cleanroom-processes.de/lounges-karlsruhe-2026/besuchertickets-lounges-karlsruhe-2026/

How the Experts Institute can support you

The Experts Institute supports pharmaceutical companies in classifying regulatory requirements, implementing AI governance in practice, and with training courses and workshops on the EU AI Act, Annex 22 and AI in the GMP environment. Get in touch with us: info@expertsinstitut.de

In addition to this article, it is worth taking a look at our blog article on Annex 22 and the EU AI Act. There we show which AI applications are to be classified as low-risk, limited or highly critical from a regulatory perspective and what preparations companies should already be making today: https://experts-institut.de/ki-in-der-pharmaindustrie-annex-22-eu-ai-act/

You can also stay informed about other Experts Talks, blog posts and events on LinkedIn: https://de.linkedin.com/company/expertsinstitut

4 February 2026 / by Christoph Köth

AI in the pharmaceutical industry: Annex 22 & EU AI Act – current obligations and practical implementation

The use of artificial intelligence (AI) is rapidly gaining importance in the pharmaceutical industry – from increasing efficiency in everyday office work to complex applications in GMP-relevant processes. At the same time, regulatory requirements are increasing significantly. The EU AI Act and the planned Annex 22 to the EU GMP guidelines will provide a clear regulatory framework for the use of AI in regulated environments for the first time.

This article summarizes the key content of the Experts Talk “AI in the pharmaceutical industry – Annex 22 & EU AI Act as a framework for quality and efficiency”, which took place on 27 November 2025. The discussion focused on what these regulatory developments mean in concrete terms for pharmaceutical companies, what obligations already apply today and how the path from risk analysis to regulatory compliance can be successful.


Why the EU AI Act and Annex 22 are relevant now

The EU AI Act is the first comprehensive European regulation for the use of AI. It takes a risk-based approach and differentiates between AI applications according to their potential impact on people, safety and fundamental rights. The AI Act is particularly relevant for pharmaceutical companies, as many AI applications can be classified as high-risk.

At the same time, the planned Annex 22 specifies the authorities’ expectations for the use of AI in the GMP environment. Even though Annex 22 had not yet been finally adopted at the time of the Experts Talk, it was clear from the exchange with the speakers that the regulatory direction is clear – and companies should prepare now.

Annex 22 & EU AI Act at a glance: Which regulations apply to which AI use cases – from office chatbots to AI in the GMP process?

A central topic of the Experts Talk was the clear demarcation of the various regulations and their areas of application. This is because not every AI application is subject to the same requirements. The decisive factors are the context of use, the risk potential and the impact on product quality, patient safety and fundamental rights.

Two sets of rules – two perspectives

  • EU AI Act: The EU AI Act is a horizontal, cross-sector regulation that addresses the use of AI throughout the entire company. It applies not only to GMP processes, but also to AI applications in HR, IT, office areas and management. The aim is to protect fundamental rights, safety and health and to create trust in AI systems.
  • Annex 22 (Draft): Annex 22 is a vertical, GMP-specific supplement to the EU GMP guideline. It focuses exclusively on AI applications in the regulated manufacturing environment and particularly addresses systems with an impact on product quality, patient safety and data integrity. The annex is strongly oriented towards GAMP 5 principles and supplements existing regulations such as Annex 11 and Chapter 4.

Typical AI use cases and their regulatory classification

1. Office and support applications (minimal to low risk)
Examples:

  • Office chatbots for text creation or translation
  • Spelling and formulation aids
  • AI-supported ticket or document sorting

These applications generally have no direct influence on GMP decisions or patient safety. Nevertheless, there are already requirements arising from the EU AI Act, particularly with regard to:

  • Transparency about the use of AI
  • Employee training (AI literacy)
  • Clear internal rules on the use and handling of data

2. Decision-supporting AI systems (limited to high risk)
Examples:

  • AI-based decision support in QA or production
  • Forecast models for maintenance, deviations or capacity planning

Regulatory requirements increase significantly here. Relevant factors include:

  • Structured risk assessment and classification
  • Documentation of models, data basis and decision logics
  • Clear governance structures and responsibilities
  • Concepts for human control (human-in-the-loop)

The more decisions are automated or prepared, the closer these systems come to the high-risk area of the EU AI Act.

3. AI in the GMP core process (high risk / Annex 22 relevant)
Examples:

  • AI-supported process monitoring
  • Automated quality assessments
  • AI systems with influence on approval decisions

These applications are the central focus of Annex 22, as the draft makes clear:

  • Critical AI systems must be deterministic, validatable and explainable (XAI)
  • Dynamic learning systems and generative AI are not currently intended for critical GMP applications
  • Human-in-the-loop is absolutely essential
  • Data quality, traceability and life cycle documentation are key success factors

The Experts Talk clearly showed that the EU AI Act and Annex 22 are not alternatives, but complement each other. Companies must consider both perspectives in order to use AI in a compliant and sustainable manner.
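The three tiers above can be expressed as a simplified decision rule. This sketch deliberately compresses a multi-factor legal assessment into two booleans and is illustrative only; a real classification requires a documented, multi-factor risk analysis.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "office and support applications"
    LIMITED_TO_HIGH = "decision-supporting AI systems"
    HIGH = "AI in the GMP core process"

def classify_use_case(gmp_impact: bool, prepares_decisions: bool) -> RiskTier:
    # GMP impact dominates: anything touching product quality,
    # patient safety or data integrity lands in the highest tier.
    if gmp_impact:
        return RiskTier.HIGH
    # Systems that prepare or support decisions move toward the
    # high-risk area of the EU AI Act as automation increases.
    if prepares_decisions:
        return RiskTier.LIMITED_TO_HIGH
    return RiskTier.MINIMAL
```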

What pharmaceutical companies already have to consider today

A key conclusion of the Experts Talk: waiting is not an option. Even without the final adoption of Annex 22, there are already specific requirements from existing GMP regulations, the EU AI Act and general quality assurance principles.

It was particularly emphasized that companies must obtain a structured overview of all AI applications in use or planned – regardless of whether they are used in the GMP core process, in supporting areas or in everyday office work.

The key requirements include in particular:

  • Transparency and traceability of AI systems
  • Risk assessment and classification of AI use cases
  • Documentation and governance over the entire life cycle
  • Training and qualification of employees

Shadow AI poses a particular risk here – i.e. the uncontrolled use of generic AI tools such as ChatGPT in day-to-day work. Without clear rules, approvals, training and documentation, this can quickly lead to deviations and compliance risks.
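Getting that structured overview starts with a simple inventory. The sketch below is an assumption about what a minimal register might hold, not a prescribed format; flagging unapproved entries is the first concrete step against Shadow AI.

```python
from dataclasses import dataclass

@dataclass
class AiUseCase:
    # Illustrative inventory entry; fields are assumptions.
    name: str
    department: str
    approved: bool       # formally approved, or Shadow AI?
    gmp_relevant: bool   # touches product quality / patient safety?

def shadow_ai(inventory: list[AiUseCase]) -> list[str]:
    """Tools in use without formal approval."""
    return [uc.name for uc in inventory if not uc.approved]

def needs_gmp_assessment(inventory: list[AiUseCase]) -> list[str]:
    """GMP-relevant use cases that must go through risk assessment."""
    return [uc.name for uc in inventory if uc.gmp_relevant]
```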

Implementing AI governance in practice: Tool to support regulatory requirements

It became clear that regulatory requirements such as the EU AI Act, Annex 22 (Draft) and ISO/IEC 42001 can only be implemented effectively if they are translated into clear, practicable structures.

This is where an AI governance solution from Goodly Technologies comes in, which is currently being developed specifically for regulated industries. The tool supports companies in managing AI systems in a structured and traceable manner throughout their entire life cycle: from planning and deployment to continuous monitoring.

Among other things, the focus is on:

  • Systematic recording and classification of AI applications
  • Documentation of responsibilities, risks and controls
  • Mapping of key requirements from Annex 22
  • Integration of SOPs, training and proof of audit and inspection capability

The aim is not to limit AI, but to make it usable in a controlled manner as a basis for innovation and regulatory security. If you have any questions about the tool, please contact Robert Hoffmeister: robert.hoffmeister@goodly-technologies.com

Conclusion: Set the course for compliant AI now

The Experts Talk on November 27, 2025 made it clear that the EU AI Act and the planned Annex 22 are not abstract future topics, but are already having a concrete impact on the day-to-day work of pharmaceutical companies.

Companies that already use AI or are planning to use it should act early:

  • Identify AI use cases
  • Assess risks
  • Clarify responsibilities
  • Define processes
  • Train employees

Structured preparation makes it possible to use regulatory requirements not as a brake, but as a foundation for the safe, efficient and sustainable use of AI.

How the Experts Institute can support you

The Experts Institute supports pharmaceutical companies in classifying regulatory requirements, implementing AI governance in practice, and with training courses and workshops on the EU AI Act, Annex 22 and AI in the GMP environment. Get in touch with us: info@expertsinstitut.de

The Experts Talk series will also be continued. The next Experts Talk will take place on January 22 at 10:30 a.m. on the topic of “Pharmaceutical-grade use of generative AI”. To register for the event:
https://academy.experts-institut.de/ExpertsTalkmitChristophKthRobertSpariPharmatauglicherEinsatzvongenerativerKI

You can find more articles in our newsroom:
https://experts-institut.de/newsroom/

And feel free to follow us on LinkedIn to make sure you don’t miss any more Experts Talks:
https://de.linkedin.com/company/expertsinstitut

12 January 2026 / by Christoph Köth