Generative artificial intelligence (AI) and large language models (LLMs) have long since become part of everyday working life – including in the pharmaceutical industry. Especially in regulated environments, however, the question is not whether but how AI can be used safely, sensibly and in compliance with regulations. Between potential efficiency gains, the EU AI Act, GMP requirements and the draft of Annex 22, many companies face considerable uncertainty.
In the Experts Talk “Pharmaceutical-grade use of generative AI”, we showed in practical terms which regulatory frameworks apply, where the real risks lie and which AI applications can already be used in a compliant and validatable manner today. This article summarizes the key content of the webinar.

Why generative AI is a critical topic in the pharmaceutical environment
The relevance of AI in the pharmaceutical environment is undisputed. Studies and market analyses show great potential for AI-supported applications, particularly in quality control and manufacturing. At the same time, current figures highlight a serious risk: a large proportion of employees already use AI tools today, often without approval, without training and without clear rules.
This phenomenon is known as Shadow AI. It occurs whenever employees use generic AI tools without the company being aware of, or controlling, their use. The consequences range from data protection problems and compliance risks to breaches of the EU AI Act, in particular the AI literacy obligation (Art. 4).
What regulations apply to the use of LLMs in the pharmaceutical industry?
A central topic is the classification of current and upcoming regulations. The decisive factor here is that not every AI system is subject to the same requirements. The context of use determines the regulatory depth.
The EU AI Act
The EU AI Act applies across all sectors and affects all AI applications in companies, from office chatbots to decision support in quality assurance. The following points are particularly relevant for pharmaceutical companies:
- Mandatory AI literacy (employee training)
- Classification of AI applications as high-risk
- Mandatory human oversight for high-risk systems
The pharmaceutical sector clearly falls into the high-risk category, so automated decisions without a human in the loop are not permitted.
GMP frameworks
Irrespective of Annex 22, existing GMP regulations already apply, among others:
- ICH Q9 – Quality risk management
- Annex 15 – Qualification and validation
- Annex 11 – Computerised systems
- GAMP 5 (Second Edition) with explicit guidance on AI/ML
These already form the framework for a risk-based assessment and validation of AI systems.
Annex 22 (Draft)
The draft of Annex 22 specifies the expectations of AI in the GMP environment for the first time. Particularly relevant:
- No generative AI for critical GMP applications
- Static models only (no automatic retraining)
- Deterministic results (same input → same output)
- Requirements for explainability (XAI) – no black box systems
The focus is on applications with a direct impact on patient safety, product quality or data integrity.
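The determinism expectation in the Annex 22 draft (same input → same output) can be verified with a simple repeatability test during validation. The sketch below is a minimal illustration only: `classify_deviation` and its rule set are hypothetical stand-ins for a static, locally deployed model, not a real implementation.

```python
# Hypothetical stand-in for a static model: a fixed rule set mapping
# deviation descriptions to categories. No sampling, no retraining.
RULES = {
    "temperature": "environmental",
    "label": "packaging",
    "particle": "contamination",
}

def classify_deviation(text: str) -> str:
    """Deterministically classify a deviation description.

    A static model with no retraining and no random sampling step
    always returns the same category for the same input.
    """
    lowered = text.lower()
    for keyword, category in RULES.items():
        if keyword in lowered:
            return category
    return "unclassified"

def repeatability_check(inputs: list[str], runs: int = 3) -> bool:
    """Validation-style check: each input must yield exactly one
    distinct output across repeated runs (same input -> same output)."""
    for text in inputs:
        outputs = {classify_deviation(text) for _ in range(runs)}
        if len(outputs) != 1:
            return False
    return True

samples = ["Temperature excursion in cold room", "Missing label on batch 42"]
print(repeatability_check(samples))  # prints True for a deterministic model
```

Such a check does not prove a system is deterministic in all cases, but it is a cheap first gate in a qualification protocol before deeper evidence (fixed model version, disabled sampling) is collected.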
What does this mean in concrete terms? AI use cases in the GMP environment
Despite clear restrictions, there is still a wide range of permissible, validatable and economically viable applications.
Compliant use cases:
The practical use cases include, among others:
- Support with document design
- Classification and extraction of information (e.g. deviations)
- Research in existing documents and GAP identification
- Hyper-individualized training for employees
- Data aggregation and trend or cluster analyses
- Identification of recurring deviations
These applications support decisions rather than make them – and with clear governance they can be operated in a validatable manner.
Critical applications with high risk:
The following are not permitted, or only permitted to a very limited extent:
- Automatic batch release
- Real-time decisions without human control
- Automatic CAPA generation
- Fully automated incident descriptions
The risk of hallucinations, wrong decisions and regulatory violations is particularly high here.
Human-in-the-Loop, Intended Use & Performance Monitoring
Human-in-the-loop (HITL) means that AI systems support employees, but the decision always remains with the human. This principle is required both by the EU AI Act and by the draft of Annex 22.
At the same time, practical experience has shown that human-in-the-loop alone is not enough. Long-term use of AI can influence the decision-making behavior of employees: if AI suggestions are perceived as reliable over a longer period, there is a risk that employees will increasingly confirm them uncritically (automation bias).
Additional measures are therefore required:
- Clearly defined intended use: It must be clearly defined what the AI may and may not be used for. As generative AI can often do more than originally planned, any use outside the defined intended use must be consciously checked.
- Monitoring the interaction between humans and AI: In addition to the technical function of AI, it is important to monitor how employees deal with AI suggestions and whether decisions continue to be made actively and critically.
- Performance validation and version control: The performance of the AI must be checked over time – especially in the event of changes to processes, regulations or data. At the same time, it must be possible to trace which system or model version was in use at what time.
- Structured data management: Training, test and productively used data must be clearly separated, documented and traceable in order to ensure the quality and validation of the AI application.
These points were identified in the webinar as key prerequisites for using generative AI in a GMP environment in a controlled, traceable and compliant manner.
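The version-control and traceability points above can be illustrated with a minimal audit-trail record: for every AI suggestion, log which model version produced it, a hash of the input, and the human reviewer's decision. This is a sketch under assumptions – the field names and the version string are hypothetical, and a real GMP system would add electronic signatures and tamper-proof storage.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AiDecisionRecord:
    """One traceable entry: which model version produced which suggestion,
    and what the human reviewer decided (human-in-the-loop)."""
    model_version: str   # pinned release of the deployed model (hypothetical)
    input_sha256: str    # hash of the input, not the raw, possibly sensitive text
    ai_suggestion: str
    human_decision: str  # "accepted", "rejected" or "modified"
    reviewer: str
    timestamp_utc: str

def record_decision(model_version: str, input_text: str,
                    ai_suggestion: str, human_decision: str,
                    reviewer: str) -> AiDecisionRecord:
    return AiDecisionRecord(
        model_version=model_version,
        input_sha256=hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        ai_suggestion=ai_suggestion,
        human_decision=human_decision,
        reviewer=reviewer,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )

entry = record_decision("model-2025.1", "Deviation description ...",
                        "Classify as minor deviation", "accepted", "qa.reviewer")
print(json.dumps(asdict(entry), indent=2))
```

The key design point is that each record ties a model version to a human decision, so both performance drift and uncritical acceptance patterns can later be analyzed from the same trail.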
Practical example: Generative AI with MyGPT from Leftshift One
In the second part of the Experts Talk, Robert Spari from Leftshift One used MyGPT to show how generative AI can be used in a controlled and compliant manner.
MyGPT is an AI platform that:
- is operated in a protected private cloud environment
- guarantees that no data is stored for retraining purposes
- can be integrated into existing systems
- enables the use of generative AI without data leakage (internal or sensitive data does not leave the controlled system and is not reused for other purposes)
Typical application examples:
- Structuring unstructured audit notes into formal audit reports
- Support with scientific texts according to defined formal criteria
- Use of internal GMP documents via Retrieval Augmented Generation (RAG): the AI accesses approved internal documents for queries without training on them or storing them permanently
- Transparent source information for traceability (XAI approach)
Particularly important: The systems are configured to draw only on approved content, which greatly reduces the risk of hallucinations – a decisive factor for GMP compliance. If you have any questions about the tool, please contact Robert Spari: robert.spari@leftshift.one
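How retrieval augmented generation over approved documents works in principle can be sketched in a few lines. This is a deliberately naive illustration, not how MyGPT is implemented: retrieval here is simple keyword scoring (a production system would use embedding-based search and access control), and the document set and prompt template are invented for the example.

```python
# Minimal RAG sketch over a set of approved internal documents.
# Document IDs, texts and the prompt template are invented for illustration.
APPROVED_DOCS = {
    "SOP-017": "Cleaning validation of filling line 3 is performed quarterly.",
    "SOP-022": "Deviations must be classified within two working days.",
    "QM-005": "Batch release requires review by the Qualified Person.",
}

def retrieve(query: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword retrieval: score each approved document by the number
    of query words it contains and return the best matches with their IDs."""
    words = set(query.lower().split())
    scored = sorted(
        APPROVED_DOCS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved sources
    and forces source citations, supporting traceability (XAI)."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer ONLY from the sources below and cite the source ID.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("Who must review a batch release?"))
```

The point of the pattern is visible even in this toy version: the model never sees anything outside the approved document set, and every answer can be traced back to a cited source ID.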
Conclusion: Generative AI can be used with clear guidelines
Generative AI is not a no-go for the pharmaceutical industry, but it is not a sure-fire success either. Companies need to act today:
- Actively address Shadow AI
- Systematically record and evaluate AI use cases
- Ensure AI literacy
- Implement governance, documentation and human-in-the-loop consistently
Those who act early can use AI as an efficiency and quality lever instead of experiencing it as a compliance risk.
Further questions? Meet us live at the lounges in Karlsruhe
If you would like to delve deeper into the topic of the pharmaceutical use of AI, we look forward to a personal exchange at the Cleanroom and Processes 2026 lounges in Karlsruhe. The trade fair brings together experts from the pharmaceutical, biotechnology, medical technology and related industries and offers space for professional exchange on cleanrooms, processes, technology and regulatory requirements.
Our AI presentation at the LOUNGES 2026
Provisionally scheduled: 24.03.2026 | 11:30 am – 12:00 pm | Room 11
Quality decisions with AI: Annex 22 and EU AI Act
In this presentation, we will show how AI and large language models can be used for quality decisions – without violating regulatory requirements. We will provide insights from real projects, give a practical overview of Annex 22 and the EU AI Act and talk openly about opportunities, limitations and typical hurdles to implementation.
Further lectures from us on 25.03.2026:
- Annex 1: Big words, small media fill deeds – strategic use of media fill tests for the sustainable improvement of sterile processes
- Water Wars: Challenges and opportunities of ultrapure water – biofilm risks, system design, standardization and sustainability in ultrapure water treatment
Visit us at stand K6.1 – we look forward to exciting discussions and professional exchange! Free tickets are available with the code EXPERTSLOUNGES26 (registration required). You can book a ticket via the following link: https://cleanroom-processes.de/lounges-karlsruhe-2026/besuchertickets-lounges-karlsruhe-2026/
How the Experts Institute can support you
The Experts Institute supports pharmaceutical companies in the classification of regulatory requirements, the practical implementation of AI governance, and training courses and workshops on the EU AI Act, Annex 22 and AI in the GMP environment. Get in touch with us: info@expertsinstitut.de
In addition to this article, it is worth taking a look at our blog article on Annex 22 and the EU AI Act. There we show which AI applications are to be classified as low-risk, limited or highly critical from a regulatory perspective and what preparations companies should already be making today: https://experts-institut.de/ki-in-der-pharmaindustrie-annex-22-eu-ai-act/
You can also stay informed about other Experts Talks, blog posts and events on LinkedIn: https://de.linkedin.com/company/expertsinstitut




