An AI assistant has been deployed across the company intranet to help operators and supervisors find answers to procedural and GMP-related questions quickly. Drawing on a library of internal documents, the tool promises to reduce time spent searching procedures — but a QA review has surfaced significant compliance risks that demand careful examination before continued use can be approved.
AI in GMP
Quality Risk Assessment
Scenario Analysis
The IT team deployed a Microsoft Copilot-style assistant, accessible to all operators and supervisors through the company intranet. The tool allows staff to ask natural-language questions about procedures and GMP requirements, returning summarised answers that reference internal documents. Its rapid uptake across the site is driven by one simple advantage: it provides quick answers without requiring users to navigate the document management system directly.
Knowledge Library Contents
Approved SOPs
Work instructions
Training materials
Draft procedures
Internal guidance notes
Historical investigation reports
Typical Operator Questions
"Can I continue if this section of the batch record is incomplete?"
"Does this step require QA approval?"
"Is this deviation minor or major?"
"What should I do if this result is out of specification?"
These questions carry direct procedural and compliance consequences — making the quality of the answer critically important.
The tool became widely used precisely because it provides quick answers, but speed of adoption is not evidence of compliance fitness. Widespread use without governance approval is itself a risk.
The Four Core Risks Identified
A structured QA review of the system identified four distinct risk areas, each of which maps to a recognised GMP principle for AI deployment in regulated environments. These issues are not theoretical — they reflect observable behaviours and verifiable gaps in how the system was designed and governed before rollout.
1. Controlled and Uncontrolled Documents Mixed
The knowledge library contains both approved SOPs and uncontrolled materials including draft procedures, training slides, and historical investigation reports. The AI does not distinguish between these sources when generating a response.
2. AI Interpreting Procedural Requirements
The assistant answers questions about whether deviations are minor or major, whether a step can proceed, and whether QA approval is required — effectively performing procedural interpretation that sits outside an acceptable AI role.
3. Operators Treating AI as Procedural Authority
Observed behaviour shows operators relying on the AI response instead of opening and reading the controlled SOP. The AI output has, in practice, become the operational reference — displacing the controlled document.
4. Knowledge Library Governance Is Unclear
There is no defined process for approving which documents enter the knowledge base, controlling updates when SOPs change, or excluding draft and superseded documents from retrieval.
Issue 1 — Controlled and Uncontrolled Documents Mixed
The most fundamental risk in this scenario is that the AI assistant does not distinguish between approved, controlled documents and uncontrolled materials when generating a response. From the operator's perspective, an answer citing a draft procedure appears indistinguishable from one citing an approved SOP. The system provides no visible indication of source status, version, or approval standing.
In a GMP environment, the controlled document system exists precisely to ensure that only verified, approved information guides manufacturing decisions. When an AI retrieves content from draft procedures or historical investigation reports — which may reflect superseded thinking, interim conclusions, or unapproved practices — and presents it as a procedural answer, it undermines the entire document control framework.
Applicable Principle
Controlled Inputs
AI must operate only on approved, controlled information sources. Any document that has not been formally approved and version-controlled must be excluded from the knowledge base.
Questions to Ask Before Approving Use
Can the system be configured to retrieve only documents with a confirmed "Approved" status in the DMS?
Is there a technical control that prevents draft or superseded documents from entering the library?
Does the response indicate which document version was referenced?
What happens when a source document is revised — is the AI library updated in real time or with a lag?
Risk Summary: If the AI retrieves and presents guidance from uncontrolled documents, any manufacturing decision made on that basis is non-compliant — regardless of user intent. This is a critical finding.
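The "Controlled Inputs" principle can be expressed as a simple admission gate at library-build time. The sketch below is illustrative only: the `Document` fields and status values are assumptions standing in for whatever metadata the site's DMS actually exports, not a real integration.

```python
from dataclasses import dataclass

@dataclass
class Document:
    """DMS metadata for one document (field names are illustrative)."""
    doc_id: str
    title: str
    version: str
    status: str  # e.g. "Approved", "Draft", "Superseded"

def build_retrievable_library(dms_documents):
    """Admit only documents with confirmed Approved status into the AI library.

    Everything else (drafts, superseded versions, investigation reports
    lacking Approved status) is rejected before retrieval is possible.
    """
    approved = [d for d in dms_documents if d.status == "Approved"]
    rejected = [d for d in dms_documents if d.status != "Approved"]
    return approved, rejected

docs = [
    Document("SOP-014", "Batch Record Completion", "7.0", "Approved"),
    Document("SOP-014", "Batch Record Completion", "8.0-draft", "Draft"),
    Document("INV-2019-22", "Historical Investigation Report", "1.0", "Superseded"),
]
approved, rejected = build_retrievable_library(docs)
print([d.doc_id for d in approved])  # only the approved SOP version survives
```

The point of gating at build time rather than at answer time is that an uncontrolled document can never be retrieved at all, which is a stronger guarantee than filtering responses after generation.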
Issue 2 — AI Interpreting Procedural Requirements
The assistant is being used to answer questions that require procedural judgement — specifically, whether a deviation qualifies as minor or major, whether a particular step requires QA approval, and whether production can continue given an incomplete batch record. These are not simple information-retrieval tasks. They require contextual interpretation of GMP requirements in light of specific, real-time manufacturing circumstances.
Providing these answers goes beyond the acceptable scope of an AI document search tool and places the system in a quasi-decision-making role. The risk is compounded by the fact that the same question asked in slightly different circumstances may warrant a materially different answer — a nuance that a summarisation engine is not equipped to apply reliably.
Acceptable AI Role
Retrieve and display the relevant section of the controlled SOP so the user can read and apply it themselves.
Borderline AI Role
Summarise the content of a procedure with clear citation, flagging that the user must consult the full document before acting.
Unacceptable AI Role
Interpret whether a deviation is minor, whether approval is needed, or whether production may proceed — these judgements belong to qualified personnel.
The relevant governing principle here is the requirement to define intended use and scope. A validated AI tool must have a clearly articulated scope statement that specifies what the system is and is not authorised to do. Procedural interpretation must be explicitly excluded from scope, and the system should be technically configured to decline or redirect such queries rather than attempt an answer.
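One way to make the scope boundary technically enforceable is to route queries before retrieval, declining those that ask for procedural judgement. The sketch below uses hypothetical keyword patterns purely for illustration; a real deployment would need a validated classifier and a formally approved refusal message.

```python
import re

# Illustrative patterns only, drawn from the operator questions in this scenario.
OUT_OF_SCOPE_PATTERNS = [
    r"\bminor or major\b",
    r"\bcan i (continue|proceed)\b",
    r"\brequire (qa )?approval\b",
    r"\bout of specification\b",
]

REFUSAL = ("This question requires procedural judgement and is outside the "
           "assistant's scope. Consult the controlled SOP and contact QA.")

def route_query(query: str) -> str:
    """Decline interpretation queries; allow document-navigation queries."""
    q = query.lower()
    if any(re.search(p, q) for p in OUT_OF_SCOPE_PATTERNS):
        return REFUSAL
    return "IN_SCOPE: retrieve and display the relevant controlled document."

print(route_query("Is this deviation minor or major?"))
print(route_query("Where is the SOP for equipment cleaning?"))
```

The design choice worth noting is that the system refuses rather than attempts a hedged answer: a partially correct interpretation is more dangerous than an explicit redirect to qualified personnel.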
Issue 3 — Operators Treating the AI as Procedural Authority
Observed behaviour during the QA review reveals that operators are routinely relying on the AI's summarised response rather than opening and reading the controlled SOP. This behavioural shift is understandable — the AI answer is faster and easier to consume — but it has profound compliance implications. The controlled SOP, not the AI's interpretation of it, is the procedural authority in a GMP facility.
When the AI output becomes the de facto operational reference, several downstream risks emerge. Errors or omissions in the AI summary are propagated directly into manufacturing decisions. Operators may follow the AI's answer in good faith while the underlying SOP contains critical nuances, caveats, or updated requirements that the summarisation did not capture. Version drift between the AI library and the live DMS can go undetected for extended periods if no one is reading the source document.
This is not simply a training issue. The system design itself encourages the behaviour by providing a frictionless answer without any prompt to verify against the source. Addressing this risk requires both technical and procedural controls: the AI should always display the source document reference and version, and there should be a visible prompt directing users to confirm their understanding against the controlled document before taking action.
Applicable Principle — Human Verification is Mandatory
Users must verify information against the controlled procedure. The AI response is a starting point for navigation, not a substitute for reading the SOP.
Required Design Control
Every AI response must include the source document name, version number, and a prominent directive to confirm against the controlled document before proceeding.
Required Behavioural Control
Training and awareness programmes must explicitly address the risk of AI over-reliance. GMP training should state that AI summaries do not replace the reading of controlled procedures.
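The required design control above can be sketched as a mandatory response wrapper: no answer reaches the user without its source identity and a verification directive attached. Field names and wording are assumptions for illustration, not the system's actual output format.

```python
def format_response(summary: str, doc_name: str, doc_id: str, version: str) -> str:
    """Wrap every AI answer with its source reference and a verification prompt.

    The principle: no summary is displayed without document name, ID,
    version, and a directive to confirm against the controlled document.
    """
    return (
        f"{summary}\n"
        f"Source: {doc_name} ({doc_id}, version {version})\n"
        f"ACTION REQUIRED: Confirm against the controlled document in the DMS "
        f"before taking any action."
    )

msg = format_response(
    summary="Section 6.2 describes batch record completion requirements.",
    doc_name="Batch Record Completion",
    doc_id="SOP-014",
    version="7.0",
)
print(msg)
```

Because the wrapper is applied unconditionally in code rather than left to prompt wording, the citation and verification directive cannot be dropped by the summarisation step.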
Issue 4 — Knowledge Library Governance Is Unclear
The fourth risk area concerns the governance infrastructure that underpins the entire system. A compliant AI deployment in a GMP environment requires the same rigour applied to any other quality-critical system: defined ownership, documented processes, controlled change management, and periodic review. None of these elements were in place when the assistant was deployed.
In the absence of a governed process for maintaining the knowledge library, the following failure modes are not merely plausible but likely: a revised SOP is approved and uploaded to the DMS but the AI continues to return content from the previous version; a draft procedure written during a project is never removed from the shared drive and enters the AI library; a historical investigation report containing superseded conclusions is retrieved in response to a deviation-classification question. Each of these scenarios has the potential to directly influence a manufacturing or compliance decision.
Governance Questions That Must Be Answered
Who is the designated owner of the AI knowledge library?
What is the formal process for approving a document's inclusion?
How are SOP revisions propagated to the knowledge base — and within what timeframe?
How are draft, superseded, or retired documents identified and excluded?
Is the knowledge library subject to periodic review and audit?
Is the system validated, and is there a validation protocol on file?
Applicable Principle — Quality System Governance
AI knowledge sources must be governed through controlled processes equivalent in rigour to the document management system itself. The knowledge library is, in effect, a controlled document repository and must be treated as one.
Ownership
A named Quality owner must be assigned responsibility for the library and its ongoing integrity.
Change Control
Any addition, removal, or update to the knowledge base must follow a documented change control process.
Periodic Review
The library must be audited on a defined schedule to verify alignment with the current approved document set.
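The periodic review requirement can be made concrete as a reconciliation check between the AI library and the DMS approved set. The sketch below assumes both are available as simple doc-ID-to-version mappings, which is an illustrative simplification of a real DMS export.

```python
def audit_library(library: dict, dms_approved: dict) -> dict:
    """Compare the AI library against the current DMS approved document set.

    Both arguments map doc_id -> version. Returns the three discrepancy
    classes a periodic review would flag: stale versions, documents with
    no approved counterpart, and approved documents missing from the library.
    """
    stale = {d: (library[d], dms_approved[d])
             for d in library.keys() & dms_approved.keys()
             if library[d] != dms_approved[d]}
    unauthorised = sorted(library.keys() - dms_approved.keys())
    missing = sorted(dms_approved.keys() - library.keys())
    return {"stale_version": stale, "unauthorised": unauthorised, "missing": missing}

findings = audit_library(
    library={"SOP-014": "6.0", "SOP-021": "3.0", "DRAFT-044": "0.2"},
    dms_approved={"SOP-014": "7.0", "SOP-021": "3.0", "SOP-033": "1.0"},
)
print(findings)
```

Each non-empty finding category would feed the change control process: stale versions trigger a library refresh, unauthorised entries trigger removal, and an empty result set provides audit evidence of alignment.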
Summary — Principles and Findings Aligned
The four issues identified in this scenario each map directly to a recognised GMP principle for AI deployment. Taken together, they describe a system that was implemented for valid operational reasons but without the quality infrastructure necessary for compliant use in a regulated environment. The table below summarises the alignment between each finding and its governing principle.

Finding                                            Governing Principle
1. Controlled and uncontrolled documents mixed     Controlled Inputs
2. AI interpreting procedural requirements         Defined Intended Use and Scope
3. Operators treating AI as procedural authority   Human Verification Is Mandatory
4. Knowledge library governance unclear            Quality System Governance
Key Takeaway: Continued use of this system cannot be approved until all four governance gaps are formally addressed, the library is restricted to approved controlled documents only, the scope is explicitly limited to document navigation rather than procedural interpretation, and the system is formally validated within the quality management system.
Recommended Actions Before Approving Continued Use
The QA group reviewing this system should not approve continued use in its current form. However, the tool could provide genuine value if the necessary controls are implemented. The following actions represent a structured path to compliance. Each is a prerequisite — not an optional enhancement — for a regulated deployment of this technology.
1. Suspend Unrestricted Access
Immediately restrict access to the tool pending a formal review. Issue a communication to all users clarifying that the AI assistant is not a controlled document source and that all procedural decisions must be made with reference to the approved SOP.
2. Audit and Restrict the Knowledge Library
Conduct a full audit of all documents in the AI knowledge base. Remove all draft procedures, training slides, guidance notes, and historical investigation reports. Retain only documents with confirmed Approved status in the DMS. Implement a technical control to prevent non-approved documents from entering the library.
3. Define and Document Scope
Produce a formal scope statement defining what the AI assistant is authorised to do — specifically, document navigation and SOP location — and what it is explicitly prohibited from doing, including procedural interpretation, deviation classification, and approval decisions.
4. Establish Knowledge Library Governance
Assign a named Quality owner. Define a change control process for library additions and removals. Link the library update process to the SOP revision workflow so that approved changes are reflected within a defined timeframe. Schedule periodic library audits.
5. Validate the System and Update Training
Subject the tool to formal validation within the quality management system, including a validation protocol, risk assessment, and user acceptance testing. Update GMP training to address AI over-reliance and reinforce that AI summaries do not substitute for reading the controlled procedure.