Scenario 9 – The Inspection Question
A practical walkthrough of how to defend AI-assisted batch record review during a regulatory inspection. This scenario demonstrates that a strong audit defence does not require explaining complex technology — it requires demonstrating clear governance, defined scope, and uncompromised human oversight.
Workshop Scenario
Background: AI in Batch Record Review
Your organisation uses an AI agent to assist QA reviewers with batch record review. The system has been integrated into the quality process to support reviewers in identifying potential documentation issues before the final compliance assessment is completed. After six months of operational use, the system has become a standard component of the batch record review workflow.
What the System Does
  • Identifies missing entries in batch records
  • Flags calculation inconsistencies
  • Highlights possible documentation anomalies
  • Produces a structured review summary for QA assessment
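To make the four functions above concrete, the following is a purely illustrative sketch. The scenario does not specify how the system is implemented; the field names, the "B-" batch number convention, and the 0.5% yield tolerance are all assumptions made for this example.

```python
# Hypothetical sketch of an AI-assisted batch record scan.
# Field names, the batch-ID format, and the yield tolerance are
# illustrative assumptions, not the actual system in this scenario.

REQUIRED_FIELDS = ["batch_id", "operator_initials", "start_weight_kg",
                   "end_weight_kg", "recorded_yield_pct"]

def scan_batch_record(record: dict) -> dict:
    """Return a structured review summary of potential issues.

    Findings are flagged for QA assessment only; no compliance
    decision is made here.
    """
    findings = []

    # 1. Identify missing entries in the batch record
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            findings.append({"type": "missing_entry", "field": field})

    # 2. Flag calculation inconsistencies (recorded vs computed yield)
    try:
        computed = 100.0 * record["end_weight_kg"] / record["start_weight_kg"]
        if abs(computed - record["recorded_yield_pct"]) > 0.5:
            findings.append({"type": "calculation_inconsistency",
                             "field": "recorded_yield_pct",
                             "computed": round(computed, 2)})
    except (KeyError, TypeError, ZeroDivisionError):
        pass  # missing inputs are already flagged above

    # 3. Highlight a simple documentation anomaly (format check)
    batch_id = record.get("batch_id", "")
    if batch_id and not batch_id.startswith("B-"):
        findings.append({"type": "format_anomaly", "field": "batch_id"})

    # 4. Structured summary for QA review, never a compliance verdict
    return {"findings": findings, "requires_qa_review": True}
```

Note that the summary always sets `requires_qa_review` to true: the sketch mirrors the governance position that every output goes to a human reviewer.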
The Inspection Trigger
During a routine inspection, the auditor learns that AI is being used within the batch record review process. They request a full explanation of how the system is governed — a question that organisations must be prepared to answer clearly, confidently, and without hesitation.

The auditor's opening question: "Can you explain how this AI system is used within your GMP quality process?"
Mock Inspection: Key Questions
When regulators discover that AI or intelligent agents are used in a GMP process, their scrutiny will typically focus on the following critical areas. Participants should be prepared to address these questions thoroughly and confidently to demonstrate robust governance and control.
Intended Use
“How is this AI system used within your quality process?”
Auditors will seek a precise understanding of the AI agent's role and the specific decisions it influences. It's vital to clearly articulate its defined purpose and where it fits within the existing quality workflow.
Scope and Limitations
“What is the system allowed to do and what is it not allowed to do?”
Demonstrate that the boundaries of the AI’s responsibilities are well-defined. This includes clearly stating the tasks it performs, the data it processes, and what remains outside its operational scope or decision-making authority.
Controlled Inputs
“What information does the AI analyse and where does it come from?”
The reliability of the AI's output depends heavily on the integrity of its inputs. Be ready to explain how all data sources are controlled, approved, and originate from validated systems and documents to ensure data quality and traceability.
Human Oversight
“Who reviews the AI output before decisions are made?”
Crucially, auditors will want assurance that human professionals retain responsibility for all compliance decisions. Describe the process for human verification, review, and ultimate approval of the AI's generated insights or recommendations.
Change Management
“How are changes to the AI system managed?”
Outline the established procedures for managing any modifications to the AI system, including updates to prompts, underlying models, workflows, or configurations. This should align with your existing Quality Management System (QMS) change control processes.
Ongoing Oversight
“How do you monitor the system after it has been deployed?”
Explain the continuous monitoring mechanisms in place to ensure the AI system consistently operates as intended within its defined scope. This includes performance metrics, deviation detection, and periodic re-validation activities.
The Opening Defence: Defining the Role of AI
Auditor Question 1
"Can you explain how this AI system is used within your GMP quality process?"
The first and most important principle of any audit defence involving AI is to position the system correctly from the outset. The QA reviewer's response must immediately establish that the AI operates as a review support tool — not as a decision-making authority. This framing sets the tone for every subsequent question the auditor may ask.
The AI's Role
The system performs a structured scan of the batch record to highlight potential areas of concern — missing fields, arithmetic inconsistencies, and formatting anomalies — and presents these findings in a review summary for QA consideration.
The Critical Distinction
The AI does not make compliance decisions. It does not determine whether a batch is releasable. All outputs are reviewed by a qualified QA reviewer who performs the final compliance assessment and takes full professional accountability for the outcome.
Why This Framing Matters
Regulators are not inherently opposed to AI in GMP environments. What they require is evidence that AI is properly governed and that human judgement remains at the centre of compliance decisions. This response establishes that foundation immediately.
Defining Scope and Intended Use
Auditor Question 2
"How do you ensure the system operates within an appropriate scope?"
One of the most common weaknesses in AI governance programmes is a failure to formally define what a system is — and is not — permitted to do. The auditor's follow-up question probes directly for this. A strong response demonstrates that the system's scope is not assumed or informally understood, but documented, controlled, and subject to quality governance review.
What the Agent Is Permitted to Identify
Missing Fields
Incomplete documentation entries in the batch record
Arithmetic Issues
Calculation inconsistencies within recorded data
Formatting Anomalies
Deviations from expected documentation structure
What the Agent Is Explicitly Prohibited From Doing
The agent is not permitted to determine compliance conclusions or release decisions. This boundary is not merely understood by users — it is formally documented in the system's intended use statement and reviewed as part of the quality governance process.
Documenting what a system cannot do is as important as documenting what it can. This explicit limitation provides the auditor with clear evidence of a controlled, considered implementation rather than an ad hoc deployment.
Controlled Inputs: Ensuring Data Integrity at the Source
Auditor Question 3
"What controls are in place to ensure the inputs to the AI system are reliable and appropriate?"
This question targets the integrity of the information the AI reviews. A weak response might describe the system's outputs without addressing what goes in. A strong response demonstrates that the organisation has defined and enforced a clear input boundary — ensuring the AI only ever operates on controlled, authorised data. The principle is straightforward: if the inputs are uncontrolled, the outputs cannot be trusted.
Controlled Source Only
The system exclusively reviews batch record exports generated from the electronic batch record system as part of the controlled review process. These represent the final record under review — not drafts, not working copies, and not documents from outside the defined input boundary.
No Access to Uncontrolled Data
The AI agent has no access to draft procedures, uncontrolled documents, or any data source outside the defined input architecture. This boundary is documented in the system's intended use statement and enforced through the workflow design.
Why Input Control Matters
Controlling inputs is a fundamental GMP principle that applies equally to AI systems. If the system were permitted to draw on uncontrolled or incomplete data, its findings could not be relied upon as a meaningful review aid. Input control is what makes the output trustworthy.
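An input boundary like the one described above can be enforced in the workflow itself rather than left to user discipline. The sketch below is a hypothetical illustration; the metadata fields (`source_system`, `status`) and the "EBR" label are assumptions, not details from the scenario.

```python
# Hypothetical input-boundary gate: the AI review step only ever
# receives controlled exports from the electronic batch record (EBR)
# system. Metadata fields shown here are illustrative assumptions.

APPROVED_SOURCE = "EBR"

class UncontrolledInputError(Exception):
    """Raised when a document falls outside the defined input boundary."""

def admit_for_review(document: dict) -> dict:
    meta = document.get("metadata", {})
    # Enforce the controlled-source rule from the intended use statement
    if meta.get("source_system") != APPROVED_SOURCE:
        raise UncontrolledInputError("not an EBR export")
    # Drafts and working copies are outside the boundary
    if meta.get("status") != "final":
        raise UncontrolledInputError("not the final record under review")
    return document  # admitted to the AI review step
```

Rejecting out-of-boundary documents with an explicit error, rather than silently skipping them, is what makes the boundary demonstrable during inspection.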
Human Verification: Outputs as a Review Aid, Not a Decision
Auditor Question 4
"How do you ensure that the AI system's outputs are reliable and that reviewers are not simply accepting its findings without independent assessment?"
This question probes whether human oversight is genuine or merely nominal. An auditor will not be satisfied by a response that describes human sign-off as a formality. A strong response demonstrates that QA reviewers independently evaluate AI findings — and that this evaluation is documented, mandatory, and forms part of the formal batch record review record. The AI informs the review; it does not replace it.
Outputs Treated as a Review Aid
The AI summary is presented to the QA reviewer as a structured list of potential areas of concern. It is not treated as a compliance finding, a deviation record, or an authoritative assessment. The reviewer determines whether any flagged item represents a genuine documentation issue requiring action.
Independent Assessment Is Mandatory
QA reviewers are required to independently assess each flagged item. Acceptance of an AI finding without independent verification is not permitted. The reviewer's assessment — including any disagreement with the AI output — is formally recorded as part of the batch record review documentation.
Audit Trail of Human Judgement
The review process creates a clear audit trail demonstrating that human judgement has been applied at every stage. The AI summary does not appear in the batch record as a compliance document — the reviewer's recorded assessment does. This distinction is critical to demonstrating genuine human oversight.
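One way to picture the mandatory-verification rule is as a record structure that cannot be closed until every AI finding has a human assessment attached. This is an illustrative sketch only; the field names and workflow are assumptions, not the scenario's actual system.

```python
# Hypothetical record of mandatory human verification. Each AI
# finding must receive an independent reviewer assessment before the
# batch record review can close; the reviewer's record, not the AI
# summary, is what enters the review documentation.
from dataclasses import dataclass, field

@dataclass
class ReviewerAssessment:
    finding_id: str
    agrees_with_ai: bool      # disagreement is recorded, not hidden
    disposition: str          # e.g. "no action", "documentation correction"
    reviewer: str

@dataclass
class BatchReviewRecord:
    batch_id: str
    assessments: list = field(default_factory=list)

    def assess(self, a: ReviewerAssessment) -> None:
        self.assessments.append(a)

    def can_close(self, ai_finding_ids: list) -> bool:
        # Every AI finding needs an independent human assessment
        assessed = {a.finding_id for a in self.assessments}
        return set(ai_finding_ids) <= assessed
```

Because `agrees_with_ai` is an explicit field, disagreement with the AI output is captured as first-class data, which is exactly the audit trail of human judgement the auditor is looking for.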
Change Control: Managing System Updates Through the QMS
Auditor Question 5
"How are changes to the AI system managed? What happens if the system configuration or prompts are updated?"
This question tests whether the organisation treats the AI system as a governed quality asset or as an informal IT tool. A strong response demonstrates that changes to the system — whether to configuration, prompts, or workflow integration — are subject to the same change control rigour applied to any other GMP-relevant process. The AI system is not exempt from quality system governance simply because it is software.
Changes Managed Through Established Change Control
Any modification to the system's configuration, prompts, or workflow integration is processed through the organisation's established change control procedure. This ensures that proposed changes are assessed for potential impact on the quality process before they are implemented.
Impact Assessment Before Implementation
The change control process requires a documented impact assessment for any modification that could affect system behaviour or output. This assessment considers whether the change falls within the system's defined intended use and whether any revalidation or re-qualification activity is required before the change is deployed.
Integration Into Existing QMS Infrastructure
The AI system is not governed through a separate or parallel framework. It is treated as a component of the quality workflow and subject to the same QMS controls as other regulated processes. This integration is what gives the governance approach credibility during inspection.
Ongoing Monitoring: Ensuring Continued Performance Within Defined Scope
Auditor Question 6
"How do you monitor the system's ongoing performance? What would happen if the system began producing unreliable outputs?"
This final question addresses the lifecycle of the system beyond initial deployment. A system that was well-governed at go-live but left unmonitored thereafter does not meet GMP expectations. A strong response demonstrates that the organisation has established a structured approach to ongoing oversight — one that would detect performance degradation, scope drift, or emerging issues before they affect the quality process.
Periodic Review as Part of Quality Oversight
The system is subject to periodic review conducted as part of the organisation's quality oversight activities. These reviews evaluate system performance against defined expectations, consider user feedback from QA reviewers, and assess any discrepancies or anomalies identified during the review process.
Escalation Pathway for Performance Concerns
If a QA reviewer identifies a pattern of unreliable, inconsistent, or out-of-scope outputs, this is escalated through the quality system in the same way as any other process performance concern. The system does not operate outside the quality event management framework — performance issues are documented, investigated, and resolved through established quality processes.
Continued Alignment With Intended Use
Periodic review also assesses whether the system continues to operate within its defined intended use. If operational experience reveals that the system is being used in ways that were not anticipated at deployment, or that its outputs are being interpreted beyond their intended scope, this is addressed through the change control and governance process before use continues.
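A periodic-review check of this kind can be reduced to a simple metric with a defined escalation rule. The sketch below is hypothetical: the 20% rejection threshold and the outcome labels are assumptions chosen for illustration, not values from the scenario.

```python
# Hypothetical periodic-review check: escalate when the proportion of
# AI findings rejected by reviewers exceeds a threshold. The 20%
# threshold and outcome labels are illustrative assumptions.

REJECTION_THRESHOLD = 0.20

def periodic_review(assessments: list) -> str:
    """assessments: True where the reviewer agreed with the AI finding."""
    if not assessments:
        return "no data: review input required"
    rejection_rate = assessments.count(False) / len(assessments)
    if rejection_rate > REJECTION_THRESHOLD:
        # Raised through the quality event management framework
        return "escalate: open quality event"
    return "within expectations"
```

The point of the sketch is that "monitoring" becomes inspectable when it is a defined calculation with a documented threshold, rather than an informal sense that the system "seems fine".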
Facilitator Debrief: What Made This a Strong Defence
This scenario illustrates that an effective audit defence for AI in GMP environments does not require a deep technical explanation of how the system works. What regulators are looking for is evidence that the organisation has applied the same disciplined governance thinking to AI that it applies to every other quality-critical process. The following elements, present throughout this defence, are what made it successful.
1. Clearly Defined Intended Use
The AI system has a documented intended use that specifies its role, its permitted functions, and its explicit limitations. This document exists within the quality management system and is subject to review.
2. AI Outputs Not Treated as Compliance Decisions
At no point in the process does the AI output carry compliance authority. Findings are treated as a review aid, and all compliance determinations are made by qualified QA personnel.
3. Inputs from Controlled Sources Only
The system's data inputs are restricted to controlled batch record exports. This boundary is documented and enforced, preventing the introduction of uncontrolled or draft information.
4. Human Verification Is Mandatory
QA reviewer assessment is a required step in the process. The AI does not short-circuit the review — it informs it. The reviewer's conclusion is recorded in the batch record review documentation.
5. Governed Through Existing QMS Processes
Change control and periodic review are handled through established quality system processes. The AI is not treated as a separate category of activity but as a governed component of the quality workflow.
Principle Alignment: QxAIOps Framework
Each element of the audit defence maps directly to a principle within the QxAIOps framework. This alignment demonstrates that the governance approach applied in this scenario is not coincidental — it reflects a structured methodology for deploying AI responsibly within regulated quality environments. Understanding these mappings helps QA professionals articulate their governance approach clearly and consistently during inspections.

Key insight: A strong audit defence is built on governance principles, not technical sophistication. When each audit question can be answered by referencing a documented process, a defined scope, or an established QMS control, the defence is coherent, credible, and inspection-ready.
Why This Scenario Works: The Simplicity Principle
The most important lesson from Scenario 9 is that a strong audit defence is simple. It does not require the QA reviewer to explain transformer architectures, model training methodologies, or statistical confidence intervals. Auditors are not assessing the technology — they are assessing the governance.
A good defence demonstrates three things consistently and clearly: that the AI system operates within a clear and documented scope; that it remains under QA control at every stage; and that it is subject to existing GMP governance structures — change control, periodic review, documentation standards, and human verification requirements.
Clear Scope
The system's permitted and prohibited functions are formally defined and documented within the quality management system.
QA Control
Human oversight is not optional or incidental — it is a mandatory, documented step that determines the compliance outcome.
GMP Governance
The AI is governed using the same quality system tools applied to every other regulated process: change control, review cycles, and documentation.
When participants leave this scenario, the takeaway is not a set of technical answers to rehearse. It is the confidence that their existing quality system — properly extended to cover AI — already provides the governance framework that inspectors are looking for. The language of the defence is the language of quality, not technology.