Operating Agents in GMP Quality
A QA Practitioner's Guide
Artificial intelligence is entering regulated quality work across the pharmaceutical and life sciences industries. This session explains how to operate agents safely within GxP environments, not how to build them. We'll focus on the practical governance frameworks that enable compliant agent operation whilst maintaining the rigorous standards your quality systems demand.
Who This Workshop Is For
Quality Assurance Professionals
QA practitioners overseeing GxP processes and documentation review
Compliance and Validation Staff
Teams responsible for system validation and regulatory compliance
QPs and Auditors
Qualified Persons and audit professionals evaluating quality systems
QA Managers
Leaders making strategic decisions about quality operations
No technical, AI, or agent background is required. This workshop assumes you understand GxP quality systems and focuses on how agents operate within them. We'll translate technical concepts into quality language you already know.
Why GxP Is Well Placed for Agent Workflows
The pharmaceutical industry's rigorous quality culture makes it an ideal environment for controlled agent operation. Unlike less-regulated sectors experimenting with AI agents, GxP organisations already have the foundational disciplines that agent governance requires.
Strong Documentation Culture
GxP already requires comprehensive documentation. Agents simply extend this to a new category of worker. The discipline of documenting decisions, maintaining records, and ensuring traceability is second nature.
Clear Procedures
SOPs define how work is performed, by whom, and under what conditions. This procedural rigour translates directly to agent operation—defining scope, inputs, and verification requirements.
Defined Accountability
GxP organisations have clear role definitions and accountability structures. Agents fit into this framework as workers with defined responsibilities, just like human roles.
Existing Review Discipline
The concepts of independent review, verification before approval, and evidence-based decision-making are embedded in GxP practice. These same disciplines govern agent operation.
Agents fit because GxP is structured. The challenge isn't creating new governance—it's extending existing governance to a new type of worker.
The Core Reframe
Agents are not software to be validated. Agents are workers to be governed.
This fundamental shift in perspective changes everything about how we approach agents in regulated environments. Traditional software validation assumes deterministic, repeatable behaviour. Agents, by contrast, exercise judgement within defined parameters—much like human quality professionals do.
The risk is not the AI agent itself. The risk is uncontrolled operation. When we recognise agents as workers rather than systems, we can apply the governance frameworks we already use for human roles: defined scope, controlled inputs, oversight, and accountability.
What This Workshop Covers
01
How Agents Fit Into GxP Quality Systems
Understanding where agents sit within your existing quality framework and documentation hierarchy
02
What QA Remains Accountable For
Clarifying the immovable boundaries of human responsibility and regulatory accountability
03
Operating Principles That Keep Agents Compliant
The eight core principles that ensure agent operation meets GxP expectations
04
How to Explain Agent Use to an Auditor
Practical strategies for demonstrating compliance during regulatory inspections
The QikSolve Operating Model
Agents operate inside a structured framework that mirrors how we govern human quality roles. This isn't about validating algorithms—it's about controlling how agent workers operate within our quality system.
Defined Scope
Clear boundaries on what the agent may and may not do, documented before execution begins
Controlled Inputs
Only relevant, approved, controlled, traceable records feed into agent decisions
Human Oversight
Qualified personnel verify outputs and maintain accountability for compliance decisions
Full Traceability
Complete audit trail from input documents through to final verified output
This model operates exactly like regulated quality roles. We don't validate people—we define their responsibilities, control their inputs, oversee their work, and maintain records of their decisions.
QMS Mapping: Pharma Language
Nothing new is being invented. This already fits your QMS.
Agent governance integrates directly into existing Quality Management System components:
Policies
Define high-level rules and expectations
SOPs
Define roles, scope, and operating procedures
Work Instructions
Provide detailed instructions on how to perform a task
Records
Capture outputs and activities
Change Control
Governs changes to scope, prompts, and workflow logic
Deviations
Manage process-related issues and non-conformances
Periodic Review
Ensures ongoing control effectiveness
Agent governance doesn't require a parallel system. It operates within the quality framework you already maintain.
QMS Mapping: The Agent Language
The same controls. The same hierarchy. Just the right vocabulary.
Agent governance maps directly to familiar QMS document levels — using language native to how agents are designed and operated:
Copilot.instruction.md
Overarching rules for agent behaviour, ethical use, and legal compliance.
Agent Instructions (Agent.md)
Defines the agent's specific role, scope, operational boundaries, permitted inputs, and expected output formats.
Skill
The direct prompt or instruction given to the agent for a single task, including specific context and constraints.
Agent Execution Log / Output & Verification Records
Traceable audit trails of agent actions, raw outputs, and human verification/approval logs.
These aren't new documents. They are the agent-native equivalents of the quality controls you already operate.
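To make the mapping concrete, here is a minimal sketch of what an Agent.md-style instruction file could contain, defining role, scope, permitted inputs, and output format the way an SOP defines a human role. The agent name and section headings are illustrative assumptions, not a prescribed format:

```markdown
# Agent: Batch Record Completeness Reviewer

## Role and Scope
Review executed batch records for missing entries, signatures, and date fields.

## Permitted Inputs
Approved, current-version batch records released to the review queue only.

## Out of Scope
- Approving, certifying, or releasing any record
- Interpreting analytical results or deviation severity

## Output Format
Draft findings with document references, rationale, and a confidence indicator,
submitted for human verification.
```

Like an SOP, such a file would itself sit under change control, with revisions managed and approved before taking effect.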
GxP Framework
The 8 QxAgentOps Principles
These eight principles form the foundation of compliant agent operation in GxP environments. They're not theoretical concepts—they're practical controls that translate existing quality expectations into agent governance. Each principle addresses a specific compliance risk and maps to familiar quality system requirements.
Over the following cards, we'll explore each principle in detail, examining both what it means operationally and what QA remains accountable for.
Principle 1
Specification Before Execution
No agent runs without a defined purpose.
Before any agent executes, its task must be explicitly specified. This isn't optional—it's the foundation of controlled operation. Every execution requires documentation of clear scope, explicit boundaries, and known limitations.
Clear scope of work
Precisely what the agent will review, analyse, or evaluate
Explicit inclusions and exclusions
What is in scope and, critically, what is out of scope
Known limitations and failure modes
Where the agent might struggle or produce unreliable outputs

Comparable to:
SOP-defined role responsibilities
Just as we wouldn't ask a quality professional to "review something" without specifics, we don't allow agents to operate with vague instructions. The specification provides the same clarity we expect in job descriptions and standard operating procedures.
Document the Agent Like a Quality Role
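Documenting the agent like a quality role can be expressed as a record that gates execution. The following Python sketch is illustrative only; the class and field names are assumptions, not part of any prescribed system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TaskSpecification:
    """A pre-execution record of what the agent may do, mirroring an SOP-defined role."""
    task_id: str
    scope: str                                         # precisely what the agent will review
    inclusions: list = field(default_factory=list)
    exclusions: list = field(default_factory=list)     # what is explicitly out of scope
    known_limitations: list = field(default_factory=list)

    def approve_for_execution(self) -> None:
        """Raise if the specification is too vague to permit controlled operation."""
        if not self.scope.strip():
            raise ValueError(f"{self.task_id}: no defined scope, execution not permitted")
        if not self.exclusions:
            raise ValueError(f"{self.task_id}: out-of-scope boundaries must be explicit")
        if not self.known_limitations:
            raise ValueError(f"{self.task_id}: known limitations must be documented")
```

An instruction like "review the batch record" fails this gate by design: without explicit exclusions and limitations, the specification cannot be approved for execution.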
Principle 1
QA Responsibility: Specification
Quality Assurance holds three critical responsibilities under this principle. These cannot be delegated to the agent or assumed to happen automatically—they require active QA judgement before execution begins.
Confirm the Task Is Appropriate for an Agent
Not all quality work suits agent operation. QA must evaluate whether the task involves contextual judgement (suitable) or deterministic logic (unsuitable). If the work requires interpreting complex narratives or identifying patterns across documents, an agent may help. If it's simple rule-checking, it shouldn't.
Confirm Scope Matches Intended Use
The documented scope must align precisely with what you need. Scope creep is a common failure mode—ensure boundaries are tight enough to maintain control but broad enough to deliver value. Review both what's included and what's explicitly excluded.
Do Not Allow "General" or Open-Ended Execution
Instructions like "review the batch record" or "check for issues" are too vague for GxP work. Every execution must specify exactly what to look for, which sections to review, and what standards to apply. Precision in specification prevents uncontrolled operation.
Principle 2
Compliant Controlled Inputs
This principle addresses one of the most critical compliance risks in agent operation: the quality and control of source data. Agents cannot assess whether a record is approved, current, or authentic. They process what they receive. Therefore, input control is a human responsibility—and a non-negotiable one.
Approved
Records must carry appropriate authorisation signatures before agent processing
Version-Controlled
Only current, effective versions may be used—never drafts or obsolete documents
Traceable to Source Records
Every input must link back to an identifiable, auditable source document
Locked at Execution Time
Inputs cannot change during or after processing—they must be fixed and time-stamped
No live guessing. No improvisation. Agents work only with controlled, verified inputs—the same standard we apply to human quality decisions.
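The "locked at execution time" control can be sketched in code. This is a minimal illustration, assuming inputs arrive as raw bytes; the function names and record structure are invented for the example:

```python
import hashlib
from datetime import datetime, timezone

def lock_inputs(records):
    """Freeze the exact inputs an agent will see: a content hash and timestamp per record.

    `records` maps a source-document identifier (e.g. 'SOP-042 v7') to its raw bytes.
    Whether each record is approved and current remains a human check, made earlier.
    """
    locked_at = datetime.now(timezone.utc).isoformat()
    return {
        doc_id: {
            "sha256": hashlib.sha256(content).hexdigest(),  # tamper-evident fingerprint
            "locked_at": locked_at,
        }
        for doc_id, content in records.items()
    }

def verify_unchanged(locked, records):
    """Re-hash the current records and compare against the execution-time lock."""
    return all(
        hashlib.sha256(records[doc_id]).hexdigest() == entry["sha256"]
        for doc_id, entry in locked.items()
    )
```

Re-running `verify_unchanged` at any later point demonstrates that no post-hoc input change has occurred, supporting the audit trail.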
Principle 2
QA Responsibility: Inputs
Ensure Records Are Complete and Authorised
Before records reach the agent, QA must verify they're approved, signed, and complete. Incomplete records produce incomplete analysis. Unapproved records shouldn't be in the system at all. This is standard GxP practice—agents don't change it.
Ensure Correct Versions Are Used
Version control failures cause compliance deviations. QA must confirm that the agent receives current, effective documents—not superseded versions, drafts, or archived records. This requires active verification, not assumption.
Ensure No Post-Hoc Input Changes
Once an agent executes, the inputs it used must remain unchanged and traceable. Any alteration after the fact breaks the audit trail and undermines the integrity of the output. Lock inputs at execution time and maintain that lock throughout the record lifecycle.
Principle 3
Bounded Autonomy
Agents can assist. They cannot decide.
Agents May:
Review
Examine records for completeness, consistency, and compliance indicators
Compare
Identify differences between documents, versions, or data sets
Identify Inconsistencies
Flag contradictions, gaps, or anomalies that warrant human attention
Surface Risk
Highlight areas of potential concern based on patterns or deviations
Agents May Not:
Approve
Grant formal acceptance or authorisation of any GxP record or decision
Certify
Provide regulatory certification or quality assurance sign-off
Release
Authorise batch release, product disposition, or regulatory submission
Override Procedures
Bypass, modify, or substitute for established SOPs or quality processes
These boundaries are not negotiable. They define the limits of agent authority and preserve human accountability for compliance-critical decisions.
Principle 3
QA Responsibility: Autonomy
Understand Where Agent Authority Ends
QA must maintain a clear mental model of what agents may and may not do. This isn't about technical capability—it's about regulatory authority. Even if an agent could technically approve a record, it must not. Know the boundaries and enforce them consistently.
Ensure Final Decisions Remain Human
Every compliance-critical decision requires a qualified person to review, verify, and authorise. Agent outputs are inputs to human decision-making, not replacements for it. The qualified person's signature represents accountability—and an agent cannot be held accountable in regulatory terms.
Prevent Silent Escalation of Agent Authority
Over time, organisations may drift toward trusting agent outputs without verification. This "authority creep" is dangerous. QA must actively monitor for signs that agent findings are being treated as final rather than preliminary, and intervene when boundaries erode.
Principle 4
Evidence-First Outputs
Agents produce working papers, not conclusions.
Agent outputs must be structured to support human verification, not replace it. This means every output includes not just findings, but also the evidence and reasoning that support them. Transparency is mandatory—no "black box" results are acceptable in GxP environments.
Outputs should always be treated as draft evidence requiring review. They form the basis for a qualified person's decision, but they are not the decision itself.
Findings
What the agent identified—observations, patterns, inconsistencies, or areas of concern
Supporting Evidence
Specific document references, data points, or record excerpts that justify each finding
Rationale
The reasoning process: why this finding matters and how it was identified
Confidence Indicators
Signals of certainty or uncertainty, flagging areas where human judgement is especially critical
Nothing is final without verification. Evidence-first outputs enable informed human oversight.
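The four output elements above can be enforced as a simple structural check before a finding reaches a verifier. This is a hedged Python sketch with field names assumed for illustration; it checks the shape of an output, not its correctness:

```python
# The four elements every evidence-first output must carry (illustrative names).
REQUIRED_FIELDS = ("finding", "evidence", "rationale", "confidence")

def review_flags(output: dict) -> list:
    """Return the issues a human verifier must resolve before a draft finding can stand.

    An empty list means the output is structurally ready for review,
    not that the finding itself is correct; that judgement stays human.
    """
    flags = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in output]
    if not output.get("evidence"):
        flags.append("no supporting evidence cited; finding cannot stand")
    if output.get("confidence") == "low":
        flags.append("low confidence; human judgement especially critical")
    return flags
```

A finding with no cited evidence is rejected at the door, which operationalises the rule that evidence must support conclusions.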
Principle 4
QA Responsibility: Outputs
Treat Outputs as Draft Evidence
Agent findings are preliminary until verified. Approach every output with the same critical mindset you'd apply to a junior reviewer's work—acknowledge the effort, but verify the conclusions. Never assume correctness simply because the source is technological rather than human.
Review Rationale, Not Just Results
Don't simply check whether findings seem reasonable—examine how the agent reached them. Review the evidence cited, assess the reasoning process, and verify that supporting documentation actually supports the conclusion. Surface-level review is insufficient.
Ensure Evidence Supports Conclusions
There must be a clear, logical connection between the evidence presented and the finding drawn. If the link is weak, unclear, or absent, the finding cannot stand. This is standard scientific practice—AI doesn't exempt us from it. Require the same rigour you'd expect from any quality professional.
Principle 5
Human-in-the-Loop Verification
If it matters for compliance, a human signs for it.
This principle is the cornerstone of accountable agent operation. Regardless of how sophisticated the agent, how comprehensive its analysis, or how confident its outputs appear, a qualified human must verify and authorise any compliance-critical finding.
Verification isn't a formality—it's a substantive quality check. The verifier confirms accuracy, assesses regulatory context, and ensures the finding aligns with current compliance expectations.
Accuracy
Are the facts correct? Does the evidence support the conclusion?
Context
Does the finding account for process history, recent changes, or site-specific factors?
Regulatory Relevance
Does this matter to regulators? Does it affect patient safety or product quality?
Agents reduce effort, not accountability. The human verifier remains fully responsible for the compliance outcome.
Principle 5
QA Responsibility: Verification
Perform Risk-Based Verification
Not all findings carry equal weight. QA must prioritise verification effort based on impact: patient safety risks, regulatory sensitivities, and business-critical decisions warrant deeper scrutiny. Lower-risk findings may require lighter review. This risk-based approach ensures verification resources focus where they matter most, whilst maintaining appropriate oversight across all outputs.
Focus on High-Impact Findings
Where an agent flags potential non-conformances, deviations, or critical quality attributes, verification must be thorough. Trace findings back to source documents, assess the severity accurately, and consider broader system implications. High-impact findings can trigger investigations, corrective actions, or regulatory notifications—they demand commensurate verification rigour.
Never Rubber-Stamp Agent Output
The most dangerous failure mode is perfunctory approval. If verification becomes a checkbox exercise—quickly scanning and approving without genuine review—the entire control framework collapses. QA must actively engage with findings, challenge assumptions, and apply professional judgement. Verification is an intellectual exercise, not an administrative one.
Principle 6
Full Traceability
You must be able to reconstruct the decision.
Traceability in agent operation means the same thing it means everywhere in GxP: an inspector must be able to follow the path from input to output, understand what happened, and verify that controls were applied. If you cannot reconstruct how an agent reached a conclusion, that conclusion is not GxP-compliant—regardless of whether it's correct.
1
Input Documents
Which records were processed, including version numbers and approval status
2
Agent Identity and Version
Which specific agent executed, including configuration and operational parameters
3
Execution Context
When it ran, who triggered it, what scope was defined, and any relevant environmental factors
4
Output Artefacts
The findings produced, evidence cited, and verification records completed
This traceability standard is aligned to ALCOA+ expectations: attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available.
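The four traceability elements can be captured as one append-only log record per execution. The sketch below uses JSON Lines and invented field names; it illustrates the shape of such a record, not a regulatory schema:

```python
import json
from datetime import datetime, timezone

def log_execution(log_path, *, inputs, agent_id, agent_version,
                  triggered_by, scope, outputs):
    """Append one traceability record covering all four elements above.

    Illustrative only: the field names are assumptions, not a prescribed format.
    """
    record = {
        "inputs": inputs,                  # doc IDs, versions, approval status
        "agent": {"id": agent_id, "version": agent_version},
        "context": {
            "run_at": datetime.now(timezone.utc).isoformat(),
            "triggered_by": triggered_by,
            "scope": scope,
        },
        "outputs": outputs,                # findings, evidence, verification refs
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON Lines audit trail
    return record
```

An append-only record written at execution time supports the attributable, contemporaneous, and enduring expectations of ALCOA+; reading the file back reconstructs the decision path end to end.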
Principle 6
QA Responsibility: Traceability
Be Audit-Ready at All Times
Traceability isn't created retrospectively when an inspector arrives—it's built into every execution. QA must ensure that, at any moment, any agent-supported decision can be fully explained with supporting records immediately available.
Answer "How Did This Conclusion Occur?"
This is the fundamental audit question. QA must be able to walk an inspector through the entire decision path: what inputs were used, what the agent did with them, what evidence it produced, how a human verified it, and who authorised the final decision.
Ensure Records Are Inspection-Ready
Records must be organised, accessible, and comprehensible to external reviewers. Technical jargon should be minimised. The audit trail should tell a clear story that a regulator can follow without specialist AI knowledge.
Principle 7
Segregated Agent Roles
The Agent Four-Eyes Principle
No agent reviews its own work.
This principle applies the fundamental quality concept of independent review to agent operation. Just as we don't allow a manufacturing operator to release their own batch, we don't allow an agent to verify its own outputs.
Segregation prevents self-reinforcing errors. If the same agent both generates and verifies a finding, any systematic bias or logic flaw will pass through unchallenged. Independent review, even when both reviewer and author are agents, provides a critical control.
Executing Agent
Performs the primary analysis or review task
Reviewing Agent
Evaluates the executing agent's output using independent criteria
The reviewing agent operates with independent prompts and independent evaluation criteria. It doesn't simply re-run the same analysis—it challenges it.
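The executing/reviewing split can be sketched as two independent callables with a guard against self-review. Names and the return structure are assumptions for illustration:

```python
def segregated_review(task, executing_agent, reviewing_agent):
    """Run an executing agent, then an independent reviewing agent (four-eyes).

    Both agents are plain callables here; in practice each would run with its
    own prompts and evaluation criteria. The identity check is the minimal
    control: the reviewer must not be the executor.
    """
    if executing_agent is reviewing_agent:
        raise ValueError("Four-eyes violation: no agent reviews its own work")
    finding = executing_agent(task)             # primary analysis
    challenge = reviewing_agent(task, finding)  # independent challenge, not a re-run
    return {
        "finding": finding,
        "review": challenge,
        "status": "awaiting human verification",  # a qualified person still signs off
    }
```

Note the status field: segregated agent review complements human verification, it never replaces it.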
Principle 7
QA Parallel: Four-Eyes Principle
The segregated agent roles principle directly mirrors long-established GxP practices. This isn't a new concept—it's the application of existing quality thinking to agent workers.
Author vs Reviewer
SOPs require separate individuals for document authoring and technical review
Manufacture vs Release
Manufacturing personnel cannot release the batches they produce—QA provides independent release
Operator vs Verifier
Critical operations require a second person to verify completion before proceeding

Why This Matters
Segregation prevents self-reinforcing errors. If the same logic that created a finding also verifies it, any systematic flaw will remain undetected. Independent review—whether human or agent—provides the necessary challenge to catch mistakes before they become compliance issues.
Principle 8
Contextual Reasoning Task Alignment
Use agents where context matters.
Not all quality work is suitable for agent operation. This principle defines where agents add value—and, critically, where they don't. The key distinction is between tasks requiring contextual interpretation and those requiring deterministic logic.
Agents Excel At:
Interpreting Semi-Structured Data
Understanding narratives in batch records, deviation reports, or investigation summaries where format varies
Comparing Records Across Time
Identifying patterns or changes across multiple batches, documents, or reporting periods
Identifying Patterns and Anomalies
Spotting trends, outliers, or correlations that might not be obvious to a single reviewer
Highlighting Inconsistencies
Flagging contradictions between different sections of the same document or across related documents
These tasks benefit from an agent's ability to process large volumes of text, recognise linguistic patterns, and surface issues that require human judgement to resolve.
Principle 8
Where Agents Should NOT Be Used
Equally important is knowing where agent operation is inappropriate. Some quality tasks are deterministic—they have known, fixed answers that can be calculated or checked algorithmically. For these, traditional validated software is the correct tool.
Calculations
Mathematical operations with single correct answers—use validated calculation engines, or give the agent access to validated tools that perform them
Rule Enforcement
Checking compliance with fixed, unambiguous rules—use workflow systems with validation
Binary Pass/Fail Logic
Decisions with no interpretation needed—"Does this value exceed the limit?" requires deterministic checking
Deterministic Checks
Verification tasks where the answer is always knowable without interpretation—e.g., "Are all fields completed?"
Using agents for deterministic tasks introduces unnecessary risk. These tasks belong in validated systems with proven, repeatable behaviour.
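By contrast, a deterministic check such as "Are all fields completed?" belongs in plain, validated code. A minimal Python illustration (the function name and field convention are assumed for the example):

```python
def all_fields_completed(record: dict, required: set) -> bool:
    """Deterministic completeness check: the answer is always knowable,
    so validated, repeatable code, not an agent, is the right tool."""
    # Treat None and empty strings as incomplete entries.
    return all(record.get(field) not in (None, "") for field in required)
```

The same call on the same record always yields the same answer, which is exactly the repeatable behaviour validation expects, and exactly what an agent cannot guarantee.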
Principle 8
One-Line Rule
If the answer is known, don't use an agent.
If the question is hard, an agent can help.
This simple heuristic guides agent task selection. If you already know the correct answer—or if there's an algorithm that will reliably produce it—an agent adds no value and introduces unnecessary variability. Use validated, deterministic tools instead.
But if the task requires interpretation, judgement, or pattern recognition across complex narratives, agent operation within the QxAgentOps framework can provide significant value whilst maintaining GxP compliance.
Common Failure Modes
Understanding how agent operation fails helps prevent those failures. Most compliance issues with agents aren't technical problems—they're governance failures. These are the patterns QA must actively monitor for and correct.
Over-Trusting Agents
Treating agent outputs as inherently reliable without verification. This erodes the human-in-the-loop principle and creates hidden compliance risk. Combat this by maintaining verification discipline even when the agent appears consistently accurate.
Poor Input Quality
Allowing incomplete, unapproved, or incorrect records into agent processing. The agent cannot assess record quality—humans must. This failure mode produces unreliable outputs that appear legitimate.
Scope Creep
Gradually expanding what the agent does without updating governance controls. What starts as a narrow, well-controlled task becomes broad and poorly defined. Prevent this through regular scope reviews.
Missing Traceability
Incomplete or absent audit trails prevent reconstruction of agent-supported decisions. This is often discovered during inspections, when records cannot be produced. Ensure traceability from day one—it cannot be retrofitted.

These are governance failures, not agent failures. The technology works as designed—the control framework was insufficient.
Final Takeaway
Agents do not weaken compliance.
Uncontrolled agents do.
The distinction is critical. AI agents, operated within a controlled framework, can strengthen quality systems by improving consistency, expanding review coverage, and surfacing insights that manual processes might miss.
Operated Correctly, Agents Strengthen:
Consistency
Agents apply the same standards across all reviews, eliminating variability from fatigue or subjective interpretation
Coverage
Agents can review more records more thoroughly than manual processes, increasing quality oversight depth
Insight
Pattern recognition across large datasets surfaces trends and correlations humans might miss
Quality Decision-Making
Better evidence, presented clearly, enables more informed compliance decisions by qualified professionals
But these benefits only materialise when agents operate under appropriate governance. Without the controls described in this workshop—defined scope, controlled inputs, bounded autonomy, verification, traceability, and segregated roles—agents become a compliance risk rather than a compliance tool.
Your role as a QA professional is to ensure agent operation remains controlled, traceable, and accountable. Apply the eight QxAgentOps principles, maintain human accountability for compliance decisions, and remember: an agent is a worker to be governed, not software to be validated.
Workshop Scenario 1: The New 'Game Changer' Agent
The production team has presented an exciting new agent they believe will "change the way we work." They're eager for rapid implementation, and you've been tasked with reviewing it.
Your Challenge:
As a group, how would you ensure this agent is fit for GxP use?
Consider the principles we've discussed. What steps would you take, what questions would you ask, and what aspects would you scrutinise to guarantee compliance and quality?
Workshop Scenario 2: The 'Helpful Trend Analysis' Agent
Six months ago, your Quality team implemented an AI agent for GxP deviation trend analysis. Its initial purpose was to identify patterns across deviation reports, CAPA investigations, and batch records, producing monthly summaries for QA investigation.
What Has Changed
Over time, reliance on the agent's reports has significantly increased, evolving beyond its initial scope.
Reports are now directly used in Quality Review meetings.
QA investigators reference them to support root cause conclusions.
A recent CAPA closure cited the AI report as evidence that "no recurring trend exists."
The agent report is now regularly attached to Quality Review documentation.
Workshop Scenario 4: The 'Smart SOP Assistant'
To boost efficiency and streamline access to procedures, an AI assistant was deployed. It allows staff to query SOPs and GMP documents, providing summarised answers. This tool quickly gained popularity among operators and supervisors.
What You Discover
The assistant pulls from various sources, including approved SOPs, training slides, and internal guidance.
Some responses reference training material rather than official procedures.
Operators increasingly rely on the AI's answer, often bypassing the full SOP document.
The system occasionally references draft or historical documents in its replies.
Workshop Scenario 8: The 'Efficiency Shortcut'
Your organisation implemented an AI agent to assist QA reviewers with batch record review.
The intended workflow is for the agent to identify potential documentation issues, allowing QA reviewers to evaluate the findings and complete the compliance assessment. The goal was to reduce manual review time while maintaining QA oversight.
What Has Changed
Over time, some reviewers have started using the agent in a different way:
Instead of running the structured review workflow, reviewers ask the agent directly: “Are there any compliance concerns in this batch record?”
The agent returns a short summary response such as: “No major documentation issues detected.”
Reviewers sometimes accept this response with minimal additional evaluation.
This approach has gradually become a common shortcut within the team.
Workshop Scenario 9: The Inspection Question
Your organisation uses an AI agent to assist QA reviewers with batch record review. The system highlights missing entries, arithmetic inconsistencies, and potential documentation issues. QA reviewers evaluate the findings before completing the batch record review. The system has been operating for six months.
What Happens During an Inspection
During a regulatory inspection, the auditor learns that AI is used within the batch record review process.
The inspector asks how the AI system fits within the GMP quality process.
They request an explanation of the AI system’s intended use and limitations.
They ask how the organisation ensures the AI reviews controlled and approved records.
They ask how QA verifies the AI output and how changes to the system are managed.
Next Steps
The concepts from this workshop are best put into practice. Here are some immediate, actionable steps you can take to start integrating agent governance into your organisation's quality processes:
1
Identify a Process
Pinpoint one quality process within your organisation where an agent could improve efficiency or consistency.
2
Review Your SOPs
Examine your existing Standard Operating Procedures to identify where specific agent inputs, processing, and verification points would naturally fit.
3
Start Team Conversations
Initiate discussions with your team about agent governance, risk management, and the potential benefits and challenges of agent integration.
4
Assess Documentation
Evaluate your current documentation practices to ensure they are robust enough to support the traceability and accountability required for agent-driven processes.
5
Connect with Peers
Engage with other quality professionals and industry peers exploring agents in quality work to share insights and best practices.
Request Additional Materials