SOC2 for AI Companies: What's Different in 2026
AI and ML companies face compliance requirements that barely existed two years ago. Standard SOC2 is now just the starting point. Enterprise buyers are asking for model governance controls, ISO 42001 certification, and EU AI Act readiness — requirements most SOC2 auditors have not yet learned to evaluate.
AI companies need SOC2 just like any SaaS company — but with additional scrutiny on model governance, training data handling, and AI-specific controls. The gap is in the auditors: the vast majority of SOC2 practitioners have never evaluated model access controls, training data provenance, or LLM output auditability. Choosing the wrong auditor means you get a SOC2 report that enterprise AI buyers immediately recognize as missing the controls they care about most.
AI-Specific Controls Your Auditor Will Test
Beyond the standard SOC2 Common Criteria (CC series), auditors with genuine AI experience test controls that have no direct analogue in traditional SaaS audits. These are the controls that matter to enterprise AI buyers.
Model access controls. Who can view, modify, or deploy production AI models? Access should be restricted to named individuals with business justification, with access reviewed quarterly. Auditors look for role separation between model development and production deployment.
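As a rough illustration, a deployment gate can encode all three checks: a named individual holding a deploy role, separation of duties from the model's author, and a current quarterly access review. Everything here (the registry, role names, and review interval) is hypothetical, not a reference to any real IAM system:

```python
from datetime import date, timedelta

# Hypothetical access registry: named individuals, roles, business justification.
ACCESS_REGISTRY = {
    "alice": {"roles": {"model-developer"}, "justification": "trains fraud model"},
    "bob":   {"roles": {"model-deployer"},  "justification": "owns prod releases"},
}
REVIEW_INTERVAL = timedelta(days=90)  # quarterly access review

def can_deploy(user: str, model_author: str, last_review: date) -> bool:
    """Enforce role separation: developers may not deploy their own models,
    and access is invalid once the quarterly review is overdue."""
    entry = ACCESS_REGISTRY.get(user)
    if entry is None or "model-deployer" not in entry["roles"]:
        return False          # not a named individual with the deploy role
    if user == model_author:
        return False          # separation of duties violated
    if date.today() - last_review > REVIEW_INTERVAL:
        return False          # quarterly access review overdue
    return True
```

The point an auditor cares about is that each refusal path maps to a documented control, not the specific mechanism.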
Training data provenance. Where did your training data come from? Is it licensed for your use case? Do you have documentation of data sources, consent mechanisms, and removal procedures for data subjects who request deletion? Training data documentation is a blind spot for most ML teams.
Model change management. Model updates are production changes. Auditors expect the same change management controls for model deployments as for code deployments: peer review, documented approval, rollback procedure, and evidence that the change was tested before production deployment.
Output monitoring and bias detection. Are you monitoring model outputs for performance degradation, demographic bias, or distribution drift? This is increasingly required by enterprise buyers in financial services, healthcare, and HR applications. Auditors look for documented monitoring procedures and evidence of periodic review.
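Distribution drift is commonly quantified with the Population Stability Index (PSI) over binned model scores; a widespread rule of thumb treats PSI above 0.2 as meaningful drift. A minimal, illustrative implementation (bin count and flooring constant are arbitrary choices, not a standard):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample
    of model scores. Higher = more drift; ~0 means the distributions match."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0   # guard against a zero-width range

    def bin_fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # floor at a tiny probability so log() is always defined
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running this on a schedule against a frozen baseline, and filing the results, is exactly the kind of "documented monitoring procedure with evidence of periodic review" an auditor asks for.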
Output auditability. Can you reconstruct what your model produced for a specific user at a specific time? Enterprise buyers — especially in regulated industries — need to audit AI-assisted decisions after the fact. Logging must be comprehensive, tamper-evident, and retained per your data retention policy.
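Tamper evidence is often achieved by hash-chaining log records, so that editing any past entry invalidates every record after it. A simplified sketch (record fields are illustrative; a production system would also ship records to write-once storage):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained log of model outputs. Each record embeds the
    hash of the previous record, so an in-place edit breaks the chain."""

    def __init__(self) -> None:
        self.records: list[dict] = []
        self._prev_hash = "0" * 64  # genesis value

    def append(self, user_id: str, model_version: str, output: str) -> dict:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "model_version": model_version,
            "output": output,
            "prev_hash": self._prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered or removed."""
        prev = "0" * 64
        for r in self.records:
            if r["prev_hash"] != prev:
                return False
            body = {k: v for k, v in r.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

The `user_id`, `model_version`, and timestamp fields are what let you answer the reconstruction question above per user, per moment in time.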
Prompt injection and context isolation. For LLM-based products, prompt injection is the primary attack vector. Auditors should test that you have controls preventing users from accessing system prompts, manipulating model behavior, or accessing other users' conversation context through crafted inputs.
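A layered defense typically combines input screening with strict per-tenant context isolation. The sketch below is deliberately naive (pattern matching alone is not a sufficient control; real deployments layer model-based classifiers and output filtering on top), but it shows the two checks an auditor would look for:

```python
import re

# Naive screening patterns for obvious injection attempts -- illustrative only.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"(reveal|show|print).{0,40}(system prompt|hidden instructions)",
]

def screen_input(user_text: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_context(user_id: str, store: dict[str, list[str]]) -> list[str]:
    """Assemble retrieval context only from the requesting user's own records,
    so crafted inputs cannot pull in another tenant's conversation data."""
    return store.get(user_id, [])
```

The second function is the more important control: cross-tenant leakage is prevented structurally, not by trying to outguess the attacker's phrasing.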
Training data minimization. Are you collecting and retaining only the data necessary for model training? Privacy regulations and data minimization principles apply to training data, not just application data. Auditors look for documented data retention periods and automated deletion in training pipelines.
The SOC2 + ISO 42001 Combined Path
ISO 42001, published by ISO/IEC in December 2023, is the international standard for AI Management Systems (AIMS). It provides a framework for responsible AI development covering risk assessment, transparency, human oversight, and governance. For AI companies selling to enterprise buyers in regulated industries, it is rapidly becoming an expectation alongside SOC2.
SOC2 and ISO 42001 share significant common ground — access management, change control, risk assessment, and incident response all map across both frameworks.
A combined audit is significantly cheaper than two sequential audits because the auditor builds shared evidence once and applies it to both frameworks.
A combined SOC2 Type 2 + ISO 42001 certification typically takes 12–18 months from kickoff to both certificates in hand.
Not all SOC2 auditors offer ISO 42001 certification. It requires separate accreditation under the ISO certification body structure. When evaluating auditors, ask explicitly whether they are accredited for ISO 42001, or whether they partner with a certified body to deliver combined audits. See our auditor directory to filter by ISO 42001 capability.
EU AI Act: What It Means for Your SOC2
Obligations for high-risk AI systems apply from 2 August 2026. If you have EU customers using a high-risk AI system, you have limited time to get documentation and controls in place. Companies found non-compliant face fines of up to €15 million or 3% of global annual turnover, whichever is higher.
High-risk AI systems under the Act include: AI used in medical devices, employment and worker management, credit scoring, access to essential public services, law enforcement, migration and border control, and administration of justice. AI that makes or assists in consequential decisions about individuals in these domains is high-risk.
If you are NOT a high-risk system: The Act still requires basic transparency measures — disclosing that content is AI-generated and that a chatbot is not human. General-purpose AI models (the foundation models themselves, such as GPT-4, not applications built on top of them) face specific transparency and copyright compliance requirements under Article 53.
How the Act's high-risk requirements map to a SOC2 program:
Risk management system (Art. 9): CC3.x Risk Assessment criteria — annual risk assessment, risk register, risk owner assignment
Data and data governance (Art. 10): CC6.x Access Controls + Privacy TSC — data classification, access restrictions, data handling procedures
Technical documentation (Art. 11): Existing SOC2 policies and procedures form much of the required documentation base
Human oversight (Art. 14): New requirement — not covered by standard SOC2. Requires separate documentation of human-in-the-loop controls and override procedures
Accuracy, robustness, and cybersecurity (Art. 15): CC7.x Monitoring + Change Management provide the cybersecurity foundation; accuracy/robustness require AI-specific additions
Choosing an Auditor as an AI Company
AI companies face a fundamental challenge in the auditor market: the SOC2 practitioner pool developed around traditional enterprise software and cloud infrastructure. AI/ML systems introduce technical concepts — model governance, inference pipelines, training data lifecycle — that most auditors have never evaluated. Here is how to qualify them.
How many AI/ML company SOC2 audits have you completed? Which sectors — healthcare AI, fintech AI, general SaaS?
What specific model governance controls did you test in your most recent AI company audit?
How do you evaluate training data provenance controls? What evidence do you request?
Do you have ISO 42001 capability, or a partner certification body you work with?
How do you test LLM-specific controls like prompt injection prevention and output auditability?
Most SOC2 auditors have no AI-specific experience.
Verify by asking: “How many AI/ML company SOC2 audits have you completed, and what model governance controls did you test?” A qualified auditor will describe specific controls around model versioning, training data access, and output logging. If the answer is vague or redirects to standard SaaS boilerplate, you are talking to the wrong firm. A SOC2 report from an auditor who did not test AI-specific controls will not satisfy AI-literate enterprise security teams.
Frequently Asked Questions
Do AI companies need a different SOC2 than regular SaaS?
The SOC2 framework is the same — same Trust Services Criteria, same AICPA standards. But the controls auditors test for AI products are different. A standard SaaS audit focuses on access control, change management, and incident response. An AI product audit adds model access controls, training data provenance, model versioning, bias monitoring, and output auditability. If your auditor has never tested these controls, they will not know what evidence to request or how to evaluate it.
What is ISO 42001 and do I need it alongside SOC2?
ISO 42001 is the international standard for AI Management Systems, published in December 2023. It provides a framework for responsible AI development and deployment — covering risk assessment for AI systems, transparency requirements, and human oversight mechanisms. Enterprise buyers at regulated companies (finance, healthcare, government) are increasingly requiring ISO 42001 alongside SOC2 as part of vendor due diligence. Combined audits have about 40–50% control overlap, making the combined certification roughly 25–35% cheaper than two sequential audits.
Does the EU AI Act require SOC2?
The EU AI Act does not specifically require SOC2. However, the Act's requirements for high-risk AI systems — risk management systems, data governance, technical documentation, human oversight, accuracy measures — map closely to SOC2 controls. Companies that already have a strong SOC2 program have most of the documentation and control infrastructure needed for Act compliance. For companies selling into the EU, having SOC2 with AI-specific controls is a strong signal of Act readiness to enterprise buyers, even before formal certification programs exist.
How do I find an auditor with actual AI experience?
Ask directly: “How many AI/ML company SOC2 audits have you completed, and what model governance controls did you test?” A qualified auditor should be able to describe specific controls around model versioning, training data access, and output logging that they have evaluated. If the answer is vague or redirects to standard SaaS controls, move on. Also ask whether they have any staff with machine learning engineering backgrounds — not required, but a strong signal of genuine AI domain expertise.
What controls do auditors test for LLM-based products?
For LLM-based products specifically, auditors should test: prompt injection controls (preventing users from manipulating system prompts or accessing other users' context), output filtering and moderation controls, rate limiting and abuse detection, training data handling if you fine-tune, model access controls (who can modify or deploy the model), and output logging for auditability. Few auditors have formal test procedures for these — when evaluating auditors, ask specifically about their LLM control testing methodology.
Find an auditor with genuine AI experience
Answer 5 questions about your AI product, team size, and timeline. We match you with auditors who have completed AI/ML company audits — filtered by their specializations and experience with AI-specific controls.