ISO 42001 Auditors Test Your Management System, Not Just Your AI Policy

ISO 42001 requires an operational management system with auditable evidence. Learn why a policy alone fails audits and how to close the gap before the EU AI Act deadline.

The auditor arrived, picked up the responsible AI policy, read the first two pages, set it aside, and asked the question the policy could not answer.

"Can I see your AI system impact assessments?"

The policy described the organization's commitment to responsible AI development. It said nothing about the three production AI systems deployed the previous year. The auditor was not testing the quality of the policy. The auditor was testing whether a management system existed behind it.

ISO/IEC 42001:2023 is a management system standard. It specifies requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS): an organizational structure designed to produce auditable evidence across the full AI lifecycle.

A responsible AI policy satisfies the input requirements of one clause, Clause 5, out of the standard's seven requirement clauses (4 through 10). It does not produce the evidence that Clauses 6, 8, or 9 require. Most regulated-enterprise CISOs and AI leads have the first document and none of the evidence behind it.

That gap is what the auditor is testing. With the EU AI Act's high-risk AI obligations becoming enforceable from August 2026, the organizations that treated policy as governance are about to discover the difference.

InterSec built and operates its own AIMS and was audited against ISO/IEC 42001:2023 before advising any client on the standard. What that process revealed: the distance between governance intent and auditable evidence is larger than most organizations expect, and it shows up in specific clauses in a predictable order.

Understanding What a Responsible AI Policy Actually Covers

In ISO/IEC 42001's framework, a policy document feeds into Clause 5, which addresses top management commitment, organizational AI objectives, and the formal expression of AI governance intent. Your policy satisfies those input requirements.

ISO/IEC 42001 requires you to build and operate an AI Management System, not write a document that describes one. An AIMS has defined roles, repeatable processes, and evidence-producing controls. The policy is the founding statement. The AIMS is the operating system. You cannot substitute one for the other.

Annex A of ISO/IEC 42001 contains 9 control objectives supported by 38 individual controls spanning AI policy, risk management, data governance, AI system lifecycle management, and third-party management. Each applicable control requires documented implementation: not stated intent, but evidence that the implementation is operating. A policy can describe intent across all 38 controls. It cannot produce evidence of implementation for any of them.
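
What "documented implementation" means in practice is a per-control mapping from the control to the evidence that it is operating. Here is a minimal sketch of that mapping in Python; the control names are paraphrased from Annex A's themes and the statuses are invented for illustration:

```python
# Hypothetical evidence map: Annex A control -> evidence status.
# Control names are paraphrased and statuses invented for illustration.
EVIDENCE_OK = "implementation evidence on file"

evidence_map = {
    "AI policy":                      EVIDENCE_OK,  # the one artifact most orgs have
    "AI risk assessment process":     "policy statement only",
    "AI system impact assessment":    "no artifact on file",
    "Third-party AI supplier review": "policy statement only",
}

gaps = [control for control, status in evidence_map.items() if status != EVIDENCE_OK]
print(f"Controls without implementation evidence: {gaps}")
```

Any control whose status is a policy statement rather than an operating record is an open audit finding waiting to be written.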

The three clauses that generate the most audit exposure sit below the policy layer and require operational artifacts a policy does not produce.

The Specific Artifacts an ISO 42001 Auditor Actually Tests

Clause 6 requires documented, repeatable AI risk assessments tied to specific AI systems. Not a general risk management section. Not a one-time exercise. A process with defined inputs, evaluation criteria, documented outputs, and treatment decisions recorded for each system in scope.

Clause 8 requires AI system impact assessments. One per production system. Each covering the system's purpose, operational complexity, and the sensitivity of the data it processes. These are not generic documents. An organization with five production AI systems needs five assessments.

Clause 9 requires internal audits at planned intervals with findings tracked to closure, plus documented management reviews with named decisions. The audit cannot run until Clauses 6 and 8 are operational. There is no shortcut through that sequence.
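
What that evidence looks like varies by organization, but the three record types are distinct, and each is tied to a named system and a date. A minimal sketch in Python, with every field name a hypothetical illustration rather than anything the standard prescribes:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RiskAssessment:
    """Clause 6 evidence: one record per AI system in scope."""
    system_id: str
    assessed_on: date
    methodology_ref: str            # pointer to the documented, repeatable process
    evaluation_criteria: list[str]  # the criteria actually applied
    treatment_decision: str         # e.g. accept, mitigate, transfer, avoid

@dataclass
class ImpactAssessment:
    """Clause 8 evidence: one record per production AI system."""
    system_id: str
    purpose: str
    operational_complexity: str     # e.g. human-in-the-loop vs. fully automated
    data_sensitivity: str           # e.g. public, personal data, special category
    assessed_on: date

@dataclass
class AuditFinding:
    """Clause 9 evidence: internal audit findings tracked to closure."""
    finding_id: str
    clause_ref: str
    raised_on: date
    status: str = "open"            # must reach "closed", with a record of how
    closed_on: Optional[date] = None
```

The tooling is irrelevant; a spreadsheet works. What matters is that an auditor can sample any production system and pull the corresponding records.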

The finding that appears most often at the clause level is not that the policy was inadequate. The finding is that the policy was the only artifact. The board approved it. Legal reviewed it. The risk committee was briefed. None of that work produced the evidence the standard tests.

The EU AI Act Deadline for High-Risk AI Systems

The EU AI Act's high-risk AI system obligations become enforceable from 2 August 2026. Organizations deploying high-risk AI systems listed in Annex III that cannot demonstrate traceable, defensible compliance records face regulatory exposure at that deadline.

ISO/IEC 42001 and the EU AI Act are separate frameworks. Certification against one does not automatically satisfy the other. But the evidence-layer logic is identical. Auditors and regulators do not test intent. They test documentation of operational governance.

A fire drill plan that nobody has practiced is not a fire safety program. An AI policy without an AIMS is the same failure, relocated to the boardroom.

A 600-person financial services firm completed its first ISO/IEC 42001 gap assessment in Q4 2025. The governance team arrived expecting to present the responsible AI policy, the AI ethics charter, and the vendor AI review checklist. In the first session, the assessor asked for impact assessments covering the firm's three production AI systems. None existed. The gap between the governance documentation and the standard's clause-level requirements was visible within the first hour. The remediation roadmap built from that session took four months to execute.

How to Map Where Your Current Governance Program Ends

Most organizations that hold their current AI governance documentation up against what Clauses 6, 8, and 9 actually require can see the distance in under an hour. Not because the gap is small, but because the requirements are specific enough that the gap becomes immediately visible once the two sets of documents sit side by side.

Run this audit against your current posture; a minimal scripted version of the same checks follows the list:

  1. Clause 5 check: Do you have a board-approved AI policy with documented objectives? If yes, Clause 5 is covered. This is the clause your policy satisfies.
  2. Clause 6 check: Do you have a documented AI risk assessment methodology applied to specific systems with treatment decisions recorded per system? If the answer to any of these is no, Clause 6 is open.
  3. Clause 8 check: Do you have a documented impact assessment for each production AI system? Count your production systems. Count your assessments. The gap is the difference.
  4. Clause 9 check: Do you have a documented internal audit program with findings tracked to closure and management review records maintained? If not, Clause 9 is open.
  5. Annex A check: Map your current documentation against the 38 Annex A controls. Identify which controls have implementation evidence and which have only policy statements.
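
The first four checks reduce to counting, which is why the gap surfaces so fast. A rough sketch of that arithmetic, with the inventory contents purely hypothetical:

```python
# Hypothetical gap check: compare production systems against completed artifacts.
production_systems = {"fraud-scoring", "doc-triage", "chat-routing"}
risk_assessments   = {"fraud-scoring"}   # Clause 6 records on file
impact_assessments = set()               # Clause 8 records on file
internal_audit_run = False               # Clause 9 audit cycle active?

clause6_gap = production_systems - risk_assessments
clause8_gap = production_systems - impact_assessments

print(f"Clause 6 open for: {sorted(clause6_gap) or 'none'}")
print(f"Clause 8 open for: {sorted(clause8_gap) or 'none'}")
print(f"Clause 9 open: {not internal_audit_run}")
```

Swap in your real inventories; whatever comes back non-empty is the remediation list the next section describes.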

The Specific Steps Required to Close the Compliance Gap

The remediation work is scoped and finite. Build the AI risk assessment process. Conduct impact assessments for production systems. Establish the internal audit cycle. Create the management review cadence. None of that requires a multi-year restructuring effort. It requires a clear map of where your current posture ends and where the standard's requirements begin, and then executing the build in sequence, as sketched below.
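
Because the internal audit cannot run until Clauses 6 and 8 are operational, the sequence is a dependency graph, not a wishlist. A hypothetical sketch using Python's standard-library graphlib to derive the build order:

```python
from graphlib import TopologicalSorter

# Hypothetical remediation dependencies: each task maps to its prerequisites.
remediation = {
    "build AI risk assessment process (Cl. 6)": set(),
    "conduct per-system impact assessments (Cl. 8)": set(),
    "run internal audit cycle (Cl. 9.2)": {
        "build AI risk assessment process (Cl. 6)",
        "conduct per-system impact assessments (Cl. 8)",
    },
    "hold management review (Cl. 9.3)": {"run internal audit cycle (Cl. 9.2)"},
}

# Emit the tasks in an order that respects every dependency.
for step in TopologicalSorter(remediation).static_order():
    print(step)
```

The resulting order is the project plan: risk process and impact assessments first (in either order), then the audit cycle, then management review.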

The assessment turns an ambiguous governance question into a specific remediation list. Once it is a list, it is a project. Once it is a project, it has a completion date.

Organizations that run the gap assessment before the August 2026 EU AI Act enforcement window have a defined remediation roadmap. Those that don't are still carrying an ambiguous governance posture when the deadline passes.

Frequently Asked Questions

What does ISO/IEC 42001 actually require organizations to document?

ISO/IEC 42001 requires specific auditable artifacts at the clause level: documented AI risk assessments with methodology and treatment decisions per system (Clause 6), AI system impact assessments for each production AI system (Clause 8.4), internal audits at planned intervals with findings and closure records (Clause 9.2), and management review outputs (Clause 9.3). A responsible AI policy satisfies only Clause 5 input requirements. None of the Clause 6, 8, or 9 artifacts are produced by a policy document.

Is a responsible AI policy sufficient for ISO/IEC 42001 conformance?

No. A responsible AI policy is an input to Clause 5. ISO/IEC 42001 conformance requires evidence that an AI Management System is operational, including risk assessments, impact assessments, and an active internal audit cycle. The policy is one document. Conformance requires an operating system that generates multiple artifact types across all seven clauses.

What does an ISO/IEC 42001 auditor actually examine?

An ISO/IEC 42001 auditor examines evidence that the AI Management System is functioning: AI risk assessment records tied to specific systems per Clause 6, per-system impact assessment documents per Clause 8.4, internal audit findings and closure records per Clause 9.2, and management review documentation per Clause 9.3. An auditor does not test the quality of your policy. They test whether an operating governance system exists behind it.

InterSec's ISO 42001 gap assessment maps your current documentation against each clause's requirements, identifies where evidence is absent, and produces a sequenced remediation list. If you want to know where your governance program actually ends, that is the right starting point.

Schedule an ISO 42001 gap assessment with InterSec.
