Not all AI systems carry the same risk. A chatbot recommending restaurant options operates in a fundamentally different risk universe than an algorithm determining prison sentences. Governance must reflect this reality.

The EIAF’s four-tier risk classification provides a structured method for calibrating governance to consequence. It draws from the EU AI Act’s risk categories but operationalizes them with specific requirements at each level.

Tier 1: Minimal Risk

Systems with negligible impact on individuals. Content recommendation for non-critical applications, spam filters, internal search optimization. These require basic documentation and standard development practices. Governance overhead is light because the consequence of failure is light.

Tier 2: Limited Risk

Systems that interact with individuals but do not make consequential decisions. Customer service chatbots, predictive maintenance alerts, marketing personalization. These require transparency notices, basic bias monitoring, and documented escalation paths when the system operates outside expected parameters.

Tier 3: High Risk

Systems that materially affect individual outcomes. Credit scoring, hiring screening, medical triage, insurance underwriting. These trigger the full EIAF governance stack: three-stage bias review, four-audience explainability, human-in-the-loop oversight, and quarterly external assessment.

Tier 4: Critical Risk

Systems affecting fundamental rights or physical safety. Criminal justice risk assessment, autonomous vehicle decision-making, critical infrastructure control. These require everything in Tier 3 plus monthly bias testing, real-time monitoring, mandatory human override capability, and external audit.
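The four tiers and their cumulative requirements can be captured in a simple data model. This is an illustrative sketch only: the EIAF text above defines the tiers and their requirements, but the `RiskTier` enum, the `REQUIREMENTS` table, and the `requirements_for` helper are assumptions introduced here, not part of the framework itself.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 1   # spam filters, internal search
    LIMITED = 2   # chatbots, marketing personalization
    HIGH = 3      # credit scoring, hiring, medical triage
    CRITICAL = 4  # criminal justice, autonomous vehicles

# Requirements per tier, taken from the framework text above.
REQUIREMENTS = {
    RiskTier.MINIMAL: [
        "basic documentation", "standard development practices"],
    RiskTier.LIMITED: [
        "transparency notices", "basic bias monitoring",
        "documented escalation paths"],
    RiskTier.HIGH: [
        "three-stage bias review", "four-audience explainability",
        "human-in-the-loop oversight", "quarterly external assessment"],
    RiskTier.CRITICAL: [
        "monthly bias testing", "real-time monitoring",
        "mandatory human override", "external audit"],
}

def requirements_for(tier: RiskTier) -> list[str]:
    """Tier 4 inherits everything in Tier 3, per the framework text."""
    reqs = list(REQUIREMENTS[tier])
    if tier == RiskTier.CRITICAL:
        reqs = REQUIREMENTS[RiskTier.HIGH] + reqs
    return reqs
```

Encoding the inheritance rule (Tier 4 = Tier 3 plus additions) in one place keeps the two tiers from drifting apart as requirements evolve.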

The Classification Decision

Risk tier is determined by three factors: the severity of potential harm, the reversibility of decisions, and the vulnerability of affected populations. A system that denies a loan (high severity, partially reversible, potentially vulnerable population) lands in Tier 3. A system that routes network traffic (low individual impact, easily reversible, non-vulnerable population) lands in Tier 1.
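The decision logic above can be sketched as a scoring function. The three factors come from the framework, but the numeric scores and thresholds below are assumptions for illustration: the EIAF names the factors without publishing a formula. The sketch reproduces the two worked examples from the text, with fundamental-rights or safety impact forcing Tier 4 regardless of score.

```python
# Assumed scores; the EIAF does not define a numeric scale.
SEVERITY = {"low": 0, "medium": 1, "high": 2}
REVERSIBILITY = {"easy": 0, "partial": 1, "irreversible": 2}

def classify(severity: str, reversibility: str,
             vulnerable_population: bool,
             fundamental_rights_or_safety: bool = False) -> int:
    """Return a risk tier (1-4) from the three classification factors."""
    if fundamental_rights_or_safety:
        return 4  # fundamental rights or physical safety: always Tier 4
    score = SEVERITY[severity] + REVERSIBILITY[reversibility]
    score += 1 if vulnerable_population else 0  # total range: 0..5
    if score >= 4:
        return 3  # e.g. loan denial: high + partial + vulnerable = 4
    if score >= 2:
        return 2
    return 1      # e.g. network traffic routing: 0
```

For example, `classify("high", "partial", True)` yields Tier 3 and `classify("low", "easy", False)` yields Tier 1, matching the loan-denial and traffic-routing cases above.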

The classification is not permanent. Systems can move between tiers as deployment context changes. An internal analytics tool (Tier 1) repurposed for customer-facing decisions (Tier 3) must be reclassified and its governance upgraded before the new deployment.

Why It Works

Proportional governance guards against the two failure modes of AI ethics. Under-governance leaves high-risk systems unchecked. Over-governance buries low-risk innovation in bureaucracy. The four-tier model applies the right amount of scrutiny to the right systems.