When a system makes a decision that affects your life, you have a right to understand why. This principle predates computation by centuries. The Magna Carta established the right to be judged by the law of the land. Due process requires that consequential decisions come with reasons.
AI systems do not get an exemption because the technology is complex.
The Four Audiences
The EIAF defines four distinct audiences for AI explanations, each requiring different information at different levels of technical detail.
End users need to understand what happened and what they can do about it: a loan denial explanation should identify the key factors and the path to reconsideration. Operators need to monitor system behavior and intervene when necessary, which calls for feature importance scores, confidence levels, and anomaly indicators. Auditors need to reproduce decisions independently, which requires the complete input data, the model version, and the decision chain. Regulators need to verify compliance through methodology documentation, validation reports, and conformity assessments.
The same decision requires four different explanations. All must be accurate.
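The four audience views can be pictured as projections of a single decision record. The sketch below is illustrative, not from the EIAF itself: the class names, fields, and audience payloads are assumptions chosen to show how one record yields four different explanations.

```python
from dataclasses import dataclass

# Illustrative sketch only: DecisionRecord and its fields are hypothetical,
# not defined by the EIAF. One record, four audience-specific projections.

@dataclass
class DecisionRecord:
    decision_id: str
    outcome: str
    key_factors: list      # (factor, contribution) pairs, most important first
    confidence: float
    model_version: str
    inputs: dict           # complete input data for reproduction

def explain_for(audience: str, rec: DecisionRecord) -> dict:
    """Project one decision record into an audience-appropriate view."""
    if audience == "end_user":
        # What happened, and what can be done about it.
        return {"outcome": rec.outcome,
                "top_factors": rec.key_factors[:3],
                "reconsideration": "You may request a human review."}
    if audience == "operator":
        # Monitoring signals for intervention.
        return {"factors": rec.key_factors,
                "confidence": rec.confidence}
    if audience == "auditor":
        # Everything needed to reproduce the decision independently.
        return {"inputs": rec.inputs,
                "model_version": rec.model_version,
                "factors": rec.key_factors}
    if audience == "regulator":
        # Compliance-oriented references rather than per-decision detail.
        return {"model_version": rec.model_version,
                "documentation": ["methodology", "validation reports",
                                  "conformity assessments"]}
    raise ValueError(f"unknown audience: {audience}")
```

The point of the projection design is that accuracy is preserved by construction: every view is derived from the same underlying record, so the four explanations cannot drift apart.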
The Accuracy-Explainability Myth
The claim that explainability requires sacrificing accuracy is overstated. Research demonstrates that for most tabular data problems, interpretable models perform comparably to black-box alternatives. Where deep learning provides a genuine advantage, as in vision and language tasks, post-hoc explanation methods such as SHAP and LIME provide meaningful insight without constraining model architecture.
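The core idea behind post-hoc methods is that the model stays a black box: you only observe how its output moves when you perturb its inputs. SHAP and LIME are full libraries; as a library-free illustration of that idea, the sketch below uses a deterministic perturbation-based importance score (a simplified relative of permutation importance) against a stand-in model with made-up weights.

```python
def model(x):
    # Stand-in black-box scorer; the weights are arbitrary demo values.
    return 0.6 * x[0] + 0.3 * x[1] + 0.1 * x[2]

def perturbation_importance(predict, rows, n_features):
    """Score each feature by how far predictions drift when that feature's
    column is rotated across rows. The model is queried, never inspected."""
    baseline = [predict(r) for r in rows]
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rotated = col[-1:] + col[:-1]          # deterministic perturbation
        drift = 0.0
        for i, r in enumerate(rows):
            perturbed = list(r)
            perturbed[j] = rotated[i]          # swap in a different row's value
            drift += abs(predict(perturbed) - baseline[i])
        importances.append(drift / len(rows))
    return importances

rows = [[1, 2, 3], [4, 0, 1], [2, 5, 2], [0, 1, 4]]
imp = perturbation_importance(model, rows, 3)
# Feature 0 carries the largest weight, so it should dominate the scores.
```

SHAP and LIME refine this perturb-and-observe loop with principled attribution (Shapley values, local surrogate models), but the architectural point is the same: nothing about the explanation constrains how the model is built.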
The EIAF does not prohibit complex models. It requires compensating controls: enhanced monitoring, human oversight, and documented explanation methods appropriate to the risk tier.
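The compensating-controls requirement can be sketched as a tier-to-controls mapping. Note the assumptions: the source names the controls but not the tiers, so the tier names and the assignment of controls to tiers below are hypothetical.

```python
# Hypothetical sketch: the EIAF requires controls "appropriate to the risk
# tier" but does not enumerate tiers here, so these tier names and the
# control assignments are illustrative assumptions.
COMPENSATING_CONTROLS = {
    "minimal": ["documented explanation method"],
    "limited": ["documented explanation method",
                "enhanced monitoring"],
    "high":    ["documented explanation method",
                "enhanced monitoring",
                "human oversight"],
}

def required_controls(risk_tier: str) -> list:
    """Look up the controls a complex model must carry at a given tier."""
    try:
        return COMPENSATING_CONTROLS[risk_tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {risk_tier}") from None
```

Expressing the policy as data rather than prose has a practical benefit: a deployment pipeline can check the mapping mechanically before a complex model ships.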
Contestability
Explainability without contestability is theater. Every explanation must be paired with a pathway for the affected party to challenge the decision. The EIAF defines a six-step contestability pipeline: notification, explanation delivery, contest initiation, human review, outcome communication, and feedback loop integration.
A system that can explain its decisions but offers no mechanism to challenge them provides the appearance of accountability without the substance.
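The six-step pipeline is strictly linear, which makes it natural to model as a state machine: a contest cannot reach human review without an explanation having been delivered first. A minimal sketch, with stage names paraphrasing the six steps in the text:

```python
from enum import Enum

# Sketch of the EIAF contestability pipeline as a linear state machine.
# Stage names paraphrase the six steps named in the text.
class Stage(Enum):
    NOTIFICATION = 1
    EXPLANATION_DELIVERY = 2
    CONTEST_INITIATION = 3
    HUMAN_REVIEW = 4
    OUTCOME_COMMUNICATION = 5
    FEEDBACK_INTEGRATION = 6

def advance(stage: Stage) -> Stage:
    """Move a contest to the next stage; refuse to go past the end."""
    if stage is Stage.FEEDBACK_INTEGRATION:
        raise ValueError("pipeline complete")
    return Stage(stage.value + 1)

# Walk one contest through the full pipeline in order.
stage = Stage.NOTIFICATION
trail = [stage]
while stage is not Stage.FEEDBACK_INTEGRATION:
    stage = advance(stage)
    trail.append(stage)
```

Encoding the ordering this way makes skipped steps a programming error rather than a policy gap: there is no transition from notification straight to outcome communication.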