I need to tell you something that will be uncomfortable if you’re the person who drafted your organization’s AI ethics policy. Or the executive who approved it. Or the compliance officer who filed it.
Your AI ethics policy is almost certainly theater.
I don’t say that to be provocative for its own sake. I say it because I’ve spent over two decades building and operating the IT infrastructure and security systems that AI is now transforming. I’ve watched this pattern unfold at dozens of organizations, and it plays out the same way almost every time.
Company adopts AI tools. Someone in legal or compliance gets asked to “put together a policy.” They spend a few weeks assembling something that sounds responsible. Leadership signs off. The PDF gets uploaded to SharePoint. Everyone goes back to doing exactly what they were doing before.
The Anatomy of Ethics Theater
The Aspirational Document
The policy reads like a mission statement, not an operating procedure. It’s full of language like “we are committed to fairness” and “we value transparency in AI systems.” Try auditing a sentiment like that. Try proving to a regulator that you “value transparency.” You can’t, because there’s nothing to measure.
The Orphaned Policy
The AI ethics policy sits in the compliance folder, completely disconnected from procurement, vendor management, incident response, HR, and operations. If your AI ethics program doesn’t touch the daily work of the people actually using AI, it isn’t a program. It’s a document.
The Missing Feedback Loop
Ask your organization when the policy was last updated based on an actual incident or finding. If the answer is never, you’re watching theater.
The Accountability Vacuum
Who is accountable, not merely “responsible,” for AI ethics outcomes? Not “the committee.” A named individual who owns the results. In ethics theater, accountability is distributed so broadly that it disappears.
Why This Matters Now More Than Ever
Regulators Are Done Accepting PDFs. The EU AI Act isn’t asking whether you have a policy. It’s asking for evidence of operational controls. Article 9 requires risk management systems that are “a continuous iterative process.” Penalties reach up to 35 million euros or 7% of global annual turnover, whichever is higher.
AI Failures Scale Differently. When an AI system produces biased or harmful outputs, it can affect thousands of decisions before anyone notices. A biased hiring algorithm doesn’t make one bad decision. It makes thousands, all at once.
Trust Is the New Competitive Advantage. Organizations that can answer AI governance questions credibly, with evidence, will win trust. Organizations that point to a dusty PDF will not.
The Five-Step Theater Test
1. Can three random employees describe what AI ethics means for their specific role?
2. Has the policy changed in response to a real incident in the last 12 months?
3. Can you produce evidence of AI risk assessments for every AI tool in your environment?
4. Is there a named individual with budget and authority accountable for AI governance outcomes?
5. Could you demonstrate your governance to a regulator tomorrow with operational evidence?
If you answered yes to all five, you have a real program. If not, the window for getting this right proactively is closing.
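If it helps to make the test concrete, the five questions can be run as a blunt pass/fail self-assessment. This is a hypothetical sketch for illustration, not a compliance tool; the question wording follows the list above, and the scoring logic (all five must pass) is my own framing of the article’s rule.

```python
# Hypothetical self-assessment for the five-step theater test.
# All five checks must pass to count as a real program.

QUESTIONS = [
    "Can three random employees describe what AI ethics means for their role?",
    "Has the policy changed in response to a real incident in the last 12 months?",
    "Can you produce AI risk assessments for every AI tool in your environment?",
    "Is a named individual with budget and authority accountable for outcomes?",
    "Could you show a regulator operational evidence of your governance tomorrow?",
]

def theater_test(answers):
    """Return a verdict given five booleans, one per question, in order."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    failed = [q for q, ok in zip(QUESTIONS, answers) if not ok]
    if not failed:
        return "Real program: all five checks pass."
    return f"Theater: {len(failed)} of {len(QUESTIONS)} checks fail."

print(theater_test([True, False, True, False, True]))
# -> Theater: 2 of 5 checks fail.
```

The point of the all-or-nothing scoring is the article’s own: partial credit is how theater survives, so a single failing check is enough to flag the program.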
Stop performing governance. Start operating it.