Most organizations now have an AI policy. A PDF somewhere. Maybe a SharePoint page. Probably a paragraph in the acceptable use agreement that nobody reads past the first sentence.

That’s not governance. That’s liability documentation.

A federal ruling out of the Southern District of New York, Heppner, just made the distinction matter in a way that’s hard to ignore. The defendant in a federal fraud case used a public AI tool to research his own legal exposure, fed information he’d received from his attorneys into the model, generated 31 documents of strategy and analysis, and passed them back to his legal team. When the FBI showed up, they took the documents. The court gave them to the prosecution. Every claim of privilege failed because the moment that information touched a public AI platform governed by standard consumer terms of service, the confidentiality was gone.

That case involved a criminal defendant. But the reasoning has nothing to do with criminal law. It has everything to do with what happens when proprietary information meets a public endpoint. And right now, that’s happening thousands of times a day inside your organization.


The Actual Risk Surface

The conversation about AI risk in most enterprises lives in the wrong place. It focuses on outputs: hallucinations, bias, accuracy. Those are real problems. They’re not the primary governance problem.

The primary governance problem is inputs.

Every time an employee uses a consumer AI tool for work, they are making a decision about data exposure that your security team didn’t approve, your legal team didn’t review, and your compliance function doesn’t know about. They’re not being reckless. The interface feels private. It feels like a tool. It feels no different from Google Docs or a search engine.

But the terms of service tell a different story. Standard consumer AI platforms reserve the right to retain user inputs, use them to train models, and disclose them to third parties, including, in some cases, government authorities. The court in the Heppner ruling found that consenting to those terms and then using the tool destroyed any reasonable expectation of confidentiality, full stop.

Map that onto what your employees are actually doing. An engineer pastes proprietary code to get debugging help. A product manager describes an unannounced roadmap while drafting a positioning document. A finance analyst uploads internal projections to generate a summary. A business development team member outlines an acquisition rationale while working on an approach letter. None of these people are doing anything wrong by their own understanding. All of them may have just handed your proprietary information to a third party with no obligation to protect it.

Trade secret law requires that you take reasonable measures to maintain secrecy. That standard is not satisfied by a policy memo that employees consented to and forgot. It requires structural controls. When a court hears a trade secret misappropriation claim and the opposing party can point to Heppner, arguing that your employees routinely entered protected information into public AI tools whose terms permit broad data use and that you took no structural measures to prevent it, you have a problem that no policy document will fix.


What Governance Actually Requires

Real AI governance has three layers, and most organizations have at most one of them.

The first is structural. You need to know which tools your employees are using, where data is moving, and which terms govern each platform. This is not a matter of asking people to self-report. It requires tooling: shadow AI discovery, network monitoring, and endpoint controls that flag or block traffic to unapproved AI endpoints when it involves sensitive data categories. If you don’t know what’s happening, you can’t govern it.
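As a concrete illustration of that first layer, the sketch below scans an egress proxy log for traffic to known consumer AI domains. It is a minimal example under stated assumptions: the CSV log format, column names, and domain list are illustrative rather than any particular vendor’s schema, and a production deployment would feed the watch list from your proxy or CASB and keep it current.

```python
# Minimal sketch: flag egress traffic to known consumer AI endpoints.
# Log format and domain list are illustrative assumptions; a real
# deployment pulls both from your proxy/CASB and keeps them current.
import csv
from collections import Counter

# Hypothetical watch list of public AI API/chat domains (assumption).
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watch list.

    Assumes a CSV export with 'user' and 'host' columns, which is an
    illustrative format, not a specific vendor's schema.
    """
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in scan_proxy_log("egress.csv").most_common(20):
        print(f"{user} -> {host}: {count} requests")
```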

The second is architectural. For any AI use case that involves proprietary information, trade secrets, privileged communications, regulated data, or competitive intelligence, the only compliant deployment is one in which your data does not leave your environment. Enterprise agreements with genuine contractual confidentiality commitments and no-training-on-inputs provisions are a minimum. Privately hosted models with no external data transmission are the clean answer. The Heppner ruling itself acknowledged this distinction. The court tied its confidentiality analysis specifically to the consumer version of Claude and its standard privacy policy. Enterprise deployments with real isolation present a materially different fact pattern.
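For the architectural layer, a rough sketch of the pattern: every generative call goes to a privately hosted inference gateway inside your own network, never to a public consumer endpoint. The gateway URL, authentication header, payload, and response shape here are hypothetical stand-ins for whatever your self-hosted stack actually exposes; the point is that the entire call path is one you control and can audit.

```python
# Minimal sketch: route generative AI calls through an internal,
# privately hosted inference gateway so proprietary text never leaves
# the environment. URL, auth scheme, and payload shape are assumptions
# for illustration, not a specific product's API.
import os
import requests

INTERNAL_GATEWAY = os.environ.get(
    "AI_GATEWAY_URL", "https://ai-gateway.internal.example.com/v1/generate"
)

def generate(prompt: str, timeout: float = 30.0) -> str:
    """Send a prompt to the internal model endpoint and return its text."""
    response = requests.post(
        INTERNAL_GATEWAY,
        json={"prompt": prompt, "max_tokens": 512},
        headers={"Authorization": f"Bearer {os.environ['AI_GATEWAY_TOKEN']}"},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["text"]
```

Pair a path like this with network policy that blocks the public endpoints outright, so the governed route is also the only route.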

The third is operational. Policies, training, and clear categorization of what data requires what level of control. Employees need to understand not just that there are rules, but why the rules exist and what they’re protecting. The abstract instruction “don’t enter confidential information into AI tools” fails because employees don’t always recognize what qualifies as confidential information. A product spec is confidential. A customer’s name is confidential. A deal structure is confidential. The training needs to be specific enough to be actionable.
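Training lands harder when it is backed by a check in the workflow itself. The sketch below is purely illustrative: a keyword-based gate that flags obviously confidential categories before text can be submitted to an external tool. The patterns and category names are assumptions standing in for your actual data taxonomy and whatever DLP engine you already run.

```python
# Minimal sketch: a pre-submission check that flags text containing
# confidential categories before it can be sent to an external AI tool.
# Patterns are illustrative; real classification would use your DLP
# engine and your organization's own data taxonomy.
import re

CATEGORY_PATTERNS = {
    "customer_identifier": re.compile(r"\b(account|customer)\s*(id|number)\b", re.I),
    "financial_projection": re.compile(r"\b(forecast|projection|guidance)\b", re.I),
    "deal_material": re.compile(r"\b(acquisition|term sheet|LOI)\b", re.I),
}

def classify(text: str) -> list[str]:
    """Return the confidential categories detected in a block of text."""
    return [name for name, pattern in CATEGORY_PATTERNS.items() if pattern.search(text)]

def allow_external_submission(text: str) -> bool:
    """Block submission to external tools if any category is detected."""
    hits = classify(text)
    if hits:
        print(f"Blocked: text matches confidential categories {hits}")
        return False
    return True
```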


The Compliance Frame

From a compliance and risk perspective, the governance gap around AI is now a known, documented, judicially recognized risk. That changes the analysis significantly.

Before Heppner, an organization could argue in a regulatory or litigation context that the risk of data exposure through employee AI use was theoretical, that the harm pathway was unclear, and that reasonable precautions were in place. After Heppner, none of that holds. There is a federal court ruling, analyzed by every major law firm in the country, establishing the precise mechanism by which confidential information loses its protected status when it touches a public AI platform. If your organization knew about this risk, which you do now, and took no structural action to address it, the reasonable measures argument for trade secret protection becomes difficult to make. The failure to act is documented.

For organizations in regulated industries, the exposure compounds. HIPAA, SOX, GLBA, ITAR, and dozens of sector-specific frameworks impose confidentiality obligations that exist independently of your preferences. Employees entering regulated data into non-compliant AI tools create regulatory exposure, not just litigation risk. Regulators do not accept “we had a policy” as a defense when the policy had no teeth.

AI governance is no longer a technology question or a legal question in isolation. It is a fiduciary question. Directors and officers who understood this risk, now documented in federal case law and analyzed extensively in the legal literature, and still took no action will have a harder conversation ahead of them than those who invested in the infrastructure to address it.


The Balance

None of this argues against AI adoption. The competitive case for AI is real, and the organizations that fail to operationalize it will fall behind. The point is that adoption without governance isn’t just risky; it’s self-defeating. You deploy AI to create value, to move faster, to generate insight. If the cost of that deployment is your trade secrets, your privileged strategy, and your regulatory standing, you’ve destroyed more value than you created.

The organizations that get this right will treat AI infrastructure the same way they treat financial controls or information security. Not as a constraint on operations, but as the foundation that enables sustainable operations.

The ones that don’t will learn the lesson the hard and expensive way. They always do.