The Ancestor's Error
A Hong Kong firm wired $25 million to a synthetic CFO over a deepfake video call. Every protective layer in the firm's controls had quietly become invalid before the call ever rang.
In February 2024 a finance worker in the Hong Kong office of the British engineering firm Arup joined a video call with what appeared to be the firm's chief financial officer and several colleagues. The call instructed him to make a series of urgent transfers totaling about twenty-five million U.S. dollars. He made them, in fifteen separate wires over a single day. The call was a deepfake. The CFO had not been on it. None of the other participants were real. Every face on the screen, every voice, was a synthetic rendering produced by AI tools that had not existed eighteen months earlier.
The fraud was not detected by the firm's controls. It was detected when the employee, days later, mentioned the call to the actual head office. The Hong Kong Police Force confirmed the case publicly in February 2024. Arup did not publicly identify itself as the victim until May. What I keep coming back to is that every protective layer in the firm's procedures had been assuming something that was no longer true. The video call had been assumed to authenticate the participants. The voice had been assumed to be the voice of the person whose face appeared on the screen. The collective assent of multiple senior colleagues had been assumed to validate the request. Each assumption had been load-bearing. Each had been silently invalidated.
This is what I have been calling, in conversations with clients, institutional invalidation. It is the condition in which an organization's processes are correct in form and wrong in substance, because the world they were built for has changed underneath them in a way the processes have no way to detect.
The failure mode is not obsolescence. Obsolete processes do the right thing slowly. Invalid processes do the wrong thing efficiently, because the assumption that made them right has stopped being true. The Hong Kong firm's video-call verification was instant. It was also incorrect, every time, in the new operational environment.
The phenomenon is not specific to fraud.
In 2023 academic institutions discovered that their assessment systems could not reliably distinguish AI-generated student work from human-generated student work. The first response was technological. Deploy AI-text detectors. The detectors failed. The second response was procedural reform. Assignments were modified. Examinations went oral. The third response, where some institutions are now, is to admit that the assessment system itself was an artifact of an epistemic assumption, that written work reflects student learning, and that this assumption no longer holds in any straightforward way.
Each of those three responses sits at a different point on the invalidation curve. Technological remediation says the underlying process is fine; we just need better tools. Procedural reform says the underlying process is broken; we need new processes. Epistemic acknowledgment says the underlying process was built on an assumption that is now untrue, and we need to ask what we were actually trying to do, and whether we have any way to do it now.
The institutions that adapted earliest were the ones that recognized, in 2023, that they were at the third stage and not the first. The institutions still in the first stage in 2026 are running detection arms races they cannot win, and the cost of running them is, in part, what is being subtracted from the budget for the second and third stages.
The same curve runs through every institution that has, over the last two years, encountered the limits of an assumption it did not know it was making.
The FDA has now authorized over a thousand AI-enabled medical devices. The agency's approval process was built for a world in which a medical device is a manufactured artifact with documented behavior. AI-enabled devices have a different ontology. Their behavior depends on training data the agency does not directly inspect, on continuous model updates the agency has only recently begun to address, on emergent interactions with clinical workflows the device manufacturer cannot fully anticipate. The agency is adapting. Whether the adapted FDA is the same FDA, in any meaningful institutional sense, is the question that comes next.
The intelligence community has spent two years confronting the fact that AI can produce signals indistinguishable from genuine ones. The traditional apparatus of source evaluation, corroboration, and analytic tradecraft was built for a world in which signals were produced by humans, with motivations and errors and limits that trained analysts could exploit. In a world where signals can be generated at scale by systems that have none of those properties, the apparatus does not stop working. It works against a category of input it was not designed for, and produces confident assessments anyway.
I do not write this to be alarmist. I write it because I keep watching the same conversation in different rooms. A general counsel says, "we have a verification process, this should not have happened." A CFO says, "we have controls, this should not have happened." A regulator says, "we have a review process, this should not have happened." Each of them is right that the process exists. None of them has confronted whether the process is, in 2026, doing the work it was designed to do.
The shape of the problem is what makes it hard. Invalid processes look correct from the inside, because the inputs they expect still arrive in the format they expect. The format is what was attacked. The inputs no longer have the property that made the format informative. The process happily reports success and the firm wires twenty-five million dollars to a synthetic CFO.
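To make the failure mode concrete, here is a minimal sketch in Python. Nothing in it comes from the Arup case; the field names, the threshold, and the staff roles are all hypothetical. It stands in for any control whose checks run against the representation of an input rather than against the property the representation was assumed to carry.

```python
# A control that is correct in form and wrong in substance.
# All names, fields, and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: int
    approvers_on_call: list[str]  # people seen and heard on the video call
    requested_by: str

SENIOR_STAFF = {"CFO", "Regional Director", "Head of Treasury"}

def control_passes(req: TransferRequest) -> bool:
    # Form check: the request is within the amount policy.
    if req.amount_usd > 30_000_000:
        return False
    # Form check: a senior officer appeared to approve on the call.
    if not SENIOR_STAFF & set(req.approvers_on_call):
        return False
    # The load-bearing assumption is implicit and untested: that a face
    # and voice on a video call authenticate a person. Nothing in this
    # function can detect that the assumption has stopped holding.
    return True

# A deepfaked request passes every check, because the attack targeted
# the input's provenance, not its format.
forged = TransferRequest(
    amount_usd=25_000_000,
    approvers_on_call=["CFO", "Regional Director"],
    requested_by="finance clerk",
)
assert control_passes(forged)  # the control reports success
```

The assert succeeds, and that is the point. Every property the control can test still holds; the property that mattered was never expressible in its terms.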
The honest version of the strategy advice is that every institution above a certain age has, somewhere in its structure, an assumption that is now invalid, and the work of finding it cannot be delegated to the same processes whose assumptions are at issue. It has to be done deliberately, by people who have permission to ask whether the underlying epistemic premise of a given control still holds.
That work is unglamorous. It is also, as far as I can tell, the only work that addresses the root condition.