Legibility and Its Discontents

James Scott's argument from 1998, run at the speed of inference. The map is quietly rebuilding the territory inside every firm that runs an AI summarization layer.

In 1998 the political scientist James C. Scott published Seeing Like a State, a book about how governments make populations and landscapes legible to administrative systems. Scott's central observation was that the simplifications required for legibility are not innocent. When a state replaces organic village naming conventions with standardized surnames, or replaces diverse agricultural practices with monocultural plots visible from above, the simplification does not just describe the territory. It restructures it. The map remakes the territory, slowly, through the operations the map enables.

Twenty-eight years later we are running Scott's experiment at the speed of inference, in every enterprise that has deployed an AI summarization layer.

Every summarization system is a legibility machine. It takes complex, heterogeneous, partially-illegible source material and produces a clean abstraction the reader can act on. That is its purpose. It is also its danger, because the abstraction is not a transparent window onto the source. It is a particular projection with particular blind spots, and over time the organization that consumes the abstraction begins to operate on it as if it were the source.

The mechanism is the one Scott named. Once a system is set up to read the territory through the map, the territory reorganizes to be more legible to the map.

Consider a pharmaceutical safety review function. The team consumes adverse-event reports from dozens of clinical trial sites. The reports arrive in inconsistent formats. Some use CTCAE grading. Some are narrative. Some are perfunctory and some are careful. A summarization tool is deployed. It produces clean, standardized records, with event type, severity, onset, resolution. The downstream workflows now read the standardized records, not the raw reports. Within a quarter, two things happen. The reviewers begin to think in the standardized vocabulary. And the report-writers, who learn through feedback that their narrative subtleties are being discarded, begin to write reports that survive summarization. The qualifications drop. The margin notes vanish. The reports become cleaner because they have learned what the AI keeps and what the AI throws away. The map has begun to rebuild the territory.

Once you see this pattern you cannot stop seeing it. It runs everywhere a summarization layer sits between source and decision.

When intelligence analysts know their reports will be consumed primarily through AI-generated briefings, they begin structuring analysis to survive summarization. The tentative hypothesis. The dissenting interpretation. The carefully qualified assessment. These are exactly the elements summarization handles worst, because they resist clean categorization. Analysts learn, without being told, to write for the machine. The reports become crisper, cleaner, less informative.

When legal teams draft contracts knowing the counterparty will review them through an AI summary, they structure provisions for machine readability. This sounds like an improvement until you notice that the strategic ambiguity on which much contract drafting depends, the open-textured clause that preserves negotiating flexibility, is precisely what AI summarization resolves into a false clarity.

Scott's most arresting example was Prussian scientific forestry. In the eighteenth and nineteenth centuries, Prussian foresters replaced biologically diverse woodlands with neat rows of identical Norway spruce. The idea was that the standardized forest could be measured, planned, and harvested by the same techniques that worked for the rest of state administration. For one generation it was spectacularly productive. Then the forest collapsed. The monoculture had eliminated the fungi and the insect populations and the understory plants and the ecological complexity the forest depended on for long-term health. The foresters had optimized the metric, which was board-feet of harvestable timber, and destroyed the unmeasured substrate the metric depended on.

The Germans eventually coined a word for what came next. Waldsterben, forest death.

The analogy to AI-mediated enterprise is plain. When an organization optimizes for the metrics its summarization layer surfaces, it harvests legibility while depleting whatever illegible substrate it had. The tacit knowledge. The informal networks. The unstructured observations. The qualifications that did not fit the schema. The early results will be impressive. Dashboards will be cleaner. Reports will be more consistent. Whether the enterprise equivalent of Waldsterben arrives in five years or fifteen is an empirical question. That it arrives, by Scott's logic, is structural.

There is a version of this argument that ends with recommendations. Preserve raw source access. Maintain unmediated channels. Resist premature standardization. These are sensible. They are also fighting the economic logic that makes legibility valuable. Organizations adopt AI summarization because it works, and because the alternative, requiring every decision-maker to engage with raw complexity, does not scale.

The honest formulation I have arrived at runs like this. Legibility is purchased with a currency we do not know how to value, and the price is extracted from a reserve we do not know how to measure. By the time the deficit becomes visible, the reserve may already be spent. The Prussian foresters did not see Waldsterben coming because the system that would have detected it was the system they had replaced. The AI-mediated enterprise has the same blind spot, by the same mechanism. The thing that would notice the loss is the thing the loss is consuming.

Use AI summarization. In domains where the architectural layer either does not exist or does not matter, deploy it freely. In domains where the architecture is doing real work, in the regulatory cases, the legal cases, the clinical cases where structure carries cognition, treat summarization as a guide to the source rather than a replacement for it, and require the reader to engage with the source itself.

That recommendation is unfashionable. It does not scale. It costs money to enforce. It will lose, at every quarterly review, to a competitor who skips it. Whether it loses on the longer horizon depends on whether the firms that skipped it ever encounter a case where the architectural layer was what mattered, and on whether they recognize what they lost when they realize they lost it.

Most of them will not. The architectural layer is invisible until it is what was needed.
