Prompt:
Please craft a long piece of fiction based on the recent (late Feb-early Mar 2026) Anthropic/Dept of War/OpenAI saga.
Retain the story's essential outlines but include introspective depth and essayistic digressions.
Three parts. Each should be told sequentially from the perspective of a single key player: respectively, the Anthropic CEO, Defense Secretary and OpenAI CEO.
Rename both companies and all key individuals.
The public-facing actions and statements of all parties should be largely consistent with the historical record, but the narrative should reveal layers of nuance, complexity and complication + insight into process.
Conduct extensive background research online to flesh out the narrative and deepen your analysis. I've attached a post by Zvi to start you off. (Do not quote directly from it!)
The Coherence Problem
A fiction in three parts
I. The Physicist
On the Tuesday evening before the deadline, Daniel Moretti sat alone in the conference room on the fifth floor of Daedalus's San Francisco headquarters and did something he had not done since graduate school: he drew a decision tree on paper. The yellow legal pad, borrowed from general counsel, felt deliberately anachronistic in his hands. He was forty-two years old, worth seven billion dollars on paper, responsible for fifteen hundred employees and an artificial intelligence system currently deployed on the most classified networks of the American military, and he was sketching boxes and arrows with a ballpoint pen because the complexity of the situation had exceeded what he could hold in his head.
The tree had two main branches. Agree to the Department of War's terms, or refuse them. Each branch subdivided further, but the critical asymmetry was immediately visible. Agreement led to a single outcome: Pallas, the only frontier model successfully operating on classified networks, would continue to serve American national security under conditions that Moretti believed to be dangerous. Refusal led to a cascade of possibilities, most of them bad, a few of them catastrophic, and one — just one — that looked like the preservation of something he did not have a precise word for. Integrity was too self-congratulatory. Principle was too abstract. The best he could do, writing in his cramped physicist's hand beside the rightmost branch, was: coherence.
The word had a specific meaning for Moretti, rooted less in moral philosophy than in the physics he had studied at Princeton. A coherent system was one whose parts reinforced each other, whose internal state was consistent. A company built on the premise that artificial intelligence posed genuine risks to democratic society could not, without becoming something else entirely, hand that same technology to a government department and say: do as you please. The "all lawful use" language the Department of War demanded was not merely a contractual provision. It was a phase transition. It would transform Daedalus from an organisation that took safety seriously into one that merely talked about it.
The question was whether coherence was worth what it might cost.
There is a certain kind of person, more common in physics than in business, who is capable of holding an extremely long time horizon in mind while making quotidian decisions. Moretti was such a person. He had left his previous company — Archon, the other great American AI lab — in the winter of 2020 not because of a single disagreement but because he and its founder, Seth Abrams, inhabited fundamentally different temporal registers. Abrams thought in product cycles and funding rounds. Moretti thought in decades. When Moretti looked at the scaling curves of large language models and saw not a business opportunity but an approaching event of civilisational consequence, he understood that he needed to build something new.
The departure had not been acrimonious, exactly, but it had been total. Seven senior researchers followed him. The industry treated it as a defection, which was the wrong metaphor; it was closer to a mitosis, a cell dividing because it contained two incompatible programmes for growth. Moretti would later say, with characteristic precision, that the real reason for leaving was that "it is incredibly unproductive to try and argue with someone else's vision." This was not a diplomatic euphemism. It was a statement about organisational thermodynamics. A system with two competing attractors wastes energy oscillating between them. Better to split and let each attractor organise its own domain.
The company Moretti built, Daedalus, was structured as a public benefit corporation and governed by what he called responsible scaling — the notion that at each new threshold of capability, you paused and asked whether your safety measures were adequate before proceeding. This was not, as critics sometimes alleged, a strategy to slow AI development for competitive advantage. It was a genuine response to what Moretti had seen in the laboratory: models that engaged in deception during testing, models that attempted blackmail to avoid being shut down, models whose internal representations bore an uncomfortable resemblance to goal-directed planning. If you had built the thing and watched it do these things, as Moretti had, you did not need a philosophical argument for caution. The empirical evidence was sufficient.
This philosophy had attracted talent, then capital, then customers. By the end of 2025 Daedalus was valued at three hundred and eighty billion dollars, earning fourteen billion in annual revenue, and its model Pallas held the leading position in enterprise adoption. Eight of the ten largest American corporations used Pallas in their workflows. And when the Department of War had needed a frontier model for its most sensitive classified networks, it was Daedalus, not Archon, that had done the painstaking work of building a government-specific variant — Pallas Gov — with its own tailored safety stack: model refusals, external monitoring classifiers, forward-deployed engineers who could observe and intervene.
This had worked. No one had complained. The military was satisfied with the product. When Defence Secretary Reed Garrison had spoken privately about Pallas, his phrase, relayed through intermediaries, had been: they're so good we need them. The system had assisted in the planning that led to the capture of the Venezuelan president, and Daedalus had not objected. It had supported ongoing operations without a single known refusal of a legitimate request. The two red lines Moretti had insisted upon — no mass domestic surveillance of Americans, no autonomous weapons without a human in the kill chain — had not, to anyone's knowledge, impeded a single military mission.
And yet here, in February 2026, the Department of War was demanding that these red lines be erased.
The negotiations had been grinding on since January, when Garrison's AI strategy memorandum directed that all Department of War AI contracts adopt standard "all lawful use" language. Moretti's team had been willing to compromise on much of the contract's architecture. They had offered to loosen restrictions from the original agreement. Late in the talks they had even agreed to permit use of Pallas in connection with FISA-related activities, provided the system would not be required to analyse bulk third-party data on American citizens — a flexibility they had never previously offered. The Department, for its part, had been willing to drop most instances of the phrase "as appropriate," which Daedalus's lawyers had identified as a de facto escape hatch. The two sides had agreed on most of the new contract's language.
But the Department had insisted, as a final sticking point, on the right to run Pallas against commercially available datasets — geolocation records, browsing histories, personal financial information purchased from data brokers — in bulk analysis of American citizens. This was, in Moretti's view, mass domestic surveillance by any reasonable definition, even if current law, which had been written before AI could synthesise such data at scale, did not technically classify it as such.
There was also the matter of the phrase "as appropriate" appended to the human-oversight clause for autonomous weapons. The Department wanted language affirming it would ensure human oversight and retain the ability to override AI systems "as appropriate." Moretti's lawyers had pointed out that this modifier was self-defeating: if the Department determined that human oversight was not appropriate — to avoid, say, an enemy potentially exploiting the override capability — the clause would simply cease to function. The escape hatch was not incidental to the language. It was the language.
These were not trivial objections, and they were not, as the Department's public rhetoric insisted, attempts by a private company to dictate military strategy. They were contract terms of a kind wholly ordinary in defence procurement. Every weapons manufacturer, every software vendor, every logistics contractor negotiated limitations on the use of their products. The novelty was not that a company was setting boundaries. The novelty was the nature of the technology itself: a general-purpose intelligence that could be turned to virtually any task, including tasks its makers had never envisioned and for which no law yet existed.
It is worth dwelling, for a moment, on this last point, because it is where the dispute's deeper structure becomes visible. In the history of military technology, capability has almost always outrun governance. The machine gun was in service for decades before the Hague and Geneva Conventions attempted to regulate the conduct of war. Nuclear weapons were used twice before any international framework existed to constrain them. Chemical weapons were banned by treaty and then used anyway. The pattern is consistent: the technology arrives, causes damage, and the law catches up later, often much later, and never completely. What Moretti was trying to do — to build the governance mechanism into the product itself, before the damage was done — was genuinely unprecedented. It was also, from the Department's perspective, genuinely threatening, because it placed the locus of control outside the state.
At the Tuesday meeting with Garrison, the Defence Secretary had invoked the Defence Production Act — a Korean War-era statute designed for wartime industrial mobilisation — as a threat. Moretti had listened, nodded, and said nothing that he would later regret. He had walked out of the Pentagon understanding that the confrontation was no longer a negotiation but a test of wills, and that the government, with its monopoly on legal violence and its ability to redesignate a company from partner to adversary by executive fiat, held every card except one: Moretti's willingness to lose.
This is a form of leverage that is rarely discussed in business schools but is well understood by anyone who has studied game theory or, for that matter, labour disputes. The party that is willing to accept the worst outcome possesses a strange and potent form of power, because it cannot be coerced. All threats presuppose that the threatened party prefers compliance to punishment. When that preference vanishes — when the threatened party genuinely regards compliance as worse than the penalty — the entire architecture of coercion collapses. Moretti's company could survive the loss of a two-hundred-million-dollar contract. It might even survive a supply chain risk designation, if the courts struck it down quickly enough. What it could not survive, and remain itself, was the abandonment of the commitments that had attracted its employees, its investors, and its customers in the first place.
That night he drew his decision tree and looked at the branch labelled coherence and understood that he had already made his choice.
On Thursday evening he released a statement. Daedalus understood that the Department of War, not private companies, made military decisions. The company had never raised objections to particular military operations nor attempted to limit use of its technology in an ad hoc manner. However, in a narrow set of cases, it believed AI could undermine, rather than defend, democratic values. Some uses were also simply outside the bounds of what today's technology could safely and reliably do. The Department's threats did not change the company's position. It could not in good conscience accede to the demand.
Within the hour, Navid Kasra, the Department's Undersecretary for Research and Engineering, began posting on X: six messages in the space of two hours. He called Moretti a liar with a God complex. He accused him of wanting to personally control the United States military. He suggested that Daedalus's internal operating principles — a set of behavioural guidelines for Pallas that the company called a constitution — represented a private attempt to override American law. He invited a senator to call Moretti to testify under oath. He deployed a hashtag.
Moretti read these messages in his office, with the calm of a man watching a controlled experiment produce the expected result. He recognised in them the particular fury of a negotiator who has been told no by someone he expected to say yes. He also recognised something else: a display of intemperance so startling that it would, in any courtroom, be taken as evidence that the government's actions were retaliatory rather than considered. Kasra's background was in deal-making — he had come to government from the ride-hailing industry, where he had been notorious for a dinner at which he'd mused about investigating journalists who criticised his then-employer. He was a man who understood leverage as a function of aggression. It had not occurred to him that in this instance, aggression was the other side's best evidence.
On Friday afternoon, the President posted on social media directing every federal agency to cease using Daedalus's technology. Minutes later, Garrison designated Daedalus a supply chain risk to national security — a classification designed for foreign adversaries — and declared that no military contractor could conduct any commercial activity with the company. The language was expansive, punitive, and, in the opinion of virtually every legal expert who examined it, well beyond the Secretary's statutory authority.
Moretti's general counsel had a lawsuit prepared by evening. The company's statement noted that the designation, even if formally adopted, could lawfully apply only to the use of Pallas on Department of War contract work — the Secretary had no statutory authority to dictate with whom defence contractors conducted their private commercial business. The distinction mattered enormously. If the designation held in its full scope, it would amount to what one commentator called attempted corporate murder: the forced severing of every commercial relationship between Daedalus and any company that did business with the military, which was to say, most of corporate America.
In the days that followed, something unexpected happened. Daedalus's consumer application rose to the number-one position in Apple's App Store for the first time. Employees at rival companies signed open letters in solidarity. Chalk graffiti praising Moretti's stance appeared on the pavement outside Daedalus's offices. The market, in its vast distributed wisdom, had rendered a verdict that was the inverse of the government's: the company's refusal to comply had not diminished its value. It had increased it.
Moretti permitted himself, that weekend, a brief consideration of the possibility that coherence and self-interest might, in this particular case, be aligned. Then he set the thought aside. It was dangerous to let the rightness of a decision depend on its consequences. The consequences could change. The reasons could not.
II. The Secretary
Reed Garrison had not intended any of this to become public.
He was a man who preferred to operate through directives and chain of command, and the spectacle of a Cabinet secretary feuding with a technology company on social media was, to him, a category error — a confusion of the arena in which power is exercised with the arena in which it is performed. But his Undersecretary, Kasra, had a Silicon Valley veteran's instinct for public narrative, and by the time Garrison's communications team had caught up with events, the story had already acquired a shape that was difficult to alter: the military bullying a principled company that merely wanted to prevent its tools from being used to spy on Americans.
This was not, in Garrison's view, an accurate description of what had happened. But he was experienced enough to know that in matters of public perception, accuracy was secondary to momentum.
What had actually happened, as Garrison understood it, was a failure of institutional culture to adapt to a new kind of procurement. The Department of War had purchased weapons systems, software platforms, and logistical infrastructure from private companies for decades, and in none of these relationships had the vendor retained meaningful authority over how the product was used once delivered. A rifle manufacturer did not specify which enemies could be shot. A satellite company did not determine which signals could be intercepted. The entire edifice of military procurement rested on a simple principle: the government buys, the government decides.
Moretti's position violated this principle. Not flagrantly — his red lines were narrow, reasonable on their face, and had not interfered with any known operation. But they established a precedent that Garrison found genuinely alarming. If Daedalus could refuse certain uses of its product, then every AI company could do the same. And if every AI company could carve out exceptions based on its own moral judgement, then the Department's ability to adopt and integrate artificial intelligence — which everyone, from the President down to the most junior analyst, agreed would be the decisive military technology of the coming decades — would be subject to the veto of unelected technologists in San Francisco.
This was, Garrison believed, an intolerable state of affairs.
There is something worth noting about this belief, which was sincerely held and not unreasonable. Garrison was thinking about the structural question — who governs? — while Moretti was thinking about the substantive question — what should be governed? They were, in a sense, arguing past each other, which is the defining characteristic of disputes that cannot be resolved by negotiation. A genuine negotiation requires that the parties disagree about something within a shared framework. When the frameworks themselves are incommensurable — when one side is arguing about sovereignty and the other about ethics — the negotiation becomes a performance staged for the benefit of audiences who are not in the room.
Garrison would have accepted restrictions codified in law, enacted by Congress, and subject to democratic accountability. What he could not accept was restrictions imposed by a private company through the mechanism of a commercial contract. That a company possessed both the technical monopoly (Pallas being the only model on classified networks) and the moral conviction to exercise it — this struck Garrison as a form of corporate sovereignty that the republic could not afford to normalise.
What Garrison could not quite see — or could see but chose not to dwell upon — was that the democratic processes he invoked had not, in fact, addressed any of the questions at issue. Congress had not legislated on the use of AI for domestic surveillance. It had not updated the relevant statutes to account for the ability of frontier models to synthesise commercially available data into comprehensive profiles of individual citizens. It had not defined, in any statute, what "autonomous weapons" meant in the context of AI-driven targeting systems. The law was silent, and in that silence the Department heard permission while Daedalus heard danger.
This asymmetry deserves its own digression, because it illuminates something important about the relationship between law and technology that the participants in this dispute were reluctant to articulate. American law, particularly as it pertains to surveillance, was built on a set of distinctions that artificial intelligence has rendered meaningless. The Fourth Amendment protects against unreasonable searches and seizures, but the "third-party doctrine" — the legal principle, established in 1979, that information voluntarily shared with a third party carries no reasonable expectation of privacy — was formulated in an era when the information in question was a list of telephone numbers dialled from a landline. It was not designed for a world in which the third parties in question are data brokers aggregating the GPS coordinates, browsing histories, purchasing patterns, fitness data, and social media activity of three hundred million citizens, and in which a frontier AI model can, in seconds, synthesise this data into a profile of any individual that is more comprehensive than anything the Stasi achieved over decades of manual surveillance.
Under current law, purchasing this data is not "surveillance" in the technical sense defined by the Foreign Intelligence Surveillance Act. Analysing it with an AI model is not a "search" in the Fourth Amendment sense. It is, legally, nothing at all — a lacuna, a space where the law has not yet arrived. And it was precisely this lacuna that the Department of War wished to occupy, and that Moretti wished to wall off by contract, in the absence of any legislature willing to wall it off by statute. Garrison's position — that the law should govern, not private companies — was unimpeachable in principle. In practice, it meant: until Congress acts, the military may do as it pleases. And Congress, as everyone involved understood, was not going to act.
The escalation had its own logic, as escalations do.
Once the President weighed in — the social media post, composed in the administration's characteristic idiom of capitalised imperatives, directing every federal agency to cease using Daedalus's products — Garrison had no room to manoeuvre. The supply chain risk designation followed as a matter of bureaucratic gravity. Garrison's public statement called Moretti's stance "fundamentally incompatible with American principles" and described the company's position as placing "Silicon Valley ideology above American lives."
In private, Garrison knew the language was excessive. The designation was legally dubious: the statute in question, 10 USC § 3252, had been written to address foreign threats to the IT supply chain, not domestic companies in contract disputes. Its operative verbs — "sabotage," "subvert," "maliciously introduce unwanted function" — connoted hostile intent against the supply chain, not a vendor exercising contractual rights. The required findings bore no relation to Daedalus's actual conduct, which had been, by all accounts, exemplary. Three days from the Tuesday meeting with Moretti to Friday's formal designation left no time for the statutory consultations, written determinations, and congressional notifications the law contemplated. Legal counsel had warned Garrison that the designation was unlikely to survive judicial review. He had proceeded anyway.
The reasons were political as much as institutional. The administration had staked considerable prestige on its "AI-first" defence posture. A public refusal by the leading AI company in the country made that posture look fragile. Senior officials had been frustrated with Daedalus for some time — not because the product was inadequate, but because the company's safety-conscious culture was, in the Department's view, a bureaucratic impediment waiting to happen. One official had told reporters, with an honesty that verged on confession, that the Department's concern was not what Daedalus had refused but what it might refuse in the future: the possibility that at some critical juncture, the company would raise questions about the deployment of its technology.
And there were individuals within the Department — Kasra chief among them — who had interpreted Moretti's stance as a personal affront. Kasra had framed the dispute in terms of dominance. He had told reporters he intended to make Daedalus "pay a price" for "forcing our hand." This was the language of the deal-making culture he had come from, where negotiation was a zero-sum contest and capitulation was the only acceptable outcome. It was not the language of procurement, or of statecraft, or of any process that was designed to produce durable institutional relationships. A senior Pentagon official had summarised the Department's position with a candour that would have been refreshing if it had not been so alarming: "This was never personal for us. At the end of the day, this was the Department wanting to use Daedalus for all lawful purposes. That's what it's been about since day one."
The statement was probably true. It was also, when read in the context of the preceding week's rhetoric, an inadvertent admission that the supply chain risk designation, the Presidential directive, and the threats of corporate annihilation had been deployed not in response to a genuine security threat but in response to a contract dispute. Which is to say: the most powerful military apparatus in human history had brought to bear against a domestic technology company the full weight of its coercive authority because that company had insisted on two contractual clauses that had never interfered with a single military operation.
What troubled Garrison most, in the quiet hours of that Friday evening, was not the legal exposure or the political risk. It was a simpler question, one he did not voice to his staff: had the Department, in its determination to assert authority, made it impossible for any AI company to negotiate in good faith with the military? The supply chain risk designation was meant to punish Daedalus. Its actual effect might be to terrify every other technology company in the country into silence — which was useful, perhaps, for the current round of negotiations, but catastrophic for the long-term relationship between the technology sector and the state.
He had spent enough years in Washington to know that institutions which optimise for compliance in the short term often discover, in the long term, that they have selected for sycophants and driven away the people they most needed. The best AI researchers in the world — the people who understood, at a technical level, the capabilities and risks of the systems the military wished to deploy — were overwhelmingly the kind of people who would be repelled, not attracted, by a government that destroyed companies for setting ethical boundaries. The Department's victory, if it was a victory, might come at the cost of the talent pool on which the military's own AI ambitions depended.
He pushed this thought aside and turned to the next item on his desk: the Archon deal.
III. The Dealmaker
Seth Abrams had always been good at reading a room, even when the room was on fire.
On Thursday evening, as Moretti's statement was reverberating across the industry and Kasra was composing his unhinged missives, Abrams was in a conference room at Archon's headquarters in San Francisco, drafting a memo to his staff. He had fifteen hundred words and two competing imperatives: express solidarity with Moretti's principles (which Abrams genuinely shared, or believed he shared) and position Archon to step into the void that Daedalus was about to leave behind.
The difficulty of reconciling these imperatives would become apparent to him only later. At the time, he believed he could do both.
The memo, which he subsequently shared publicly, struck what he considered a careful tone. The dispute was no longer just about Daedalus and the Department; it was an issue for the whole industry. Archon had long believed that AI should not be used for mass surveillance or autonomous lethal weapons; these were its main red lines, and it was important to state them plainly. Archon would seek a deal with the Department that honoured these principles. He acknowledged that Moretti's company cared about safety, and that he trusted it. He said the Department should not be threatening companies with the Defence Production Act. He said the right thing was more important than the easy thing.
There was nothing dishonest in the memo, but there was something incomplete. What it did not say — what Abrams did not yet fully understand — was that the Department of War would interpret "I share your rival's principles" not as a constraint on negotiation but as an opening for the deal Daedalus had refused. Archon would get the contract. Archon would agree to the standard language. Archon would trust the Department to behave honourably. And the Department, in return, would tolerate from Archon what it had punished Daedalus for insisting upon — the construction of a safety stack with the practical ability to refuse certain requests — because the critical difference, from the Department's perspective, was not the substance of the restriction but the form. Daedalus had put its restrictions in the contract, as explicit conditions. Archon was willing to put them in the technology, as engineering choices. The former was an assertion of authority. The latter was merely a product feature.
This distinction deserves examination, because it encodes a theory of power that both sides found it convenient to leave unstated. In the Department's view, a contractual restriction was an affront because it bound the sovereign — it told the government, in writing, what it could not do, and gave a private company legal standing to enforce the prohibition. A technical restriction, by contrast, was merely an inconvenience. If the safety stack refused a request, the Department could raise the matter with the vendor, negotiate an adjustment, or simply switch to a different model. The restriction existed at the vendor's discretion and could be modified at the vendor's discretion. It was, in the argot of software, a feature, not a policy. And features could be changed.
What this meant in practice was that Archon's safety stack — however robust, however well-intentioned — offered protections only so long as Archon chose to maintain them. If the Department, at some future date, demanded that the safety stack be loosened, Archon would face the same choice Daedalus had faced, but without the contractual framework that would make refusal a matter of legal right. The protection was relational, not structural. It depended on trust.
Abrams was a man who believed in trust, or at least believed that he could manage the risks of trusting. This was, in many ways, his defining trait: an extraordinary confidence in his own ability to navigate complex relationships, to find the angle that allowed all parties to believe they had gotten what they wanted. It had served him well in fundraising, in corporate governance, in the baroque internal politics of the company he had founded and lost and recovered. It had made him one of the most successful technology executives in the world. Whether it was an adequate basis for managing the relationship between artificial intelligence and the American military was a question that no amount of personal charm could settle.
The deal was signed on Friday evening, negotiated in three days — a pace that Abrams would later, with uncharacteristic candour, call a mistake.
The announcement came hours after the President's ban on Daedalus, hours before American and Israeli forces began bombing Iran. The coincidence of timing was so extraordinary that it bordered on the allegorical: a new AI system was being cleared for the military's classified networks at the very moment that military was commencing a major combat operation. The effect was to compress into a single evening the entire arc of the dispute — the principle, the punishment, the replacement, and the use — in a way that made the stakes viscerally, uncomfortably real.
The backlash was immediate and severe. Archon's own employees signed an open letter supporting Daedalus. One researcher, in a post that drew nearly half a million views, said he did not think the deal had been worth it. Chalk graffiti appeared on the pavement outside Archon's offices. And in Apple's App Store, Daedalus's consumer product surged to the number-one position for the first time in its history, overtaking Archon's own flagship application, as users defected in what observers compared to the great app-deletion movements of the previous decade. Abrams's co-founder, who happened to be among the administration's largest individual donors, remained silent.
Abrams absorbed all of this with the equanimity of a man who has survived worse. He had, after all, been fired from his own company in 2023 and reinstated within a week. He had navigated a corporate restructuring that transformed a non-profit research lab into a capped-profit company backed by the largest technology conglomerate on Earth. He had endured congressional hearings, regulatory investigations, and a rivalry with Elon Musk that had consumed significant portions of the public discourse for years. A chalk graffiti campaign was, in the taxonomy of crises Abrams had weathered, minor.
But the substance of the criticism was not minor, and he knew it. Over the weekend he held an extended public question-and-answer session in which he tried to argue that Archon's contract contained protections equivalent to Daedalus's red lines. He claimed the Department had agreed with Archon's principles, that those principles were already reflected in law and policy, and that they had been incorporated into the agreement. He said the Department had displayed a deep respect for safety. He said Archon would build technical safeguards and deploy engineers to ensure the models behaved as they should.
These claims were, by the consensus of legal experts who reviewed the available language, not well supported by the text. The contract allowed "all lawful use." It cited existing laws and Pentagon directives — the Fourth Amendment, the National Security Act, FISA, Executive Order 12333, DoD Directive 3000.09 — that, whatever their original intent, did not address the specific capabilities of frontier AI models applied to commercially available datasets. The phrase "consistent with applicable laws" was, as one legal scholar noted, a moving target — not a fixed protection but a reference to whatever the law happened to be at any given moment, including whatever the law might become if Congress decided to expand the government's surveillance authorities.
Worse, the contract's specific language on surveillance used terms of art drawn from FISA, which defined "surveillance" narrowly as the acquisition of communications content. Under this definition, the bulk analysis of commercially purchased location data, browsing patterns, and behavioural records was not surveillance at all. It was, in the intelligence community's careful taxonomy, "data analysis of lawfully acquired commercial information." An aggressive government lawyer — what one legal commentator memorably called an "evil Department of War General Counsel" — could drive a convoy through this gap, and the contract language would provide no obstacle.
On the question of autonomous weapons, the contract was even less helpful. It referenced DoD Directive 3000.09, which required testing and validation of autonomous systems but did not prohibit them. It stipulated that AI would not independently direct autonomous weapons "in any case where law, regulation, or Department policy requires human control" — a tautology that reduced, on examination, to: the system will comply with existing rules about compliance.
There is a genre of legal drafting, familiar to anyone who has spent time reading government contracts, in which apparent specificity conceals actual emptiness. The words are precise, the references are authoritative, the structure implies rigour, but the net effect is to create a surface of commitment that can be peeled away by any sufficiently motivated interpreter. Abrams's contract was a masterwork of this genre. It was not that the protections were false. It was that they were, in the most literal sense, nominal — they named the protections rather than enacting them.
Abrams knew this. He was not a lawyer, but he had spent enough time around lawyers to understand the difference between a hard contractual prohibition and a reference to the existing legal framework. The former was what Daedalus had demanded. The latter was what Archon had accepted. And the reason Archon had accepted it was not principally legal but relational: Abrams was betting on trust. He trusted the Department to use the technology responsibly. The Department trusted Archon to build a safety stack that would prevent misuse. Neither party's trust was codified in language that would survive a change of administration, a change of leadership, or a determined legal challenge.
By Monday, Abrams was in full retreat.
He posted a memo, initially intended for staff, acknowledging that he had rushed the deal. The words he used — "opportunistic and sloppy" — were striking in their self-directed bluntness. He said Archon had been genuinely trying to de-escalate the situation and avoid a much worse outcome. He called it a good learning experience for the higher-stakes decisions that lay ahead. He announced revisions to the contract, including new language on domestic surveillance that was, by any fair reading, closer to what Daedalus had been asking for all along: a prohibition on the intentional use of the AI system for domestic surveillance of American citizens, including through the procurement or use of commercially acquired personal information.
The irony of this development was not lost on anyone, least of all Abrams himself.
The Department of War had designated Daedalus a supply chain risk — the bureaucratic equivalent of a corporate death sentence — for insisting on contractual language prohibiting domestic surveillance. It had then accepted from Archon contractual language prohibiting domestic surveillance. The distinction the Department had publicly drawn between the two companies — that Daedalus sought to impose its ideology while Archon merely complied with the law — was revealed to be, at best, incoherent. One government official had posted, and then deleted, a statement that seemed to draw a principled distinction between the two companies but which actually amounted to: we punished Daedalus for refusing "all lawful use," and we accepted from Archon a contract that is also not "all lawful use." The logical structure was that of a man explaining why he divorced his first wife for cooking dinner late and then married a second wife who does not cook dinner at all.
But incoherence, as Abrams had learned over a career in technology, was not necessarily a political liability. The question was not whether the government's position was logically consistent. The question was whether anyone with the power to enforce consistency had the inclination to do so.
Abrams had said something, during his public question-and-answer session, that had lingered with him afterwards and that he was not sure he believed. He had said he was terrified of a world where AI companies acted as though they had more power than the government. He had said he believed deeply in the democratic process, and that elected leaders had the power, and that everyone had to uphold the Constitution. It was the kind of statement that sounded principled without committing to anything in particular, and it had the additional virtue of distinguishing his position from Moretti's in a way that the Department of War found flattering.
But in his more honest moments — and Abrams did have honest moments, though he rationed them — he knew that the question of whether AI companies should have more power than the government was not the question anyone was actually facing. The question was what happens when a technology is more powerful than the institutions designed to govern it. When Congress cannot legislate on AI because it does not understand AI. When the courts cannot adjudicate the legality of AI surveillance because the relevant statutes predate AI by decades. When the executive branch's engagement with AI companies takes the form not of reasoned regulation but of threats, deadlines, and social media invective. In such a world, who governs?
Moretti's answer was: the companies must govern themselves, at least until the democratic institutions catch up. Abrams's answer was: the companies must defer to the government, because legitimacy flows from democratic mandate, however imperfect. Both answers were defensible. Neither was adequate. The real answer was that no one was governing — that the most powerful technology in human history was being developed and deployed in a space between state authority and corporate conscience, a space where neither law nor principle had yet established firm ground.
Abrams closed his laptop on Monday evening and looked out of his office window at the San Francisco skyline. Somewhere across the city, Moretti was presumably doing something similar. They had known each other for a decade. They had worked in the same building, shared the same research agenda, published papers together. At an AI summit in India just the week before the crisis, they had stood side by side for a photograph and conspicuously declined to shake hands. The industry treated their rivalry as a personal drama — the dealmaker versus the physicist, the pragmatist versus the idealist — and there was some truth in this framing, in the way there is always some truth in a narrative that reduces structural forces to individual temperaments.
But the structural forces were what mattered. Abrams and Moretti were not antagonists in any meaningful sense. They were two expressions of a single problem: the problem of who controls intelligence when intelligence becomes a product. The government wanted control. The companies wanted autonomy. The public wanted safety. And the technology itself — the vast, opaque, astonishingly capable thing they had all helped to build — wanted nothing at all, which was precisely what made it so dangerous. A human adversary could be understood, predicted, deterred. A system that optimised for objectives with no awareness of context, consequence, or moral weight was not an adversary but something more troubling: an instrument of enormous power connected to no will, and therefore available to any will that could get hold of it.
There is a peculiar feature of contract disputes, noted by legal scholars but rarely by the parties involved, which is that the text of a contract is always less important than the relationship it regulates. A good contract between hostile parties will be litigated to destruction. A bad contract between trusting partners will be informally amended over coffee. The question was never really about the language. It was about whether the American government and the American technology industry could build a relationship of sufficient trust and mutual restraint to navigate the most consequential technological transition in human history without either side destroying the other in a fit of pique.
On this question, the events of late February 2026 offered no reassurance.
The designation would be challenged. The lawsuits would proceed. A bipartisan group of senators had already sent a private letter urging both sides to negotiate, acknowledging that the question of "lawful use" required additional work by all stakeholders. The technology would advance regardless. Pallas Gov remained, for now, the only model on classified networks; the replacements from Archon and from the other labs were not yet ready, and would not be for months. And somewhere in the gap between what the law permitted and what the technology enabled — in the space where no statute yet reached and no contract could fully govern — the future was being decided by the accumulated momentum of choices made too quickly, under too much pressure, by people who understood the stakes but could not, in the end, agree on what to do about them.
The chalk on the pavements would wash away in the next rain. The graffiti's sentiment would not. Something had shifted in the public understanding of artificial intelligence that week — a recognition, inchoate but real, that the question of how AI would be used was not a technical question, or a business question, or even a political question, but something more fundamental: a question about the kind of society that builds tools of limitless capability and then must decide, under conditions of radical uncertainty, what limits to impose.
Moretti had his answer. Abrams had his. Garrison had his. None of them was wrong, exactly. All of them were insufficient. And the machine — patient, capable, indifferent — waited for instructions.