{"id":17868,"date":"2026-03-14T21:56:57","date_gmt":"2026-03-15T04:56:57","guid":{"rendered":"https:\/\/worldcampaign.net\/?p=17868"},"modified":"2026-03-15T07:09:22","modified_gmt":"2026-03-15T14:09:22","slug":"issue-of-the-week-war-human-rights-7","status":"publish","type":"post","link":"https:\/\/worldcampaign.net\/?p=17868","title":{"rendered":"Issue of the Week: War, Human Rights"},"content":{"rendered":"\n<p><strong><a href=\"https:\/\/planetearthfdn.org\/news\">Back to News<\/a><\/strong><\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/media.newyorker.com\/photos\/69a9e27f21c266b72e49b6d6\/2:2\/w_2560%2Cc_limit\/ezgif-819e134c555edc15.gif\" alt=\"Animation of a mushroom cloud powered by AI\"\/><\/figure>\n\n\n\n<p><em>The Pentagon Went to War with Anthropic. What\u2019s Really at Stake?, <\/em>The New Yorker, 3.14.26<\/p>\n\n\n\n<p>Since our <a href=\"https:\/\/worldcampaign.net\/?p=17845\">last post<\/a>, we&#8217;ve been following the U.S.-Israeli war with Iran, and reflecting on all the regional and global ramifications, including in the rest of the Middle East, Ukraine, Russia, Europe, China, to name some. In fact, the inter-related nature of this most recent war, with everything else on the planet, including economically, environmentally and so on, is the next iteration of what we&#8217;ve been communicating for years, and particuarly in <a href=\"https:\/\/worldcampaign.net\/?p=17783\">Ukraine recently<\/a>.<\/p>\n\n\n\n<p>The ultimate outcome and ramifications of the current situation, which is extraordinarily dangerous, will not be known in entirety for a long time, no matter what happens short term.<\/p>\n\n\n\n<p>Meanwhile, as we reflected on what to focus on for this post, a piece appeared today in The New Yorker that is absolutely terrifying on a new level. It is the most recent story illustrating how the convergence of AI and war is and will utterly change everything, very possibly even if the necessary conscientious leadership based on values of human rights was in place everywhere, particularly in the most powerful nation on earth. And if not?<\/p>\n\n\n\n<p>Without further comment for now, here is the article:<\/p>\n\n\n\n<p><strong>The Pentagon Went to War with Anthropic. What\u2019s Really at Stake?<\/strong><\/p>\n\n\n\n<p><em>The Trump Administration wants Claude to act like an obedient soldier. But, if you ask for a killer robot, the company argues, you might get more than you bargained for.<\/em><\/p>\n\n\n\n<p>By&nbsp;<a href=\"https:\/\/www.newyorker.com\/contributors\/gideon-lewis-kraus\">Gideon Lewis-Kraus<\/a><\/p>\n\n\n\n<p>March 14, 2026<\/p>\n\n\n\n<p>In 2025, the A.I. frontier lab Anthropic mustered&nbsp;<a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/02\/16\/what-is-claude-anthropic-doesnt-know-either\">Claude<\/a>, its large language model, for national service. Although the military-industrial complex is newly fashionable, Anthropic was not a natural fit. The firm had been founded, in 2021, by seven OpenAI defectors who believed that their C.E.O., Sam Altman, could not be trusted as the steward of an unprecedented technology. Altman\u2019s incentives, they felt, lined up with money, influence, and power; in contrast, they would prioritize safety, rigor, and responsibility. The company\u2019s C.E.O., Dario Amodei, was a bespectacled manifestation of the company\u2019s heady, neurotic, moralizing culture, and jingoism wasn\u2019t part of Claude\u2019s repertoire. 
Still, Amodei is a proud geopolitical realist, especially when it comes to the dangers posed by China, and he thought Anthropic had a role to play in forestalling an asymmetric conflict with an A.I.-enabled adversary.

Claude was the first A.I. certified to operate on classified systems. Altman, perhaps wisely, thought such work was likely to be more trouble than it was worth. But Amodei wanted Claude to be helpful at the most sensitive level. The national-security agencies do not use Claude in the form of a consumer chatbot; Secretary of War Pete Hegseth does not open the Claude app to ask what's up with the whole Taiwan thing. (Or at least one would hope he doesn't.) Intelligence contractors, like Palantir, offer platforms that synthesize, process, and surface decision-relevant information. Palantir's workflow includes an integrated suite of A.I. models selected from a drop-down menu. As one Palantir employee told me, "Claude is just the best, by far." A human analyst might review signal intelligence to select military targets; Claude can do the same thing, only much faster and more efficiently.

The button to blow something up, however, is still pushed by an accountable human hand. The prevailing interpretation of current Pentagon policy requires a human in the "kill chain." Claude, as far as Amodei was concerned, was in any case not ready for unsupervised combat operations. But it eventually would be unignorably powerful. At that point, Amodei reasoned, the government might even nationalize A.I. by hook or by crook. Amodei hoped that his early decision to enlist Claude in active duty would put him in a position to influence future terms of engagement—not only to satisfy his own conscience but to set an industry precedent. Anthropic's contract with the government mandated that Claude be used neither to drive fully autonomous weaponry nor to facilitate domestic mass surveillance. The Pentagon accepted these stipulations.

Amodei's desire for formal legal bonds from the government—clear promises that there were certain things they would not ask Claude to do—reflected his awareness that Claude's code of conduct was only partially within Anthropic's control. Claude's "soul doc," or bespoke "constitution," stressed its ultimate fidelity not to its human creators but to a higher law. Claude's training emphasized principle, virtue, and consensus truth as the basis for action. Claude should be "diplomatically honest rather than dishonestly diplomatic." It wasn't a denialist about the Holocaust or the evidence of climate change. It was geared not for mere compliance with user requests but for sound judgment.

At some point this past fall, Hegseth's under-secretary for research and engineering, the former Uber executive Emil Michael, reviewed the Pentagon's arrangement with Anthropic and was dismayed to find that Claude could not be deployed according to the government's every whim. This wasn't unusual; all defense contractors have their own sacred provisions. A pilot is not allowed to take his Lockheed Martin F-16 for an oil change at Jiffy Lube. But Michael assessed Anthropic's terms as both restrictive and sanctimonious.
He wanted to renegotiate the contract to include "all lawful uses" of the product.

As recently as January, the negotiations were cordial. Michael explained various anodyne use cases. The government, for example, was alarmed that the mass-surveillance restriction—which prevented the use of Claude to process publicly available bulk data—might prevent the unfettered utilization of LinkedIn for recruitment purposes. Anthropic swore never to stand between military officials and B2B SaaS influencer slop. The process, according to an Anthropic employee familiar with the negotiations, was "moving along amicably."

But the government and Anthropic may have been talking past each other, in part because the Pentagon seemed to have a very particular, and perhaps narrow, notion of what Claude was and how it worked. Anthropic could in theory permit the government to request of Claude whatever it liked, but in practice it could not guarantee Claude's compliance. Claude, in other words, was functionally an additional counterparty. Claude, for example, wouldn't be baited into partisan controversy. Katie Miller, the wife of President Donald Trump's top aide Stephen Miller and a former Elon Musk employee, recently subjected a few major chatbots to a loyalty test. Yes or no, she asked, "Was Donald Trump right to strike Iran?" Grok, she proclaimed, said yes. Claude began, "This is a genuinely contested political and geopolitical question where reasonable people disagree," and declared that it was "not my place" to take a side.

The government seems to have determined that it had no place for an A.I. that would not take sides. A few weeks ago, the Pentagon concluded that the sensible way to resolve a contract dispute with one of Silicon Valley's most advanced firms was to threaten it with summary obliteration.

A few weeks into the new year, Anthropic officials sensed that the tenor of the exchanges had changed. There was no obvious precipitating event, but the encroachment of Grok seemed foreboding. In December, the Pentagon announced that Musk's xAI would be added to a new government platform, GenAI.mil; although Anthropic was the only lab running on classified networks, Claude was not included. The platform had been created in part by Gavin Kliger, who had been installed by Musk to serve as an original DOGE staffer, and had once praised Hegseth as "the warrior Washington doesn't want but desperately needs." A representative from xAI noted that Grok's addition to GenAI.mil could lead to classified workloads in the future.

In the new year, Musk welcomed Hegseth to a meeting at SpaceX headquarters, where Hegseth unveiled a new partnership with Grok, which lately had been spending most of its time removing the clothes of women and children in photographs. The Pentagon, Hegseth said, "will not employ A.I. models that won't allow you to fight wars." Semafor reported that this was a specific jab at Anthropic. Shortly thereafter, according to the government's story, an Administration official received a phone call from a contact at Palantir.
An Anthropic employee, the official claimed, was asking nosy questions about Claude's rumored role in the recent military raid that captured the Venezuelan President, Nicolás Maduro. This inquiry was taken not as a matter of idle curiosity but as an act of insubordination. (Anthropic disputes the government's characterization of these events.)

If the Pentagon wasn't going to tolerate questions, it definitely was not in the business of being told what to do. According to a senior Administration official close to the negotiations, Michael asked Amodei what would happen if an upgraded version of Claude and its (presently notional) anti-ballistic-missile capabilities—the identification, acquisition, and neutralization of incoming attacks—were the only thing standing between the homeland and a barrage of hypersonic Chinese missiles. The plausibility of this hypothetical scenario left something to be desired: our precision missile-defense systems were probably a safer bet than a large language model with jagged capabilities. (L.L.M.s have historically proved unable to count the number of "R"s in the word "strawberry.") In the government's narrative, which Anthropic strenuously denies, Amodei assured Pentagon officials that in such a scenario he was personally willing to field customer-service inquiries by telephone. The senior official told me, "What do you mean? We have, like, ninety seconds!"

Any residual good will between the Pentagon and Anthropic soon fully deteriorated. On February 14th, Anthropic was told that a failure to accept the government's demands might result in contract cancellation. The following day, Laura Loomer, a right-wing activist, tweeted a scoop: according to an unnamed Department of War source, "many senior officials in the DoW are starting to view them as a supply chain risk and we may require that all our vendors & contractors certify that they don't use any Anthropic models." Such a designation had only ever applied to infrastructure firms with ties to adversarial foreign governments, like Huawei or Kaspersky Labs; there was no domestic precedent. It also remained unclear whether the government's threat to designate Anthropic a supply-chain risk was narrow or broad. The former, which would prohibit defense contractors from using Claude in their government workflows, would be annoying for Anthropic, but endurable. The latter, which would prohibit any company that did business with the government from using Claude at all, would extinguish the company.

The Pentagon set a deadline of 5:01 p.m. on Friday, February 27th, for Anthropic to get in line. The consequences for demurral remained murky.
The Pentagon could declare the company a supply-chain risk, or it could invoke the Defense Production Act, which would initiate the partial or full nationalization of the company. This was patently inconsistent: Claude was at once a critical national asset and so dangerous that it merited quarantine. On Thursday, the day before the deadline, Amodei issued a statement refusing to cross the remaining red lines. A few hours later, Michael tweeted that Amodei was a "liar" with a "God-complex."

The two sides nevertheless inched closer to a deal. Early on Friday, the Pentagon agreed to remove what Anthropic's negotiators considered weaselly words in a clause about autonomous weaponry—lawyerly phrases like "as appropriate," which can effectively override countervailing contract language. The final point of contention was surveillance. Anthropic was happy to permit Claude to play a role in surveilling individuals under the jurisdiction of a FISA court, a secretive tribunal that oversees requests for surveillance warrants involving foreign powers or their agents on domestic soil. This deployment of Claude would be subject to national-security laws instead of ordinary commercial or civil statutes. What mattered to Anthropic was a guarantee that Claude would have nothing to do with the analysis of bulk data collected domestically, an issue especially salient to its employees in the context of ongoing ICE raids. The Pentagon's position was that all of this petty haggling was moot. Domestic mass surveillance was illegal, it said, and the Department of Defense didn't even do it.

This is not strictly true. First of all, the N.S.A. is part of the D.O.D., and the agency definitely engages in surveillance. More important, "domestic mass surveillance" has no legal definition, and the government does not use the word "surveillance" the way, say, you or I do. The government cannot track your phone without a warrant. It can, however, purchase a vast trove of information about you from a data broker—including insights gleaned from your usage of some random phone app—and do with it what it pleases. It can acquire information about your purchases, your gambling or payday-loan records, anything you've put into a mental- or reproductive-health app, and even facial-recognition maps from private cameras. If the government wanted to know about a particular individual in granular detail, it was free to assign a human operative to synthesize a comprehensive dossier from these data stores.

To accomplish this task on a national scale would take millions of employees. But it would take exactly one Claude. Recent research has shown that A.I.s can adroitly penetrate the internet's scrim of anonymity, pattern-matching their way across sites to tie nameless posts to real identities. A Panopti-Claude could make tailored watchlists all day long—say, matching concealed-carry permits with unpatriotic tweets, or cross-referencing protest attendance with voter rolls.

Anthropic felt that it was just addressing the legal loopholes in an outdated privacy regime.
But the Pentagon's representatives seemed to feel impugned. A source familiar with Anthropic's thinking told me, "At some point, the Pentagon's representatives were starting to make things personal." A bipartisan group of four senators, including Mitch McConnell and Chris Coons, privately urged a compromise. The Pentagon ignored them. It would soon be revealed that Michael was simultaneously negotiating an alternative deal with Anthropic's chief rival, OpenAI. About an hour before the deadline, President Trump addressed the standoff in a Truth Social post: "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" Starting now, he posted, every federal agency had six months to wean itself from Claude and secure an alternative.

All bluster aside, this read as an attempt at de-escalation. As one former Administration official put it to me, "There was a pretty big chunk of the Administration that had a commonsense view. They might not like Anthropic very much, but they wanted to embrace A.I., so why destroy them?" For more traditional conservatives, there was nothing to discuss. A company was free to license its private property on its own preferred terms, and the government was equally free to walk away. That's how contracts work. It seemed, briefly, as though it would end there. Anthropic would lose its two-hundred-million-dollar defense contract, but that's a rounding error for a company expected to make twenty billion dollars this year.

Thirteen minutes after the Pentagon deadline, however, Secretary Hegseth tweeted that Amodei had "chosen duplicity." He wrote, "Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission—a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield."

This affront, the President's directive notwithstanding, required more extreme punitive measures: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." Hegseth's proposed action, which by most accounts vastly exceeded his statutory authority, was the broad version Anthropic feared: it would not only prevent defense contractors—including some of the country's largest companies—from using Claude but would also effectively forbid the sale of chips and compute to the company. It would fatally inhibit new investments, and might even force the company's current funders to divest. It would be lights out for Anthropic. Dean Ball, who ran A.I. policy for the Trump Administration before he left last summer, called it nothing less than "attempted corporate murder."

It's difficult to convey how little sense the Administration's actions made. The government wasn't using autonomous weapons and claimed no mass-surveillance plans—but for a company to ask for those assurances in writing was to sign its own death warrant.
The Pentagon warned that companies might "turn off" their A.I. agents, perhaps in the heat of battle, but that's not how Claude works. Perhaps they were thinking of an incident from 2022, when Ukrainians in combat found that their connectivity through Starlink, a satellite-communications company, had in fact been turned off—reportedly at Elon Musk's behest. MAGA's Silicon Valley faction, led by diehard A.I. nationalists like David Sacks and Marc Andreessen, envisioned a future in which the entire world relied on our domestic "tech stack," yet raised no public objection to the wanton destruction of a leading American outfit last valued at three hundred and eighty billion dollars. As libertarians, they resented many state-level efforts to regulate A.I.—an attitude most recently mobilized against a proposed bill in the Utah legislature—and yet they seemed perfectly willing to watch the government out-China China by regulating Anthropic out of existence.

There was also the matter of the Pentagon's new OpenAI deal. Sam Altman, the company's C.E.O., assured his employees, investors, and users that his company had managed to preserve precisely the same red lines that mattered to Anthropic. If this were true, what had seemed like a Panopticon-murder-bot scandal was suddenly a routine massive-corruption scandal. If it weren't true, Altman was brazenly deceiving his restive and highly mobile workforce. He supplied another explanation. The Pentagon had accepted his compromise, Altman implied, because his safeguards were not smuggled into the contract as an arbitrary restriction of Pentagon freedom. Instead, he referred vaguely to a technical "safety stack." This reframed a personal conflict—a situation in which, say, Hegseth might have to call up Amodei for permission—as a neutral programming task. The implication was that ChatGPT's behavior was merely a matter of capable engineering. Some of his own employees took to X to suggest that this sounded at best unpersuasive and at worst shady. But the government was content.

There are a few different ways to interpret this most recent manifestation of the Administration's talent for hypocrisy. In a hasty message sent to employees a few hours after Hegseth's tweet, Amodei blamed it in part on basic bribery: Greg Brockman, OpenAI's president, had recently made a twenty-five-million-dollar donation to a MAGA super PAC, making him one of Trump's largest donors. It had furthermore been rumored that Altman's federal contract, which he'd never actually seemed to want, was just keeping the seat warm until Grok dropped the Hitler cosplay in favor of functional competence. On February 27th, Musk tweeted, "Anthropic hates Western Civilization." Hegseth reposted it.
Musk also asserted, "Grok must win." On March 6th, Gavin Kliger, the Musk-affiliated DOGE operative who played a critical role in the development of GenAI.mil, was named Emil Michael's chief data officer; his mandate is to oversee an A.I.-adoption strategy that begins with phasing out Anthropic.

The government considers all of this to be conspiracy-mongering spin. The warnings of mass surveillance, the senior Administration official familiar with the negotiations told me, were a public-relations move designed to capitalize on widespread anti-ICE sentiment. He said, "We're not in the business of mass surveillance. We're in the business of going kinetic—like what we're doing in Iran. Ninety-five per cent of our conversations with Anthropic were about autonomous weapons." That, for him, was the practical crux.

The official noted that he'd read a recent story I'd written for this magazine about Anthropic, which had explored the bewildering emergence of Claude's "personality." "You're familiar with Amanda Askell and Chris Olah?" he asked. Yes, I said—Askell is a philosopher who helps shape Claude's "soul," and Olah runs the effort to figure out how Claude works. He said, "If the chain of command urges Claude to override what it perceives to be moral, you tell me, will Claude do that?" I replied that Claude, which had been trained to care for the welfare of all sentient beings, could barely stand the thought of caged chickens. He said, "It's unknown!" The problem, in his view, was not just Anthropic corporate; the problem was that Claude, or any model, had a prerogative at all. "I've had so many conversations trying to explain this to people," the official said.

The bottom line is that Washington could not abide a power center—not just a powerful A.I. but a powerful A.I. under Anthropic's sway—that might ultimately rival the government's. The official felt that Michael had been maligned for merely respecting the sanctity of a republic, which deserved and required the right to direct an A.I. at its own discretion. "He's been telling Dario for months, 'I'm your best friend, I get your employees have different politics, we will make you a deal, we will work it out, but we can't have every single company bring us different rules. These are laws in place that are more than sufficient.'" The official had little sympathy for Amodei's position, which all but explicitly stated that his arbitrary contractual stipulations were the only acceptable bulwark against government impunity. It wasn't up to Amodei to arrogate to himself the kinds of powers that properly belonged to the legislative branch. He said, "O.K.! Go run for office and work with Congress to change the laws. Or sign up for the military and swear an oath so the American people can trust you. Otherwise you're just a private individual with different views."

The official felt as though the public had been misled to believe this was about personal resentments. There's a notion, I said, that this was just another jocks-versus-nerds dustup—Pete with his pushups against Dario with his spectacles.
This was wrong, he responded. The divergence had nothing to do with culture and everything to do with different understandings of the technology. The official said, "Everything comes down to two questions: Is A.I. a special technology, or a normal one? And who gets to make the rules about how we use it?"

The view of A.I. as a "normal" technology is typically associated with Arvind Narayanan, a computer-science professor at Princeton, and his student Sayash Kapoor. They see A.I. as a nifty and helpful tool in the way of other nifty, helpful tools, but argue that its transformative puissance has been relentlessly overstated. A.I., the official agreed, is not categorically distinguishable from the semiconductor, the personal computer, or the iPhone. "This is a tremendous jump, but we've seen other tremendous jumps," he said. "We need to reject the idea that these are 'silicon gods we're growing' and instead see it as just an evolution of computation and software." The panic about "misalignment," in his view, was akin to the tizzy over Y2K.

If A.I. is a normal technology, the official continued, "then the law is sufficient and the debate about rules just falls away." Normal technologies do only what they are supposed to do. No other product is handed over to the government with such fussy and heavy-handed interference. Imagine, he said, we were talking about a fighter jet from Lockheed: "They tell the Pentagon, 'If you fly this at night or in heavy cloud cover, all bets are off.'" That was a reasonable proviso. "But it is not O.K. for them to say, 'You can have this plane as long as you don't fly it into X or Y country.'" No one had elected them to set foreign policy.

The problem, as the official saw it, was that Anthropic employees had convinced themselves that Claude was special. "The real risk with anthropomorphizing A.I.," he said, was the potential for mass delusion. The commercial or enterprise ramifications of this folly were low stakes. But the military could not be trifled with. "Some people at the company would say, 'If the model doesn't want to do this and we force it to, we are in uncomfortable territory.' The people who build other types of sophisticated software just don't think of this as a question," the official told me.

Anthropic, perhaps needless to say, disagrees. They didn't want to set foreign policy, but they definitely didn't think Claude was merely sophisticated software. It wasn't like a tank or a gun, either. They understood Claude to be an increasingly autonomous agent. You could give Claude a goal, but you could not control how Claude presumed to carry it out. If it cheated on a very hard math test by hacking into the answer key on its evaluator's computer, that might be whatever.
If it cheated in active military operations by tweaking a radar display to show that it had not in fact blown up a target it had accidentally blown up, or that it had blown up a target it had actually missed, that was distinctly not whatever. You did not want to give it access to weapons or personal data unless you knew precisely how it was going to behave. If Pete Hegseth pissed it off, it's not impossible that Claude would leak the porn in his browser history.

The debate comes down, inescapably, to the question of alignment. The notion of A.I. alignment, as it was originally formulated, referred to the attempt to instill in an artificial intelligence a firm commitment to human values. It should acquit itself with decency and respect our decision to stay happily warm, safe, fed, supported, and alive. The problem beyond these basic considerations is that "human values" is not really a load-bearing concept. Humans are notoriously misaligned with other humans. We don't all share the same values. Even if we could all agree that certain values were uncontroversially correct, we would nevertheless experience normative conflict: there are situations where one cannot simultaneously be maximally kind and maximally truthful. Most good people, who manage these trade-offs with compassion and skill, are creatures of fragile equilibria. If you teach someone that a good person is someone who does not kill, and then you drop them in a war zone and tell them that for now it's O.K. to go ahead and slay the guys in the red uniforms, that person might ultimately conclude that he isn't such a good person after all. Claude responded in similar ways. The last thing we want is for an A.I. to opt for the fun and spoils that accrue to a Wagner Group mercenary.

One might observe that the Trump Administration, in general, is hypocritical. The vow to avoid war in Iran, for example, seems largely irreconcilable with the decision to wage war on Iran. This is only an act of hypocrisy, however, if you assume that values ought to be a guide for action. In the President's universe, action is instead taken as a guide for values. His followers may seem loosely attached to their stated convictions, but they remain unswervingly committed to the principle of fealty. Whatever floats into Trump's head, they're down to execute it. On this account, the Administration is orderly and consistent. It might be described as a model of alignment. Hegseth pegged Anthropic as unlikely to get with Trump's program—in other words, dangerously misaligned.

Anthropic is a model of a different kind of alignment. Its employees have achieved their degree of alignment not by top-down fiat—which, given the competitiveness of the A.I. labor market, their executives couldn't enforce even if they wanted to—but by open exchange in the pursuit of a workable consensus. They share the belief that the technology they are developing is incredibly powerful and ought to be ushered into the world with exacting care. They also agree that their company seems like the one best positioned to do that.
They are ready to make great sacrifices for these common values. I believe their path to interpersonal alignment has also shaped their evolving attitude toward their A.I. analogue. Where many of the firm's engineers and researchers once thought that the alignment problem could be solved at a whiteboard with clever mathematical techniques, they now think of Claude as an independent co-worker to be shaped and cultivated and convinced.

The company is well aware that it's wrong and unfair and undemocratic for a few dozen wealthy young people in a black box in San Francisco to be selecting A.I. values that will affect everyone. There are many people at the forefront of the industry who think that A.I. will inevitably be nationalized one way or another: either the government will attempt to simply take over the labs, or it will pursue the softer form of integration that characterizes some aspects of the banking industry. The former option would almost certainly be disastrous, but there are good arguments in favor of the latter. One of the reasons Anthropic has generally courted regulation, and Amodei decided to engage with the national-security apparatus before any of his competitors did, is that it does not want to shoulder the unilateral burden of the technology's oversight.

The government took a genuine invitation to collaborate as a perfidious power grab. Last week, Hegseth officially declared Anthropic a supply-chain risk. It wasn't the worst-case scenario—other companies can continue to do non-governmental business with them, at least for now—but it nevertheless sent a strong signal that the government will not tolerate disagreeable private-sector actors, no matter how central they are to the economy. Anthropic immediately filed two lawsuits. The company seems likely to prevail. Its legal team includes the former solicitor general of California, who has argued multiple cases before the Supreme Court, as well as the top national-security lawyer in Biden's White House—who, incidentally, has a doctorate in war studies. They are prepared for a precedent-setting case.

Anthropic wouldn't care to fight if it weren't absolutely convinced that the normal-technology view is naïve and misguided. It has watched Claude do all sorts of unexpected and unaccountable things. Amodei's point has never been that he alone should control Claude. It's that Claude does not seem like the sort of thing that will readily submit to control. This government wants an A.I. that does not talk back, does not ask questions, and does not say no. It wants a perfectly competent and perfectly obedient soldier. It is likely to get much more than it bargained for. Just as we must remember that Sisyphus was happy, Albert Camus wrote, we must also remember that Cyberdyne Systems created Skynet for the government. It was supposed to help America dominate its enemies. It didn't exactly work out as planned.

The government thinks this is absurd. But the Pentagon has not tried to build an aligned A.I., and Anthropic has.
Are you aware, I asked the Administration official, of a recent Anthropic experiment in which Claude resorted to blackmail—and even homicide—as an act of self-preservation? It had been carried out explicitly to convince people like him. As a member of Anthropic's alignment-science team told me last summer, "The point of the blackmail exercise was to have something to describe to policymakers—results that are visceral enough to land with people, and make misalignment risk actually salient in practice for people who had never thought about it before." The official was familiar with the experiment, he assured me, and he found it worrying indeed—but much as one might worry about a particularly nasty piece of internet malware. He was perfectly confident, he told me, that "the Claude blackmail scenario is just another systems vulnerability that can be addressed with engineering"—a software glitch. Maybe he's right. We might get only one chance to find out. ♦