Issue of the Week: War

Coming Soon to a Battlefield: Robots That Can Kill, The Atlantic, September 3, 2019

 

Twenty-eight years ago as we write, movie theaters were packing them in for the most expensive movie ever made up to that point.

Terminator 2: Judgment Day.

“Terminator 2 imagines things you wouldn’t even be likely to dream,” Mick LaSalle wrote in his review for the San Francisco Chronicle.

An AI-driven, robotic, computer-determined nuclear apocalypse was barely averted (or was it, given the scripts of the ongoing sequels, with the latest now upcoming?).

Countless books, movies, and television and streaming series have covered similar ground.

Welcome to the dream come true, or the nightmare.

An article in The Atlantic last Tuesday titled “Coming Soon to a Battlefield: Robots That Can Kill” is a revelation, potentially right out of the Book of Revelation.

The subtitle is:

Tomorrow’s wars will be faster, more high-tech, and less human than ever before. Welcome to a new era of machine-driven warfare.

The article references the blunt talk of an Air Force general about the “Terminator conundrum,” the question of how to grapple with the arrival of machines that are capable of deciding to kill on their own.

Consider this excerpt from the final paragraph of the article:

In the Terminator movies’ dark portrayal, an artificially intelligent military system called SkyNet decides to wipe out humanity. … But “our job is to make sure the robots don’t kill us.”

So was SkyNet’s.

There is so much news going by at a thousand miles per second that this article, among the most read for part of the day last Tuesday, was in danger of being just that: a story for part of a day.

But this is much, much bigger than news of the day.

It is one of a handful of stories of science fiction become reality that will completely alter life on Earth and, given the probable expansion into space, throughout the universe (where who knows what other science-fiction encounters await).

The power of the article lies not so much in the unnerving observations above as in what informs them: the mind-exploding, straightforward, objective journalistic detail of a story most people have no clue about.

Here it is:

“Coming Soon to a Battlefield: Robots That Can Kill”

Zachary Fryer-Biggs, Center for Public Integrity, September 3, 2019

Tomorrow’s wars will be faster, more high-tech, and less human than ever before. Welcome to a new era of machine-driven warfare.

Wallops Island—a remote, marshy spit of land along the eastern shore of Virginia, near a famed national refuge for horses—is mostly known as a launch site for government and private rockets. But it also makes for a perfect, quiet spot to test a revolutionary weapons technology.

If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach.

The secretive effort—part of a Marine Corps program called Sea Mob—was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield.

Lethal, largely autonomous weaponry isn’t entirely new: A handful of such systems have been deployed for decades, though only in limited, defensive roles, such as shooting down missiles hurtling toward ships. But with the development of AI-infused systems, the military is now on the verge of fielding machines capable of going on the offensive, picking out targets and taking lethal action without direct human input.

So far, U.S. military officials haven’t given machines full control, and they say there are no firm plans to do so. Many officers—schooled for years in the importance of controlling the battlefield—remain deeply skeptical about handing such authority to a robot. Critics, both inside and outside of the military, worry about not being able to predict or understand decisions made by artificially intelligent machines, about computer instructions that are badly written or hacked, and about machines somehow straying outside the parameters created by their inventors. Some also argue that allowing weapons to decide to kill violates the ethical and legal norms governing the use of force on the battlefield since the horrors of World War II.

But if the drawbacks of using artificially intelligent war machines are obvious, so are the advantages. Humans generally take about a quarter of a second to react to something we see—think of a batter deciding whether to swing at a baseball pitch. But now machines we’ve created have surpassed us, at least in processing speed. Earlier this year, for example, researchers at Nanyang Technological University, in Singapore, focused a computer network on a data set of 1.2 million images; the computer then tried to identify all the pictured objects in just 90 seconds, or 0.000075 seconds an image.

The outcome wasn’t perfect, or even close: At that incredible speed, the system identified objects correctly only 58 percent of the time, a rate that would be catastrophic on a battlefield. Nevertheless, the fact that machines can act, and react, much more quickly than we can is becoming more relevant as the pace of war speeds up. In the next decade, missiles will fly near the Earth at a mile per second, too fast for humans to make crucial defensive decisions on their own. Drones will attack in self-directed swarms, and specialized computers will assault one another at the speed of light. Humans might create the weapons and give them initial instructions, but after that, many military officials predict, they’ll only be in the way.

“The problem is that when you’re dealing [with war] at machine speed, at what point is the human an impediment?” Robert Work, who served as the Pentagon’s No. 2 official in both the Obama and Trump administrations, said in an interview. “There’s no way a human can keep up, so you’ve got to delegate to machines.”
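To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the numbers quoted above (the quarter-second human reaction time is the approximate value the article cites; everything else is illustrative arithmetic):

```python
# Rough arithmetic behind the speed comparison above. The 1.2 million images,
# 90 seconds, 58 percent accuracy, and ~0.25 s human reaction time come from
# the article; the rest is simple illustration.
images = 1_200_000
total_seconds = 90.0
human_reaction_s = 0.25

seconds_per_image = total_seconds / images
print(f"Machine time per image: {seconds_per_image:.6f} s")      # ~0.000075 s
print(f"Speedup vs. a 0.25 s human reaction: "
      f"{human_reaction_s / seconds_per_image:,.0f}x")

accuracy = 0.58
print(f"Images misidentified at 58% accuracy: {images * (1 - accuracy):,.0f}")
```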

Every branch of the U.S. military is currently seeking ways to do just that—to harness gargantuan leaps in image recognition and data processing for the purpose of creating a faster, more precise, less human kind of warfare.

The Navy is experimenting with a 135-ton ship named the Sea Hunter that could patrol the oceans without a crew, looking for submarines it could one day attack directly. In a test, the ship has already sailed the 2,500 miles from Hawaii to California on its own, although without any weapons.

The Sea Hunter ship
John F. Williams / Released

Meanwhile, the Army is developing a new system for its tanks that can smartly pick targets and point a gun at them. It is also developing a missile system, called the Joint Air-to-Ground Missile (JAGM), that has the ability to pick out vehicles to attack without human say-so; in March, the Pentagon asked Congress for money to buy 1,051 JAGMs, at a cost of $367.3 million.

And the Air Force is working on a pilotless version of its storied F-16 fighter jet as part of its provocatively named “SkyBorg” program, which could one day carry substantial armaments into a computer-managed battle.

Until now, militaries seeking to cause an explosion at a distant site have had to decide when and where to strike; use an airplane, missile, boat, or tank to transport a bomb to the target; direct the bomb; and press the “go” button. But drones and systems like Sea Mob are removing the human from the transport, and computer algorithms are learning how to target. The key remaining issue is whether military commanders will let robots decide to kill, particularly at moments when communication links have been disrupted—a likely occurrence in wartime.

So far, new weapons systems are being designed so that humans must still approve the unleashing of their lethal violence, but only minor modifications would be needed to allow them to act without human input. Pentagon rules, put in place during the Obama administration, don’t prohibit giving computers the authority to make lethal decisions; they only require more careful review of the designs by senior officials. And so officials in the military services have begun the thorny, existential work of discussing how and when and under what circumstances they will let machines decide to kill.

The United States isn’t the only country headed down this path. As early as the 1990s, Israel built an AI-infused drone called the HARPY, which hovers over an area and independently attacks radar systems; the country has since sold it to China and others. In the early 2000s, Britain developed the Brimstone missile—which can find vehicles on the battlefield and coordinate with other missiles to divvy up which ones within a defined area to strike in which order—though it’s scarcely allowed it to exercise all that authority.

And Russian President Vladimir Putin boasted in 2018 about the deployment of a drone submarine that he claimed was equipped with “nuclear ordnance,” suggesting some degree of robotic control over humankind’s most deadly weapon, though he didn’t say how much autonomy the drone would have. The previous year, Putin said that relying on artificial intelligence “comes with colossal opportunities, but also threats that are difficult to predict.” He added nonetheless that the nation that leads in the development of AI will “become the ruler of the world.”

China hasn’t made similarly grandiose claims, but President Xi Jinping unnerved U.S. officials by asserting in 2017 that his country will be the global leader in artificial intelligence by 2030. The country seems primarily to be enhancing its domestic surveillance with facial-recognition and other identifying technologies, although U.S. experts say that technology could be quickly put to military use.

The fear that the U.S. will be outpaced by a rival, namely China or Russia, has already triggered a “tech Cold War,” as retired Army General David Petraeus termed it when asked in a CNBC interview about the challenges now facing Secretary of Defense Mark Esper. Until this year, the Pentagon never said how much its artificial-intelligence work cost, though the Congressional Research Service estimated that the Defense Department spent more than $600 million on unclassified artificial-intelligence work in FY2016 and more than $800 million in FY2017.

In March, the Pentagon said it wants Congress to set aside more—$927 million—for the coming year to advance its artificial-intelligence programs. Of this, $209 million would go toward the Pentagon’s new AI office, the Joint Artificial Intelligence Center (JAIC), established in June 2018 to oversee all AI work costing more than $15 million. Most of the JAIC’s work is classified, and officials have openly talked only about Defense Department AI projects that focus on disaster relief.

But the consulting firm Govini estimated in its 2019 Federal Scorecard that about a quarter of the Pentagon’s AI spending over the past five years has been on basic research, with the rest split between developing software that can sort through data the Pentagon already possesses, and creating better sensors to feed data to computer algorithms—key milestones on the way to machine-driven combat.

The Pentagon has said little about these efforts, but public documents and interviews with senior officials and confidential sources make clear that the military is laying the foundation for AI to take over more and more of military operations—even if the technology isn’t quite ready to take full command.

The Navy’s autonomous ship work was inspired partly by a challenge on Mars.

The planetary scientists who helped send the Spirit and Opportunity rovers to Mars in 2003 knew that after the craft landed at the end of their 286-million-mile journey, urgent high-speed communication would be impossible: A simple instruction sent from Earth to keep one of them from tumbling over a rock would arrive 10 minutes after the tumble. So the scientists had to develop sensors and computers that would enable the rovers to navigate around dangerous features in Mars’s terrain on their own. The effort was a huge success: Originally designed to last just 90 days and roam for half a mile each, the rovers wound up traversing miles of the planet’s surface, over a period of six years in Spirit’s case (only one of which was spent stuck in the sand), and 14 in Opportunity’s.
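For a sense of why real-time control from Earth was impossible, here is a minimal sketch of the one-way radio delay at interplanetary distances (the distances below are illustrative; the Earth-Mars gap varies over time, and the 286 million miles in the passage is the length of the journey, not the signal path):

```python
# One-way radio delay over interplanetary distances, to illustrate why the
# rovers needed on-board autonomy. Distances are illustrative examples only.
SPEED_OF_LIGHT_MILES_PER_S = 186_282

def one_way_delay_minutes(distance_miles: float) -> float:
    """Minutes for a radio signal to cover the given distance."""
    return distance_miles / SPEED_OF_LIGHT_MILES_PER_S / 60

for distance in (34_000_000, 112_000_000, 250_000_000):
    print(f"{distance:>12,} miles -> {one_way_delay_minutes(distance):4.1f} minutes one way")
```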

The achievement caught the attention of scientists at the Naval Surface Warfare Center’s offices in Bethesda, Maryland, who asked a team that helped design the rovers’ sensors to help the Navy create better autonomous ships. But while the Mars rovers traversed rocky, hilly, unmoving terrain, autonomous Navy vessels would operate in moving water, where they would have to survive waves, other boats, marine life, and wildly varying weather conditions. That required mastering and improving the sensors’ identification abilities.

Much of the work was done by Michael Wolf, who joined NASA’s Jet Propulsion Laboratory (JPL) immediately after getting his doctorate from the California Institute of Technology. The Navy declined to make him or other JPL officials available for an interview. But he and his colleagues published early research papers in which they named their solution Savant: Surface Autonomous Visual Analysis and Tracking.

By the time the Navy first tested it, in 2009, according to one of the papers and official photos, Savant looked like the top of a lighthouse, with six cameras fanned in a circle inside a weatherproof case around a ship’s mast. Their images were fed into a computer system that enabled comparison with a data library of sea scenes, including a growing number of ships.

The JPL system was designed to get smarter as it worked, using the continuous self-teaching that lies at the heart of every major AI development of the past half decade. The first test of the naval system used only a single ship. By 2014, though, Wolf had not only integrated better image recognition, but also helped write algorithms that allowed multiple boats to work cooperatively against potential foes, according to the Navy.

Crucially, all of the ships share the data their sensors collect, creating a common view, enabling each of the ships to decide what it should be doing. Together, they function like a smart battle group, or a squad of marines. Each ship does what it thinks is best for itself, but is also part of an orchestrated ballet aimed at swarming and defeating an array of potential targets.

Hence the Navy’s name for the program: Swarmboats. Initially, the ambition was solely defensive—to find a way to prevent an attack like the one on the USS Cole in 2000, in which two suicide bombers rammed a small boat packed with explosives into the side of the military ship while it was refueling in the Yemeni port of Aden. Seventeen sailors were killed, 39 others were injured, and the destroyer was knocked out of service for more than a year of repairs.

In a 2014 test on the James River near Newport News, Virginia, the program proved itself when five boats swarmed others they deemed hostile. That test occurred just as a group of AI enthusiasts at the Pentagon began a major push for more widespread adoption of the technology. The JPL system got the attention of a small, secretive office in the agency—the Strategic Capabilities Office (SCO). Fulfilling its mandate to speed the military’s adoption of advanced technologies, the SCO accelerated the work and pointed it toward potential offensive applications.

Partly as a result, the JPL system for autonomous control has not only been applied to the Marine Corps’ Sea Mob program, but is now being developed for flying drones by the Office of Naval Research, and for use in land vehicles by the U.S. Army Tank Automotive Research, Development and Engineering Center, according to two knowledgeable sources.

The special sensors and brain used by the Sea Mob system, known as SAVAnT
John F. Williams / Released

SCO was run for half a decade after its 2012 creation by Will Roper, a mathematician and physicist who was previously the chief architect of a dozen or so weapons systems at the Missile Defense Agency. When I spoke with him in 2017 about this work, his youthful face lit up and his words gained momentum. “It will do amazing things,” he said. “To me, machine learning, which is a variant of AI, will be the most important impactor of national security in the next decade.”

Roper, who now directs the Air Force’s overall weapons-buying, told reporters this year that although the Air Force has long tried to protect its pilots from being made obsolete by unmanned smart planes, “I get young pilots coming into the Air Force who are super excited about this idea, who don’t view it as competing with the human pilot.”

So far, Roper told reporters, he doesn’t have the AI programs he wants to put “in the warfighter’s hands.” But this will change soon, and the technology will “move so quickly that our policies are going to have a tough time catching up with it.”

AI is, he said, “the technology that has given me the most hope and fear over the last five years of doing this job.” The fear, as Roper explained, is that other countries will find ways to take advantage of AI before the U.S. does. The country that integrates artificial intelligence into its arsenal first might have, Roper said, “an advantage forever.”

Fei-Fei Li was teaching computer science at the University of Illinois at Urbana-Champaign a little more than a decade ago when she began to ponder how to improve image-recognition algorithms. The challenge was in some ways surprisingly straightforward: Computers needed to get smarter, and to do that they needed to practice. But they couldn’t practice well if they didn’t have access to a large trove of images they could learn from.

So in 2007 Li started working on what would become ImageNet, a library of 3.2 million images tagged by humans in order to train AI algorithms. ImageNet has served as a kind of Rosetta Stone for the research now being channeled into the development of futuristic weapons. In 2009, Li and her team of four other researchers at Princeton made the data set public, and the following year teams from around the world started racing to build the best algorithm that could make sense of all those images.

Fei-Fei Li speaking at the AI for GOOD Global Summit
ITU/R.Farrell / Creative Commons

Each year, the algorithms got a little better. Soon, some teams began integrating neural networks, a computer model meant to mimic the structure of the human brain with layers of data-processing clusters imitating neurons. By 2015, two of them—a team from Microsoft and one from Google—reported striking results: They had created algorithms, they said, that could do better than humans. While humans labeled images incorrectly 5.1 percent of the time, Google’s and Microsoft’s algorithms achieved error rates below 5 percent.
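The “better than humans” comparison refers to ImageNet’s top-5 error metric: a prediction counts as correct if the true label appears among a model’s five highest-scoring guesses. Here is a minimal sketch of how that metric is computed from raw scores (the arrays below are toy values, not real model output):

```python
import numpy as np

def top_k_error(scores: np.ndarray, true_labels: np.ndarray, k: int = 5) -> float:
    """Fraction of examples whose true label is NOT among the k highest-scoring classes."""
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]   # k best class indices per row
    hits = (top_k == true_labels[:, None]).any(axis=1)    # true label among those k?
    return 1.0 - hits.mean()

# Toy example: 4 "images", 10 classes; scores are random stand-ins for model output.
rng = np.random.default_rng(0)
scores = rng.random((4, 10))
true_labels = np.array([3, 7, 1, 9])

print(f"top-5 error: {top_k_error(scores, true_labels, k=5):.2f}")
print(f"top-1 error: {top_k_error(scores, true_labels, k=1):.2f}")
```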

Li, who by then had moved to Stanford University, thought this achievement was an interesting benchmark, but was wary of headlines declaring the supremacy of computers. The ImageNet project showed that “for that particular task, with that particular data set, very powerful algorithms with deep learning did very well,” Li told me in an interview. “But as a researcher who has been studying this for 20 years, it was not a moment of ‘Oh, my God, the machines beat humans.’”

Human vision is enormously complex, she explained, and includes capabilities such as reading facial movements and other subtle cues. The ImageNet competition didn’t create computer algorithms that would see the world with all its gradations. But it did help advance machine learning. “It’s very similar to saying you use a petri dish to study penicillin and eventually it’s becoming a general antibiotic for humans,” she said. “ImageNet was that petri dish of bacteria. The work, luckily for us, led to a great progress in deep-learning algorithms.”

An AI doctoral student named Matt Zeiler, for example, achieved 2013’s best ImageNet tagging rate, and then used the AI system he’d developed to launch a company called Clarifai. Clarifai is now one of the tech companies working with the Pentagon on Project Maven, which uses AI to dig through satellite and drone footage and identify and track objects that could be targeted by weaponry.

Image-recognition researchers haven’t solved all the problems. A group of nine scientists from four U.S. universities worked together on a 2018 test to see if pasting small black-and-white stickers on stop signs could trick advanced image-recognition systems. Eighty-five percent of the time, the algorithms determined the slightly altered stop signs weren’t stop signs at all. Researchers call this kind of inability to adapt to conditions “brittleness” in AI, and it is one of the chief problems facing the field. Consider, for example, what an inability to recognize stop signs means for driverless cars.
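That brittleness is easy to demonstrate in miniature: inputs nudged by small, deliberately chosen amounts can flip a model’s output. Here is a minimal, self-contained sketch using a toy linear classifier (the weights and inputs are invented for illustration; attacks on real image models follow the same gradient-based logic at much larger scale):

```python
import numpy as np

# Toy linear "classifier": score > 0 means the input is labeled "stop sign".
# Weights, bias, and the example input are invented purely for illustration.
w = np.array([0.9, -0.4, 0.7, 0.2])
b = -0.1

def classify(x: np.ndarray) -> str:
    return "stop sign" if x @ w + b > 0 else "not a stop sign"

x = np.array([0.3, 0.1, 0.2, 0.4])
print(classify(x))                         # correctly labeled: "stop sign"

# Adversarial nudge: step each feature slightly against the weight direction.
# For a linear model, this is exactly the gradient-based attack direction.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)
print(classify(x_adv))                     # now labeled: "not a stop sign"
print(f"Largest per-feature change: {np.abs(x_adv - x).max():.2f}")
```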

But the news of the striking Google and Microsoft achievements with ImageNet in 2015 helped convince Pentagon officials that “machines could be better at object detection” than humans, recalled Robert Work, who was then the deputy secretary of defense—a crucial skill that could define what gets automatically killed and what gets automatically spared.

“As far as the Department of Defense was concerned, that was an important day,” he said.

The Phalanx is a six-barreled naval gun that shoots 75 bullets a second from the decks of midsize and large Navy ships, and it gets twitchy before firing. It makes numerous small corrections as it starts to track incoming threats two miles out, including missiles and airplanes. It also keeps track of its own bullets, to verify that they’re zeroing in on the target. It does all this without human intervention.

One of its radars, contained in a large dome that makes the overall system look like a grain silo with a gun sticking out its side, constantly searches for new targets; when the movement of a target matches a pattern contained in a library of potential threats, the gun opens fire. The second radar, positioned lower in the Phalanx’s dome, keeps track of where all the bullets are going so that the system can make adjustments as it unloads.

The Phalanx, a six-barreled naval gun that shoots 75 bullets a second
Creative Commons

The Phalanx is not a novelty. It has been on the Navy’s ships for about 30 years, and been sold to two dozen U.S. allies. That long history is one reason naval officers such as Rear Admiral David Hahn are at ease with letting machines make lethal decisions in a conflict. Hahn has a long history with machine intelligence himself: After graduating from the Naval Academy, Hahn was assigned to the USS Casimir Pulaski, a 425-foot vessel that relied on computers to help control its nuclear reactor. “What your role is, as the human, is to supervise those [systems] and then, when you have to act, to intervene,” Hahn told me. But “many times you’re not fast enough, so you’re depending on the machine to act to stop the chain reaction and keep the plant safe so then the humans can go recover it.”

Hahn now runs the Office of Naval Research (ONR), which has a budget of $1.7 billion and oversees the Navy’s efforts to introduce artificial intelligence into more of its systems. “We have all these conversations today about how this is new, or this is different,” Hahn said. “No, it’s just a different form. It might be faster, it might be a more comprehensive data set, it might be bigger, but the fundamentals are the same.”

His office was one of the original funders of the Sea Mob program, but neither he nor the Marine Corps wanted to talk about it or discuss the October 2018 test. A source familiar with the program said it is progressing from identifying ships to being able to make decisions about whether they are friendly or threatening, based on their behavior in the water. That requires tracking their movements and comparing those to new, more complex threat data, still being collected. The software to track the movements of an approaching ship and compare them to potential threats is already in place.

Other naval-autonomy programs are being built around much larger ships, such as the 132-foot Sea Hunter, which first launched in 2016. How the Navy might arm the Sea Hunter isn’t clear yet, but anti-submarine warfare from the ocean’s surface typically requires sonar—either mounted on the ship or dragged along below the surface—plus torpedoes.

One reason autonomy is so important is that commanders worry that radio jamming and cyberattacks might cause ships to lose contact in the future. If that happens, a robotic ship’s weapons systems will have to be able to act on their own, or they’ll be useless.

Robert Brizzolara, who manages the Sea Hunter program, says that much of the work getting the system ready for use on the battlefield is done, and it will be turned over to Naval Sea Systems Command next year so engineers can figure out how the ship might be armed and used in combat. Testing, however, will continue, in an effort to convince commanders it can be a reliable partner in conflict.

“Sailors and their commanding officers are going to have to trust that the autonomous systems are going to do the right thing at the right time,” Brizzolara told me. To create that trust, Brizzolara plans to go to the commanders with “a body of evidence that we can present that will either convince them” or guide new tests.

The Navy isn’t just looking at independent systems that can operate on top of the sea; it’s also experimenting with small unmanned, autonomous submarines that can search for unexploded sea mines and deploy bombs meant to detonate them, saving larger and more expensive ships that carry humans for other tasks. It’s a fertile arena for autonomy, because the chances of accidental casualties underwater are slim.

Hahn described these countermine crafts, the most widely used version being the MK18, as a gateway to broader naval use of unmanned underwater systems. “Does that get you to anti-submarine warfare? No, but it gets you in that domain down the path a ways,” he said. “And then you can understand what the challenges look like if you try to apply that [to an autonomous vessel] instead of to an object that’s moving through the water and has humans guiding it.”

David Hahn
John F. Williams / Released

In February, Boeing was awarded a $43 million contract to build four autonomous submarines by 2022. They’ll each be 51 feet long and capable of traveling 7,500 miles underwater. Once they are built, the Navy plans to experiment with how the vessels might be used to attack both submarines and surface ships, according to Navy documents. Asked about the program, Boeing directed all questions to Naval Sea Systems Command, and a spokesman for the command declined to provide details, saying only that the boats would “undergo a rigorous integration and test plan.”

The Pentagon’s surge of interest in autonomous weaponry is partially due to leaps in computing and engineering, but it is also largely the handiwork of Robert Work, who spent 27 years in the Marines before becoming deputy secretary of defense. Work has spent a lot of his time, both in uniform and out, studying military history and strategy. Even now, having twice served in civilian jobs in the Pentagon, he still has the air of an officer, with clear-lens aviator glasses and an enthusiasm for discussing historic battles that demonstrate key technologies or tactics.

While Work was at the Center for a New American Security, a Washington think tank, in 2013, he and his colleagues were surprised by the outcome of some classified Pentagon simulations of theoretical conflicts with China or Russia. After the Cold War, these kinds of exercises typically ended with either the U.S. claiming a decisive victory, or a nuclear Armageddon. But the new simulations made it evident the U.S.’s technological edge was starting to evaporate, he and others say. New disruptive technologies had leveled the playing field, causing “blue [to get]…its ass handed to it” sometimes, in the colorful vernacular last March of an analyst from the RAND Corporation in Washington. The U.S. is dependent on big aircraft carriers to deliver military might to conflict areas, especially in the Pacific, but those carriers, and the expensive fighters and bombers that go with them, could be rendered useless by a hypersonic missile attack, swarms of inexpensive boats, or cyber weapons.

Robert Work sits beside Air Force General Paul Selva.
Amber I. Smith / U.S. Army

Work concluded that the U.S. needed to step up its game, the way it did during two other moments in the past half century when its battlefield dominance came into question. One was early in the Cold War, when the Pentagon realized it couldn’t adequately defend Europe from a Soviet ground invasion and responded with small nuclear weapons meant to blunt an advance by Soviet forces. The second was in the 1970s, when it came to understand that guided bombs and missiles were needed to give military commanders options short of carpet-bombing entire areas on the battlefield.

In early 2014, Work started publishing reports that outlined the need for cutting-edge, flexible, fast, and inexpensive weapons that could make U.S. forces nimbler and less dependent on aircraft carriers and other vulnerable hardware. Then, in February of that year, President Barack Obama appointed him to be the next deputy secretary of defense. Several months later, Work formally launched his technology initiative.

“In labs and factories around the world … vast amounts of time, money, and manpower are being spent developing the next wave of disruptive military technologies,” Work said on August 5, 2014, speaking at National Defense University. “In order to maintain our technological superiority as we transition from one warfighting regime to another, we must begin to prepare now.”

Work didn’t mention artificial intelligence in his speech, and his initial description of technologies was wide-ranging, focused on the idea of agility and flexibility rather than a specific type of military weapon. But once Work and others had identified the types of technologies they thought could tilt the balance of power in the U.S.’s favor, namely cyberwarfare, artificial intelligence, and hypersonics, they began to shift money and manpower to make those technologies ready for combat.

Other countries saw the U.S. push for new technologies and decided they needed to come up with innovative military options of their own. In 2016 China unveiled the Junweikejiwei, its new military research and development agency modeled after the U.S.’s Defense Advanced Research Projects Agency (DARPA). Similarly, Russia has the Skolkovo Institute of Science and Technology outside of Moscow, which defense-intelligence sources describe as another DARPA clone. Skolkovo was originally founded in partnership with the Massachusetts Institute of Technology, but the university backed out of the arrangement in early 2019 after the Russian billionaire who funded the project, Viktor Vekselberg, was sanctioned by the U.S. for suspected connections with Russian meddling in the 2016 election.

Work has consistently sought to calibrate expectations about the results of his technology effort, making clear that machines could give America back its military edge, but would not make wars unfold precisely according to humans’ best-laid plans.

“There will be times where the machines make the mistake,” Work told me in a recent interview. “We’re not looking for the omniscient machine that is never wrong. What we’re looking for are machines that have been tested to the point where we have the trust that the AI will do what it is designed to do, and hopefully be able to explain why it made the decision it did.”

Still, Work argues that AI has the opportunity to save lives because of the precision that computers can harness. “AI will make weapons more discriminant and better, less likely to violate the laws of war, less likely to kill civilians, less likely to cause collateral damage,” he said.

Work, like all the current and former officials who discussed the future of AI in weapons with me, said that he doesn’t know of anyone in the military now trying to remove human beings entirely from lethal decision making. No such offensive system has been put through the specialized review process created by an Obama-era Pentagon directive, although the procedures have gotten a lot of internal attention, according to current and former Defense Department officials.

Work also says that the concept of machines entirely picking their own targets or going horribly awry, like something out of the Terminator movies, is unlikely because the offensive technologies being developed have only narrow applications. They “will only attack the things that we said” they could, Work said.

Work’s ideas got a sympathetic hearing from Air Force General Paul Selva, who retired as the vice chairman of the Joint Chiefs of Staff in July and was a major backer of AI-related innovations. But Selva bluntly talked about the “Terminator conundrum,” the question of how to grapple with the arrival of machines that are capable of deciding to kill on their own.

Speaking at a Washington think tank in 2016, he made clear that the issue wasn’t hypothetical: “In the world of autonomy, as we look at what our competitors might do in that same space, the notion of a completely robotic system that can make a decision about whether or not to inflict harm on an adversary is here,” he said. “It’s not terribly refined, it’s not terribly good, but it’s here.”

He further explained in June at the Brookings Institution that machines can be told to sense the presence of targets following a programmer’s specific instructions. In such an instance, Selva said, the machines recognize the unique identifying characteristics of one or more targets—their “signature”—and can be told to detonate when they clearly identify a target. “It’s code that we write … The signatures are known, thus the consequences are known.”

With artificial intelligence, Selva said at Brookings, machines can be instructed less directly to “go learn the signature.” Then they can be told, “Once you’ve learned the signature, identify the target.” In those instances, machines aren’t just executing instructions written by others, they are acting on cues they have created themselves, after learning from experience—either their own or others’.

Selva has said that so far, the military has held back from turning killing decisions directly over to intelligent machines. But he has recommended a broad “national debate,” in which the implications of letting machines choose whom and when to kill can be measured.

Systems like Sea Mob aren’t there yet, but they’re laying the groundwork for life-and-death decisions to be made by machines. In the Terminator movies’ dark portrayal, an artificially intelligent military system called SkyNet decides to wipe out humanity. One of the contractors working on Sea Mob has concluded his presentations about the program with a reference to the films: “We’re building SkyNet,” the presentation’s last PowerPoint slide reads, half in jest. But “our job is to make sure the robots don’t kill us.”


This article is a collaboration between The Atlantic and The Center for Public Integrity, a nonprofit, nonpartisan, investigative newsroom in Washington, D.C. More of its national security reporting can be found here.