Some old/new arguments favoring AI in the US military

Science is about facts, not values (cf. Hume’s distinction between is and ought). The scientist in me will therefore always prioritize the (relatively) common standards of science over my own subjective values. That is why I don’t mind presenting the best arguments in favor of AI and Big Tech, even if these arguments may one day turn out to completely refute the validity of the values I support.

Foreign Affairs just published a relatively interesting article about military AI:

The Perils of Overhyping Artificial Intelligence

“For AI to Succeed, It First Must Be Able to Fail” (…)

“… In the United States, there is a growing sense of urgency around AI, and rightly so. As former Secretary of Defense Mark Esper put it, ‘Those who are first to harness once-in-a-generation technologies often have a decisive advantage on the battlefield for years to come.’ …”

“Incorporating AI into the U.S. military, moreover, will require disruptive changes to everything from force structure and promotion patterns to doctrine and responsibility. This will inevitably trigger resistance. And because U.S. defense officials generally lack the expertise to assess AI advances currently being driven by the private sector, opponents of the new technology will find it easier to capitalize on inevitable setbacks, arguing that a potentially effective application of AI is not just too early but will never materialize.”

“Yet the naysayers cannot be allowed to triumph. Over and over throughout history, resistance to technological change has come back to haunt militaries. In the late nineteenth century, for instance, France’s navy sought to counter British naval supremacy by investing heavily in submarines and torpedo boats. The technology of the time was not up to the task, however, and France reverted to building battleships, a move that left the United Kingdom to rule the waves until the outbreak of World War II.”

“About 25 years later, Russia abandoned early design armored vehicles because they got stuck in the mud. Better tread and more power were the simple fixes. But instead, Russia put off procurement of the vehicles and fell behind as others moved forward. In short order, the armored vehicle evolved into the tank, a critical innovation in ground warfare. The Russians, like the French, ought to have shown more patience and persistence.” (…)

“Both China and Russia are investing heavily in AI, in part because they hope to challenge the conventional military superiority of the United States. It would be a colossal mistake for Washington to allow the potential of this new technology to slip through its hands, just as it was a mistake for France and Russia to dismiss early submarines and armored vehicles. …”

“… In order to strike this balance, the U.S. government will need to set more realistic expectations about what AI can do for the military. It must counter the popular focus on the fantastical—lethal autonomous weapon systems and artificial general intelligence, for instance, remain closer to sci-fi than reality—with a carefully calibrated, well-informed, and realistic picture of what AI can actually do. …”

The arguments above are old, but they remain interesting in a new way if you were not already aware of how France and Russia made the wrong decisions in the past. They are a reminder of what is at stake in choosing to drop AI-based military systems, a decision I support, as explained in the archive of articles here. But readers deserve, in the name of science, to see the best arguments against my view.

Of course, I shamelessly like this article in FA because it confirms my own bias: worrying about AI and transhumanism isn’t “premature” if you take a long-term, “deterministic” view of tech development, as illustrated by how long it took to create the atomic bomb: several decades, all in all.

It’s basically “deterministic” because if it’s technically possible to create something, someone will create it sooner or later, no matter how many AI winters stand in the way. And since it can take decades to mobilize popular resistance, there is nothing premature about shouting warnings from the rooftops today.

When the US military really wants something, they usually get it, be it nuclear weapons or Moon landings. China and Russia are well aware of this fact. How will they react if or when they notice that their own defeat in the AI arms race is imminent? I predict: war.
