Cobalt atomic scorched earth as nuclear defense against superior AI

I once worked for the Norwegian branch of War Resisters’ International in the 1990s, so I’m slightly angry that Big Tech has created dual-use 4IR (fourth industrial revolution) technology that is such a threat to humanity and liberty that building a more effective nuclear defense is, absurdly enough, the lesser evil compared to participating in the 4IR arms race of Cold War 2.

MIT physicist Max Tegmark and other AI researchers estimate that there is somewhere between a 20% and 50% chance that AI or other Big Tech innovations will kill all of humanity within the next hundred years. For more on this, read “The Precipice” by Toby Ord at Oxford University, or just watch this:

Humans will destroy ourselves in 100 years with 50% probability | Max Tegmark and Lex Fridman

Big neural networks are terrifying | Max Tegmark and Lex Fridman

Returning to the freedom and stability we had during the first Cold War and in the 1990s, prior to the rise of Big Tech and its 4IR, is therefore the safest option. Back then we relied on nuclear deterrence to secure peace and prevent the West from becoming surveillance states. We still depend on nuclear deterrence today, so improving this deterrence is no more insane than the general MAD doctrine. Before you continue reading, please take a look at these articles if you have not done so already:

Drone attacks on nuclear weapons

Fourth industrial revolution arms race isn’t necessary if we rely on better nuclear deterrence

How to secure nuclear launch sites deep underground and quit the 4IR arms race

The ethics of

The three major dangers of the 4IR arms race are:

1) domestic surveillance increases when elites claim that widespread monitoring of all (public) areas is necessary to stop internal and external enemies.

2) societal chaos and infrastructure vulnerabilities increase when internal and external enemies use 4IR tech, such as deepfakes, hacking and (privatized) surveillance, to engage in Byzantine shadow wars, disinformation campaigns, and Big Tech espionage.

3) when China, Russia and the US engage in an AI arms race, motivated by the security dilemma, it’s inevitable that one of them will sooner or later develop an AI so “intelligent” that it will overpower humanity (presupposing that this is technically possible), because it’s in principle impossible for one empire to know with certainty that no other nation or actor on Earth is getting close to developing an even more powerful AI that can beat all other AI. Being the first to design a true AGI is therefore the most secure option (if it’s technically possible within the next 20–250 years).

To stop the absurd and surreal developments described in the three sections above, 1) – 3), it’s arguably safest if Western states unilaterally abandon 4IR tech and return to the freedom and stability we had between 1950 and 1990. The drawback of doing that is obvious: authoritarian regimes will most likely continue developing 4IR tech, until one day creating a super-“intelligent” AI that can overpower all digital systems. The West is to a large degree protected from such digital attacks if we return to an “analog” or offline lifestyle. One option is to build so many hypersonic nuclear missiles that no authoritarian state or rogue actor dares to attack us, for fear of WW3. But authoritarian AI will eventually get better than humans at designing machine-speed defensive systems that can stop all Western nuclear attacks. Fortunately, there is an effective alternative to relying on offensive nuclear missiles: Cobalt Atomic Scorched Earth (CASE).

CASE instead of 4IR

Instead of using any delivery systems, the military in each Western state, including for example Norway, should build 100–10,000 very large 50–100 megaton (cobalt) bombs that are stationary: some placed on hills or mountaintops, others buried 5 to 100 meters below the ground, beneath relatively light material, so that detonation will cover all Western states in a layer of radioactive material that can’t be decontaminated by armies of hostile AI robots.

The above CASE system should only be rolled out in each Western state when it’s clear to every average citizen that authoritarian AI has become so powerful that there is no other option than to rely on CASE for protection. It will take maybe 20 to 100 years before hostile AI can beat all Western hypersonic missiles, so this will give scientists time to design stationary CASE bombs that are of relatively small physical size and one hundred percent safe when hidden in various places across a Western state. If the public objects to this, despite everybody knowing that hostile AI has become an existential threat to constitutional democracies, then it’s justified for the military to roll out the CASE system in secret.

Russia and China will have no desire to attack the West once the CASE system is in place, because 1) it will be impossible for humans to live in America, Europe and Australia after it detonates, 2) the fallout will also hit all authoritarian states, and 3) the nuclear winter will destroy (almost) all humans on Earth when basically everything explodes and burns down in the West.

I have earlier discussed specially designed underground operation halls for military officers here. To add an extra layer of security – in case a hostile actor, in the distant future, develops nanobots that can undetectably enter the bodies of all Westerners, including generals, and kill them instantly, literally in a second – the military should connect the CASE bombs to sensors placed on the bodies of at least 100 officers, so that if they all suddenly die (or become unconscious) the CASE system automatically detonates.
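The dead-man’s-switch logic described above can be sketched in a few lines. This is only an illustrative sketch, not a design for any real system; all names (DeadMansSwitch, report_vitals, the 10-second timeout) are my own assumptions for the example. The key design choice is that the trigger fires only if every monitored sensor goes silent at the same time, which distinguishes a coordinated machine-speed attack from the ordinary death of one officer.

```python
import time

# Illustrative sketch of an "all officers silent" dead-man's switch.
# Every name and parameter here is a hypothetical assumption.

TIMEOUT_SECONDS = 10  # how long a body sensor may stay silent before
                      # its officer is presumed dead or unconscious

class DeadMansSwitch:
    def __init__(self, sensor_ids, timeout=TIMEOUT_SECONDS):
        # record the last time each body sensor reported vital signs
        self.last_heartbeat = {sid: time.monotonic() for sid in sensor_ids}
        self.timeout = timeout

    def report_vitals(self, sensor_id):
        """Called whenever a sensor confirms its officer is alive."""
        self.last_heartbeat[sensor_id] = time.monotonic()

    def should_trigger(self, now=None):
        """True only if EVERY monitored sensor has timed out, i.e. all
        officers went silent simultaneously; one live officer vetoes."""
        now = time.monotonic() if now is None else now
        return all(now - t > self.timeout
                   for t in self.last_heartbeat.values())
```

Requiring all sensors to time out simultaneously makes the system robust against false alarms: a single sensor failure or a single death never fires it, only the scenario the text describes, where everyone is killed within seconds.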

At first glance, the main drawback of basically abandoning all 4IR tech and just relying on CASE instead is that if a doomsday sect in a non-Western state somehow manages to develop a new 4IR WMD and use it against a Western country, it can cause a lot of damage without giving us an ability to retaliate in AI-controlled territories. This risk will probably be very small, however, because China and Russia will use their superior and regionally omnipresent AI to literally eliminate all criminals and terrorists in non-Western areas. The elites in authoritarian states will not allow anybody to use 4IR weapons in a very lethal attack on a Western country, knowing that it will trigger the CASE system. The risk is also reduced by the fact that Western states will no longer participate in any arms race. That will calm down authoritarian states, making it less urgent for them to develop superior AI when they no longer have to compete with the best engineers in the West.
