<STD: Serious Topic Disclaimer>
In the interest of offering content both didactic and accessible,
the following Serious Topic will not include proper source support. This means
you shouldn’t believe any of it. If you find the discussion compelling, however,
I encourage you to pursue a more demonstrable truth.
The Cold
War was an interesting time. The international relations landscape was bipolar.
That is, the USA and USSR were the centers of their respective spheres of power
(two poles), and the rest of the world (except the irrelevant Third World) was to take a side.
Bipolar
dynamics are usually considered to be very stable. You have two primary powers
keeping each other in check to some extent—neither can get too powerful, but no
new powers are likely to grow to their level. The Cold War was a strange scenario because it combined this stable model with a lot of instability, thanks in large part to the fear nuclear weapons inspired. Nuclear
deterrence theory is widely misunderstood, and I’m not the one who should educate the masses… so naturally,
here I go.
Nuclear
deterrence isn’t about distributing nuclear weapons, or controlling their
distribution, or hoarding them—at least not principally. It’s about maintaining
a status quo. During the Cold War, that meant that when the USA and the USSR—and,
ideally, ONLY the USA and USSR—had the means to destroy each other, it was
preferable for each side just to let the other one be. Neither could be sure
whether they were the #1 or #2 big kid on the block, but both were sure that if
they tried to find out, they’d end up in bad shape. This was the driving force
behind Mutually Assured Destruction: So long as both sides would rather be in
competition for “hegemonic world power” than for “scariest nuclear wasteland,”
no nuclear war was called for.
Of course,
what’s simple (albeit scary) in theory is a lot less simple in practice. MAD
was dependent on keeping both sides convinced that the consequences of aggression
were unavoidable, and that means a lot of puffery that can start looking like
aggression itself before long. This is a classic security dilemma scenario;
Thucydides first discussed this phenomenon in reference to the Peloponnesian War
of ancient Greece. Consider the following scenario:
Country A
and Country B are both peaceful and don’t wish to go to war—really, they don’t
(trust me on this one). But, both are realistic nations that have rational
concerns that other countries (not necessarily the other of this hypothetical
pair) might attack them at some time. So, both A and B build armies and secure
their borders. But each country wants a slightly larger force on its own side of the border, just to give it the extra edge. And so the forces grow, and each side gets a little more
wary of the other side, and finally, fearing an inevitable attack from Country
B (which Country B has no intention of launching, except perhaps because it
expects the same from Country A), Country A attacks preemptively, and two
would-be peaceful countries are suddenly at war.
Of course,
when we’re dealing with actual people on borders, this doesn’t happen (unless
you’re playing RISK). People are slow, and countries are big, and we don’t keep
armies sitting on borders during peacetime. But when nuclear weapons come into
the picture, things change. Suddenly both sides worry about whether they
actually have time to retaliate before that warhead detonates, and both start
pointing more and more nukes—faster and more accurate and longer-ranged every
year—at the enemy. Like I’ve said before, this is all very simplified, but the
principles are sound.
Security
dilemmas like the one I’ve just described are tied to two phenomena that are
heavily analyzed in modern game theory (which you might have heard of thanks to
the movie A Beautiful Mind, which I’ve
never seen but which I hear does a perfectly terrible job of actually
explaining the idea). The two phenomena are the Prisoner’s Dilemma and the Game
of Chicken.
Prisoner’s
Dilemma: Named after a hypothetical that involves the police arresting two suspected
criminals. The suspects are placed in different interrogation rooms and
addressed separately, without a chance to communicate. Each is told that the
police can easily prove a minor crime and put both suspects away for 2 years
each. But, if one is willing to implicate the other, the one who sings will get off free and his cohort will go away for 10 years. Of course, if both confess, both will go to prison, but each will get some credit for cooperating, and the sentence will be only 5 years.
In other
words, each suspect (if his only goal is to minimize jail time) wants to implicate
the other suspect, who hopefully will stay silent. But if both talk, each gets
5 years, whereas if each is silent, they each only get 2 years. But of course,
each one is afraid to stay silent, because doing so would mean exposing oneself to the risk of 10 years in prison. Check the following grid:
                 B talks    B stays silent
A talks          5, 5       0, 10
A stays silent   10, 0      2, 2
The paired
figures represent what happens to (Suspect A, Suspect B) under each of the
possible scenarios. Essentially, it’s a trust game: If A and B really trust
each other, it’s best for them to stay silent, take the shorter sentence, and
move on. But they have to trust that their pal would rather do 2 years than get
off free and subject his accomplice to 10 years.
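If you'd rather see that logic mechanically, here's a minimal Python sketch (my own toy construction, not anything canonical) that brute-forces each suspect's best response from the grid above. Payoffs are years in prison, so lower is better:

```python
# Years in prison for (Suspect A, Suspect B) under each pair of choices.
PAYOFFS = {
    ("talk", "talk"):     (5, 5),
    ("talk", "silent"):   (0, 10),
    ("silent", "talk"):   (10, 0),
    ("silent", "silent"): (2, 2),
}

def best_response_for_a(b_choice):
    """A's jail-minimizing move, taking B's move as fixed."""
    return min(("talk", "silent"), key=lambda a: PAYOFFS[(a, b_choice)][0])

for b in ("talk", "silent"):
    print(f"If B chooses {b!r}, A should choose {best_response_for_a(b)!r}")
# Prints "talk" both times: talking is a dominant strategy, which is why
# two rational suspects land on (5, 5) instead of the mutual (2, 2).
```

Running the same check for B gives the mirror-image result, which is the whole tragedy of the game.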
In the
context of a security dilemma like the one described above, “talking” means
attacking, and “staying silent” means keeping peace, or even demilitarizing. If
both sides attack each other, both suffer losses, but assuming the forces are relatively evenly matched, the losses might not be catastrophic for either side.
If both forces remain peaceful, they get a much better scenario, of course. But
if one side stays demilitarized and the other attacks, the attacker wins
easily, gains the benefits of absorbing a new territory (a payoff), and the
loser suffers its worst possible outcome.
Game of
Chicken: Named for that stupid thing teenagers do! This is a lot like the
Prisoner’s Dilemma, but switched around in an important way. Imagine two people
driving cars straight at each other at 60mph. The winner is the one who doesn’t
“chicken out” and swerve away. Of course, if neither swerves, they both die,
and both of them lose. If one swerves, the other is the winner. If both swerve,
they both “lose,” but at least they’re alive. See the following:
                     B swerves    B stays the course
A swerves            0, 0         0, 1
A stays the course   1, 0         -1, -1
The paired
figures represent the end result for (Driver A, Driver B), where a “1” means
winning the game, a “0” means losing but surviving, and a “-1” means dying.
This is a lot like the scenario the Joker sets up in the movie The Dark Knight.
Of course, that scenario was meaner, because there was no way to lose without
dying. In real life, the Game of Chicken also fits into a security dilemma.
Continuing to mass forces at a border in response to another country’s similar
behavior is analogous to “staying the course”—it costs a lot and it might
produce a conflict. If one side stops the cycle (“swerves”), it’ll be at a
military disadvantage, perhaps, but it might prevent actual conflict if both sides
are legitimately interested in peace. If both demilitarize, nobody is
advantaged, and the result is neutral.
The Game
of Chicken is notably different from the Prisoner’s Dilemma in that it is a solvable problem: one has only to make a credible commitment to victory to win. Legend has it that Hernán Cortés,
upon arriving in the Americas, instructed his men to burn their ships. The
implication was that they wouldn’t return to Spain until they had defeated the
American tribes. It’s an apocryphal story, probably, but it’s a good
illustration. In our hypothetical about two teenagers driving their cars at
each other, imagine that one teen rips the steering column out of his dashboard
before revving up his engine. The other driver, knowing that his opponent
couldn’t swerve even if he wanted to, is now choosing between dying and losing—and
it should be an easy choice.
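For illustration, here's a small Python sketch (again my own toy model, using the payoffs from the grid above) that enumerates the game's pure-strategy equilibria. Ripping out the steering wheel amounts to deleting "swerve" from A's move set, which collapses the game to the single outcome where B swerves:

```python
# Chicken payoffs for (Driver A, Driver B); higher is better.
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "stay"):   (0, 1),
    ("stay",   "swerve"): (1, 0),
    ("stay",   "stay"):   (-1, -1),
}
MOVES = ("swerve", "stay")

def pure_equilibria(a_moves, b_moves):
    """Profiles where neither driver gains by unilaterally deviating."""
    return [
        (a, b)
        for a in a_moves
        for b in b_moves
        if all(PAYOFFS[(a, b)][0] >= PAYOFFS[(x, b)][0] for x in a_moves)
        and all(PAYOFFS[(a, b)][1] >= PAYOFFS[(a, y)][1] for y in b_moves)
    ]

print(pure_equilibria(MOVES, MOVES))      # [('swerve', 'stay'), ('stay', 'swerve')]
print(pure_equilibria(("stay",), MOVES))  # [('stay', 'swerve')] -- commitment wins
```

Before the commitment there are two equilibria, one for each driver winning; the ripped-out wheel simply makes one of them the only game in town.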
These are
the sorts of scenarios that were running through the minds of military
strategists during the Cold War—although I hope they had a more sophisticated
understanding of them than I do. They had a prisoner’s dilemma and a game of
chicken. Both were hoping they could trust the other side not to “talk” (that is, fire nuclear weapons), but both were also eager to make that credible commitment and force the other side to “swerve.”
Perhaps
what saved us all was another bit of common knowledge brought to us by game
theory: cooperation tends to be the best strategy in the long run. If you take
the prisoner’s dilemma, for example, and apply it over a longer term (say,
instead of just “playing” once, A and B “play” 100 times), a new dynamic
arises. The prisoner’s dilemma is a tough nut to crack in the literal prisoner scenario, because both players (the two suspects) get only one chance to guess the other’s approach, without the benefit of communication; played repeatedly, it gets a lot easier. Say the goal is to have amassed the fewest
total years in prison after 100 turns. On turn one, both A and B might choose
to talk, netting each 5 years in prison. On turn two, they might try that
again. But that would be totally sub-optimal. Instead, the two are more likely
to agree to “stay silent,” minimizing prison time accrued to 2 years per turn.
Periodically, one might trick the other and “talk,” expecting the other to
continue the trend of “staying silent.” But that would ruin the deal, and soon
both would be “talking” again, to their mutual disadvantage.
Back when this stuff was still in its initial stages of theorization, a political scientist (Robert Axelrod) set up a computer tournament that simulated this exact scenario. He solicited a bunch of other learned folks to contrive a computer program—a set of rules—that determined how to play the game and get optimal results. Some people came up with really complicated and elaborate approaches, but at every iteration of the experiment, the winner was the same. It was an incredibly simple program, now known as “tit for tat,” that directed the player to cooperate on the first turn and then play whatever the opponent had played on the previous turn. A sophisticated actor, knowing his opponent was simply mimicking him, might try to take advantage of such a directive, but the value of cooperation was demonstrated by that experiment.
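For fun, here's a toy re-creation in Python (emphatically not the original tournament code; the strategy roster and payoff framing are my own) of the repeated game, with tit for tat facing a copy of itself and then a pure defector. Payoffs are years in prison per turn, so lower totals are better:

```python
# Years in prison per turn for (me, opponent); lower is better.
PAYOFFS = {
    ("talk", "talk"):     (5, 5),
    ("talk", "silent"):   (0, 10),
    ("silent", "talk"):   (10, 0),
    ("silent", "silent"): (2, 2),
}

def tit_for_tat(my_hist, their_hist):
    """Stay silent on the first turn, then mirror the opponent's last move."""
    return their_hist[-1] if their_hist else "silent"

def always_talk(my_hist, their_hist):
    """Defect on every turn, no matter what."""
    return "talk"

def play(strategy_a, strategy_b, turns=100):
    """Run the repeated game and return total years for (A, B)."""
    hist_a, hist_b = [], []
    years_a = years_b = 0
    for _ in range(turns):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        years_a += pa
        years_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))  # (200, 200): stable mutual silence
print(play(tit_for_tat, always_talk))  # (505, 495): defection ruins both sides
```

Notice that the pure defector "beats" tit for tat head-to-head by ten years, yet both players end up with more than double the jail time two cooperators accrue, which is the tournament's lesson in miniature.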
[As a side
note here: It’s worth mentioning that the flaw with cooperation is that on the
last turn in a game of finite duration, there is no longer any motivation to
cooperate. In the real world, it’s difficult to say whether anything really has
a “finite duration,” but for theoretical games, this becomes a problem. Both
players might want to “talk” on turn 100. Of course, knowing that the other
player will want to “talk” on turn 100, one might opt to “talk” on turn 99… you
see where this is going. It reminds me of the old paradox about an executioner
promising to keep the day of his prisoner’s execution a surprise.]
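To make that unraveling concrete: reusing the PAYOFFS table and play() harness from the sketch above, here's a hypothetical family of strategies that play tit for tat but defect on the final d turns of a known 100-turn game. Against an opponent defecting on the last d turns, defecting on the last d+1 is a strict improvement, which is exactly the backward slide toward turn 1:

```python
def tft_defect_last(d, turns=100):
    """Tit for tat, except always talk on the final d turns."""
    def strategy(my_hist, their_hist):
        if len(my_hist) >= turns - d:  # we've reached the endgame
            return "talk"
        return their_hist[-1] if their_hist else "silent"
    return strategy

for d in range(3):
    matched = play(tft_defect_last(d), tft_defect_last(d))[1]
    one_more = play(tft_defect_last(d), tft_defect_last(d + 1))[1]
    print(f"d={d}: matching costs B {matched} years; "
          f"defecting one turn earlier costs {one_more}")
# Each line shows the earlier defector doing strictly better (198 < 200,
# 201 < 203, 204 < 206), so the endgame creeps forward turn by turn.
```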
So why does
this matter? The Cold War is over.
Well, I’m
not sure how we should conceptualize the modern nuclear paradigm. The Cold War,
though scary, was at least a reasonably stable scenario that conformed with the
ways we had thought about international relations and war to that point. Today,
things are a lot murkier. For one thing, countries without a status quo to
maintain are in possession of nuclear weapons. North Korea, which has been testing its arsenal recently, isn’t a “world power.” Where the USA and the USSR
had a lot of interest in protecting their respective global stations during the
Cold War, North Korea isn’t similarly motivated. Even less so are, for example, various splinter groups like terrorist organizations and insurgencies, which are often designed specifically to break down the status quo.
Further,
it’s no longer clear that we can rely on the decision makers in possession of
nuclear weapons to behave reasonably. Here we should distinguish between acting reasonably and acting rationally: A rational actor is anyone who has a goal and takes steps in furtherance of that goal. An unreasonable actor, I will now stipulate, is someone who’s, for example, crazy (Kim Jong-un, I’m looking at you). So, for example, the leader of a terrorist splinter group hell-bent on
killing, say, Jews, might be a totally rational actor, provided his actions are
in furtherance of that goal. However, his mission is obviously reprehensible.
All theory breaks down when you deal with the insane, megalomaniacal leaders
that do exist in our world.
These guys,
even if logically rational, might be impervious to traditional diplomatic strategies,
because their priorities aren’t in line with reality. Others might be similarly unreachable simply because they have nothing to lose. We deal with a lot of very religious enemies to whom death isn’t the same disincentive that it is to us. We deal with a lot of splinter groups to whom threats against national infrastructure
aren’t menacing at all. If a proverbial Osama Bin Laden character has a nuclear
weapon, what incentives can we possibly create to keep him from using it? And
if we can’t create those incentives, how do we avoid the fallout—and I use the
term intentionally—without a fairly extensive defense system in place?
Much of
Western Europe is demilitarizing—just look at how much trouble recent conflicts
in, for example, Libya have created for France and the UK. Given the dwindling
collective power of Western forces, and given the uncertainties inherent in the new nuclear paradigm, it
seems prudent not to make serious defense spending cuts without a lot of consideration.
Anyway,
this concludes my (maybe not so) brief reflection on the current defense
spending debate. Under the classical social contract, as Hobbes conceptualized it (without, perhaps, using the term itself), the primary thing the state has to
offer a citizen is protection. We’re all afraid of a painful death, and if we
band together, we reduce that risk. It’s interesting that we’re moving away
from the idea that military defense is an essential part of that pursuit.
Unfortunately, I think it may be too soon to do so.