In a recent article in the science journal Nature, J. C. Bohorquez, S. Gourley, A. R. Dixon, M. Spagat, and N. F. Johnson (BGDSJ from now on) presented a unique data set of “54,679 violent events reported within nine diverse insurgent conflicts,” and found that insurgencies exhibit a common power law distribution of α≈2.5. The article has been hailed as a major advance in the study of warfare, netting Gourley a well-regarded TED talk and a torrent of blogosphere and academic chatter. Even Nature’s editors found the rather short article—a little under two pages of text—important enough to give it the cover over articles discussing a cure for cancer and the possible existence of an extra-solar Earth-like planet.
Power law distributions are a common feature of complex systems, whether stock prices, the structure of the World Wide Web, or even networks of sexual partners1—which makes us wonder what is so novel about seeing it in a complex system like an insurgency. What, exactly, do we learn by observing that insurgencies behave complexly?
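For readers unfamiliar with the mechanics, the α≈2.5 figure is an exponent fitted to the distribution of event sizes. A minimal sketch of how such an exponent is typically estimated—using the standard continuous maximum-likelihood estimator, not BGDSJ’s own pipeline, and with synthetic data standing in for real event counts—looks like this:

```python
import math
import random

def powerlaw_mle(samples, x_min):
    """Maximum-likelihood estimate of the exponent alpha for a
    continuous power law p(x) ~ x**(-alpha), using only samples >= x_min."""
    tail = [x for x in samples if x >= x_min]
    n = len(tail)
    return 1 + n / sum(math.log(x / x_min) for x in tail)

# Synthetic "event sizes" drawn from a power law with a known alpha = 2.5,
# via inverse-transform sampling: x = x_min * (1 - u) ** (-1 / (alpha - 1)).
random.seed(0)
alpha_true, x_min = 2.5, 1.0
samples = [x_min * (1 - random.random()) ** (-1 / (alpha_true - 1))
           for _ in range(50_000)]

print(round(powerlaw_mle(samples, x_min), 2))  # close to 2.5
```

The point of the sketch is that the estimator recovers whatever exponent the data-generating process has—it says nothing about whether the reported events are a representative sample of the underlying conflict, which is precisely the objection raised below.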
To be clear: We commend BGDSJ for exploring this question, as the underlying fundamentals of warfare are important. But their methodology contains serious flaws. The assumptions of their model do not reflect empirical observations of insurgent organizations, their research relies on confirmation bias, and their dataset, while extensive, is neither comprehensive nor representative of the insurgencies they highlight.
BGDSJ present a simplified, bipolar model of insurgency: one group of insurgents that reacts to previous (and publicized) successes, and one group of government forces. Their model rests on the key assumption that insurgents update their organizational and tactical decisions based on a universally observed and interpreted global signal, independent of the signal’s content. Why limit discussion to the mere existence of a global signal? “Actual content is unimportant provided it becomes the primary input for the group’s decision-making process,” they argue. This poses several problems:
- Selection bias. The “global signal” BGDSJ posit is a function of the frequency and mortality of insurgent attacks, but both of those factors rely on reporting. In other words, they limited themselves to published reports in the media. But, at best, the global media publicize only a tiny fraction of the actual violent events in a given insurgency—and that’s assuming their reports are even remotely accurate.
- Private information. Most insurgent groups cannot operate freely throughout a country, which makes private information a critical factor in modeling their behavior. How does private information replace, supplement, or contradict the “global signal,” and to what effect?
- Entry and exit costs. Finally, joining and leaving an insurgent group is not a costless process, as insurgent groups have recruitment procedures and often violently prevent exit. What role does group pressure play in the way insurgents respond to a “global signal?”
In a way, BGDSJ’s arguments are rooted in the somewhat muddy waters of various reflexivity theories, though that is beyond what we’re discussing directly here. What they’ve really done is reverse-engineer a model to fit their data—something that, of course, confirms the model’s usefulness. Since that data is bad—they use ex post facto reports of violence, not the actual behavior of either side in the conflict, to prove their point—they wind up inferring individual behavior from group processes. The Ecology of War has an ecological problem.
In reality, this ecological model can only be considered one of several competing theories to describe the dataset. BGDSJ try to preempt such criticism by saying, “any competing theory would also need to replicate the results.” But creating a model to fit one’s data is an inversion of the scientific process, reducing the study to mere curve-fitting. When respected newspapers write stories claiming ten percent of all Chechens live in South Waziristan, we must seriously question just how one goes about creating a useful model of behavior based on media accounts.
Given a limited and biased dataset and the previously mentioned ecological issues, the authors have become mired in confirmation bias—a poor way to conduct social science research. A model’s usefulness lies not in how well it fits a given set of observations, but in how well it approximates reality—and by that standard, BGDSJ’s model fails.
While BGDSJ find that attacks in nine different insurgencies follow a specific type of distribution, that’s not terribly interesting. As we’ve discussed before, the most interesting question is not what makes two wars similar, but what makes them different? Looking only at similarities between wars ends up obscuring vital differences—which leads to people arguing that methods in Iraq should apply directly to Afghanistan. This blog has discussed at length the uniqueness of Afghanistan’s history and conflicts; while it would be possible to draw some lessons from similar or nearby crises, when we only consider those similarities we open ourselves to quite literally fatal errors of conception.
Moving beyond Central Asia, we should ponder why Peru’s insurgency died but Colombia’s is still going on. The answer isn’t in the numbers. Without considering the human element, without contextualizing the numbers in politics, culture, society, even personality, any study of insurgencies will, ultimately, come up short.
Non-web accessible works cited:
1 Liljeros, Fredrik, Christofer R Edling, and Luis A Nunes Amaral. “Sexual networks: implications for the transmission of sexually transmitted infections.” Microbes and Infection 5, no. 2 (February 2003): 189-196.