


Determining Risk and Resilience to Violent Conflict

By Eric Min



Studies on conflict prediction and prevention often investigate places that experienced civil war and try to determine why those conflicts occurred, with the idea that knowing the answer can inform policymaking efforts. This approach has two weaknesses. First, it provides an incomplete understanding of conflict, because the observed cases are never compared against a set of systematically chosen and similar peers. Second, it does not answer the question of whether the international community can identify risk factors in time to do anything about them.

We set out to address both gaps by asking a simple question: If we sought to predict conflict far enough in the future for policymakers to intervene, what factors would help explain cases where we expected conflict but instead saw peace, and what factors would help explain cases where we expected peace but instead saw conflict? Put more succinctly, what factors explain unexpected resilience and unexpected fragility, and are any of those factors ones that the international community could influence if it had sufficient advance warning?

To answer these questions, we use a three-step combination of statistical learning models, matching algorithms, and qualitative case studies. In the first phase of our study, we identify cases of unexpected resilience and unexpected fragility by using country-year data from 1995 to 2015. For each country-year observation, we record whether a conflict onset took place five years in the future, and add measurements for dozens of variables common to quantitative studies of conflict onset. These include, but are not limited to, previous coups, human rights violations, elections, ethnic fractionalization, terrain ruggedness, legal system origins, gross domestic product, mortality rates, foreign direct investment, and agricultural productivity.
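The key data-preparation step here is attaching a forward-looking label to each country-year. A minimal sketch of that step, using a toy pandas panel with illustrative column names (not the authors' actual dataset or variables):

```python
import pandas as pd

# Toy country-year panel; "onset" marks an observed conflict onset.
# Column names and values are hypothetical stand-ins.
df = pd.DataFrame({
    "country": ["A"] * 7,
    "year": list(range(2000, 2007)),
    "onset": [0, 0, 0, 0, 0, 1, 0],
})

# For each country-year, record whether a conflict onset occurs
# five years in the future: shift the onset column back five rows
# within each country's time series.
df = df.sort_values(["country", "year"])
df["onset_in_5yrs"] = df.groupby("country")["onset"].shift(-5)

print(df)
```

With this labeling, country A in 2000 is marked with the onset observed in 2005; the last five years of each country's series have no label and would be dropped before training.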

We train a statistical learning model—in our case, a linear discriminant analysis—on the entirety of these data using cross-validation to make sure the model chosen is one that works well out-of-sample. The model effectively identifies the patterns in the data that explain conflict onset five years into the future. We then use the trained model to predict conflict onset on the exact same data (after removing the conflict onset variable). Unsurprisingly, the predictions are quite good, but the model does occasionally predict conflict in a country-year when there is none (a false positive) and peace when there is conflict (a false negative). These false positives and false negatives are cases of unexpected resilience and unexpected fragility, respectively.
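The train-then-repredict logic above can be sketched with scikit-learn. The data below are synthetic stand-ins (the real study uses dozens of covariates), and the cross-validation setup is a simplified illustration rather than the authors' exact procedure:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic covariates and a binary "onset in five years" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 1).astype(int)

lda = LinearDiscriminantAnalysis()

# Cross-validation gauges out-of-sample performance of the model choice.
scores = cross_val_score(lda, X, y, cv=5)

# Refit on all data, then predict on the exact same observations.
lda.fit(X, y)
pred = lda.predict(X)

# False positives: predicted conflict, observed peace (unexpected resilience).
# False negatives: predicted peace, observed conflict (unexpected fragility).
false_pos = np.where((pred == 1) & (y == 0))[0]
false_neg = np.where((pred == 0) & (y == 1))[0]
print(len(false_pos), len(false_neg))
```

The indices in `false_pos` and `false_neg` are the "surprising" country-years that the next step tries to match with comparison cases.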

Second, we use statistical matching algorithms to find cases to compare against these unexpected observations. The idea is to find the most similar place that turned out well (or poorly) so that we can look for patterns in what separates the two. For example, our model falsely predicts that Burundi should have had a conflict onset in 2008. To better understand this case of unexpected resilience, we need to find another country-year that is very similar to Burundi in 2008 on as many dimensions as possible, except that it did experience a conflict onset. In our analysis, this closest observation is the Central African Republic in 2009. We use this process to find comparisons for each country-year with unexpected onset or unexpected peace.
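A hedged sketch of that matching idea, using nearest-neighbor search on standardized covariates; the data are synthetic and this is one simple distance-based matcher, not necessarily the specific algorithm the study used:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Synthetic covariates and observed onsets for 200 country-years.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
onset = rng.integers(0, 2, size=200)

# Standardize so no single covariate dominates the distance metric.
Xs = StandardScaler().fit_transform(X)

# A surprising case, e.g. a false positive like Burundi in 2008.
surprise_idx = 0

# Candidate matches: observations with the opposite observed outcome.
opposite = np.where(onset != onset[surprise_idx])[0]

# Find the most similar opposite-outcome observation.
nn = NearestNeighbors(n_neighbors=1).fit(Xs[opposite])
dist, pos = nn.kneighbors(Xs[[surprise_idx]])
match_idx = opposite[pos[0, 0]]
print(match_idx, dist[0, 0])
```

Repeating this for every false positive and false negative yields the paired cases (e.g. Burundi 2008 matched with the Central African Republic 2009) that feed the qualitative stage.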

To recap, this process produces a list of surprising cases, which come from our statistical learning model, and a counterpart list of similar cases with the opposite outcome, produced using statistical matching methods. Both steps relied solely on quantitative data.

Because such data do not capture a broad range of political factors that the international community can affect, our third and final step is deep qualitative analyses of each pair of cases. The qualitative work lets us determine whether previously unrecognized factors can explain unexpected resilience or fragility, and therefore suggest new opportunities for intervention. 

What do these comparisons tell us? We find strong evidence that a consistent difference between expected peace and observed conflict, and vice versa, is whether aggrieved minority groups could participate in politics and influence government policy. In addition, political exclusion of minorities appears to matter even more during economic downturns. These results suggest that policymakers ought to promote formal power-sharing or other processes that give greater voice to minority groups. Such efforts are particularly likely to stave off conflict if implemented in moments when ethnically divided societies suffer significant negative economic events. The international community should also push back against policy initiatives that would have differential and negative impacts on a country’s aggrieved populations, as in one case such an initiative pushed an otherwise stable polity into conflict.

Moving beyond the immediate context, our study employed a practical combination of quantitative and qualitative methods that is rare in studies of civil war. The statistical models allow us to systematically incorporate and analyze large amounts of information, in turn generating a rigorously selected pool of cases to study more deeply. The subsequent qualitative comparisons help highlight critical yet overlooked factors that could better inform both scholarship and policymaking. On theoretical and empirical grounds, our report shows that future research would be well served by adopting this integrated approach.
 
