Are Sanctions on Russia Effective? How (Not) to Inform the Debate, Part II

This post originally appeared on PONARS Eurasia on 10/23/2023.

This is Part II in a two-part series. Part I may be found here.

Juliet Johnson is Professor in the Department of Political Science at McGill University, Quebec, Canada. 

Above: Volodymyr Zelenskyy discussing sanctions on Russia at an April 2022 meeting with Austrian Chancellor Karl Nehammer.

Avoid confirmation bias

If analysts have predetermined arguments to make about sanctions’ effectiveness, they can find pieces of economic data to support them. Skeptics can focus on the rapid rebound, on macroeconomic stability, on loopholes, on the sanctions’ impacts outside Russia, and on Russia’s ability to evade sanctions and build non-Western partnerships, while advocates can point to the longer-term, diversifying, and accumulating economic pain and distortions within Russia itself. Self-consciously matching explicit goals to appropriate evidence can help to avoid this, but it cannot on its own protect our analyses from the broader pitfall of confirmation bias.

People naturally filter information through the lens of their prior beliefs and theories. This is not necessarily a problem in and of itself, but it can become one when these priors are not explicitly taken into consideration by both the producers and the consumers of political analyses. Although analytical priors can assist in framing and understanding complex events, they can also lead us to search for confirmatory evidence, to make overly confident predictions based on preliminary or superficial information about actual conditions on the ground, and to interpret sanctions' goals (and thus effectiveness) in light of these priors. Moreover, analytical priors may not only spark a search for confirmatory evidence of whether sanctions are effective, but also lead to unquestioned or poorly supported assertions as to why sanctions are or are not effective. This, in turn, shapes the policy prescriptions that follow from such analyses.

Theoretically informed stances on the war and analytical priors regarding sanctions as a tool of statecraft can be two such potential sources of confirmation bias. In the absence of self-reflection, discussions about the effectiveness of sanctions can devolve into a means to advance broader, long-standing arguments about the international system. Is NATO expansion a security threat to Russia, even perhaps the root cause of the war? Can the U.S. negotiate successfully with its adversaries? Should analysts think of the world in terms of great powers and spheres of influence? Is U.S. leadership waxing or waning, and can the U.S. count on its European allies? Is Russia an existential threat to the international liberal order? Can and should sanctions ever be targeted? Does the overuse of sanctions undermine U.S. influence and lead to blowback? Whatever one’s answer to these questions, the challenge is to resist preemptively defining sanctions’ goals and cherry-picking evidence of the economic impact of sanctions primarily as a way to support them.

Confirmation bias may be a particular concern when analysts lack deep regional expertise and when analyses are prospective, attempting to predict rather than evaluate the effectiveness of sanctions policy. International relations scholars and think-tank analysts without Russian area expertise have dominated much of the public-facing discourse on sanctions, both in sheer number of interventions and in the speed with which they publish in response to events. This has given such analysts an important opportunity to shape the terms of the U.S. debate on sanctions. But it also means that they have typically intervened with less understanding of the situation on the ground, as they have tended to write earlier (when information is sketchiest) and without the store of region-specific knowledge that might help to fill in the gaps, increasing the danger of confirmation bias.

Key takeaways

To sum up, public-facing policy analysis and prescriptions that rely in whole or in part on evaluating the effectiveness of sanctions should follow three guidelines.

First, be explicit about the goalposts. Claims of effectiveness require judgments not only about how and how much sanctions hurt the Russian economy, but also about the policy results of that pain. Saying that sanctions work (or not) should be a claim that their economic impact has succeeded (or not) in sending signals and compelling reactions that advance specific political goals.

Second, match the identified goals to the appropriate evidence. The empirical evidence on the economic impact of sanctions often points in contradictory directions and is of varying quality. Analysts have the daunting task of sifting through the noise as best they can and focusing on the economic indicators that most plausibly contribute to advancing the identified goals. In so doing, it is important to skeptically interrogate the data sources, focus where possible on trends over time rather than snapshots, and acknowledge when existing data is inadequate to support definitive claims.

Finally, be wary of confirmation bias. The combination of confounding empirical evidence and myriad potential policy goals can make analyses of sanctions’ effectiveness unusually susceptible to such bias. Moreover, the unprecedented nature of the current sanctions, both in terms of their scale and in terms of the economic size of the target state, makes it more challenging to extrapolate from prior experience.