It’s gotta be true, because data says so
In 2008, Doucouliagos and Ulubasoglu of Deakin University conducted a meta-analysis of 84 studies on democracy and economic growth. After evaluating 483 regression estimates from these studies, they found that virtually every possible conclusion about the relationship between political democracy and growth is represented in the literature:
- 37% of the estimates are positive and statistically insignificant
- 27% of the estimates are positive and statistically significant
- 15% of the estimates are negative and statistically significant
- 21% of the estimates are negative and statistically insignificant
The link between inequality and economic growth is equally difficult to pin down. de Dominicis et al. (2006) conducted a meta-analysis of studies on this relationship. Most of the regressions they examined yield a negative relationship between inequality and growth, yet when different estimation techniques and panel datasets are used, the negative effect vanishes. So even after analyzing a vast empirical literature, no clear relationship emerges.
Data and statistics have grown increasingly important in recent decades. Big Data now plays a large role in many fields, from technology to healthcare. Economics is no different; regression analysis had already been popularized by the neoliberal school of thought, which deliberately recast economics as “a real science” in which basic connections between phenomena could be established, as in physics.
In the neoliberal world of economics, you are freed from complicated theoretical discussions and able to draw firm conclusions. Unlike more nuanced fields such as sociology or political science, neoliberal economics allows for simple, elegant arguments: with the help of mathematical modeling and statistical results, an argument takes up just a few pages. This methodology sounds pretty good at first: direct scientific results, no chit-chat. But it’s not as simple as it looks.
To what extent can we trust these statistical methods, or the economists who use them? Economists can easily manipulate data to fit their ideological stances or to confirm their initial hypotheses. Errors rooted in research design create unreliable results too: the type of data used, the selection of the sample, differences in how estimates are evaluated, the availability of data, the direction of causation, and regional or country-specific characteristics all influence the results. Together they create a wide divergence among empirical macroeconomic studies, leaving many questions unresolved.
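To see how much room these design choices leave, here is a minimal sketch using entirely made-up data (the variable names and effect sizes are hypothetical, chosen only for illustration). The same dataset supports opposite headlines depending on whether a single control variable is included:

```python
# Hypothetical simulation: "growth" is driven by an unobserved confounder
# ("institutions") that also correlates with "democracy"; democracy itself
# has no direct effect on growth in this toy data-generating process.
import numpy as np

rng = np.random.default_rng(0)
n = 500

institutions = rng.normal(size=n)                  # unobserved confounder
democracy = 0.8 * institutions + rng.normal(size=n)
growth = 1.0 * institutions + rng.normal(size=n)   # true democracy effect: zero

def ols_slope(y, *covariates):
    """Return the OLS coefficient on the first covariate (after a constant)."""
    X = np.column_stack([np.ones(len(y)), *covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Short specification: democracy appears to boost growth.
print(f"short regression: {ols_slope(growth, democracy):+.3f}")
# Long specification: controlling for institutions, the 'effect' vanishes.
print(f"long regression:  {ols_slope(growth, democracy, institutions):+.3f}")
```

In this toy example, the “democracy effect” in the short regression is pure confounding; a researcher who stops at the first specification, or who simply prefers its result, reports a sizeable positive effect that does not exist.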
These problems are not confined to economics; as econometric methodology expands to other fields, the risks of misinterpreting data grow with it. Take, for example, studies on how religiosity affects people’s career paths. Knowing that even large, carefully executed polls failed to predict Brexit, Trump’s victory, and Labour’s surge in the UK, how can we trust other surveys to teach us about religion or social preferences? How can anyone build a theory on such data? Above all, how can these studies shape policy design?
Some empirical studies that reached wrong conclusions sit harmlessly in the ivory tower of academia, desperately waiting for a rare reader. But many do make their way into the real world, either through policy-making (top-down) or through media outlets (bottom-up).
Via the policy route, developing countries have been among the victims of flawed empirical studies. The Washington Consensus, for example, prescribed fiscal consolidation and trade liberalization. Later, however, it became clear that this was bad advice; copying economic institutions from the Global North and applying them to developing nations without considering country-specific conditions can be devastating. While the Washington Consensus was an elegant argument supported by data from the West, it failed to account for the complexities of the Global South.
The second route of influence is the media. When people read the news and encounter headlines like “a recent study found…”, it catches their attention. But that recent study’s sample size may be very small and its data deficient, and neither media editors nor readers will be aware of it. To them, the study’s seemingly conclusive results are what matter.
To avoid acting on false conclusions, academics, policy-makers, and media professionals all carry the responsibility to treat empirical findings with skepticism. If this were physics, a causal relationship based on data could be trusted. But in economics and politics, human factors create complications that statistical methods cannot always handle. Overall, it’s better not to place too much faith in statistics, because data says so.
Originally published on The Minskys