Mike Konczal deserves a huge back-pat for blowing up the internet with this post about a new academic paper identifying several flaws in the main piece of pro-austerity research at the heart of Paul Ryan's argument since 2010.
In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, "Growth in a Time of Debt." Their "main result is that…median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower." Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.
This has been one of the most cited stats in the public debate during the Great Recession. Paul Ryan's Path to Prosperity budget states their study "found conclusive empirical evidence that [debt] exceeding 90 percent of the economy has a significant negative effect on economic growth." The Washington Post editorial board takes it as an economic consensus view, stating that "debt-to-GDP could keep rising — and stick dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth."
In short, three researchers from UMass show the original paper to have three major flaws. First, it selectively excludes data on high-growth, high-debt countries. Second, it uses a bizarre (and statistically ridiculous) method of weighting the data. Third, and perhaps most awesomely, Reinhart and Rogoff made a formula error in the Excel spreadsheet (!!!) they used to analyze the data. As Mike says, "All I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel." Since he explains the paper's three major errors well, I won't belabor them here.
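The weighting flaw is worth making concrete. Here is a minimal, hypothetical sketch (the numbers are invented for illustration, not R&R's actual data) of how giving each country one equal vote, rather than weighting by country-years, lets a single bad year swamp many good ones:

```python
# Hypothetical illustration of the country-weighting problem.
# Growth rates are invented; they are NOT Reinhart & Rogoff's data.

country_years = {
    "A": [2.5] * 19,  # 19 high-debt years of steady 2.5% growth
    "B": [-7.0],      # a single high-debt year with a deep contraction
}

# Weight every country-year equally: 20 observations in total.
all_years = [g for years in country_years.values() for g in years]
year_weighted = sum(all_years) / len(all_years)

# One equal vote per country: average each country's own mean.
country_means = [sum(ys) / len(ys) for ys in country_years.values()]
country_weighted = sum(country_means) / len(country_means)

print(year_weighted)     # the 19 good years dominate (about 2%)
print(country_weighted)  # one bad year cancels 19 good ones (negative)
```

Under the one-vote-per-country scheme, country B's single year carries as much weight as all nineteen of country A's, flipping the sign of the average.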
Explaining the petty intricacies of academic research for mass consumption is not easy, and that's what Mike really nailed here. However, I want to point out yet another issue with the original research.
The problem began when no one in academia could replicate the R&R paper. Replication is at the heart of every field of scientific inquiry. If I do a test proving that water boils at 212 F, then everyone else should be able to get the same result. In order to make that possible, I have to share my data with the rest of the scientific community – what kind of vessel I used to boil the water, the altitude and atmospheric pressure, the mineral content of the water, and so on. I have to show everyone else exactly how I did it.
What non-academics might underestimate reading Mike's account is just how egregious a red flag it is when A) no one can replicate a major finding despite considerable effort and B) the authors of a controversial paper refuse to share their data or explain their methods. To a non-academic, the data might seem like "property" owned by the authors to which no one else is entitled. In academia that simply is not how it works. Every reputable journal on the planet has a policy of sharing replication data, and any publicly funded (NSF, etc.) research must, by law, make all data publicly available.
So when R&R not only refused to share data for years but also refused even to tell anyone how they weighted the observations, Red Flag doesn't begin to convey how sketchy that is. Fireworks should have been going off at this point suggesting that this is not research to be taken seriously, and in fact it is possible that the "findings" were simply made up.
The science/academic people out there are probably wondering how in the hell one gets a paper published without even explaining the methodology used in the analysis. Good question! The answer is our friend the "special issue" in which papers are invited by the editor rather than being peer-reviewed. In other words, the R&R paper didn't even go through the standard review process (which is plenty flawed, but at least it's something) before publication. No one at any point in the process checked these results for accuracy or even looked to see if the authors actually did an analysis. Cool.
So that's how a paper based on cherry-picked data, a scheme for equally weighting every country in the analysis (which wouldn't pass muster in an undergraduate statistics course), and a computational error became the primary evidence in favor of a pro-austerity agenda here and around the world. Mike is charitable in calling these issues innocent mistakes on the part of the authors. They might be, but I have a hard time believing that Harvard economists make the kinds of errors that somnolent undergrads make on homework assignments. When authors refuse requests for data, 99.9% of the time it's because they know goddamn well that they did something shady and they don't want you finding out.
Are these results fabricated? No. They did an analysis. A really bad one. My guess is that they ran dozens of different models – adding and removing variables, excluding and including different bits of data – until they got one that produced the result they wanted. Then they reverse-engineered a research design to justify their curious approach to the data. Every academic who handles quantitative data has been there at some point. That point is called grad school.
Nick says:
So basically, the Ryan budget is the cold fusion of economics.
Xynzee says:
@Nick: at least w CF people waited to check the science before selling the farm (or in this case the Nat'l infrastructure).
Sean says:
Morans.
wetcasements says:
"So basically, the Ryan budget is the cold fusion of economics."
+1
Austerity, of course, is a moral position, not a logical or economic one. So good on RortyBomb, but this will change nothing.
Middle Seaman says:
Ryan's budget was a con job with or without R&R. His tax cuts to the rich were way larger than his draconian cuts in the safety net. In other words, his budget not only didn't shrink the deficit, it blew it up substantially.
Special issues may be peer reviewed; they exist because of faster publication times and the appeal of dedicating an issue of a journal to a single topic. (In my field they are always peer reviewed.)
The academic community works on trust and respect. Publications are assumed to be correct and the authors sincere. The peer review process is far from perfect and relies on the volunteer work of people with heavy workloads. Mistakes are not always detected. Once in a while the trust is breached and a big stink follows. R&R were probably trusted until someone smelled a rat. It did take several years. Researchers are human beings; some of them are crooks.
Arslan says:
The field was economics, their findings were favorable to the ruling class. Hence, they were doing their jobs.
c u n d gulag says:
Reinhart and Rogoff probably placed a call to Michelle Rhee a few days ago, to see if she knew any people who could do more than just correct some kid's tests, and would "erase" Mike Konczal.
Oh yeah, this was just one big, "OOOOOOOOPS!!! Our bad…"
Right.
I'm from NY City, so I wasn't born in a turnip truck.
And I sure as Hell wasn't born yesterday.
Zebbidie says:
I don't think the sneer about Excel was justified. Most major companies and most major economies are run off Excel spreadsheets. They seem to do just fine.
Major Kong says:
Austerity can never fail. It can only be failed.
jon says:
Austerity: the opiate of the classes.
sluggo says:
Excel error?
Checking your math is part of the vast left wing conspiracy to take over Amerika.
Hazy Davy says:
So, if all countries for which accurate data is available are used,
a more valid weighting is used,
and one performs the computation correctly,
what information does the analysis actually present? Are there conclusions we should be deriving from the analysis?
Hazy Davy says:
"So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim].""
John says:
There is one and exactly one reason to make any attempt to hide data or methods in a scientific analysis: the analysis was bullshit.
Well, mostly says:
"People believe what they want to believe and disregard the rest…."
Once again the artists, here, S and G, explain the overarching narrative we find over and over. This whole sordid affair is present in some way in every field. Enron? Auditors, analysts, and attorneys of every stripe all over it and it still blew up. LTCM. Madoff. ZZZZZZ Best.
Any bets that Ryan will move an iota in his beliefs? Or that R & R will feel a bit of shame and we see some public apology? I bet they double down and their next "paper" has an even more extreme conclusion, right in line with what they already "know" and their funding sources want.
The sad part is that bad research pollutes the good and honest work so many others do.
One could almost get cynical.
Paul says:
Not just "three major flaws," but three damning flaws; three utterly fatal, credibility-destroying, worthy-of-the-trash-can-rendering flaws that should put the kibosh, not only on this paper, but (with a little luck) on the reign of austerity as the guiding principle of "First World" policy.
MF says:
Much of this falls into Daniel Davies category of
"Good ideas do not require lies to be sold"…we can add "or obfuscation and data secrecy".
Wareq says:
Send it to the Journal of Irreproducible Results and the Annals of Improbable Research.
Charles Bird says:
What was in it for these guys to cook their books so to speak? Who funded their "research"? If it was our federal money, how can they continue to deny access? I guess one might defend them by saying that too many social scientists suck their conclusions from the end of their thumbs, so it may be taken as standard practice.
BrianK says:
Mary Rosh was able to duplicate their results, just fine, thank you.
not a gator says:
Ha ha, Grad school. As Homer said, there's a time and a place for everything. Then you drink yourself silly because you hate yourself and you drop out and change professions, hahaha.
I did have a friend who had to legitimately toss out data. He was studying calcium channels in frog oocytes, but sometimes the oocytes died, and then he'd get a flat line instead of a nice voltage curve. He still used to joke about how he was cherry-picking results. There have been a lot of cases in science where the model in use caused the investigators to toss out results they thought were "spoiled" when it was really the model that was flawed or breaking down in that instance, especially in medicine, so judgments like that are really fraught.
Of course, these guys are way past that point.
Alex SL says:
Thanks for the explanation. I was unaware that the paper did not have to go through peer review.
The last comment is uncalled for though. From what I have seen in my area, the average grad student is honest and unbiased. It is more likely that people try to twist data when they are further on in their career: the postdoc who realizes how desperately competitive the field is and that their chances would be greatly increased by having spectacular results; the established professor who finds that doing a decent analysis would disprove a theory that they have been pushing publicly for the last 15 years.
I generally say that it doesn't matter that most of us are irrational about some things, because scientists constantly test and criticize each other's work, so we collectively get it right in the end. However, in economics there are such big incentives to say what certain groups want to hear that I am not so sure.
just me says:
Surely every sensible person knows that facts and data make no difference if you really BELEEEEVE in your presupposed conclusion.
Crocodile Chuck says:
"Replication is at the heart of every field of scientific inquiry. "
Economics isn't science.*
If a physics researcher made an INADVERTENT error in a published paper, he would be ostracised from the academy.
These a _ _ holes? Back to Pete Peterson for another handout.
* Misnamed: The Sveriges Riksbank Prize in Economic Sciences
Tim says:
Hazy Davy, as you posted, their analysis when done properly should have a result of 2.2%.
Anyways, what this means is that there is a weak correlation between high debt and low growth. But it's definitely clear by now that they have no mechanism for determining causality, and there is plenty of evidence that it runs the other way. I'll give a model as an example of how low growth could cause higher debt instead of the reverse:
Country A has a GDP of $1000 a year.
Country A's government borrows $30 per year, and at the start of our model has a debt of $500.
Country A has a GDP growth rate of 5% per year.
As you can see, the growth of the economy will make the debt burden smaller relative to the economy over time even as the nation's government runs a deficit. Now, a recession hits Country A:
The GDP growth rate is reduced to 1% per year, while borrowing remains at $30 a year.
Now, the growth rate is too small and thus the relative size of the debt begins to climb. It's fairly easy to see that increasing levels of debt do not cause the slow growth, but rather (at least in a straightforward recession) the slow growth causes increasing relative levels of debt.
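The toy model above is easy to simulate. A minimal sketch (the function name and the 10-year horizon are my own choices; the starting numbers are the hypothetical ones from the comment):

```python
def debt_ratio_path(gdp, debt, growth, borrowing, years):
    """Track the debt-to-GDP ratio year by year."""
    ratios = []
    for _ in range(years):
        gdp *= 1 + growth   # the economy grows
        debt += borrowing   # the government borrows a fixed dollar amount
        ratios.append(debt / gdp)
    return ratios

# Country A: GDP $1000, debt $500, borrowing $30 per year.
boom = debt_ratio_path(1000.0, 500.0, 0.05, 30.0, 10)  # 5% growth
bust = debt_ratio_path(1000.0, 500.0, 0.01, 30.0, 10)  # 1% growth (recession)

print(boom[-1])  # after 10 years at 5% growth: ratio ends below 0.5
print(bust[-1])  # after 10 years at 1% growth: ratio climbs past 0.7
```

With a fixed dollar deficit and exponential growth, the ratio can tick up for the first few years before GDP growth overtakes borrowing and pulls it back down; under 1% growth, the ratio just climbs steadily. Same borrowing, different growth, opposite debt-ratio trajectories: the causality runs from growth to the debt ratio.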