From the Comments

Here are two of the objections to Saturday’s “More Reasons to Call Off the Reinhart-Rogoff Witch Hunt” from a commenter (on Seeking Alpha) who contributed a competing list of “pet peeves” to counter my list:

  • That anybody talks about debt and causation now without addressing Arin Dube’s finding. He hypothesized that if high debt causes low growth, high debt should be more strongly correlated with slow growth in future years than in past years. Instead, high debt was much more strongly correlated with slow growth in past years than in future years.
  • Preying upon ignorance. Laypeople, policy-makers, and the media often assume that published research must be peer-reviewed. If you are talking to them about work that isn’t, you probably should disclose that little detail upfront as well.
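Dube’s lead/lag test from the first bullet can be sketched in a few lines. The data below are purely synthetic (not Dube’s), generated so that slow growth pushes debt up rather than the reverse; the point is only to show how comparing debt’s correlation with past versus future growth can separate the two causal stories.

```python
import random

random.seed(0)

# Synthetic illustration: growth shocks drive the debt ratio (reverse
# causation), so debt should correlate with PAST growth, not FUTURE growth.
T = 2000
growth = [random.gauss(2.0, 1.0) for _ in range(T)]
debt = [60.0]  # stationary mean of the process below is 60
for t in range(1, T):
    # Slow growth last year pushes this year's debt ratio up.
    debt.append(0.9 * debt[-1] - 2.0 * growth[t - 1] + random.gauss(0, 1) + 10)

def corr(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

k = 3  # lag/lead in years
past = corr(debt[k:], growth[:-k])    # debt_t vs growth_{t-k}
future = corr(debt[:-k], growth[k:])  # debt_t vs growth_{t+k}
print(f"corr(debt, past growth)   = {past:+.2f}")
print(f"corr(debt, future growth) = {future:+.2f}")
```

Under this data-generating process the past-growth correlation comes out clearly negative while the future-growth correlation hovers near zero, which is the signature Dube reported for the actual data.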

I’ve seen these same two points all over the web. Let’s think about them.

I’ve argued that RR’s critics are on nothing more than a witch hunt, and in response, they insist that RR “preyed upon ignorance” by not disclosing that their paper wasn’t peer reviewed. I’m guessing the commenter is right that RR aren’t likely to have disclosed this “upfront.” They probably don’t begin every conversation with “Please ignore my research because it wasn’t peer reviewed.” Or maybe they do; I don’t follow them around and really don’t know.

Because we can’t be sure, how can we satisfy RR’s critics and make sure this doesn’t happen again? Should we make a stamp for the American Economic Review, so they can put marks on the foreheads of all Papers and Proceedings authors? And the same stamp for the National Bureau of Economic Research (NBER) and the Journal of Economic Perspectives, since their papers aren’t peer reviewed, either? (h/t reader “Mike” on Marginal Revolution.)

Oh wait, the author touted by the pro-stimulus lobby all last week and linked in the first bullet point above would need a stamp, too, since his work wasn’t peer reviewed. And I didn’t see the “not peer reviewed” disclosure on any of those sites.

Maybe I’m right about that witch hunt, after all.

Witch hunt also witch-hunt
An investigation carried out ostensibly to uncover subversive activities but actually used to harass and undermine those with differing views. (The Free Dictionary by Farlex)
An intensive effort to discover and expose disloyalty, subversion, dishonesty, or the like, usually based on slight, doubtful, or irrelevant evidence.


7 Responses to From the Comments

  1. Margaret says:

    I think it might be helpful to link this debate to the wider debates happening across many disciplines, where various high-profile published peer-reviewed articles have since been found to be flawed, with most of the heat (but not necessarily light) being generated in the medical and climate change areas.

    1. There is a debate about the quality of peer review. Somehow “peer review” has come to mean “it’s right” rather than “this paper is worth us discussing further, and if it stands up to that discussion then it will become part of the corpus of knowledge”. What does peer review mean? If it is taken to mean “it’s right”, are the current peer review practises (e.g. who is responsible for picking the reviewers, what they do as part of peer review, the anonymity of reviewers) consistent with this meaning?

    2. There is a debate about access to the information so that results can be tested. In some cases this has even been denied to peer reviewers! Often this requires not just the statistics but also, as in the case of RR, the computer programme used to analyse the data, so that the actual reasons for the results can be determined. The slow provision of raw data and computer programmes seems to be a feature of more than a few studies that have subsequently been found wanting. The academic community needs to be willing to set the standards that are acceptable in this area, and probably fast if it wants to maintain credibility. Journals’ setting of “guidelines” or even “requirements” that seem to be frequently honoured in the breach is not working. For instance, what should university policies be around retaining staff who do not supply the data and programmes needed to assess their work?

    3. There is a related debate over “science by press release”, where a study gets enormous coverage and becomes part of the “folklore” of understanding before it is published and can be critically assessed by others. This is probably of greatest concern in areas involving the public (health: think measles vaccine) or politicians (economics and climate change, to name but two). A related issue is that (unlike in the case of RR) the subsequent critique and/or retraction gets far less publicity. What should the standards be around press releases for work that is yet to be published, and how does this relate to the meaning of “peer review” under paragraph 1 above?

    • perfectlyGoodInk says:

      As I’ve stated elsewhere, macroeconomists in particular face strong incentives to create research that can be used as ammunition by politicians on both sides of the aisle (note the strength of Keynesianism within the field), as it gains them attention, influence, and perhaps a political appointment. I see this bias as a major reason most macro theories are flawed. Much of the research seems to be broadly aimed at the wrong question: “should there be more/less government involvement in the market” rather than “how can we make better economic predictions and models”. Bias is inevitable, and it results in many psychological tendencies that lead to error. Whether or not that was the source of RR’s error will forever be debated, but it’s clearly a human tendency that ought to be corrected for.

      One method is the norm where you disclose why you chose the weighting methods you did and whether or not the result still holds up when other methods are used (HAP make this critique on page 8). Real-world data is very messy, the methods for correcting for its quirks all have their pros and cons, and so, as Steve Levitt says, “Regression analysis is more art than science.” The “right” method is always debatable, so a result typically needs to be robust to the choice of weighting method before it is considered significant.
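      The weighting issue can be made concrete with a toy example. The numbers below are hypothetical, chosen only to show that two defensible weightings of the same panel can flip the sign of “average growth in high-debt years”:

```python
# Toy panel: growth rates (%) observed in each country's high-debt years.
# Hypothetical numbers chosen only to show how weighting changes the answer.
panel = {
    "A": [2.5],                          # one high-debt year
    "B": [-7.9],                         # one very bad high-debt year
    "C": [2.4, 2.5, 2.6, 2.4, 2.5],      # many ordinary high-debt years
}

# Weighting 1: average within each country first, then across countries
# (each country counts equally, however many years it contributes).
country_means = [sum(v) / len(v) for v in panel.values()]
by_country = sum(country_means) / len(country_means)

# Weighting 2: pool every country-year observation equally.
all_years = [g for v in panel.values() for g in v]
by_year = sum(all_years) / len(all_years)

print(f"country-weighted mean growth: {by_country:+.2f}%")
print(f"year-weighted mean growth:    {by_year:+.2f}%")
```

      Here the country-weighted mean comes out negative (about -0.97%) while the year-weighted mean is positive (+1.00%), so a conclusion that survived only one of the two weightings would not be robust in the sense described above.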

      Another method is for work to be replicated and peer-reviewed, as readers with different ideological views from the researchers’ may notice things that researchers eager to find what they were looking for may have missed. I think this is necessary but not sufficient, as there aren’t particularly strong rewards for either; note that Levitt himself was subject to a very similar controversy regarding his paper on abortion and crime, work that was peer-reviewed (this is also a better example of Margaret’s point #3, where the critique/retraction didn’t get very much attention).

      One would hope that economists would be hyper-aware of perverse incentives upon themselves and their work, and would only take on research questions for which they don’t have very strong priors, but counting on individuals to defy incentives is generally setting yourself up for failure (imagine relying upon politicians to turn down bribes, or upon individuals to minimize how many welfare or subsidy dollars they take from taxpayers).

      I can think of a number of suggestions, but at the root, I think it’s a case of power corrupting. It corrupts politicians; why wouldn’t it corrupt economists with influence as well (Krugman, anybody)? Macroeconomics has too poor a track record of prediction to warrant very much attention from, or influence upon, policymakers. Without this undeserved influence, the incentive to pander would evaporate.

      Not exactly holding my breath that this will happen, but I do see it as the silver lining to this whole affair that economics as a whole has lost some of its prestige.

  2. ffwiley says:

    Thanks Margaret,

    This is helpful and I passed on your thoughts in another comment thread today (on Seeking Alpha).

  3. perfectlyGoodInk says:

    Not sure why you didn’t include it, but here’s the link to my comment in question.

    • ffwiley says:

      The answer is that we had no idea that you could link to a comment. And I’ve looked at your hyperlink and I’m still stumped.

      • perfectlyGoodInk says:

        It depends on the blog commenting software, but you can insert links within a page using anchor tags. Seeking Alpha puts a little chain-link icon below the comment which is a permalink to the comment. For this blog, the date of the comment has the permalink.
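        As a sketch, assuming typical blog markup (the id value here is made up), a comment permalink is just an element carrying an id plus an ordinary link whose fragment targets that id:

```html
<!-- Each comment gets a unique id (this value is hypothetical): -->
<div class="comment" id="comment-1234">
  ...comment text...
</div>

<!-- The permalink is an ordinary link whose fragment targets that id: -->
<a href="https://example.com/post/#comment-1234">permalink</a>
```

        Following the link scrolls the browser straight to the element with the matching id, which is all the chain-link icon on Seeking Alpha (or the comment date on this blog) is doing.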

Comments are closed.