6.07.2016

Wonderful Swedes, Workable Science

If y'all haven't already discovered The Weeds, Vox's policy podcast, OMG you don't know what you're missing. It's like the antidote to the vitriol and despair of American politics: detail-oriented discussion of nuts-and-bolts policy at the level of the people who actually do that work. While Trump and Clinton and Sanders exchange body blows, the business of governing quietly carries on in federal, state, and municipal offices around the country, and it's reassuring to remember that and to learn what the cutting-edge ideas are for how best to advance that business.

In their May 27, 2016 episode, they discuss a shiny new paper using the near-omniscience of Swedish administrative data to estimate the effects of psychological trauma during pregnancy on the mental health of the child when they reach adulthood. It's cool. But in the most recent episode from June 3 (middle show segment), they note that the paper in question has been called out for failing to acknowledge its intellectual forebears, possibly in an effort to appear more novel than it actually is. Matt Yglesias and Sarah Kliff then capably recap some of the key questions in the replicability crisis that's kept the scientific community up nights lately. And that means that for a hot minute this became a science policy podcast, which of course got me all heebly-jeebly.

They do a great job of explaining, to a policy audience probably not as obsessed with this crisis as sciency types have been, how incentives to publish ever more novel and controversial findings may have rendered suspect a considerable chunk of research in the last few decades. I'm super glad for that, and am totally stealing Sarah's "hot take machine for the academy" line. However, one important thing they don't touch on is the good reasons the system came to look like this in the first place.

What we fear when we talk about the value of original research is not replication; it's duplication. Not so much in the sense of fraud or plagiarism, though those are certainly undesirable, but rather in the sense of research on a treadmill, wherein a failure to learn the history of one's field leads to wasted time and energy reinventing the wheel. I get the sense that this is one area where scholarly work is genuinely better than most opinion journalism, even the data-driven variety: if you have to cite 100 other papers to publish an article, there typically won't be a gigantic surplus of articles on a subject (though even with that constraint, it sometimes feels like there is), and you probably won't miss giant chunks of important information -- all of which leads to efficiency and centralization in the field.

As an exercise in contrast, consider this article from the right-wing blog Zero Hedge. It's a retort to a WaPo "fact check" of a collage of charts ZH posted (you follow that?). While obviously more low-brow in tone and aim, this exchange is pretty similar to the kinds of substantive back-and-forth you see in academia all the time. The difference is that these articles were not, of course, subject to peer review; not carefully scrutinized for evidence of bias in presentation (though that is in part what the debate is about); not held to exacting disciplinary standards for analytical methods (though both claim to uphold the standards within their own spheres of influence); and thus they were able to be published within days of each other. I do give more credence to the fact-checking article, and I don't mean to draw false equivalence -- only to point out that when there isn't sufficient emphasis on drawing from a base of common knowledge, you can always throw charts at each other and most readers will just believe whichever one sounds best to them.

Thousands of such feuds erupt all over the internet all the time, and most of them ultimately produce nothing but heat and a sense of tribal superiority. The emphasis in academia that you can't just go make stuff up without looking at what everybody else in your field has done tends to minimize the flash-in-the-pan aspects of these debates, so even though they take longer, they do tend to resolve in one direction or another -- at least for a while.

In the Vox example, it's true that Matt only saw the paper because of its new publication and its presentation on NBER. However, that didn't have to be the case: if NBER ran a feature on seminal papers from the past alongside the freshest work, for instance, it would accomplish much the same thing in terms of disseminating useful knowledge, without present-day authors being incentivized to angle for a moment in the spotlight, prior literature notwithstanding. The website would, in a sense, better resemble the didactic structure of the academy: know the oldies & goodies as well as the new hot stuff in roughly equal measure, so we're all working from the same base of knowledge and can speak a common tongue.

So originality and lit review, which are significant factors in the novelty-controversy maelstrom, are still definitely worth maintaining even as we figure out how best to incentivize replication, data-sharing, open-access publishing, and all that sort of stuff; and professional organizations & science journalists can help advance the cause too, perhaps by adopting some of the healthier norms of the academy while rejecting the less healthy ones. I'm 1000% in favor of staid replication, but it's good to remember what else we want to maintain at the same time.



TL;DR: science's historical emphasis on original work requires people to understand the prior literature, so we don't lose knowledge as fast as we generate it. It just also has the side effect of overvaluing novelty. We should fix that, but we shouldn't conflate ignorance or plagiarism with hot-takeism; they're two separate problems and require different solutions!



Also, coming up: another post about how to be a skeptical reader of science journalism in the right ways, not the wrong ways. Stay tuned!
