Peer review, replication, and thankless tasks
Peer review and the ability to test claims are powerful but not infallible. The video (here) from The Economist covers how science and peer review may be less reliable than we hope or believe. In short, industry, such as pharmaceuticals, may draw on academic research only to find that it cannot be replicated; pharma has brought that issue to light. Many who think about this issue know that replication and verification are not well rewarded, and so the scientific method may not live up to its potential. The discussion also gets into some nice issues regarding statistics and false positives. It also looks at the failure of peer reviewers to do their jobs as well as desired (for example, not catching errors that one journal deliberately inserted as a test). And peer review is not about reviewing the raw data.
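The false-positive problem has a simple arithmetic core: if most hypotheses tested are false, even a modest significance threshold lets through enough noise to swamp the real findings. A minimal sketch, using illustrative numbers I have chosen for the example (1,000 hypotheses, 10% of them true, 80% statistical power, 5% significance level), not figures from the video itself:

```python
# Illustrative false-positive arithmetic. All numbers below are
# assumptions chosen for the example, not data from any study.
def false_discovery_fraction(n_hypotheses, prior_true, power, alpha):
    """Fraction of 'significant' results that are actually false."""
    true_hyps = n_hypotheses * prior_true
    false_hyps = n_hypotheses - true_hyps
    true_positives = true_hyps * power     # real effects detected
    false_positives = false_hyps * alpha   # noise passing the test
    return false_positives / (true_positives + false_positives)

# 1,000 hypotheses, 10% true, 80% power, 5% significance level:
# 80 true positives vs. 45 false positives, so 36% of the
# "discoveries" are wrong.
print(false_discovery_fraction(1000, 0.10, 0.80, 0.05))  # 0.36
```

The point of the sketch is that a 5% significance level does not mean only 5% of published positive results are false; the false share depends heavily on how many tested hypotheses were true to begin with.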
I wonder whether open data sets, as Victoria Stodden has described them, will help here. It may be that modeling and other software approaches will be able to test the raw data and examine the method of collection to note its limits and find errors. Who knows? Maybe replication can be automated so that people can focus on new work and machines can deal with the drudgery of “Yes, that is correct.”
UPDATE: I noticed that The Economist has an autoplay ad. That is lame. I have removed the embedded video, but I still recommend going to the site to watch it.