Oops! Why so many mistakes?

I think we can all safely agree that all science is flawless. Also, experimental designs are invariably perfectly suited to the problem at hand, no errors ever occur during data processing, and published science is FACT. Meanwhile, back in reality, mistakes happen and things aren't always what they seem. But how often do mistakes occur? And why do they happen?

Mistakes aren't necessarily problematic. If they occur infrequently, are detected and corrected in a timely manner, and are not linked to any deceitful research activity, then we can probably pass them off as just one of those things that happens when we pesky humans undertake an activity. These are usually dealt with by way of a published correction or, in dire cases, a retraction.

The problem is that we are now starting to talk about the structures surrounding science (e.g. funding mechanisms and the peer review system) actually increasing the likelihood of needless mistakes and error-prone work. In an example published in Nature, a life sciences researcher suggested that published false positive results are an inevitable product of the competitive nature of the field: as scientists seek faster ways to generate and analyse data, the necessary scrutiny of results is not applied and needless mistakes transpire. These mistakes make their way into the published literature and are followed by embarrassing retractions. And all the while, the viability of major projects and individual careers hangs in the balance.

In another piece on the topic of mistakes, also in Nature, the editors lamented the large number of mistakes creeping into papers and the ballooning number of corrections and retractions they are nowadays implored to publish. These mistakes range from relatively benign missing references to inappropriate tampering with figures or the improper use of statistics. In most cases, the data can be fixed and the findings of the paper hold true.

I've made many of these kinds of mistakes before. Recounting my own time as a PhD student would be a litany of naïve errors. I once spent a whole weekend of valuable instrument time, which I had cajoled the laboratory manager into allocating to me, measuring control standards instead of analysing actual samples, all the while wondering why I was getting such consistent measurements. Oops! And then there was the time I lost a precious sample, only to find it a week later on a dusty window sill, where I had placed it while juggling boxes of sample vials, marker pens and lumps of rock many days beforehand. Another time, I spent several days writing and debugging code to analyse a dataset, tearing my hair out and becoming increasingly perplexed and frustrated. Later, I realised that I had mistakenly copied the wrong data from one spreadsheet into another. Rather than calculating the statistics of climate variability, I was investigating the properties of a straight line. Hmm… not good. Or there was the instance of a very poorly timed sneeze that sent my carefully weighed-out grains of precious sample flying off the scales and into the ether around me.
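With hindsight, that spreadsheet blunder was entirely preventable: a few lines of defensive code, run before any serious analysis, would have flagged the straight line immediately. Here's a minimal sketch of the kind of sanity check I mean (Python with NumPy; the function name and thresholds are my own illustrations, not from any particular package):

```python
import numpy as np

def sanity_check(values, name):
    """Cheap pre-analysis checks that catch 'copied the wrong column' mistakes."""
    arr = np.asarray(values, dtype=float)
    if np.isnan(arr).any():
        raise ValueError(f"{name}: contains missing values")
    if np.allclose(arr, arr[0]):
        raise ValueError(f"{name}: every value is identical -- wrong data?")
    # Real climate data should leave plenty of scatter around any trend;
    # a near-perfect linear correlation with time is a red flag.
    t = np.arange(len(arr))
    r = np.corrcoef(t, arr)[0, 1]
    if abs(r) > 0.999:
        raise ValueError(f"{name}: almost a perfect straight line (r = {r:.4f})")
    print(f"{name}: n = {len(arr)}, mean = {arr.mean():.3g}, sd = {arr.std():.3g}")

# e.g. sanity_check(monthly_temperatures, "monthly temperatures")
```

It wouldn't have stopped the sneeze, but it would have saved me several days of debugging.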
If most mistakes are unintended slips that can be corrected, is there a need for concern? Increasingly, the answer is yes. Firstly, shoddiness in research and subsequent publications breeds a general, and quite understandable, distrust of scientists and their science. Why should we believe your results when you keep coming back and saying that the last ones were wrong, but it's all OK because they weren't that wrong and these new ones are definitely right?

Also, the large number of mistakes that plague published research papers may be inhibiting our capacity to produce valuable scientific outcomes. One example outlined in the Nature piece last March is the particularly low number of cancer-related research studies that have been successfully converted into clinical trials. Many scientific studies cannot be reproduced and so cannot be used for follow-up trials. These are not instances of fraudulent publication, but rather instances where insufficient material was published with the manuscript, or where the materials that were published are marred by errors.

The causes of sloppy work seem to be simply insufficient time or expertise to handle large projects. Although the chief investigator of a project is responsible for ensuring the quality of the data and publications arising from their projects, much of the tedious grunt work in a laboratory is done by graduate students or junior postdoctoral researchers. Often (or at least in my case) these are fairly green young researchers with minimal training, who require active supervision when working with large, complex datasets and designing effective experiments. Realistically, however, a lax laboratory head may only see data after it is accepted for publication or after problems in published studies arise.

Under intense pressure to generate novel and exciting results, particularly in the fast-moving life sciences, researchers who succeed in publishing high-profile papers revel in kudos. The same can certainly not be said for the persistent researcher who devotes years to a well-conceived, replicated experiment with solid evidence of a useful result. In a competitive field, the desire to avoid being scooped by a rival laboratory outweighs the risk of issuing a retraction at a later point in time.

Although the causes of these error-littered publications are comparatively simple, the solutions are more difficult to pin down. So much of science, in both the generation and publication of results, is done in good faith. Ultimately, laboratory heads must take responsibility for work generated under their supervision and train their staff appropriately. It is not acceptable to refuse to review the work of junior staff prior to publication. Beyond these admittedly large and often invalid assumptions about the code of conduct operating in scientific groups, changes to journal publishing may also temper the building wave of mistakes. For example, online commenting on published papers has been flagged as a starting point: readers could directly alert journals and authors to mistakes, and these would be recorded appropriately. Also, online journals (such as PLoS ONE) that allow the publication of more complete methods and results increase the scope for others to scrutinise published work.

I was thrown into a laboratory as a young graduate student with little training but an unwavering idealism. I grasped at opportunities for better laboratory training, for more robust statistical skills and to have someone (anyone!) review my work. Fortunately, during this time, I was able to obtain appropriate scrutiny of my work from mentors outside my group prior to the publication of my results.
As such, I can certainly understand the culture that permeates the practice of science and drives ill-conceived work to be submitted to journals prematurely. But with something like a potential cure for cancer at stake, I think all those who manage junior researchers, who work in a laboratory, who conduct statistical analyses or who wrangle large datasets need to give due care to the work they undertake. A retraction or correction isn't just an individual embarrassment; it should also be read as a sign of an incremental yet growing distrust of published science.
