• Best practice for reproducible research

    July 15th, 2017

    Reproducibility in research is the ability of a researcher to duplicate the results of another study with a degree of concordance that makes the original study at least believable. The impact of irreproducible results was discussed in my last blog.

    Global investment in biomedical research is over US $100 billion annually, and leads to major breakthroughs that pay dividends to human health. But not all research translates into benefit, however noble the intention of the researcher. Many studies legitimately test valid research hypotheses that might shed light on disease processes, but may prove to be unactionable. It’s been estimated that ~85% of research investment is wasted annually because of problems that can be corrected. More information on reducing waste and increasing research value can be found in a Lancet series of five articles published in 2014.

    Although deliberately falsifying or fabricating data is the most damaging form of misconduct, it is less frequent than more widespread misbehaviors that often fly under the radar. The ‘most worrying misbehaviors’ cited in a survey of ~1300 scientists attending international research integrity conferences were protocol deviations, selective reporting of positive results, insufficient reporting of study flaws and limitations, quality assurance failures, poor mentoring of junior scientists, and turning a blind eye to the misconduct of co-workers.

    While there are many drivers of the scientific culture that generates ‘sloppy science’, it is quite possible to remedy this by adopting practices that have worked in other scientific disciplines. For example, guidelines developed by the NCI-NHGRI Working Group on replication in genetic association studies have led to large-scale collaborations that transformed genetic and molecular epidemiology from a field of spurious associations into a highly credible one.

    Standards exist in many industries that define best practice for various systems that ensure consistency, quality and integrity, and procedures to ensure compliance.

    The following list is by no means exhaustive, but outlines some of the ways to ensure reproducibility.

    • Schedule routine and informal ‘catch-ups’ between lab personnel. If done in the spirit of collaboration and good scientific citizenship, this will go a long way to encouraging honesty and admission of errors without fear of retribution. A major failure of lab heads is a ‘hands-off’ approach to problems, which fosters poor morale and can lead to ‘sloppy science’.
    • Develop a system of sample labeling, logging and storage organization. The aim should be accuracy rather than convenience or quick retrieval. Where multiple users access the same samples, a log should be maintained of where each sample came from, how old it is, and who did what to it. An open and transparent lab culture, good citizenship, and collaboration and cooperation go a long way to ensuring quality research.
    • Standardize protocols. The old adage ‘if it ain’t broke don’t fix it’ has no better place than in lab-based research. Protocols should be standardized and followed to the letter. If there are deviations, they should be documented and reasons given. In the interest of reproducibility, it should be standard practice that another individual in the group repeats the experiment. Senior research staff with many years of valuable experience and ‘magical hands’ should be encouraged to do the final replication before publication. 
    • Don’t skimp on equipment maintenance. Equipment and key lab instruments such as pipettors should be calibrated on a regular schedule and documentation of this kept.
    • Good technique is everything. Correct use of instruments should be an integral part of training for junior lab members. I’ve had the experience where I could not understand why one person’s 50 μl volume was consistently ~10-15 μl more than it should be. It turned out he was a bit ‘heavy-handed’ with his pipettor. The simple ‘eye-ball’ test by experienced lab personnel will quickly identify the source of problems that contribute to variability and experimental errors. 
    • Get rid of old and outdated reagents. Most research institutes and organizations have clear policies on this. Where there are budget constraints it can be tempting to push ‘old’ reagents beyond their ‘best by’ date to save money. This can cost more in the long term. 
    • Lab records and notebooks should be carefully and correctly filled out. These are legal documents for most institutions and cannot be removed from the lab. Electronic lab notebooks are on the rise, and allow protocols and methods to be aligned with the final published data where institutional audits are necessary. While random audits and spot-checks are commonplace in industry, this has yet to be widely implemented in academic institutions.
    • Report methods in detail. Where journals impose a word count, additional details of protocols and methods can be included in the Supplementary Information, on which most journals place no word limits. This will go a long way toward helping others replicate your experiments.
    • Publish negative findings. A considerable amount of blame for the bias in the scientific literature can be laid at the feet of journals that favor positive findings. It is also the responsibility of investigators to adequately address unexpected negative findings, without trying to put a positive spin on them. A respondent to a recent Nature survey reported that he expected rejection of a manuscript outlining why a technique had failed, and suggested that the reviewers accepted his paper because he offered a solution to the problem. One of the best papers I wrote was a replication study using well-curated data from the largest multi-centre study to date, and very robust analysis. We made good use of the Supplementary Information to detail all methods and results. Our findings refuted other positive claims, and an Editorial by the journal confirmed that the study was sufficiently robust to conclude that our negative findings settled the question of clinical relevance.
    • Pre-registration of a priori hypotheses. This has been one of the most publicized recommendations to improving reproducibility. It involves submitting an a priori hypothesis and analysis approach to a third party, before undertaking experiments, to guard against the temptation to pick potentially false positives that were not part of the pre-specified action plan.
    • Robust study design and analysis approaches. Much has been said about this, and it continues to be one of the most vexing issues in both lab and computational research. Approximately 90% of respondents to the Nature survey ranked “more robust experimental design”, including blinding and experimental controls where possible, and “better statistics” higher than institutional incentives for improving reproducibility.
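    The sample-logging point above can be sketched as a minimal shared log. This is an illustrative sketch only: the field names and file format here are assumptions, not a laboratory standard, and a real lab would adapt them to its own inventory system.

    ```python
    import csv
    import os

    # Illustrative fields for a shared sample log; the names are assumptions, not a standard.
    FIELDS = ["sample_id", "source", "date_collected", "handled_by", "action"]

    def log_sample_event(path, **event):
        """Append one event to the shared CSV log, writing a header for a new file."""
        is_new = not os.path.exists(path)
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if is_new:
                writer.writeheader()
            writer.writerow(event)

    def read_log(path):
        """Return all logged events, so any user can see who did what to each sample."""
        with open(path, newline="") as f:
            return list(csv.DictReader(f))
    ```

    An append-only log like this preserves the full handling history of a sample rather than just its current state, which is what matters when tracing the source of a discrepant result.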
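    As one concrete instance of the ‘better statistics’ point, a quick power calculation shows why small studies so often fail to replicate. This sketch uses only the Python standard library and the normal approximation; the effect size and error rates in the comment are illustrative assumptions, not recommendations for any particular study.

    ```python
    import math
    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        """Approximate sample size per group for a two-sample comparison of means
        (normal approximation; effect_size is the standardized difference, Cohen's d)."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
        z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
        return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

    # A 'medium' effect (d = 0.5) at 80% power and alpha = 0.05 needs ~63 subjects
    # per group; a study run with 10 per group is badly underpowered, and any
    # 'positive' finding it produces is far more likely to be a false positive.
    ```

    Running the calculation before the experiment, rather than after, is exactly the kind of pre-specified design step that pre-registration is meant to lock in.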

    Scientific integrity and good practice in academic research are generally assumed to be the responsibility of individual lab heads and lead investigators, whose careers or funding are potentially at stake. Their failures can also tarnish an institute’s reputation, as funding bodies increasingly publicize institution-based summaries of overall funding and achievements. Good scientific practice should therefore be cultivated at the institutional level, and a system of guidance and compliance can be developed and formalized both within and across institutes. Ultimately, there needs to be a complete shift in culture by all stakeholders, including investigators, institutions, funding bodies and journals, toward rewarding best practice.

    “Academic metrics need to be devised that distinguish citations of discredited claims so that it is not more advantageous to state and retract a result than to make a solid discovery.” (Jan Conrad, Nature Comment 2015)

    At SugarApple Communications we can help you find the best way to communicate with your intended audience and assist with writing, editing and statistics. Get in touch today and let’s talk.

  • The challenges of reproducible research

    July 5th, 2017

    The issue of irreproducible research has been of concern to scientists for decades, but recently there has been considerable focus on this problem by leading journals in various commentaries and editorials, and calls for recognition and response to this problem by the scientific community.

    Pressure to publish has dominated academic research for over half a century. Policies that inadvertently reward quantity at the expense of quality, together with a rising focus on citation counts and other metrics increasingly used as proxies for impact, have contributed to concerns that we are drowning in false or exaggerated claims. Science publishing has mushroomed into a global industry, with new players aiming for market share in a climate that values publication numbers as a marker of success and a currency for academic advancement.

    The exponential growth in the number of publications, and rising citation counts regardless of quality, have led to the belief that we are in a highly productive era. A recent report on the most influential biomedical researchers identified more than 15 million authors of more than 25 million scientific papers published between 1996 and 2011. An analysis of the publication patterns of over 40,000 researchers who published two or more papers in the first fifteen years of their ‘early-career’ phase found that, once co-authorship was taken into account, research productivity had not increased in most disciplines.

    It is not an easy decision to know when to publish, and as mentioned in our recent blog, applying the ‘Goldilocks’ rule and good scientific citizenship could save a career. There should be no issue with publishing preliminary hypothesis-generating work that is accessible to the research community. The problem arises when such findings are publicized to the lay community as having clinical value, particularly for diseases with heavy burdens in poorer populations, where they raise false hope. Disseminating findings that prove to be meaningless and are reversed over time could have a ‘crying wolf’ effect and undermine public trust in science.

    “More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.” (Nature 2016)

    It has become increasingly evident that much of the published literature reports findings that cannot be reproduced. Reproducibility separates the anecdotal from real results, which should withstand the test of time and replication by the same or other researchers. Nature recently surveyed 1576 researchers and found that more than 50% could not reproduce their own experiments, and more than 70% could not reproduce other researchers’ experiments. More than 60% of respondents cited pressure to publish and selective reporting as the main contributors to irreproducibility; more than 50% cited low statistical power, and fewer still cited variability in reagents and techniques that were difficult to replicate. Overarching problems also included insufficient time to plan and execute protocols, and senior lab members having limited time to train and mentor junior researchers. Given the importance of mentoring in research, junior researchers who train in such labs may go on to become lab heads and continue the cycle of productivity without reproducibility.

    Misleading or irreproducible research can have far-reaching consequences. Most published research that is preliminary in nature forms the basis of other hypotheses or research questions explored by other researchers with similar interests. If the initial hypothesis-generating research is not sound, and secondary publications expand upon, but do not attempt to validate, the original findings, considerable time, money and effort is wasted, careers are affected, and real advances can suffer. In the worst-case scenario, wrong information may form the basis of translational work that enters clinical trials, exposing patients to potentially harmful treatments.

    Industry invests substantially in candidate drug targets sourced from the published literature and conference presentations. The problem of validity in candidate drug reports was highlighted by a team of Bayer researchers who retrospectively surveyed all sources of data that contributed to four years of in-house validation programs. They were able to reproduce the relevant published findings for only 20–25% of the projects surveyed.

    A similar survey by Amgen scientists found that of 53 preclinical oncology publications deemed ‘landmark’ studies (21 published in journals with impact factor >20), only 6 could be corroborated by their in-house scientists. The authors of these reports discussed a range of explanations for why validation attempts may have failed, and the challenges of reproducing published findings, including variations in reagents and experimental models.

    A report by John Arrowsmith at Thomson Reuters on phase II projects from 16 companies, representing ~60% of global R&D spending, showed that success rates had fallen from 28% in 2006–2007 to 18% in 2008–2009. Analysis of 87 phase II failures from 2008–2010 with known reasons for failure revealed that 51% were due to insufficient efficacy. Although historical trends show that efficacy remains the most common reason for failure, the proportions of efficacy and safety failures decreased by 11% and 7%, respectively, during 2013–2015.

    “To do better, insights on reproducibility will be crucial. Laboratory research is of tremendous importance. We should not drown its excellence in a sea of irreproducible results.” (John P.A. Ioannidis 2017)

    These reports confirmed the general belief that there was an urgent need for all stakeholders, whether academic institutions, journal reviewers, or funding agencies, to adopt more stringent policies and implement initiatives to improve reproducibility. Basic research has for many decades been the foundation of discoveries that have led to vaccines, new drug developments, and a better understanding of disease processes that contribute to improved health.

    Choosing a career in research is indeed a ‘hard slog’ and involves many challenges, and while the thrill of discovery may initially be the driving force, improvements in human health should remain the central focus of health-related research programs.

    Despite the current concerns, it is not all ‘doom and gloom’: data on 2013–2015 phase II and III trials show that late-phase failure rates are declining and that rates of progression at the regulatory review stage have increased. Recent reports from the Reproducibility Project, which aims to independently replicate high-profile research in cancer biology, also show promise.

    In our next post we will review some of the methods that can be adopted to improve reproducibility and validation of research findings.  


Unfog the science…ensure quality, clarity and accuracy.
