• Grant writing — principles and practice

    February 27th, 2019

    Writing a grant proposal takes a significant amount of time and effort, and there are no guarantees. In general, poorly written grants are unlikely to get funded even if the project is scientifically sound and focuses on a major unmet need. Given that only ~20% of submitted grants will be funded, a well-written grant will greatly improve the likelihood of success. This article offers some general principles and tips on the practical aspects of grant preparation from the Australian perspective, though they may not apply to all schemes.

    “More than 600 working years of researcher time goes into each round of NHMRC grant applications” (A. Barnett article at Croakey.org)

    As outlined in my previous article on this topic, it is critical to read and re-read the guidelines and criteria for each funding mechanism, particularly since the recent changes to funding schemes outlined by the National Health and Medical Research Council, the primary funding body in Australia. A flow-chart and other details from the NHMRC scheme-specific peer-review guidelines can be accessed here.

    The synopsis

    • Selected grant review panel (GRP) members and Spokespersons will determine their suitability to assess your grant on the basis of the synopsis. They may have to read and prioritise 15-20 applications in a 4-week period.
      • **Tip** Make sure the synopsis captures your aims and is readable by anyone; avoid technical jargon here. 
    • The synopsis should include a brief background, methods and significance. Do not pitch it so narrowly that only someone in your field understands it. Remember that those who are knowledgeable about your area of research will leave the room when your grant is being reviewed by the GRP.
    • A badly written synopsis may mean that your grant ends up being reviewed by the wrong spokesperson, which may be fatal to its chances of being funded.

    Overall structure

    • Devote the first few pages to selling the grant.
    • The first page is critical — panel members will form their first impressions of fundability early in their review.
      • It should start with a brief overview of the topic and include a brief summary of methods and significance.
      • **Tip** Try to limit the first page overview to about a third of the page. Your aims and hypotheses should also appear on the first page.
      • By the end of the first page it should be clear in the reviewer’s mind what you plan to do (minus the details) and the reviewer should be hooked.
    • Begin the research plan around page 3 or 4.
    • Sections on feasibility, timelines and significance can go on the last page; also include the role of associate investigators there if they cannot be mentioned in the team capability sections.

    Aims and hypotheses

    • These should be absolutely clear. Getting this right is worth the extra time it takes, as it is perhaps the most important part of your grant.
    • Try to avoid conditional aims, i.e. where the success of subsequent aims depends on the first. If this is unavoidable, then be sure that your preliminary data fully supports the earliest aims, and try to leave the reviewer in no doubt that they can be achieved.

    Background

    • Do not use too much space on the Background; try to keep it concise and relevant to the aims. A long-winded Background can be exhausting to read.
      • **Tip** Devote no more than 1–2 paragraphs to background statistics and epidemiology (if relevant).
    • Carefully review every sentence and decide if it contributes anything of importance. If not, delete it.

    Preliminary data

    • Preliminary data is where you can highlight the innovation in your project; it also provides proof of concept and shows that you have the necessary skills or technology. This is particularly important if you are applying for funding to do a larger study and can show that the project is viable.
      • **Tip** Preliminary data should begin no later than page 4 and should be ~2–3 pages; some projects involving data collection may not require preliminary data; evidence that the investigator can do this work should be in your track record.

    Methods

    • Make sure the methods clearly relate back to the aims and that no new aims are introduced in this section.
    • Make sure your primary aim is sufficiently powered and achievable.
      • **Tip** Include sample size and power calculations. It is critical that these are done correctly and described accurately and clearly; see the sketch after this list.
    • If your project requires serious statistics input and you do not have a statistician on the team, you may be asked why, as there is a risk of failing to achieve your aims.
    • If your aims include exploratory work that is underpowered, include them as secondary aims and clearly outline why they are worth doing even though they are underpowered.
    • Analysis approaches should be included with each aim and as an integral part of the methods.
      • Statisticians can be quite influential on review panels, so seek a statistician’s advice on your analysis approaches early, not as an afterthought.
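
    To make the power-calculation tip concrete, here is a minimal sketch in Python using statsmodels; the effect size, significance level and target power are hypothetical placeholders, and the numbers for your own grant should come from a statistician familiar with your design.

    ```python
    # Minimal sample-size sketch for a two-arm comparison using a
    # two-sided t-test; all inputs are hypothetical placeholders.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(
        effect_size=0.5,  # assumed standardized difference (Cohen's d)
        alpha=0.05,       # two-sided significance level
        power=0.80,       # target power for the primary aim
    )
    print(f"Estimated sample size per group: {n_per_group:.0f}")  # ~64
    ```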

    Significance and Innovation

    • This sets the ‘mood’ and level of enthusiasm for the grant. Use this to get the reader in a positive frame of mind about your grant, and to ‘sell’ your grant.
      • **Tip** Significance can be worked into the Background; alternatively, put as much of the S & I as possible into the Overview box on the first page; don’t wait until page 6 to tell the reader the significance of what you are doing!
    • Be sure to include estimates of ‘burden of disease’ to support the significance, particularly in cases where the project is not about a ‘life and death’ disease.
    • Generally a project may have significance or innovation or both; for example, a topic that is very close to clinical application may be highly significant but lack innovation, because the innovative groundwork has already been done.

    Choosing your team

    • Think about the team carefully; don’t include any ‘guest’ CIs or international ‘high-flyers’ just for their impressive CVs; most reviewers on the panel will only consider the CIA–CIC, with more weight placed on the CIA.
    • If the work is to be supervised and accomplished jointly by the CIA and CIB, then be sure to point that out in your justification.
    • When listing AIs on the grant, be sure to indicate their role if the work is dependent on them, as most of the focus goes to the skills of the CIs.
    • Think about how you write the team capacity page; this must be appropriately pitched, especially if you are trying to help a more junior person in your lab get a grant funded; it is important to say how they will work together with the team.

    Budget

    • This is a major sticking point and is almost always scored down; you need the right amount of justification, and your estimates must be neither too low nor too high.
      • The budget is the last thing to be addressed by the GRP, so your grant needs to escape the ‘not for further consideration’ pile to get to this point.
    • Never, ever divide the total budget evenly by the number of years of the project; for example, if it’s a 5-year grant do not simply put 20% of the total in each yearly column, as it’s a bad look and does not estimate actual expenditures for each year of the grant (see the sketch after this list).
    • There is a lot of pressure on panels to cut budgets; be sure to justify yours.
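
    As an illustration of estimating expenditures year by year rather than splitting the total evenly, here is a minimal sketch; the salary, indexation rate and FTE fractions are hypothetical placeholders, not recommendations.

    ```python
    # Year-by-year personnel cost estimate for a hypothetical 5-year grant,
    # instead of dividing the total budget evenly across years.
    BASE_SALARY = 95_000                     # assumed annual salary (AUD)
    INDEXATION = 0.02                        # assumed yearly salary indexation
    fte_by_year = [1.0, 1.0, 0.6, 0.4, 0.2]  # effort tapers as the project winds down

    for year, fte in enumerate(fte_by_year, start=1):
        cost = BASE_SALARY * (1 + INDEXATION) ** (year - 1) * fte
        print(f"Year {year}: {fte:.0%} FTE -> AUD {cost:,.0f}")
    ```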

    Pesky details

    • Make sure you follow formatting guidelines to the letter, and keep overall formatting and style consistent.
      • Include some white space; the overall look of the grant should suggest readability.
      • Use bold/italic/underline formatting judiciously and avoid the appearance of ‘shouting’.
    • Avoid very complex figures with impossible-to-read fonts; don’t recycle figures from published manuscripts; simplify figures and make it clear why each is relevant to the application.
    • Don’t risk annoying the reviewer with careless writing, spelling errors, convoluted sentences that need a ‘double-take’, and grammatical mistakes; have someone else review the grant and check that it flows well.
    • Avoid over-use of acronyms; it becomes onerous to constantly flip back and forth to figure out what they mean.

    Rebuttals

    • Make sure you answer every query in your rebuttal; if there are any comments in your review on your budget, be absolutely sure you address them in your rebuttal.
    • Be assertive but polite in the tone of your responses; don’t be rude, but on the other extreme, don’t grovel.
    • Try to keep your rebuttals completely devoid of emotion; avoid any indication of annoyance or irritation at reviewers’ comments; be factual and truthful, even if you disagree, e.g. “I respectfully disagree…”, and clearly articulate why.
    • Avoid statements that may be construed as insulting, e.g. ‘as clearly described on pg 4’ because obviously it was not clear to the reviewer.

    Finally, consider submitting only one very well-written grant in any given year, and make sure you have thought through all elements of it. Start early and think carefully about how it is written. Remember the review panel moves very fast, so keep grants simple, readable, and as flawless as possible. Grant-fatigue can blind you to glaring mistakes, so ask someone independent to read it and check that it has all the necessary elements to make it a winning grant.

    At SugarApple Communications we can help with writing, editing and presentation of your research ideas in grant proposals and manuscripts. Get in touch today and let’s talk.

  • Grant writing — planning for success

    February 19th, 2019

    Many careers have floundered because of grant funding woes, and talented scientists have chosen to remain on the fringes of research because pathways to a successful research career are neither straightforward nor predictable. Life in academic research is often littered with rejection of one form or another, whether it is from the journal of choice for publication of years of research, or the funding agency that this work is dependent on.

    Grant funding is pivotal to a successful career in academic science. As many researchers know, there is considerable stress, sleepless nights and anxiety associated with this — from the planning and writing stages of this process to the announcements of application outcomes.

    This article summarizes current thinking on grant funding that has been gleaned from various sources, including published research and commentaries on biases in how research proposals are reviewed and prioritised, which applies across funding schemes.

    “It’s best to start planning for a grant application at least 9-12 months before the submission deadline” (Anne Marie Coriat, Head of UK and Europe Research Landscape at the Wellcome Trust, London; Nature “Working Scientist” podcast series, January 2019)

    There is no guarantee that even the best-written grants will get funded, but good writing does improve their chances. The review process is mostly similar across funding agencies and involves a system of scoring and prioritizing according to the specific criteria outlined for the funding mechanism. Grant reviewers are expected to differentiate the very best grants from the weaker ones.

    A recent study involving replication of the National Institutes of Health peer-review process examined the degree of agreement between different reviewers and how the reviewers went about scoring applications. The results highlighted considerable subjectivity in how applications were evaluated.

    “We found that numerically speaking, there really was no agreement between the different individual reviewers in the score that they assigned to the proposals. We also found that when we were looking at the relationship between the strengths and weaknesses (written) in a proposal, and the score that was assigned, we did see a relationship between the number of weaknesses that a reviewer would identify in their critique and the score that the reviewer assigned, but that relationship between the weaknesses and the score doesn’t hold up between different reviewers.” (Dr Elizabeth Pier “Inside the NIH grant review process” Nature Careers Podcast, January 2019).

    On the positive side, responding to reviewers’ comments and providing feedback played a strong role in grants eventually being funded. It is important not to take the critique personally, given the apparent ‘randomness’ involved in the review process, and to have the tenacity not to give up. Accept that the review process is not completely objective, and that humans are fallible and subjective. The process of judging grants is highly complex, and predicting whether a project is likely to succeed if funded is a very difficult decision.

    The study also showed that, as with so many other human endeavours where a critique is involved, weaknesses in a grant were far more predictive of the score than its strengths. So every effort to minimise weaknesses is well worth it.

    In general it can be said that a phenomenal amount of time is spent preparing research proposals. One commentary calling for reforms of the National Health and Medical Research Council grant system reported that Australian scientists spend an estimated 550 working years of researchers’ time preparing grants, or the equivalent of a combined annual salary cost of AUD 66 million, which was greater than the total salary bill of a major medical research centre that produced 284 publications the previous year.

    Major reforms for administering and funding grant proposals have been suggested by leading researchers and institutions for almost all major funding agencies. Until these are implemented, other points that are broadly applicable across funding schemes are as follows:

    • Read, read and read again – the guidelines for each funding scheme. All funders have very clear guidance on what each type of scheme involves and the sorts of things they’re looking for.
    • Understand the requirements and the deadline first, and then work back from there.
    • Think carefully about how to express the importance of the problem that you are trying to tackle, as this is something reviewers talk a lot about.
    • Applications that focus on a condition so fatal and severe that obtaining sufficient preliminary data, a large enough sample size, or previous research supporting interventions or treatments is not feasible may not be prioritised.
    • Think carefully about how you order your aims in tackling the research question, and try to ensure that your first aim is achievable; otherwise the rest of your research project is at risk of reaching a dead-end early.
    • Strong preliminary data is important, although herein lies a catch-22: you sometimes need to have done a fair bit of the work outlined in your proposal in order to get the funding to do the work! Most researchers end up using funding from other smaller grants to generate the strong preliminary data needed for larger grants, and to show that they have the technology to make the grant viable.
    • Discuss your ideas with colleagues who are not pursuing the same work but who know enough to comment on
      • robustness of experimental design
      • your team and having the right collaborators
      • possible alternatives to experiments if they go wrong
      • costings of materials and budgets etc.
    • Pay particular attention to the summary statement because this is the main focus of grant review panels. It needs to tell the panel what the aim of your project is, why it is important, and what you are actually going to do. It also needs to reflect the fact that you are the best person to do this research; so include statements like ‘we previously showed’ or ‘I have contributed to…’, and be clear that you have worked in this field before.
    • Also make sure you have sufficient detail in the body of the application so that it is clear to the reviewers that the work is achievable and that you have the necessary expertise among members of your team. Most funding bodies expect quite a lot of detail, particularly where analyses are involved. Great ideas without evidence of how you plan to tackle the work will worry the review panel.

    A final point that warrants attention in this article is the very important resource that is available at most academic institutions — the army of grant support officers whose business it is to oversee the application process and liaise with funding agencies. It is critical to work with them and follow their advice. I once heard a rather telling comment by a researcher, who said the grants officers were there because they ‘didn’t make it in science’! Don’t ‘dis’ your best sources of support. I’ve been on both sides of the submission process and know the frustrations of grants officers who all but pull their hair out when researchers ignore their advice — because some researchers think they know better! At the same time, given the considerable stress of obtaining funding, a patient and supportive grants officer who responds promptly to queries and concerns can be your beacon of light in your darkest hour.

    In a follow-up article I will outline some more focused and practical suggestions and tips to consider for grant writing.

    At SugarApple Communications we can help with writing, editing and presentation of your research output and ideas, whether for grant proposals or manuscripts. Get in touch today and let’s talk.

  • Evidence Synthesis—Deciding what to believe

    September 19th, 2018

    We often hear the words “evidence-based” thrown around in the media and from politicians on a range of issues like climate or environmental policy, or from those promoting the health benefits of their products. As consumers we put a great deal of stock in ideas and theories described as ‘evidence-based’ because the term simply has a nice authoritative ring to it. A key question we should be asking is whether the evidence was obtained through a comprehensive evidence synthesis approach, i.e. whether sufficient effort was put into reviewing all available evidence on the topic before drawing the stated conclusions.

    The dictionary definition of synthesis is “the combination of components or elements to form a connected whole”. Evidence synthesis is the process of pulling together information and knowledge from various sources and disciplines that influences decisions and drives public policy. It is the ‘how, what, why and when’ that goes into ‘evidence-based’ decisions.

    The earliest records of evidence-based decisions came from medicine, as documented by James Lind who pioneered The Royal Navy’s approach to dealing with scurvy in the mid-1700s. In fact the British adoption of citrus in their sailors’ diet was one of the factors that gave them superiority over all other naval powers, until this practice became universally adopted. It is interesting to note that Florence Nightingale called statistics ‘the most important science in the world’ as she collected data on sanitation to change hospital practice. She was also an advocate for evidence-based health policies and chastised the British parliament for their scattered approach to health policy.

    “You change your laws so fast and without inquiring after results past or present that it is all experiment, seesaw, doctrinaire; a shuttlecock between battledores.” Florence Nightingale’s admonition to the British parliament (1891)

    The stated goal of almost every funded research program is doing what is best for the public; therefore a comprehensive overview of outcomes of research needs to be considered to generate policies that are in the best interest of the public. This can be a challenge given the expanding published literature.

    As with any process of decision making, the important question is, what constitutes evidence? Is there sufficient information available to systematically analyse and come to a sound conclusion, or are there major gaps in knowledge such that any decision made on the basis of the existing evidence is likely to be unsound and potentially harmful?

    Policy makers cannot always predict the outcome of their policies unless similar policies have been successfully implemented elsewhere and under similar conditions. Politicians charged with the business of implementing policies tend to aim for a balance between the best use of public funds and a ‘healthy, wealthy and wise’ agenda (although the ‘wise’ part is often assumed from success with the former two).

    Evidence-based policies should therefore rely on the best use of existing evidence. This may require advance planning and allocation of funds, and many years to implement. However, in some instances time may be of the essence, as in responding to a disaster or emergency, in which case both governmental and non-governmental experts from a range of relevant disciplines may be convened to provide advice and manage risks. In healthcare, evidence synthesis influences policy and practice, and given the impact this could have on healthcare costs, the process should ideally be unbiased and accurate, and draw on all relevant disciplines.

    Various experts have outlined a set of principles that govern evidence synthesis, which, if followed, can facilitate the development of high-quality evidence. Science of any variety can be contentious and subject to debate depending on the personal and political values of the contender. There will always be topics with a high moral content that lead to disputes, and for which there are no clear-cut right or wrong answers or opinions. Almost all science will at some point impinge on individual sensitivities, and the question remains how to balance this with the greater good of the general population. There is no lack of such examples, whether it is a moral objection to culling pests in order to increase farm productivity, or retaining traditional forms of energy generation versus renewable energy that lowers carbon emissions. For this reason, those charged with the task of synthesizing evidence should, where possible, have no personal or financial stake in the outcome, and should stick to unbiased and reputable sources of information.

    The following principles have been suggested:

    1. Inclusion of all stakeholders

    It would be appropriate to include policy makers if the aim of synthesizing evidence is to advise on a current issue of national importance, for example the economic feasibility of drought-proofing Australian farms. In addition to relevant scientists, community stakeholders who may be the target audience or ‘end-user’ should be involved to add a ‘common sense’ perspective to the issue. This ensures that the question is correctly formulated, and the interpretation of the findings is accurate and not biased in favour of a political agenda. It also brings diversity of opinion and provides several lenses through which the topic is viewed. In contrast, issues like summing up evidence to help drive future policies, such as advanced technologies in artificial intelligence or quantum computing, should be left to the experts.

    2. Rigorous methods

    Depending on how urgently the evidence is needed, those involved in evidence synthesis should try to identify all relevant science before deciding on its quality. Public policy based on flawed science can result in costly mistakes, a set-back to progress and a loss of public confidence. Sources of information and reasons for declaring a study as poor evidence should be documented. Where time constraints do not apply, evidence synthesis typically involves systematic reviews, which I have previously written on and can be accessed here. Organizations like Cochrane and The Campbell Collaboration synthesize evidence to educate the public and inform health-care policy by following predefined methodologies in ways that minimise bias. Such processes are very time-consuming (upwards of 2 years in some cases) but they are renowned for scientific rigour and for generating reports that are comprehensive and reputable.

    3. Transparency

    Sources of evidence, databases used, search terms, and how evidence is graded should all be publicly available and transparent to end-users. Although study methodology should follow a pre-defined process, areas of difficulty may arise in whether to include or exclude certain studies. Accounts should be kept of decisions, the reason for the difficulty or disagreement, and why a consensus could not be reached, as this may be important in future updates or policy debates.

    4. Open Access

    Evidence summaries in plain language that are accessible and available online are critical to wider acceptance by relevant communities, policy makers and the general population. Depending on the range of potential end-users, multiple reports of the synthesized evidence may be necessary. Infographics or interactive online demonstrations that inform and educate the public will go a long way in gaining support for, and successful implementation of, policies informed by this process. Timing is also critical, and updating the knowledge base regularly on topics of local and regional importance will reduce reliance on inaccurate or outdated information in an emergency.

    On a global scale, evidence synthesis is critical to coordinated responses to disease outbreaks. Costs of such efforts are often borne by countries with the ability to fund them, and stakeholders need to be convinced to participate in a unified effort. During the 2014 Ebola epidemic in West Africa, SAGE convened a wide range of experts from around the world. It needs to be irrefutable that the universal benefit of such an undertaking far outweighs the self-interest of any particular country.

    A limitation of the current best practice of systematic reviews is the likelihood that existing studies are of too low quality or too variable in their findings to produce reliable results. An alternative is subject-wide evidence synthesis, which involves extracting and collating relevant information from many different sources. This has been done in a project called Conservation Evidence, which provides summary information on the effects of conservation interventions for all species and habitats worldwide. Although this approach is quite different from systematic reviews, it provides a valuable searchable database that can be used in combination with systematic review methods to address new research questions on conservation.

    The subject-wide approach is not limited to conservation or environmental sciences, but may be useful for public health questions where a particular outcome is so rare that even large well-powered studies are not sufficient to shed light on risk factors. I recently did an analysis of uterine cancer among women who were treated with tamoxifen for prior breast cancer. Although the risks associated with tamoxifen treatment are well documented, our data suggested that a high proportion of women treated with tamoxifen subsequently developed a rather nasty type of uterine tumour known to have a very poor prognosis. This has also been documented in several case studies. While case studies tend to be viewed as anecdotal and as insufficient evidence to guide health policy changes, this does suggest the need for improved surveillance of women treated with tamoxifen.

    Decisions and critical assertions made by public officials and politicians on matters that affect people’s lives—whether indigenous affairs, climate change, energy policies, or healthcare—cannot be made on a whim or in a rush to placate political constituencies and win votes. The importance of a comprehensive assessment of all evidence to ensure that policies are indeed evidence-based should not be side-lined in favour of ‘quick and dirty’ alternatives.

    At SugarApple Communications we can help you find the best way to analyse and interpret your important data, and communicate it to your intended audience. Get in touch today and let’s talk.

  • Reputation and ethics ― shadow or substance

    June 5th, 2018

    Reputation, ethics and actual value should go hand in hand. In the corporate world, success depends, among other things, on whether people are willing to buy or recommend your product, and this is usually driven by the perception that the company can be trusted and will deliver on promises made in advertising. The question arises: is this perception based on any real substance, or is it merely a shadow cast by an unjustified reputation?

    Where consumers have a choice between essentially similar products, their decision to buy is based on what the company is about, and this ultimately comes down to whether the company name is associated with a core of ethics.

    Many fail to grasp the connection between reputation and ethics, as evidenced by the rise in reports of unethical behaviour and loss of credibility both at the individual and corporate level.

    “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.” (Warren Buffett)

    I have heard so much in the past year about ‘reputation’ both in the media and from those who work in the corporate sector that I often wonder if ‘reputation’ has become a substitute for a resume or CV, or any real evidence of value. Those who claim to have a ‘reputation’ assume this can be leveraged regardless of whether there is any substance to their claims. Some even use ‘protecting reputation’ as justification for unethical behavior.  

    It is tempting to get caught up with the perception and not the practice of ethical conduct, and to define reputation only by how you are perceived by others. If in fact you are ethical, is there really a need to speak of having a reputation? Surely your ethical conduct speaks volumes about you in the workplace without needing to utter a word about it!

    The value of a company or organization is tied up not only in physical or financial capital, but in human capital ― individuals who interact to produce outcomes. It is the perceptions of these outcomes that drive reputation and therefore long-term success.

     “Ethics are frameworks for human conduct that relate to moral principle and attempt to distinguish right from wrong” (Miesing and Preble, 1985).

    Reputation “is a perceptual representation of a company’s past actions and future prospects that describe the firm’s overall appeal to all of its key constituents when compared with other leading rivals” (C.J. Fombrun, 1996).

    Beyond the workplace laws legislated by governments, each company or institution will develop a set of guidelines and policies about workplace conduct with the aim of building value, whether financial or social. Social value is measured in terms of trust; where there is trust, stakeholders are more likely to give the corporation the benefit of any doubt in the face of a crisis or controversy.

    We’ll now look at what constitutes a good reputation in two essentially different industries.

    Reputation in the Corporate Sector

    At the corporate level, issues of reputation may be tied up with legal and ethical issues, but the two are not interchangeable: not all that is legal is necessarily ethical. In some scenarios a legal solution may not be the most ethical one, and choosing an ethical solution over a legal one may in fact pay dividends to a company’s reputation despite the cost in real dollars.

    In an ideal world, the pharmaceutical and medical device sectors should comprise individuals who spend the best years of their working life inspired by the prospect of improving human health, quality of life and longevity. However, these corporations are answerable to their share-holders, and their motivations are not entirely altruistic. For society to benefit, there has to be a balance between good science and good money. Where it becomes unstuck is when money is the primary goal, as evidenced by lawsuits, trials and fines related to unethical behavior.

    The prevalence of unethical behavior and the betrayal of public trust by the financial sector have dominated news headlines since the start of the Australian Banking Royal Commission. Initially, the concern of some politicians was that this investigation would “damage the banking sector’s reputation internationally”— until day after day of public testimony highlighted the egregious behavior of those under investigation. Some involved claimed they concealed relevant details of financial products in order to preserve their reputations!

    Clearly ethical conduct was side-stepped in the hope that ‘reputations’ would remain intact. Among other things, the lack of honesty and transparency by these institutions and the individuals who represent them, all in the name of reputation, highlights the gaping chasm between ethics and reputation in this sector.

    This and other similar cases highlight the need to change unethical behavior that clearly impacts company reputation. Once a decision is made, whether conscious or sub-conscious, to compromise ethics in favor of financial gain, it sets in motion a domino effect that may be impossible to stop before it reaches the brink of disaster. People’s lives are ruined, and monetary settlements are merely a band-aid on the greater festering problem of greed and self-interest.

    It takes moral courage and discipline at the leadership level to ensure that stakeholder trust is not compromised for financial gain, and to recognize that the intangible and invaluable thing called ‘reputation’ is not just a shadow without any substance, but is firmly rooted in a core of ethics.

    To quote another blogger on this topic, “A company’s commitment to ethics is the most effective means to preserve and protect the company’s reputation. Frankly, they go hand in hand and mutually reinforce each other.”

    Reputation in Academia

    Reputation in academia is highly dependent on ‘getting it right’ and not necessarily on ‘being right’. The motivation to do academic science tends to be a personal one, and the choice of field may closely follow something the individual personally identifies with. In the current environment, academic reputation is primarily defined by the quality of publications and the ability to convince the scientific community of the veracity of the findings, and secondarily by the value of the work to the general public.

    Recent concerns in the academic community about reproducibility and replication are perhaps the greatest threat to academic reputation, because they call into question the intrinsic value of the research and threaten acceptance by peers. I have written previously on the challenges of reproducible research and some steps that can be taken to ensure reproducibility.

    In a recently published article, ~4700 individuals from varied backgrounds were asked to choose between two extremes in their expectations of scientists. Overwhelmingly they chose ‘boring but certain’ (i.e. very reproducible) over ‘exciting but uncertain’ (i.e. not very reproducible). Although respondents chose the ‘exciting’ researcher as the more creative and therefore more celebrated, they linked reputation and success to ethics and truth, and cared more that the scientist pursued certainty and reproducibility in their work.

    The main message from this survey was that the pursuit of truth and ethical conduct in academia was more valuable to reputation than the research outcome itself.

    Building a positive reputation

    • Learn to accept criticism gracefully. Anyone in a regular working environment, whether academic or corporate, will face this sooner or later. It is a mark of personal development to be honest with yourself, and consider whether there is room for improvement, rather than fire off a nasty response.
    • Before you act in any situation, stop and consider it from the other person’s point of view. How would you like to be treated if you were in their shoes?
    • Do not ignore emails from colleagues or collaborators, or from those below you on the career ladder. Respond in a timely manner, even if it’s a brief response to say ‘I’ll get back to you on that’. And if you say that, follow through. Few things foster mistrust more than someone who says they’ll do something but never gets around to it.
    • For those in publishing, everything that you put your name to should be taken seriously, and carefully checked to ensure that you are willing to take responsibility for the published work. You should never agree to be an author or offer authorship to someone where contributions do not meet the ICMJE criteria for authorship.
    • Be open and transparent about conflicts of interest. Consider opting out if asked to review the work of a competitor, or provide advice on a product you stand to benefit financially from.
    • Maintain your integrity and be true to your values. It is easy to take shortcuts hoping no one will notice, but any level of misconduct will eventually catch up with you and has been known to ruin careers. For more on integrity, see my recent article on integrity in science and why it matters.

    Developing and maintaining a code of ethics and strictly adhering to it is central to a good reputation. As stated earlier, individuals collectively interact to produce outcomes that affect external stakeholders and reflect company image, whether corporate or academic. Building a reputation therefore must start with ethical conduct at the individual level.

    At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Get in touch today and let’s talk.

  • Academic-industry partnerships — are we there yet?

    March 21st, 2018

    The current climate for research funding in academia, coupled with unmet public health needs and the low rate of return on research investment by industry, highlights the need for academic-industry partnerships and a shift in what for decades was thought of as a ‘clash of cultures’.

    Academic science is driven by education, intellectual curiosity and discovery, while the mission of industry is translational research, commercialization and profit. Academic scientists share their findings with the wider scientific community, while industry for obvious reasons tends to be reticent about this. The wall separating these cultures was first breached by university engineering and computer science departments, leading to patenting, licensing and royalty income for major research universities.

    A similar process has been occurring in the last 2–3 decades in the biomedical sciences, with the adoption by many leading universities of a “20% rule”, encouraging faculty to engage in extramural activities with industry one day a week. Although such alliances have been viewed as productive overall, concerns continue to abound, including conflicts of interest, loss of public trust, increased workload for institutional review boards, and lower than expected financial returns from licensing agreements and royalties.

    Academics who are driven by ‘pure motives’ are currently facing decreased government funding for research and pressures to link up with industry. In commenting on innovation in Australia at a press conference in 2015 at Miraikan, Japan’s National Museum of Emerging Science and Innovation, Prime Minister Malcolm Turnbull said: 

    “…we’re adding another criterion for success in achieving grants which is to demonstrate the degree of collaboration that you’re undertaking with business, with industry, so you can add “publish or perish” or perhaps “collaborate or crumble” as well.”

    Recognition of the value of partnerships between industry and academia is not a one-way street. High-quality research tracked by the Nature Index shows that almost 90% of articles authored by corporate institutions are written in collaboration with academic institutions, while only 2% of high-quality research comes from the corporate sector alone. Partnerships between corporate and academic institutions have more than doubled since 2012, when tracking by the Nature Index began, and half of them are in the life sciences.

    Collaborations between academia and industry that are undertaken to address complementary parts of a research question can improve the publication power of the program. For the corporate side of the partnership, it means attracting and retaining top-tier scientists, which ultimately adds to the credibility of the research. A recent example is the 2013 collaboration between the Walter and Eliza Hall Institute of Medical Research in Australia and Genentech in the US, which led to the development and approval of venetoclax, a potent new therapy for patients with chronic lymphocytic leukemia.

    The post-genome period and the prospect of new therapies have also helped to foster academic-industry partnerships. In 2016 AstraZeneca launched an integrated genomics initiative based in Cambridge, UK. This collaboration with The Wellcome Trust Sanger Institute and the Institute for Molecular Medicine in Finland will focus on the discovery of new drug targets and biomarkers across AstraZeneca’s main therapy areas. Professor David Goldstein, renowned for his research on the genetics of disease and pharmacogenetics, and director of the Institute for Genomic Medicine at Columbia University, is leading AstraZeneca’s ambitious 10-year genomics initiative.

    It is also no secret that many academics in top positions with successful careers have been poached by pharmaceutical giants because of the growing need for innovation. Many academics who made the switch did so not because of the prospect of better remuneration, but because advances in technology and the emphasis on translational research have made it particularly attractive to partner with industry, where developing new therapies is a ‘hard’ endpoint. It is the difference between research that may eventually help people and developing therapies that directly benefit people.

    Such moves are not for everyone, because for some academics the greater motivation may be preclinical research and the joys of discovery. For others who want to see their innovations brought to fruition, the attraction to industry also depends on whether the pharmaceutical company is a good fit, and academic scientists can continue to espouse their academic ethos. An example of this was Dr Mark Fishman, who was chief of cardiology at Massachusetts General Hospital prior to accepting the top job at the Novartis Institutes for BioMedical Research. Fishman made a point of recruiting academic researchers because he liked their way of thinking.

    “When you’re in academia, you have to develop critical thinking, and your ability to survive depends on your speaking and writing well and defending clearly what it is you want to do. The entrepreneurial spirit and culture of survival in academia is quite relevant to getting things done [in business].” (Fishman, Novartis)

    Following retirement, Fishman was succeeded in 2016 by Dr Jay Bradner, a talented and innovative physician-scientist from the Dana-Farber Cancer Institute and Harvard Medical School. Under their leadership, Novartis has forged many more alliances with academia, and reported 2017 as a landmark year for innovation.

    As with most collaborations and partnerships, academic-industry partnerships can work if there is an alignment of common goals, free and open communication, and trust between the parties in sharing ideas and data. The common ground between the two parties is identifying ‘druggable’ candidates in pre-clinical research that leads to the development of novel therapies. The divergence of academia and industry arises mainly in their respective approaches to achieving this endpoint.

    Although many academics report their research in publications or grants, when it comes to therapeutic potential their focus tends to be mainly on understanding a previously unknown biological mechanism or the role of a molecule in the disease process. Industry scientists or ‘drug hunters’, however, tend to be more focused on whether a specific molecule involved in the disease pathway can be effectively targeted for therapeutic purposes. Academic scientists may also be concerned about intellectual property and the risk of revealing promising results prior to publication for fear of being scooped. There are also well-founded concerns that industry involvement may mean delayed publication, and a tendency to ‘drop the ball’ in the face of negative results.

    At a recent “Bridging the Innovation Gap between Academia and Industry” event held in New York, panelists Dr Barbara Dalton, Director of the Pfizer Ventures Investments team, and Teri Willey, Vice President of Business Development at Cold Spring Harbor Laboratory, provided some insights into traits needed to build a successful partnership:

    1. Speak and write for the general population. A common complaint about scientists is that they do not explain science in a way that helps the general population understand what they are doing. The litmus test is whether you can tell your mother or grandmother—assuming they are not also scientists—what you are doing.
    2. Good salesmanship. Scientists may have great ideas and may be doing ‘cutting-edge’ science, but they need to be able to sell their project. That means not only speaking and writing clearly, but developing a good ‘sales pitch’ and being able to persuade the funding organization or industry partner that they are on the right track with their project without crossing the line into ‘used-car salesman’ territory.
    3. Leadership skills. Some scientists are more suited to leadership than others and can provide persuasive arguments in support of commercialization of a project. Such individuals should be identified at the institutional level and supported as ‘business agents’ for liaising with industry partners, while maintaining their role in supporting the researchers whose work they are promoting.
    4. Mutual appreciation of value. This involves striking a balance on more than just the financial aspects of the partnership. ‘Value’ varies with each entity, and can be one of the most difficult aspects of negotiating a partnership, but it is critical to success.

    Dr Dalton also recommended investing in ‘entrepreneurs in residence’—people who have had experience with commercialization and can facilitate transitioning projects from the laboratory to a commercial sponsor.

    Life sciences companies have fallen behind the technology sector and are facing lost opportunities because of their reticence to share knowledge. The story of SEMATECH, a research consortium of fourteen US computer chip makers formed in the mid-1980s, has become a model of how open innovation and drawing on the knowledge of your peers can transform an industry. Facing tough competition from Japan, this alliance banded together to solve shared problems and establish standards for chip design and manufacturing. Today US producers have not only regained their footing, they control half of the global semiconductor market.

    Pharmaceutical companies have increased research spending by as much as 287% in the last two decades, and are facing lower revenues from expiring drug patents and longer time frames for bringing new therapies to market. At the same time there are many disease conditions with growing healthcare needs.

    Although there are some very successful industry-academia partnerships, there is still a significant need for cooperation between academia and industry. One such effort is the Academic Drug Discovery Consortium which began in 2012. This collaborative network of academic and industry scientists, philanthropic and government organizations across 16 countries aims to share technologies related to drug discovery, and provide a platform for engaging with the life sciences industry.

    If successful, academic-industry collaborations represent a move in the right direction, and may be the light at the end of the shrinking drug pipeline that could lead to improved translational research, greater innovation, and the much needed development of new therapies.

    At SugarApple Communications, we support ethical scientific endeavours in academia, industry and government sectors. Let us help you find the best way to communicate your research. Get in touch today and let’s talk.  

  • Integrity in Science — Why it matters

    March 6th, 2018

    Research partnerships between industry and academic institutions have more than doubled in the last five years, 50% of which are in the life sciences. This significant shift to seeking partnerships with academic scientists is evidenced by the fact that almost 90% of publications authored by corporate researchers are in collaboration with academic or government labs.

    While corporations are known to protect their property, they see the benefit of such collaborations in producing high-quality research. The value of such alliances is also apparent because the public gives much weight to academic research as unbiased independent confirmation. Corporations should not fear associations with academics who themselves are fearless in defending their research, and should legitimately want to have such endorsements to back their products.  

    Given the current emphasis in academia on translational research, it is more critical than ever before that ethics and integrity be in the forefront of all scientific endeavours.

    Ethics and integrity in science are issues that have been in the spotlight recently, and are gaining momentum in many discussion forums and leading journals. The pursuit of truth above all else seems to be a fading tenet, as the need to increase publication quotas is tenuously balanced with career progression for many who love research. As reports of scientific fraud increase, and social media opinions on medical topics that are not based on good science flourish, it is becoming harder for scientists to retain their credibility and engage with the public. It is therefore urgent that the scientific community upholds the highest standards of ethics and integrity.

    This issue has led to the development of a Code of Ethics by the World Economic Forum Young Scientists Community — a group of researchers under the age of 40 from diverse fields and from around the world. Their reflections and consultations with other researchers and ethicists identified seven principles that form the basis of the code that stakeholders are invited to endorse, with the aim of shaping ethical behavior in scientists and facilitating a cultural shift in scientific institutions. This code ranges from fundamental social behaviors such as ‘Minimize harm’ — as in damage to public health or waste of research dollars — to issues that are potentially contentious and subject to interpretation, such as ‘Pursue the truth’. Overall the code has stimulated much discussion, and serves as a timely reminder of the social and moral obligations of those who conduct research.

    There are striking examples of scientists throughout history with a strong commitment to integrity and social responsibility. Rachel Louise Carson (1907-1964) is renowned for her work on pesticides and her campaign to alert the public to the harmful effects of DDT. Although she faced strong opposition from chemical manufacturers, she urged awareness of the environmental impact of this chemical and advocated responsible use. Her work led to a grassroots environmental movement, the creation of the Environmental Protection Agency in 1970, and the banning of DDT use in the US six years after her death. Herbert Needleman, a child psychiatrist, is well known for his research on lead poisoning, and provided the first evidence that exposure to lead was associated with cognitive deficits in children. His work was instrumental in the EPA mandate to remove lead from gasoline and interior paints.

    Research integrity and social obligations are often based on individual value judgments. A researcher may decide to work in academia rather than the commercial sector out of altruism and a desire to choose the research she wishes to pursue. For example, a researcher working for Acme Pharmaceuticals may find that her research funding is determined by the potential for market share and financial benefit, while government and public funding sources may offer more flexibility in her choice of research topic, as well as autonomy in the discharge of social and moral obligations to the public. In either scenario, personal values come into play, and these may be challenged by the interpretation and conclusions drawn from the research.

    Ethics and integrity, loosely defined, refer to an intrinsic value system that governs the ability to distinguish between right and wrong, or acceptable and unacceptable behaviour. Most people consider these matters of morality or simple common sense. Given the range of disagreement on what constitutes ethical norms, the ‘common sense’ approach may in fact be overly simplistic, and depends instead on the context. Ethics in the medical sciences may have more direct and obvious application at the individual level, as defined by the Hippocratic Oath “First, do no harm”, while issues like global warming tend to be more complex, requiring a broad range of stakeholders, e.g. politicians, economists and environmental scientists, to work together to achieve an ethical outcome. In the latter scenario, each stakeholder has a different set of moral and ethical responsibilities, which makes it difficult to achieve a common goal.

    Research institutions almost universally have policies and training modules in place that outline ethical standards for personal behaviour (e.g. workplace bullying and sexual harassment) and research conduct (e.g. animal ethics, plagiarism and fabrication of research data), and strict requirements for timely completion of these modules.

    There is also much emphasis internationally on excellence in science, a recent example being the launch of a Regional Centre for Research Excellence at the University of the West Indies last month. Defining and measuring excellence is difficult and engenders much debate, particularly because of a lack of consensus across different disciplines. While there is disagreement on what constitutes excellence in research, “soundness” and “proper practice” that maintains ethics and scientific integrity remain a common thread across disciplines in the pursuit of excellence.

    Regardless of cross-disciplinary variations, ethics and integrity in science are important because they promote:

    • A quest for knowledge, truth, and minimizing error. This includes prohibitions against falsifying or misrepresenting research findings.
    • Collaboration and cooperation between institutions. The success of collaborative work is highly dependent on common ethical standards, and promotes trust, respect, and responsibility, as exemplified by the ICMJE guidelines for authorship.
    • Accountability in adhering to policies on the protection of human subjects, animal care and use, and declarations of conflicts of interest particularly for publicly funded research.
    • Public support and trust. While government resources vary according to economic and political interests and agendas, research funding from philanthropic organisations is a growing phenomenon worldwide.

    The public relies on the integrity of scientists who in turn can earn their trust through a wide range of avenues, from providing expert testimony highlighting the legal and policy implications of research, to whistle-blowing on scientific misconduct, to advocating for public health policies. These avenues can be uncomfortable for many scientists by bringing them sharply into the public spotlight. Like Carson and Needleman, whose work led to major public policy shifts, the dilemma that scientists with integrity may face is criticism and backlash from opponents and industry groups who may have more to lose from these policy changes.

    Overall, scientific integrity can be summarized in many ways, but simply stated, it is founded upon:

    • honesty in reporting data and results
    • objectivity in data analysis and interpretation
    • careful record-keeping of research activities
    • openness in collaboration
    • protecting patient confidentiality
    • respect and acknowledgement of all contributors, i.e. giving credit where credit is due
    • accountability and responsible publishing, and
    • communicating values through mentoring.

    A model example of scientific integrity occurred when a paper with significant clinical impact, published in 2014 in the Journal of the American Medical Association, was discovered two years later to contain analysis errors caused by miscoded data. The errors turned the original findings on their head. In what has been highlighted as “a shining example of scientific integrity”, the lead author acted quickly to submit a ‘retract and replace’ article. JAMA agreed, and used the opportunity to urge authors to share data so that honest errors can be corrected without the stigma of retraction.

    “Think of the unthinkable when it comes to checking the quality of your data; If you think that all possible has been checked, check again; Always let others use your original data for new (or just the same) analyses.” (Marc Bonten 2017 blog post)

    This is in stark contrast to a situation where, for example, a freelancer commissioned by Acme Pharmaceuticals discovers errors in her data while drafting a manuscript, and assumes that data checks prior to write-up were not part of her responsibilities. As outlined in my recent article on authorship, ethical conduct in reporting requires that all parties involved in the preparation of a manuscript for publication conduct multiple data checks and take responsibility for the contents of the published manuscript.

    Correcting major errors prior to publication, even at the stage of checking the pre-publication proofs, may be a lengthy process if all authors need to approve the corrections, but it should be done even if it delays publication. If a correction substantively changes the take-home message of the paper, the decision to make it should involve all authors.

    Reputable journals and scientists with ethical standards can work together to preserve the integrity of their research by correcting any errors as soon as they are discovered, and simply get on with the business of science.

    At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Let us help you find the best way to reach your intended audience, and assist with writing, editing and statistics. Get in touch today and let’s talk.

  • A tribute to Women in Science ― leading by example

    February 22nd, 2018 | by

    The UN General Assembly declared 11th February the International Day of Women and Girls in Science, aiming to achieve full and equal access to participation in science for women and girls. This is vital for achieving the 2030 Sustainable Development Goals adopted by world leaders in September 2015. A recent survey conducted in 14 countries found that, compared with men, women are half as likely to graduate with a Bachelor’s or Master’s degree in a science-related field, and one-third as likely to graduate with a doctorate.

    “We need to encourage and support girls and women achieve their full potential as scientific researchers and innovators” (UN Secretary General Antonio Guterres)

    A wide range of factors influence the careers of women in science, perhaps disproportionately compared with men. This article highlights some of the achievements of women in science, along with some personal observations from my own career. It also highlights principles that apply equally to men and women, and the fact that anyone can work to their best potential if they commit to integrity and honesty, and to a high level of ethical conduct in their career.

    As someone who has loved the biological sciences since early high school, and who grew up in a household with equal numbers of male and female siblings, I was blessed to have parents who were education professionals in their own right, and who encouraged (in fact, expected) us all to excel in our respective fields and professional pursuits. Gender bias may have existed, but it did not affect the way I saw myself or others in terms of career goals, or simply being the best I could be. As Prof Michelle Simmons expressed in her acceptance speech on receiving the 2018 Australian of the Year award, believing what others think of you can become a self-fulfilling prophecy.

    Significant contributions by women to science and medicine go back centuries. Although women were excluded from university education when the earliest universities emerged, some countries were more liberal than others: the first woman to hold a university chair in a scientific field was Laura Maria Caterina Bassi, a physicist and academic in 18th-century Italy. Segregated women’s colleges arose in the 19th century, and in 1903 Marie Curie became the first woman to receive a Nobel Prize, in physics. She went on to receive a second, in chemistry, in 1911, and her work on radiation is renowned.

    Although a total of forty women had been awarded the Nobel Prize by 2010, not all women can, or wish to, pursue this level of achievement. Many women are scientists because they love it; they enjoy discovery and simply want to do great science. Prof Michelle Simmons rightly stated that “women think differently, and that diversity of thought is invaluable to technological research and development”. Scientific research needs to be approached from many different angles, and this diversity can only enhance the findings and lend credibility to research outcomes as a whole. Prof Simmons also did not believe in “mandating equal numbers of men and women in every job”. It is just as wrong to hire on the basis of gender as on any other characteristic, unless that characteristic genuinely helps to advance the field. Girls and boys in early education should have the same opportunity to develop their potential in STEM disciplines without bias or stereotyping. Their differences will shine in positive ways if they are encouraged to develop their talents, whatever those may be.

    I have been in science for 35 years, and all but one of my supervisors were women, although this was never my objective. This does not count my PhD advisory committee, who were all men, and whom I chose because they were the best mentors for me. One in particular, a statistics professor who also served on the FDA CardioRenal Advisory Board, has remained someone I respect and admire because of his fierce and unwavering ethical stance on many issues, particularly the interpretation of clinical trial data. When I approached him, he surveyed me skeptically as yet another epidemiology student needing a statistics professor on her committee. By the end of the interview his attitude had been completely disarmed, and from that point on it was evident that we had struck common ground in our views on ethics in science. The respect we earned was mutual, and being an ‘older’ PhD student at the time, I was not intimidated by his occasional ‘wrath’ over issues of scientific misconduct and poor data interpretation.

    But I digress! Getting back to the women who were my supervisors: I was fortunate to have had many excellent role models and mentors over the years. Among the most outstanding qualities I have observed in successful women in science are the ability to be organized, to multi-task, and to lead by example. Honesty and integrity, discipline, respect for employees’ personal lives, fairness, challenging junior scientists to excel while giving them space to develop their thinking and offering calm, constructive guidance, and giving credit where credit is due, are the characteristics I have valued most in my mentors.

    I also often recall a situation where a difficult decision needed to be made, and a senior female scientist who overheard me discussing it came over and whispered, ‘To thine own self be true’. This has stayed with me over the years and continues to ring true in many situations. But applying it takes courage and conviction, and requires weighing up the pros and cons of each circumstance in which a decision must be made.

    There are certain principles that should govern all aspects of our lives, whether at home, in our working environments, or in our relationships; the measure of a person is their consistency across all of these. There may be times when you are challenged to stand up for your principles, and doing so can cost you your standing on a committee, or your reputation. But can you sleep at night knowing you were not true to yourself, or to the research you love and for which you strive for excellence?

    I have often thought about how best to explain ‘leadership by example’. The golden rule, ‘do unto others as you would have them do unto you’, comes to mind. People who issue edicts that they themselves either cannot or will not fulfill rarely gain the respect of their peers, at least not in science. Those who criticize others and demand what they themselves cannot deliver, while blind to their own incompetence, do not make good leaders in any field.

    It is also important for women in science to take care of their personal lives and pay attention to work-life balance, a topic that has recently received attention in a highly visible journal article. As a scientist you need clarity of thought, and few things cloud the mental landscape, and your research progress, more than a personal life that is disorganized and in disarray. Maintaining balance takes work and discipline, as well as considerable support from family and close friends. It is not your sole responsibility as a woman to organize everyone; rather, as in your supervisory role in science, delegating responsibility in a kind and supportive way, while not shunning it yourself, is critical to success.

    As part of ethical conduct, keeping your word is also important to your success as a scientist. I have had colleagues who offer to do a job and not only never get it done, but whose minds seem to go blank when it is brought to their attention, usually in an effort to think of an excuse. We all forget things at times, and to err is human. Taking responsibility for something you overlooked will enhance your credibility far more than making excuses. Eventually excuses will be in embarrassingly short supply, and the cold hard facts of your unreliability will cost you more than you realize. So don’t go there. Take ownership of both your achievements and your failures.

    Some final characteristics I have valued in the women in science whom I’ve known personally, particularly here in Australia, are conviction, graciousness, humility, humanity, and a sense of humor. These will elicit loyalty and support, and the willingness of your staff to voluntarily go the extra mile like no pay rise will. I believe this is the icing on the cake that will go a long way in attracting the best and brightest scientists to your team.

    Finally, there is so much more that women in science can do to advance their respective goals, simply by being true to themselves and not trying to be what they are not. Careers in scientific and medical research are universally known as a ‘hard slog’, and the participation of women will continue to bring to science those qualities that enrich the tapestry of human endeavor.

    At SugarApple Communications we celebrate all scientists and the labor of their research. We can help you find the best way to communicate with your intended audience and assist with writing, editing and statistics. Get in touch today and let’s talk.

    Feature Image AlesiaKan / Shutterstock.com

  • Association and causation — Is there a difference?

    October 9th, 2017 | by

    It may be obvious to many that an association between two factors does not necessarily imply causation. Yet we are frequently exposed to scientific reports that factor X is associated with disease Y, which are then recast as ‘X plays a key role in Y’, leading us to believe that X causes Y. We also find reports on social media of individual exposures believed to have caused disease simply because the two occurred at the same time. Such reports have led to false claims about novel associations and their role as causes of disease.

    For example, a positive relationship between the amount of damage caused by fires and the number of firemen at the scene does not mean that sending more firemen to a fire causes more damage. Similarly, high coffee consumption may be associated with a decreased risk of skin cancer, probably because high coffee consumption is associated with indoor lifestyles and activities, and therefore less exposure to the sun. But coffee itself does not protect against skin cancer; the association is not causal.

    Food has been a prime candidate in the search for causes and cures of cancer as far back as the 18th century. In an innovative study, Schoenfeld and Ioannidis addressed the question “Is everything we eat associated with cancer?” The authors selected 50 common and familiar ingredients from random recipes in a popular cookbook, and searched PubMed for studies linking each to cancer risk. For 40 of these ingredients they found articles claiming evidence of either an increased or a decreased risk of cancer. Some ingredients had reported effects in both directions, and many effects were implausibly large and tended to shrink in meta-analyses.

    So how do we decide whether an observed association is evidence for causation or not? Students of epidemiology or public health are taught to differentiate between association and causation, but may be tempted to exaggerate the implications of association studies when they enter the ‘publish or perish’ world of academic research. An inherent weakness of observational association studies is their susceptibility to confounding, which is one reason experimental studies may fail to corroborate their findings. Inferring causation from a single association study can therefore be misleading, and could potentially cause harm to the public. This is a major reason why preliminary results from association studies should be interpreted with caution, and if publicized, should be carefully presented, keeping in mind the aims of the study and ‘real world’ implications as opposed to statistical significance.

    “All scientific work is incomplete – whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time.” (Sir Austin Bradford Hill 1965)

    In 1965, at a meeting of the Royal Society of Medicine, Sir Austin Bradford Hill outlined nine tenets to consider when deciding whether an observed association reflects causation. He clearly stated that he had “no wish, nor the skill, to embark upon a philosophical discussion of the meaning of ‘causation'”. The starting point in assessing a causal relationship is generally the observation of an association or correlation between an exposure and an outcome that may or may not be attributable to the play of chance. The tenets are as follows:

    1. Strength of association. This refers to the magnitude of the effect of the exposure on the disease compared with its absence, often called the effect size. It is commonly reported as an odds ratio or relative risk, together with a confidence interval and p-value, and these measures should be considered together when judging how strong, and how real, an association is (see the worked example after this list).
    2. Consistency. The association remains even when other factors change, e.g. different time, place, location, ethnic groups, age groups, gender etc.
    3. Specificity. The causal factor is quite specific to the outcome. For example, if coal mine workers exposed to coal dust develop black lung disease, whereas those not exposed to coal dust do not, then coal dust specifically causes black lung disease. Working in a coal mine per se may not be the causal factor, since a person may be exposed to coal dust outside a coal mine. In an association study, it is important to isolate what is specific to the disease process to determine whether the association is causal.
    4. Temporality. This answers the question: Which came first? You would expect that if an exposure causes a disease, then the exposure should necessarily precede the disease development.
    5. Dose-response or Gradient. Evidence that an exposure causes a disease may be related to a certain quantity or dose of the exposure, in which case you may see varying degrees of disease depending on the extent of the exposure.
    6. Biological Plausibility. Plausibility asks the question: could a credible biological mechanism explain the observed results? A plausible mechanism helps argue for causation, but it is not absolutely necessary, as the understanding of the disease biology may still be immature.
    7. Coherence. Assessment of causation should not conflict with existing knowledge of disease biology. Coherence asks the question: If the association is indeed causal, would it fit into an existing biological theory? The difference between ‘plausibility’ and ‘coherence’ is subtle. Coherence assumes there is an existing biological theory, and rejects the result if it does not fit into that theory, while plausibility at least allows for it in the absence of mature science.
    8. Experimental evidence. This has also been called ‘challenge–dechallenge–rechallenge,’ meaning, if we prevent the exposure, is it likely to prevent disease, and if we re-introduce the exposure, does the risk of disease return? The best experimental evidence for causation comes from randomized controlled trials, although in some circumstances this may be unethical.
    9. Analogy. Causation by analogy implies that if an exposure is known to cause a disease, then a similar exposure under similar circumstances may plausibly cause disease as well.
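
    To make the first tenet concrete, here is a minimal sketch in Python of how an odds ratio, its 95% confidence interval and p-value might be computed from a hypothetical 2×2 table of exposure versus disease, using the standard log-odds (Woolf) approximation; all counts are invented purely for illustration.

    ```python
    import math

    # Hypothetical 2x2 table: rows = exposed/unexposed, columns = disease/no disease
    a, b = 60, 40   # exposed:   60 with disease, 40 without
    c, d = 30, 70   # unexposed: 30 with disease, 70 without

    odds_ratio = (a * d) / (b * c)          # (60*70)/(40*30) = 3.5

    # 95% CI from the standard error of the log odds ratio (Woolf's method)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    low = math.exp(math.log(odds_ratio) - 1.96 * se)
    high = math.exp(math.log(odds_ratio) + 1.96 * se)

    # Two-sided p-value from the Wald z statistic
    z = math.log(odds_ratio) / se
    p = math.erfc(abs(z) / math.sqrt(2))

    print(f"OR = {odds_ratio:.2f}, 95% CI {low:.2f}-{high:.2f}, p = {p:.1e}")
    # OR = 3.50, 95% CI 1.95-6.29, p = 2.7e-05
    ```

    In this invented table the exposed group has 3.5 times the odds of disease, with a confidence interval well clear of the null value of 1, i.e. a reasonably ‘strong’ association in Bradford Hill’s sense.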

    The Bradford Hill tenets are not meant to be a checklist for assessing causation, nor are they intended to be applied pedantically; rather, they should serve as a guide when evaluating whether an exposure might be causally linked to a disease. It is unlikely that any single association study will satisfy all nine tenets, and any given study may address several of them, or none at all.

    In fact, inferring causality may not require association studies or even a significant p-value. A classic example is the drug thalidomide, approved in Europe in 1957 for combating morning sickness among pregnant women. Subsequently, an explosion in the incidence of neonatal deaths and congenital birth defects, of a type that can only be described as horrific and extremely rare, occurred almost simultaneously in 46 countries where the drug was approved¹. Clearly, any study deliberately attempting to associate thalidomide with birth defects would be unethical. Moreover, the extremely low prevalence of this type of birth defect in the general population, coupled with the striking increase in its prevalence in countries where thalidomide was prescribed, requires no statistical measure of association to infer causation.

    The Bradford Hill tenets are neither irrelevant nor outdated, and they provide useful principles for establishing causation. With new technologies and advances, various scientific disciplines can contribute to a better overall understanding of the disease process, enhancing the application of these criteria and providing a stronger argument for or against causation.

    In the absence of indisputable and compelling evidence that an association is causal, it is important to consider the entire body of knowledge and think through all sources of evidence when determining whether an exposure causes an outcome or disease. As a well-respected statistics professor of mine frequently reminded us, causality is a ‘thinking person’s business’, i.e. don’t let your computer or statistics program, or for that matter, anecdotal or biased reports, decide on the evidence.

    As researchers we experience a eureka moment when our statistics program generates a p-value with many zeros after the decimal point, suggesting a negligible probability that our finding was due to chance, but we become crestfallen when a validation study generates a p-value suggesting that the earlier result was well and truly a chance finding. This does not mean that the study should be filed away in the endless repository of unpublished science, nor that it should be spun into something it is not. Negative results, assuming they are based on sound methodology, are not failed research; they are an important part of ultimately assessing causality and obtaining definitive answers to research questions, as highlighted in my previous blog on systematic reviews.

    In a future article I will go into more detail on some of the common statistical measures that are reported in the scientific literature and their implications for ‘real world’ evidence.

    ¹More on the Thalidomide tragedy can be found in the book ‘Suffer the Children: The Story of Thalidomide’ that chronicles this disaster.

    At SugarApple Communications we can help you find the best way to communicate with your intended audience and assist with writing, editing and statistics. Get in touch today and let’s talk.

  • A checklist for evaluating a systematic review

    September 6th, 2017 | by

    In this article, we focus on some of the key elements to look for in a systematic review, and how to assess its credibility, when searching for answers to a specific clinical question. As outlined in our last blog, a systematic review is a summary of all relevant studies on a clearly formulated clinical question, using systematic methods according to a strict, pre-defined protocol. It often involves meta-analysis, a statistical technique for pooling the results from different studies to provide a single estimate of effect. A major limitation of systematic reviews is that they are only as good as the studies they summarize.

    For simplicity, we define ‘treatment’ as any intervention, exposure, or clinical attribute that is being assessed in the systematic review, and ‘placebo’ as the comparison intervention or exposure. 

    1. There should be a clear, focused clinical question.

    As with any individual study report, the authors should clearly state the clinical question. It should include four elements, often referred to in the literature as PICO: the patient (P) or study population characteristics, the intervention (I), exposure or treatment regime, a comparison (C) intervention or treatment, and the specific outcomes (O).

    2. Is there sufficient detail on how the literature search was conducted?

    A well designed systematic review should give details of how studies were identified, e.g. which electronic databases were used to retrieve studies, any language restrictions, any additional sources of data including clinical trials registers and conference reports, and whether unpublished studies were included.

    If the search for relevant studies is not exhaustive, the results of the systematic review may be flawed. For example, one study showed that searching MEDLINE alone retrieved only 55% of eligible clinical trials. It is important to search multiple electronic databases, including EMBASE and The Cochrane Library, using a range of search terms, medical subject headings (MeSH) and synonyms (e.g. the MeSH heading “ovarian neoplasms” together with free-text synonyms such as “ovarian cancer”) to yield the best results.

    3. Are there pre-defined criteria for which study types will be included?

    A systematic review that involves a therapeutic intervention, or will contribute to clinical guideline development, should prioritize randomized controlled trials where available, as these are more reliable and less subject to selection bias compared to observational study designs. Systematic reviews that aim to assess the adverse effects of treatment may include observational studies, such as case-control studies and post-marketing surveillance studies.

    PRISMA guidelines recommend that more than one reviewer be involved in selecting and reviewing the studies for inclusion, in order to avoid subjective decisions. The kappa statistic (κ) of inter-reviewer agreement should be estimated and reported to give readers, and those using the results, a degree of confidence in the systematic review; a sketch of the calculation follows.
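
    As a minimal sketch of what the kappa statistic captures, the following Python snippet computes Cohen’s kappa for two reviewers screening ten candidate studies; the include/exclude decisions are invented purely for illustration.

    ```python
    # Hypothetical screening decisions for 10 candidate studies (1 = include, 0 = exclude)
    reviewer_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    reviewer_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

    # Agreement expected by chance alone: both include, or both exclude
    p_incl = (sum(reviewer_a) / n) * (sum(reviewer_b) / n)
    p_excl = ((n - sum(reviewer_a)) / n) * ((n - sum(reviewer_b)) / n)
    expected = p_incl + p_excl

    kappa = (observed - expected) / (1 - expected)
    print(f"observed = {observed:.2f}, chance = {expected:.2f}, kappa = {kappa:.2f}")
    # observed = 0.80, chance = 0.52, kappa = 0.58
    ```

    A kappa of around 0.6 is conventionally read as moderate-to-substantial agreement beyond chance; authors should also state how any disagreements between reviewers were resolved.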

    4. Does the systematic review include meta-analysis?

    Meta-analysis is a statistical technique that combines the results of multiple studies to produce a single estimate of effect, which tends to be more reliable than those from the individual studies because it is based on a larger sample size. However, meta-analysis should only be performed if the individual studies are sufficiently similar in terms of the PICO (patients, intervention, comparisons and outcomes). It is therefore important that the study question has a relatively narrow focus. For example, consider the following questions:

    A. What is the effect of all cancer treatments on cancer outcomes?

    B. What is the effect of chemotherapy on ovarian cancer survival?

    C. What is the effect of carboplatin-based chemotherapy on ovarian cancer-specific survival?

    D. What is the effect of standard doses of paclitaxel and carboplatin chemotherapy on ovarian cancer-specific survival?

    Question D considerably narrows the focus of the overall research question to a specific treatment for a specific disease condition and a specific outcome measure, and is more likely than Question A to provide a meaningful result with clinical application. However, the results of Question D will need to be carefully applied, as the question does not address differences in population and other aspects of the disease biology.

    5. The meta-analysis should include a test for study heterogeneity, and the results should be interpreted.

    Meta-analysis should always include a statistical test for heterogeneity, which assesses the consistency of results, or the variation in outcomes, across the included studies. Most tests for study heterogeneity generate a p-value that should be reported and interpreted by the authors in the context of the clinical implications of the findings. One of the most useful measures of heterogeneity is the I² statistic, which describes the proportion of variation across studies that is due to heterogeneity rather than chance; a sketch of the calculation follows. In practice, study heterogeneity may mean that the treatment effect differs between patient groups, for example according to ethnicity, age or gender.
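
    The calculation behind I² is straightforward. Below is a minimal sketch in Python of Cochran’s Q and the resulting I², using hypothetical log odds ratios and standard errors for three studies.

    ```python
    # Hypothetical per-study results on the log odds ratio scale
    log_or = [0.35, 0.10, 0.62]   # log odds ratios from three studies
    se = [0.15, 0.20, 0.18]       # their standard errors

    weights = [1 / s**2 for s in se]   # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)

    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (y - pooled)**2 for w, y in zip(weights, log_or))
    df = len(log_or) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    print(f"Q = {q:.2f} on {df} df, I-squared = {i_squared:.0f}%")
    # Q = 3.77 on 2 df, I-squared = 47%
    ```

    An I² of roughly 47% would conventionally be read as moderate heterogeneity, so the authors should discuss what might explain the variation before relying on the pooled result.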

    6. The results of meta-analyses should be graphically displayed with a forest plot.

    Forest plots are the most effective way to present individual estimates from the input studies included in the meta-analysis, as well as the single summary estimate derived from the meta-analysis. Study estimates are most commonly expressed as an odds ratio comparing the treatment vs. placebo or comparison intervention, along with their 95% confidence intervals.

    The odds ratio compares the odds of the outcome in the treatment group with the odds in the placebo group, and is often called the effect size. If the odds are similar in both groups, their ratio is close to 1, known as the ‘null’ value, i.e. no difference in outcome between the treatment and the placebo. For example, if 20 of 100 treated patients and 10 of 100 placebo patients experience the outcome, the odds are 20/80 = 0.25 and 10/90 ≈ 0.11 respectively, giving an odds ratio of about 2.25.

    The confidence interval is the range of effect sizes that are compatible with the study data; if the study were repeated many times, 95% of such intervals would contain the true population effect. It also indicates how ‘confident’ we can be in the results. A narrow confidence interval indicates a precise estimate that is likely to be close to the true population effect, whereas a wide confidence interval indicates a variable, imprecise estimate that should be interpreted with caution.

    A vertical line on the forest plot marks the ‘null’ value of no difference between treatment groups. The odds ratio for each input study is usually depicted as a square whose size reflects the study’s sample size, and therefore the ‘weight’ it carries in the meta-analysis; the horizontal line through the square represents its confidence interval. The summary estimate from the meta-analysis is typically a weighted average of the results of the individual input studies, and is represented at the bottom of the plot as a diamond: the vertical points of the diamond mark the summary estimate, and the horizontal points mark the limits of its confidence interval. A sketch of how the summary estimate is calculated follows.
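
    Continuing the hypothetical three-study example from point 5, this Python sketch computes the fixed-effect (inverse-variance) summary estimate that the diamond depicts, pooling on the log odds ratio scale and converting back to an odds ratio.

    ```python
    import math

    log_or = [0.35, 0.10, 0.62]   # hypothetical log odds ratios
    se = [0.15, 0.20, 0.18]       # their standard errors

    weights = [1 / s**2 for s in se]
    pooled_log = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))   # standard error of the pooled estimate

    summary_or = math.exp(pooled_log)
    low = math.exp(pooled_log - 1.96 * pooled_se)
    high = math.exp(pooled_log + 1.96 * pooled_se)

    print(f"summary OR = {summary_or:.2f}, 95% CI {low:.2f}-{high:.2f}")
    # summary OR = 1.45, 95% CI 1.19-1.76
    ```

    Note that the summary confidence interval is narrower than that of any single input study; this extra precision is exactly what the diamond conveys on the forest plot.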

    7. The systematic review report should include commentary on bias or study limitations.

    A well-conducted systematic review should report information that will help readers decide on the applicability of the results. It should include some commentary on possible sources of bias, e.g. publication bias arising from the tendency of journals to publish studies that have positive effects. Language bias can also be a factor if studies are selected because they are published in an English language journal. Authors of systematic reviews should comment on the risk of bias in the individual studies included, and their interpretation of the result of meta-analysis in the context of these limitations.  They should also include an explanation of study heterogeneity as a potential limitation, as outlined in point #5.

    8. An interpretation of the results and implications for clinical practice or further research should be provided.

    The authors should provide an interpretation and explanation of all reported statistical estimates and their meaning in terms of the clinical application of the study findings. Readers of systematic reviews can also visually inspect forest plots to identify differences in effect estimates, overlapping confidence intervals, and the direction of the effect from individual studies, i.e. are most of the odds ratios or squares in the forest plot falling on one side of the ‘null’ line or both sides?

    For an adverse outcome, effect estimates falling to the left of the ‘null’ line (odds ratio less than 1) indicate that the treatment reduces the odds of the outcome compared with the placebo, while estimates falling to the right (odds ratio greater than 1) indicate that the treatment increases them.

    A similar judgment can be made for the summary estimate from a meta-analysis, to gauge how confident you can be in the results. However, it is incumbent on the authors to provide their interpretation, the implications for clinical practice, whether the results are ready for clinical application, and what limitations should be considered in applying them.

    There is much debate in the scientific literature about whether the mass production of poorly designed and conducted systematic reviews, with exaggerated claims, amounts to research waste. Judging the quality of a systematic review is a first step in determining how credible its findings are, whether its methods conform to PRISMA guidelines for conduct and reporting, and how confident we can be in applying its results to healthcare.

    At SugarApple Communications we can help you find the best way to communicate with your intended audience and assist with writing, editing and statistics. Get in touch today and let’s talk.

  • Systematic review and meta-analysis: summarizing healthcare research

    August 11th, 2017 | by

    Healthcare research is becoming increasingly accessible as the level of reporting and commentary rises, both via established news sources and on health-related internet sites and private blogs. The public appetite for health information is growing, as evidenced by the popularity and uptake of such information, whether or not it comes from a trustworthy source. A well-written narrative review, or a carefully designed systematic review and meta-analysis, is an important way of summarizing healthcare research.

    We are bombarded almost daily with new and often changing reports about the latest research — “drink red wine”…“don’t drink red wine”…“drink red wine once or twice per week”… and so on. As we discussed in an earlier blog, overly ambitious claims may be contributing to a general mistrust, within society, of medical advice and guidance, even when the supporting data is reliable.

    A recent commentary in Nature suggested that more collaboration was needed between computational experts and population-health researchers in order to synthesize health evidence as more and more data become available. Reviews that aim to summarize the available research to provide the ‘bottom line’ are a valuable resource for clinicians, researchers and the community as a whole. Such reviews can be conducted in a number of ways – from narrative reviews or opinionated summaries to more analytical, protocol-defined systematic reviews capable of providing summary estimates that can guide health policy.

    Narrative reviews are traditional literature reviews that describe and summarize current knowledge on a specific topic or area of research. Authors of published narrative reviews should ideally have spent their careers working in that field. They generally use informal and subjective methods to screen and select studies to include, and apply their expertise on the topic to interpret and summarize these studies to provide a snapshot of the current research. Narrative reviews are largely qualitative, and tend to rationalize the diversity of information around the topic into a single coherent article, which critically evaluates the research, highlights gaps in the existing knowledge, and may even recommend areas that need more work. Authors should ideally also include a summary of the search criteria utilized, the inclusion or exclusion criteria, and their rationale for selecting them.

    Narrative reviews may be unsolicited, or commissioned by journals from leading researchers on topics of current interest, with the purpose of obtaining consensus statements and perspectives on where the current research lies, and of evaluating trends in research to direct future publications.

    Systematic reviews, in contrast, aim to identify, evaluate and summarize all relevant studies on a clearly formulated question, using systematic methods according to a strict, pre-defined protocol. Where the included studies are sufficiently similar, their results may be combined to obtain a single, more reliable overall estimate. The methodology involves a rigorous approach to ensure all possible, relevant research is considered, thereby minimizing the risk of bias. The criteria for inclusion should be transparent, and reasons for inclusion and exclusion clearly stated. All methods, including search terms, criteria for selection, appraisal of the studies, and statistical methods, should be reported to allow replication if needed.

    The trustworthiness of a systematic review depends on a priori planning (both with regard to study selection and analytic approaches), careful documentation of a planned protocol agreed upon by those involved, and strict adherence to the set protocol. This avoids subjective or arbitrary decisions with respect to data extraction, or bias and selective reporting of findings, which could negatively impact healthcare decision-making.

    Authors of systematic reviews should include commentary on the quality of each input study, an interpretation of their summary result and how best it can be used in clinical practice, their confidence in the result, and how they differ from those of other published studies. The old adage ‘rubbish in – rubbish out’ holds true for systematic reviews, and careful study selection is pivotal to their utility.

    “Meta-analysis is the analysis of analyses … the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings” (Glass 1976)

    Meta-analysis is the statistical technique used to combine and summarize the results of quantitative studies to provide a single more precise estimate. It is a common misconception that the terms ‘systematic review’ and ‘meta-analysis’ are synonymous. They aren’t. Some systematic reviews involve studies with results that cannot be pooled statistically. It would, therefore, be inappropriate to apply meta-analysis to such studies.

    The use of meta-analysis is not limited to systematic reviews. Meta-analysis can be applied to any set of small studies that address similar questions, and is particularly valuable where the individual estimates are inconclusive because of small sample sizes. Combining such studies in a meta-analysis increases the effective sample size, and therefore the statistical power to detect a true effect, as the sketch below illustrates.
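
    As a toy illustration of this point, here is a minimal sketch in Python, with invented numbers, of two small studies that are individually inconclusive (each 95% confidence interval crosses the null odds ratio of 1) but yield a conclusive pooled estimate under inverse-variance weighting.

    ```python
    import math

    # Hypothetical (log odds ratio, standard error) pairs for two small studies
    studies = [(0.40, 0.25), (0.35, 0.22)]

    for y, s in studies:
        print(f"study:  OR = {math.exp(y):.2f}, "
              f"95% CI {math.exp(y - 1.96*s):.2f}-{math.exp(y + 1.96*s):.2f}")

    weights = [1 / s**2 for _, s in studies]
    pooled = sum(w * y for w, (y, _) in zip(weights, studies)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    print(f"pooled: OR = {math.exp(pooled):.2f}, "
          f"95% CI {math.exp(pooled - 1.96*pooled_se):.2f}-"
          f"{math.exp(pooled + 1.96*pooled_se):.2f}")
    # Each study's CI includes 1; the pooled 95% CI (about 1.05-2.01) does not.
    ```

    This is the sense in which pooling increases statistical power: the combined sample supports a more precise estimate than either study alone.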

    Critics of meta-analysis argue that the technique amounts to combining ‘apples and oranges’, generating an essentially meaningless estimate. These are valid concerns for small observational studies with considerable heterogeneity, i.e. substantial variation between the results of the individual studies. Ideally, meta-analysis should only be applied to studies with sufficiently similar designs, study populations, interventions or comparisons, and outcome measures. The extent of heterogeneity between studies can be assessed using a number of available techniques, and authors of a systematic review should report the p-value for formal statistical tests of heterogeneity, with some discussion of it in the context of the overall utility of their findings.

    Much work has been done to standardize protocols for the conduct and reporting of systematic reviews and meta-analyses. The most widely endorsed and accepted is PRISMA, which was published in 2009. These and other guidelines on reporting health research studies can be found on the EQUATOR Network, an umbrella organization of multiple stakeholders with the goal of improving the quality of research and research publications. The Cochrane Library and Campbell Collaboration regularly publish systematic reviews and their protocols. Affiliated with Cochrane and EQUATOR is PROSPERO, an international database that prospectively registers systematic review protocols, which was launched through the University of York (UK) in 2011. By 2016, the number of prospective registrations reached 20,000 records from 107 countries.

    Narrative reviews, systematic reviews and meta-analyses are all useful ways to evaluate and summarize large numbers of studies on the same subject. However, they are only as good as the studies they summarize, and the relative merits and limitations of each should be appreciated. This is critical when reviews are used to guide medical decisions and develop clinical guidance. Systematic reviews that follow clearly defined protocols and adhere to the guidelines for good conduct and reporting are the gold standard, providing the most reliable estimates.

    In an upcoming article, we will discuss ways to identify characteristics of a sufficiently credible, well-written systematic review that ticks the boxes for good reporting practice.

    At SugarApple Communications we can help you find the best way to communicate with your intended audience and assist with writing, editing and statistics. Get in touch today and let’s talk.
