-
Writing a scientific manuscript ─ getting started
February 10th, 2020 | by Sharon Johnatty

As scientists we are not all gifted writers, nor is writing a manuscript a particularly enjoyable task compared to other aspects of research. Very few researchers actually ‘enjoy’ writing a scientific manuscript. Considerable time is involved, and there is always the possibility of rejection by a reviewer who may not appreciate the months or years of hard work summarized in ~5000 words or less. Sometimes the ‘getting started’ stage is the most difficult. This article is an overview of the process of writing a manuscript.
In general, you are ready to write your manuscript when you have reached that point in your research that suggests your results are publishable. That point may be fairly self-evident if you have closely followed an a priori hypothesis with a well-defined end-point. Some types of research can go on indefinitely (or as long as you have funding) if the answers to the research question generate more questions.
I have often heard senior researchers comment on the very clever researcher or collaborator who cannot write to save her/his life (or career). Getting into the necessary headspace to write can be the hardest part of research.
Here are some tips, gleaned from over three decades in research, on getting started with that manuscript that you’ve put on hold.
- Choose your journal. Create a list of journals that are suitable for your subject area and review their aims and scope. Try not to dwell too much on impact factor in the first instance, but think of where the articles that your research builds upon were published. For more on journal selection, see my earlier blog titled ‘Choosing the right journal – think before you submit’.
- Review your results. Thinking about the ‘design’ of your manuscript as you review your results will help you decide what other experiments you need, and what questions potential reviewers may ask. Plan your tables and figures before the write-up stage. These can be anything from flow-charts to graphs or illustrations that can be updated as more data come in. You will know you are ready for the write-up stage when the tables and figures tell a story. ‘A picture is worth a thousand words’, so use them to your best advantage.
- Start with an outline. Create a blueprint that will guide your manuscript. This can be as detailed or as basic as you wish, and helps to ensure flow and logic. Lay out the different sections in your document according to the journal requirement and include any relevant formatting or word count so you don’t have to repeatedly refer to the online author instructions. Think of any subheadings in the Results section and include them in the blueprint. It is not necessary to list these chronologically, but I have found that it does help with outlining the logical thought processes that guided the work. The ultimate goal should be to tell the story as simply as possible without the day-by-day diary of events that led to the final result. Consider whether to include experiments that did not substantively contribute to the result. It is a judgment call whether a failed experiment (I don’t mean negative results) adds to the current manuscript, or is a distraction from the main finding.
- Create or update your referencing software. Whether you are using EndNote or another referencing tool, check that you have imported all relevant references and that you have selected the correct output style. Linking your manuscript to referencing software will make life a lot easier if you do this before you start writing.
- Order of writing. Choose the order according to what inspires you most. Like a jigsaw puzzle, you can start at the top or bottom, left or right. I personally find it helps to write the Methods first, as this should be the least difficult section. Keeping a good lab book (a requirement of most research institutes) will help avoid frustration and ‘memory loss’ about complicated experiments you did months (or years!) ago. It may be helpful to write up results as you complete each experiment or analysis, rather than waiting until the final one is done.
- Write now, edit later. Very often my first draft will be full of ‘notes to self’ and ‘stream of consciousness’ text. There will necessarily be several rounds of editing before you are ready to share it with collaborators and co-authors. At the editing stage, aim for clarity and conciseness. Avoid overuse of transition phrases like ‘Next’ or ‘We then showed that’. Any sentence that makes you do a double-take needs to go! Break long sentences into shorter ones. Don’t get caught up with formatting requirements at this stage; that should be the last thing you do prior to submission, along with ensuring you have met the required word count. Remind your co-authors that you need substantive comments back when you send them the first draft — not formatting suggestions!
- Keep a close eye on potential plagiarism. Most journals use plagiarism-detection software, as do most reputable research institutions, so run a check through your institution before you submit. Automated software will almost always detect some level of apparent plagiarism. I recently got a report back from a fairly reputable journal, and was somewhat amused that standard phrases in my manuscript, like “associated with an increased risk” and “as a risk factor for”, were flagged as plagiarism! (The toy sketch after this list shows why such boilerplate phrases score as matches.) Reasonable journals will not ask for changes to every phrase the software picks up, but will highlight the ones that require your attention. At all costs, avoid lifting entire paragraphs or sentences from published manuscripts or online resources. ‘Self-plagiarism’ also needs to be avoided. If your manuscript is a follow-up to one you have published before, it can be tempting to save time and recycle sections that you legitimately wrote in a previous publication. Resist the temptation: regardless of who originally crafted the words, the software will flag the overlap as plagiarism.
- The nitty-gritty.
- The title can be written at any stage of the process, and crafting one is not a minor element of the paper. Avoid long, drawn-out titles; some journals put character limits on them. As a general guide, a title that runs to 2–3 lines is too long. Try to find the phrase that ‘sells’ your work and encourages the reader to want to know more. Shorter titles that are easy to understand and informative are also more likely to reach a wide audience and be shared on social media than those laden with scientific jargon (reported in the Journal of Clinical Epidemiology, 2017). More on crafting a title can be found in my blog titled “It matters how you write”.
- The Abstract may be one of the last sections you write. Word count limits will force you to again focus on the main message. Where there is a lot of information in the Results, prioritizing what to include in the Abstract can be daunting. Work your way back to the a priori hypothesis and the main result. Remember to include key words and search terms in your Abstract that may be used by other researchers for PubMed or Google Scholar searches.
- The Results section is what reviewers tend to focus on. Take the time to properly format your tables and figures, and avoid sloppiness. As a reviewer, I have seen manuscripts with tables that look like they were just pasted into a Word document from Excel without any attempt to reformat them. The same applies to images. Be pedantic about data presentation. You run the risk of a reviewer expecting the rest of the paper to be sloppy if they cannot make sense of your data.
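Returning to the plagiarism point above, here is a toy illustration of why stock scientific phrases get flagged. This is only a sketch; commercial plagiarism tools are far more sophisticated, and the example sentences are invented:

```python
from difflib import SequenceMatcher

# Two "independent" sentences that share only stock epidemiology phrasing
a = "Smoking is associated with an increased risk of lung cancer."
b = "Obesity is associated with an increased risk of heart disease."

# Fraction of matching characters; a naive matcher flags high ratios as overlap
score = SequenceMatcher(None, a, b).ratio()
print(f"similarity = {score:.2f}")  # scores high purely because of the shared phrase
```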
Planning for a manuscript takes time and commitment. My best advice to doctoral students and early career researchers is to start writing early in your career, as there is no avoiding it. Pay attention to the writing styles of your mentors, and be open to their advice, edits and comments on your drafts. Good mentors will also provide guidance in developing writing skills. Some universities and research institutions offer writing workshops; take advantage of these opportunities whenever possible. Allocate time at the end of each day to write up your results as your research progresses. Writing skills take practice, so start early.
Finally, celebrate your achievements!! Publishing your research is a major career milestone worth celebrating with your colleagues. Make it memorable!
At SugarApple Communications we are experienced in all stages of the publication process, and can advise on data presentation and analysis. If you are time-poor and need to get that manuscript out, get in touch today and let us talk.
-
Compassionate use — what it means in practice
July 21st, 2019 | by Sharon Johnatty

Compassionate use refers to patient access to investigational therapies outside of clinical trials when approved treatments have failed, and the survival or alleviation of suffering of a patient depends on finding an alternative. While the primary objective of clinical trials is to investigate the efficacy and safety of a drug in the general population, compassionate use, also referred to as ‘expanded access’, seeks to obtain benefit for a specific patient or group of patients whose options are limited.
The term ‘compassionate use’ goes back to the early stages of the AIDS epidemic when experimental drugs were undergoing clinical trials. Programs allowing access to these experimental drugs came to be known as ‘compassionate use’, because people were dying from this untreatable disease faster than the clinical trials could provide solutions.
Recently the subject of compassionate use came up in the course of my scientific writing work. While my client was well versed in the practical implications and regulatory requirements of compassionate use, some of my medical writing colleagues indicated they had not encountered it before, though they correctly noted it would require the right ethics approval. This article attempts to shed light on the issue from the perspective of both regulatory bodies and patients.
Over 99% of requests for compassionate use of investigational drugs are approved by regulatory authorities; however, more than half of investigational drugs entering pivotal clinical trials fail, primarily because of inadequate efficacy or safety concerns
Requests for compassionate use of investigational therapies increased 2-fold from 2005 to 2014, and the numbers have risen significantly since 2015. The rise in requests is mainly due to a greater awareness of promising therapies from the internet. Potential pandemic threats from emerging infectious disease, coupled with the willingness of chronically ill or dying patients to try a novel untested therapy, have contributed to this increase. Patients may even launch social media campaigns to raise awareness of their plight.
A well-publicized example of the power of social media in influencing decisions about compassionate use was that of Josh, a seven-year-old boy who was critically ill at St Jude’s Hospital in Tennessee, USA, with a life-threatening viral infection. After the failure of approved treatments, his doctor requested access to a drug that was undergoing a phase 3 clinical trial. It would be heartless not to see the dilemma facing physicians caring for Josh, who had battled childhood cancers for all but 9 months of his life, only to watch him succumb to a viral infection that could be treated with a medication that had proven efficacy in earlier clinical trials. On the flip-side, the drug company had stopped accepting new requests for compassionate use, deciding to focus their resources on the phase 3 trial required for FDA approval that would, in the long run, be in the best interest of the greatest number of patients. We may ask, why could they not do both? It takes considerable resources and people-time to get the necessary approvals in place for compassionate access to investigational drugs. In Josh’s case, the company denied a second plea by the hospital administration for access, which led a relative of Josh to post on Facebook in an attempt to identify someone at the company who could be persuaded to change the decision. This mushroomed within days into mainstream media coverage and more social media pressure, with the company’s management and board members receiving hundreds of phone calls and emails pleading for access for Josh, and even death threats. State and national politicians got involved, and discussions began between regulatory authorities and the company on how to permit compassionate access without compromising the ongoing clinical trial. Five days after the first Facebook post, the company announced they would make the drug available to Josh as part of a small open-label trial.
The outcome was good for Josh, who responded well to the treatment. However, there were many unexpected downstream consequences. Where social media led to the release of the drug for an individual, an important question raised was “Do attempts to help individuals in immediate need place at risk the pursuit of evidence-based regulatory approval that will make a product available as quickly as possible to the largest number of affected or soon-to-be affected individuals?” Smaller companies with fewer resources are more likely to have difficulty addressing such a moral dilemma.
Patient access to experimental treatments presents a very powerful moral and ethical dilemma, and has generated much debate in the scientific community. On the one hand, there is the immediate need of a very ill patient who has not responded to approved therapies, but is aware that an investigational one exists. On the other hand, the process of obtaining regulatory approval for compassionate use rests with the manufacturer, and may delay or divert resources away from the more streamlined and efficient process of a well-designed clinical trial that will ultimately benefit a much larger number of patients. There is also the healthcare provider, who has a duty to help and not harm patients, and to decide whether any potential for benefit outweighs the likelihood of harm from treatments with insufficient or limited safety data.
Very often regulatory authorities are blamed for lack of access to investigational drugs. Yet since 1999, the US FDA, for example, has approved over 99% of expanded access requests. This has raised further concerns that pressure on regulatory authorities to approve these requests might circumvent science and logic, and create a system of unfairness and inequality. However, compassionate use requests can only proceed if the company that owns the therapeutic entity applies for approval with the appropriate regulatory agency, a process that often delays the entity’s development. If a drug is already in a clinical trial program, the available supply may already be allocated to the trial, with little left over for compassionate use.
Even if the drug has not yet entered phase 1 clinical trials, there is often a high likelihood that it will not be approved. Recent published estimates suggest that among novel therapeutics that entered trials between 1998 and 2008, only 36% were approved by the US FDA. Failures were mainly due to inadequate efficacy (57%), while safety accounted for 17% of failures.
There is also the question of risk versus benefit in the specific patient, and the extent to which those seeking the therapy can truly be ‘informed’ about these risks, given their circumstances, in order to give informed consent. Patients for whom an untested drug is a last resort are likely to overestimate its potential benefits and underestimate its risks, and ensuring that decisions, whether made by the patient or a close relative, rest on a thorough understanding of the risks can be very problematic.
With regard to ethics and the medical practitioner, the ‘first do no harm’ principle immediately comes into force. The potential for benefit, however, is perhaps the counterpoint to the argument, and weighing this up can be fraught with assumptions and value judgments. Decisions by the medical practitioner should be made on a case-by-case basis, and should involve consideration of whatever data are available on the drug, the patient’s prognosis, and the unintended mental and emotional consequences of giving hope where none can be assured.
Compassionate use requires fairly lengthy ethical and legal processes before access can occur. In the US, access to experimental treatments for emergency use requires approval by the manufacturer, the FDA, and the ethics committee of the institution where the patient is being treated; the manufacturer has no legal obligation to make the treatment available. The FDA also requires that doctors who offer the investigational therapy must, in general, follow all procedures as if the patient were part of a clinical trial, including reporting of adverse events. In Australia, two schemes enable clinicians to use unauthorised treatments: the Authorised Prescriber Scheme (APS) and the Special Access Scheme (SAS), both regulated by the Therapeutic Goods Administration. Ethics approval, informed consent of the patient, and a thorough clinical justification are needed for both schemes.
In 2018 the US signed into law the Right to Try Act, which gave patients with life-threatening conditions an alternative pathway to access investigational drugs that had completed phase 1 clinical trials, without the need for FDA authorisation. This has been criticized for a number of reasons, chiefly that it permits use of drugs that have not been fully trialled, and that the FDA’s apparent ‘hands-off’ role could undermine proper regulatory oversight of drug development.
For patients with serious life-threatening disease for whom authorised treatments have failed, compassionate use may be a last resort. There are also the issues of how to allocate limited supplies of an investigational drug in a transparent, ethical, evidence-based and patient-focussed manner. It may be the lucky few who know about investigational drugs, can rally the forces of social media to their cause, or have the tenacity to navigate the processes involved in obtaining them. There is, however, no consensus on how best to handle these ethical and moral dilemmas, and deciding what constitutes fair access, and whether individual need should be placed ahead of the ‘greater good’, remains an emotional and legislative minefield.
At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Get in touch today and let’s talk.
-
Grant writing — principles and practice
February 27th, 2019 | by Sharon Johnatty

Writing a grant proposal takes a significant amount of time and effort, and there are no guarantees. In general, poorly written grants are unlikely to be funded even if the project is scientifically sound and focuses on a major unmet need. Given that only ~20% of submitted grants will be funded, a well-written grant greatly improves the likelihood of success. This article offers some general principles and tips on the practical aspects of grant preparation from the Australian perspective, though they may not apply to all schemes.
“More than 600 working years of researcher time goes into each round of NHMRC grant applications” (A. Barnett article at Croakey.org)
As outlined in my previous article on this topic, it is critical to read and re-read the guidelines and criteria for each funding mechanism, particularly since the recent changes to funding schemes outlined by the National Health and Medical Research Council, the primary funding body in Australia. Details of peer-review guidelines and a flow-chart from the NHMRC scheme-specific peer-review guidelines can be accessed here.
The synopsis
- Selected grant review panel (GRP) members and Spokespersons will determine their suitability to assess your grant on the basis of the synopsis. They may have to read and prioritise 15-20 applications in a 4-week period.
- **Tip** Make sure the synopsis captures your aims and is readable by anyone; avoid technical jargon here.
- The synopsis should include a brief background, methods and significance. Do not pitch it so narrowly that only someone in your field understands it. Remember, those who are knowledgeable about your area of research will leave the room when your grant is being reviewed by the GRP.
- A badly written synopsis may mean that your grant ends up being reviewed by the wrong spokesperson, which may be fatal to its chances of being funded.
Overall structure
- Devote the first few pages to selling the grant.
- The first page is critical — panel members will form their first impressions of fundability early in their review.
- It should start with a short overview of the topic and include a brief summary of the methods and significance.
- **Tip** Try to limit the first page overview to about a third of the page. Your aims and hypotheses should also appear on the first page.
- By the end of the first page it should be clear in the reviewer’s mind what you plan to do (minus the details) and the reviewer should be hooked.
- Begin the research plan around page 3 or 4.
- Sections on feasibility, timelines and significance can go on the last page, and include the role of associate investigators if they cannot be mentioned in the team capability sections.
Aims and hypotheses
- These should be absolutely clear. It is worth the extra time it takes to get this right, as it is perhaps the most important part of your grant.
- Try to avoid conditional aims, i.e. where the success of subsequent aims depends on the first. If this is unavoidable, be sure that your preliminary data fully supports the earliest aims, and try to leave the reviewer in no doubt that they can be achieved.
Background
- Do not use too much space on the Background; try to keep it concise and relevant to the aims. A long-winded Background can be exhausting to read.
- **Tip** Devote no more than 1–2 paragraphs on background statistics and epidemiology (if relevant).
- Carefully review every sentence and decide if it contributes anything of importance. If not, delete it.
Preliminary data
- Preliminary data is where you can highlight the innovation in your project; it also provides proof of concept and evidence that you have the necessary skills or technology. This is particularly important if you are applying for funding to do a larger study and can show that the project is viable.
- **Tip** Preliminary data should begin no later than page 4 and should be ~2–3 pages; some projects involving data collection may not require preliminary data; in that case, evidence that you can do this work should be in your track record.
Methods
- Make sure the methods clearly relate back to the aims and that no new aims appear in this section.
- Make sure your primary aim is sufficiently powered and achievable.
- **Tip** Include sample size and power calculations. It is critical that these be done correctly and described accurately and clearly (see the sketch after this list for the kind of calculation involved).
- If your project requires serious statistical input and you do not have a statistician on the team, you may be asked why, as this puts you at risk of failing to achieve your aims.
- If your aims include exploratory work that is underpowered, include them as secondary aims and clearly outline why they are worth doing even though they are underpowered.
- Analysis approaches should be included with each aim and as an integral part of the methods.
- Statisticians can be quite influential on review panels, so seek a statistician’s advice on your analysis approaches early, not as an afterthought.
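As promised in the tip above, here is a minimal sketch of a two-group power calculation in Python using statsmodels. The effect size, alpha and power values are hypothetical placeholders; your own numbers should be checked by a statistician:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical inputs: detect a medium effect (Cohen's d = 0.5)
# at two-sided alpha = 0.05 with 80% power, two equal-sized groups
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   ratio=1.0, alternative='two-sided')
print(f"~{n_per_group:.0f} participants per group")  # roughly 64 per group
```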
Significance and Innovation
- This sets the ‘mood’ and level of enthusiasm for the grant. Use this to get the reader in a positive frame of mind about your grant, and to ‘sell’ your grant.
- **Tip** Significance can be worked into the Background; alternatively put as much of the S & I into the Overview box on the first page; don’t wait till pg. 6 to tell the reader the significance of what you are doing!!
- Be sure to include estimates of ‘burden of disease’ to support the significance, particularly in cases where the project is not about a ‘life and death’ disease.
- Generally a project may have significance or innovation or both; for example, a topic that is very close to clinical application may be highly significant but lack innovation, because that part has already been done.
Choosing your team
- Think about the team carefully; don’t include any ‘guest’ CIs or international ‘high-flyers’ just for their impressive CVs; most reviewers on the panel will only consider the CIA–CIC, with more weight placed on the CIA.
- If the work is to be accomplished by both CIA and CIB collaborating to supervise and accomplish the work, then be sure to point that out in your justification.
- When listing AIs on the grant, be sure to indicate what their role is if the work is dependent on them, as most of the focus goes on the skills of the CIs.
- Think of how you write the team capacity page; this must be appropriately pitched especially if you are trying to help a more junior person in your lab get a grant funded; it is important to say how they will work together with the team.
Budget
- This is a major sticking point and is almost always scored down; you need the right amount of justification, and your estimates must be neither too low nor too high.
- The budget is the last thing to be addressed by the GRP, so your grant needs to escape the ‘not for further consideration’ pile to get to this point.
- Never ever divide the total budget by the number of years of the project; for example, if it’s a 5-year grant, do not simply put 20% effort across all the yearly columns: it’s a bad look, and it shows you have not estimated actual expenditures for each year of the grant.
- There is a lot of pressure on panels to cut budgets, so be sure to justify every item.
Pesky details
- Make sure you follow formatting guidelines to the letter, and keep overall formatting and style consistent.
- Include some white space; the overall look of the grant should suggest readability.
- Use bold/italic/underline formatting judiciously and avoid the appearance of ‘shouting’.
- Avoid very complex figures with impossible-to-read fonts; don’t recycle figures from published manuscripts; simplify figures and make it clear why each is relevant to the application.
- Don’t risk annoying the reviewer with careless writing, spelling errors, convoluted sentences that need a ‘double-take’, and grammatical mistakes; have someone else review the grant and check that it flows well.
- Avoid over-use of acronyms; it becomes onerous to constantly flip back and forth to figure out what they mean.
Rebuttals
- Make sure you answer every query in your rebuttal; if there are any comments in your review on your budget, be absolutely sure you address them in your rebuttal.
- Be assertive but polite in the tone of your responses; don’t be rude, but on the other extreme, don’t grovel.
- Try to keep emotion out of your rebuttals; avoid any indication of annoyance or irritation at reviewers’ comments; be factual and truthful even if you disagree, e.g. “I respectfully disagree…”, and clearly articulate why.
- Avoid statements that may be construed as insulting, e.g. ‘as clearly described on pg 4’ because obviously it was not clear to the reviewer.
Finally, consider submitting only one very well-written grant in any given year, and make sure you have thought through all elements of it. Start early and think carefully about how it is written. Remember the review panel moves very fast, so keep grants simple, readable, and as flawless as possible. Grant-fatigue can blind you to glaring mistakes, so ask someone independent to read it and check that it has all the necessary elements to make it a winning grant.
At SugarApple Communications we can help with writing, editing and presentation of your research ideas in grant proposals and manuscripts. Get in touch today and let’s talk.
-
Grant writing — planning for success
February 19th, 2019 | by Sharon Johnatty

Many careers have floundered because of grant funding woes, and talented scientists have chosen to remain on the fringes of research because pathways to a successful research career are neither straightforward nor predictable. Life in academic research is often littered with rejection of one form or another, whether it is from the journal of choice for publication of years of research, or the funding agency that this work is dependent on.
Grant funding is pivotal to a successful career in academic science. As many researchers know, there is considerable stress, sleepless nights and anxiety associated with this — from the planning and writing stages of this process to the announcements of application outcomes.
This article summarizes current thinking on grant funding, gleaned from various sources, including published research and commentaries on biases in how research proposals are reviewed and prioritised, which apply across funding schemes.
“It’s best to start planning for a grant application at least 9-12 months before the submission deadline” (Anne Marie Coriat, Head of UK and Europe Research Landscape at Wellcome Trust, London; Nature “Working Scientist” podcast series, January 2019)
There is no guarantee that even the best-written grants will get funded, but good writing does help their chances. The review process is broadly similar across funding agencies and involves a system of scoring and prioritizing according to the specific criteria outlined for the funding mechanism. Grant reviewers are expected to differentiate the very best grants from the weaker ones.
A recent study involving replication of the National Institutes of Health peer-review process examined the degree of agreement between different reviewers and how the reviewers went about scoring applications. The results highlighted considerable subjectivity in how applications were evaluated.
“We found that numerically speaking, there really was no agreement between the different individual reviewers in the score that they assigned to the proposals. We also found that when we were looking at the relationship between the strengths and weaknesses (written) in a proposal, and the score that was assigned, we did see a relationship between the number of weaknesses that a reviewer would identify in their critique and the score that the reviewer assigned, but that relationship between the weaknesses and the score doesn’t hold up between different reviewers.” (Dr Elizabeth Pier, “Inside the NIH grant review process”, Nature Careers Podcast, January 2019)
On the positive side, responding to reviewers’ comments and providing feedback played a strong role in grants eventually being funded. Given the apparent ‘randomness’ of the review process, it is important not to take the critique personally, and to have the tenacity not to give up. Accept that the review process is not completely objective, and that humans are fallible and subjective. Judging grants is highly complex, and predicting whether a project is likely to succeed if funded is a very difficult decision.
The study also showed that, as with so many other human endeavours where a critique is involved, weaknesses in a grant were far more predictive of the score than its strengths. So every effort to minimise weaknesses is well worth it.
In general, a phenomenal amount of time is spent preparing research proposals. One commentary calling for reform of the National Health and Medical Research Council grant system estimated that Australian scientists spend 550 working years of researcher time preparing grants each round, equivalent to a combined annual salary cost of AUD $66 million, more than the total salary bill of a major medical research centre that produced 284 publications the previous year.
Major reforms for administering and funding grant proposals have been suggested by leading researchers and institutions for almost all major funding agencies. Until these are implemented, other points that are broadly applicable across funding schemes are as follows:
- Read, read and read again – the guidelines for each funding scheme. All funders have very clear guidance on what each type of scheme involves and the sorts of things they’re looking for.
- Understand the requirements and the deadline first, and then work back from there.
- Think carefully about how to express the importance of the problem you are trying to tackle, as this is something reviewers talk a lot about.
- Applications that focus on a condition so fatal and severe that it is difficult to obtain sufficient preliminary data, a large enough sample size, or previous research supporting interventions or treatments, may not be prioritised.
- Think carefully about how you order your aims in tackling the research question, and try to ensure that your first aim is achievable; otherwise the rest of your research project is at risk of reaching a dead-end early.
- Strong preliminary data is important, although this is the catch-22. You sometimes need to have done a fair bit of the work outlined in your proposal in order to get the funding to do the work! Most researchers end up using funding from other smaller grants to generate the strong preliminary data needed for larger grants, and to show that they have the technology to make the grant viable.
- Discuss your ideas with colleagues who are not pursuing the same work but who know enough to comment on
- robustness of experimental design
- your team and having the right collaborators
- possible alternatives to experiments if they go wrong
- costings of materials and budgets etc.
- Pay particular attention to the summary statement, because this is the main focus of grant review panels. It needs to tell the panel what the aim of your project is, why it is important, and what you are actually going to do. It also needs to reflect the fact that you are the best person to do this research; so there need to be statements like ‘we previously showed’ or ‘I have contributed to…’ that make clear you have worked in this field before.
- Also make sure you have sufficient detail in the body of the application so that it is clear to the reviewers that the work is achievable and that you have the necessary expertise among members of your team. Most funding bodies expect quite a lot of detail, particularly where analyses are involved. Great ideas without evidence of how you plan to tackle the work will worry the review panel.
A final point that warrants attention in this article is the very important resource that is available at most academic institutions — the army of grant support officers whose business it is to oversee the application process and liaise with funding agencies. It is critical to work with them and follow their advice. I once heard a rather telling comment by a researcher, who said the grants officers were there because they ‘didn’t make it in science’!! Don’t ‘dis’ your best sources of support. I’ve been on both sides of the submission process and know the frustrations of grants officers who all but pull their hair out when researchers ignore their advice — because some researchers think they know better! At the same time, given the considerable stress of obtaining funding, a patient and supportive grants officer who responds promptly to queries and concerns can be your beacon of light in your darkest hour.
In a follow-up article I will outline some more focused and practical suggestions and tips to consider for grant writing.
At SugarApple Communications we can help with writing, editing and presentation of your research output and ideas, whether for grant proposals or manuscripts. Get in touch today and let’s talk.
-
Moral courage to do the right thing and its rewards
October 20th, 2018 | by Sharon Johnatty

The debate around whether glyphosate causes lymphoma has raged on many fronts since the landmark decision by a California court found in favour of the plaintiff in the case of Dewayne Johnson vs Monsanto. The question of whether glyphosate (sometimes written ‘glycophosphate’), the active ingredient in the well-known herbicide Roundup®, causes lymphoma has been the subject of public discourse and discussion forums. These issues need a thoughtful, common sense, honest and public response, one that shows the company to be ethical and transparent in its promotion of this vital product to farmers worldwide. By ethical we mean an adoption of standards and advice that goes beyond the mere assertion of legal or scientific criteria that define ‘safe’, to common sense, man-on-the-land advice. This would clearly identify manufacturers who have the moral courage to do the right thing in terms of the needs and well-being of their customers.
Regarding the science behind the claims, the most comprehensive evidence synthesis done to date was published by the International Agency for Research on Cancer (IARC) in 2015. The IARC report concluded that there was “limited evidence in humans for the carcinogenicity of glyphosate”. This is the basis for its position statement that glyphosate is “probably carcinogenic to humans”. The IARC subsequently came under attack for its evaluation of glyphosate, but stood its ground and responded earlier this year in an open and transparent manner.
“The epidemiological evidence that glycophosphates are associated with an increased risk of lymphoma is very weak. This is why IARC class them as possibly carcinogenic. Furthermore if there were a risk it is modest and would not be big enough to conclude that it is more likely than not that in any given individual with lymphoma who was exposed to glycophosphates that the exposure was cause of their cancer.” (Paul Pharoah, Professor of cancer epidemiology, University of Cambridge)
Several studies on glyphosate and lymphoma can be accessed on PubMed, including four systematic reviews relevant to lymphoma and glyphosate, one of which also reviewed the IARC study. Overwhelmingly, these studies reported that the overall body of literature was “limited”, “inconsistent”, “associations were weak”, and therefore a causal link between exposure to glyphosate and lymphoma could not be established.
While opinions on the science abound, what concerns me is the extrapolation that ‘limited evidence’ is taken to mean the chemical is safe. This is the classic fallacy upon which alternative remedies are promoted as efficacious―that of placing the burden of proof on the ‘no’ camp rather than where it belongs―on those promoting the product. One article published by The Conversation touted its safety, stating “Establishing whether a chemical can cause cancer in humans involves demonstrating a mechanism in which it can do so.” Respectfully, I disagree. We do not need to demonstrate a mechanism before we establish causality. We can hypothesize about the mechanism, and this helps to strengthen the argument, but it is not a necessary requirement for establishing causality. A well-known example is the thalidomide disaster that shocked the world in the 1960s. Did we know the mechanism by which thalidomide led to birth defects before we acknowledged that it caused them? No! All we had was a very high degree of specificity between thalidomide and some very rare birth defects, and that was enough to establish causality and withdraw the drug from further use.
So what does the science tell us about glyphosate? Without a doubt the scientific evidence to support an association between lymphoma and glyphosate is suggestive but weak. Those who state that the California ruling shows ignorance of the science clearly do not understand that the ‘science’ on its own is inconclusive, and the legal ruling was evidently based on more than just the science.
This leads to my next concern: evidence surfaced that the company took steps to systematically attack any science or scientist suggesting its product was not safe. If the product was “as safe as table salt”, then claims that the company was trying to cover up evidence to the contrary should have had no bearing on the case. Lack of transparency and attempts to cover up information are generally rather telling indicators of a company’s ethical standards. Another concern I have: can a chemical designed to kill something really be considered less dangerous to humans than table salt?
I have worked with companies to help them understand the science behind the adverse events associated with drugs that led to litigation, and have reviewed published literature for scientific evidence to support expert testimony by those hired to front up in court. I have also reviewed company documents confiscated by the law firms involved in the litigation. There were instances where the scientific evidence from the published literature overwhelmingly supported a causal link between the drug and the adverse event in question. One of the cases that I worked on produced minimal evidence from the scientific literature to support the claim brought by the plaintiff, although I was expected to literally go through thousands of published articles. (Needless to say I could not in good faith continue this because this is not how evidence for causality works!!). However there was evidence in company documents that revealed an internal culture of covering up adverse event reports, and that regulatory authorities had levied fines for failure to report adverse events, which tends to raise red flags. In that particular case, although the scientific evidence was lacking, the jury found in favour of the plaintiff because the evidence of failure to be transparent about adverse events took centre stage in court proceedings.
Jurors are typically not scientists. They are down-to-earth, common sense folk like you and me. Giving the impression of a willingness, if not an outright preparedness, to blatantly deceive the public for profit, as we recently witnessed in the Australian Banking inquiry, will inevitably come with a hefty price tag.
As with any product, regulatory authorities need to weigh up the benefits against the risks and state this in relative terms rather than definitive terms like ‘safe’ or ‘harmless’ because the public hears this and takes little or no precaution. The ultimate responsibility, however, resides with the manufacturer who would know their product better than any independent scientist.
It is critical that business giants, whether in the financial, pharmaceutical, or agricultural sector, recognize that they stand to lose more if they lack the moral courage to do the right thing by their customers.
As individuals we need to take personal responsibility and assess potential long term health risks, because when it comes to our health, we are in control; to not do so can have grave consequences, not only for ourselves but for those we love. If the risk is 1 in 10,000, that ‘one’ may be you, and then the probability of a bad outcome is no longer 0.01% but 100%. As consumers, we need to be aware that the use of chemicals on crops is a fact of life, and observe the ‘all things in moderation’ approach. Foods we consume may be ‘safe’, but there are elements in anything that, if consumed in large enough quantities, will render them ‘unsafe’—not necessarily in and of themselves, but potentially in combination with genetic and environmental factors that may interact to trigger a disease or health condition.
Safe and effective weed control has huge benefits for the global agricultural industry, but along with the many occupational hazards associated with this industry, farmers using these products in large quantities and with long term exposure have the most to lose in terms of adverse health outcomes and loss of income from failed crops. If you are a farmer using glyphosate or any other chemical, it would be wise to take all possible precautions: suit up if necessary, and wear masks and gloves to minimise physical contact with chemicals.
Those with the task of reviewing the science need to take a responsible, balanced approach and realize that the ‘stop worrying and trust the evidence’ advice is naïve and simplistic. Frankly, the research to date on glyphosate is saying only one thing – THE SCIENCE IS INCONCLUSIVE – and we cannot claim the chemical is universally safe. We also don’t know whether individual factors like genetic mutations, combined with the cumulative effects of long-term exposure to glyphosate, can trigger cancer. So until there are known markers that allow genetic testing for susceptibility, a common sense measure is to take all available precautions to ensure minimal physical exposure.
Those who have a responsibility to the public, whether for-profit companies or government regulators, need first and foremost to do what is true, honest, just and, as assessed by the man in the street, commendable. Point-of-sale distributors who work closely with farmers are often very knowledgeable about products and can offer guidance and advice on safety. Clear and accessible guidelines from manufacturers on how to minimise exposure and risk are imperative for all ‘poisonous’ products — because absence of evidence is NOT evidence of absence.
At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Get in touch today and let’s talk.
-
Evidence Synthesis—Deciding what to believe
September 19th, 2018 | by Sharon Johnatty

We often hear the words “evidence-based” thrown around in the media and from politicians on a range of issues like climate or environmental policy, or from those promoting the health benefits of their products. As consumers we put a great deal of stock in ideas and theories described as ‘evidence-based’ because the term has a nice authoritative ring to it. A key question we should be asking is whether the evidence came from a comprehensive evidence synthesis, i.e. whether sufficient effort was put into reviewing all available evidence on the topic to draw the stated conclusions.
The dictionary definition of synthesis is “the combination of components or elements to form a connected whole”. Evidence synthesis is the process of pulling together information and knowledge from various sources and disciplines that influences decisions and drives public policy. It is the ‘how, what, why and when’ that goes into ‘evidence-based’ decisions.
The earliest records of evidence-based decisions come from medicine, as documented by James Lind, who pioneered the Royal Navy’s approach to dealing with scurvy in the mid-1700s. In fact, the British adoption of citrus in their sailors’ diet was one of the factors that gave them superiority over all other naval powers, until the practice became universally adopted. It is interesting to note that Florence Nightingale called statistics ‘the most important science in the world’ as she collected data on sanitation to change hospital practice. She was also an advocate for evidence-based health policies, and chastised the British parliament for its scattered approach to health policy.
“You change your laws so fast and without inquiring after results past or present that it is all experiment, seesaw, doctrinaire; a shuttlecock between battledores.” Florence Nightingale’s admonition to the British parliament (1891)
The stated goal of almost every funded research program is doing what is best for the public; therefore a comprehensive overview of outcomes of research needs to be considered to generate policies that are in the best interest of the public. This can be a challenge given the expanding published literature.
As with any process of decision making, the important question is, what constitutes evidence? Is there sufficient information available to systematically analyse and come to a sound conclusion, or are there major gaps in knowledge such that any decision made on the basis of the existing evidence is likely to be unsound and potentially harmful?
Policy makers cannot always predict the outcome of their policies unless similar policies have been successfully implemented elsewhere and under similar conditions. Politicians charged with the business of implementing policies tend to aim for a balance between the best use of public funds and a ‘healthy, wealthy and wise’ agenda (although the ‘wise’ part is often assumed from success with the former two).
Evidence-based policies should therefore rely on the best use of existing evidence. This may require advance planning, allocation of funds, and many years to implement. In some instances, however, time is of the essence, as in responding to a disaster or emergency, in which case both governmental and non-governmental experts from a range of relevant disciplines may be convened to provide advice and manage risks. In healthcare, evidence synthesis influences policy and practice, and given the impact this can have on healthcare costs, the process should ideally be unbiased, accurate, and based on evidence from all relevant disciplines.
Various experts have outlined a set of principles that govern evidence synthesis, which if followed, can facilitate development of high-quality evidence. Science of any variety can be contentious and subject to debate depending on the personal and political values of the contender. There will always be topics with a high moral content that lead to disputes, and for which there are no clear-cut right or wrong answers or opinions. Almost all science will at some point transgress on individual sensitivities, and the question remains how to balance this with the greater good of the general population. There is no lack of such examples, whether it is a moral objection to culling pests in order to increase farm productivity, or retaining traditional forms of energy generation versus renewable energy that lower carbon emissions. For this reason, those charged with the task of synthesizing evidence should, where possible, have no personal or financial stake in the outcome, and should stick to unbiased and reputable sources of information.
The following principles have been suggested:
- Inclusion of all stakeholders
It would be appropriate to include policy makers if the aim of synthesizing evidence is to advise on a current issue of national importance, for example the economic feasibility of drought-proofing Australian farms. In addition to relevant scientists, community stakeholders who may be the target audience or ‘end-user’ should be involved to add a ‘common sense’ perspective to the issue. This ensures that the question is correctly formulated, and the interpretation of the findings is accurate and not biased in favour of a political agenda. It also brings diversity of opinion and provides several lenses through which the topic is viewed. In contrast, issues like summing up evidence to help drive future policies, such as advanced technologies in artificial intelligence or quantum computing, should be left to the experts.
- Rigorous methods
Depending on how urgently the evidence is needed, those involved in evidence synthesis should try to identify all relevant science before deciding on its quality. Public policy based on flawed science can result in costly mistakes, a set-back to progress, and a loss of public confidence. Sources of information, and the reasons for declaring a study poor evidence, should be documented. Where time constraints do not apply, evidence synthesis typically involves systematic reviews, which I have written about previously; that article can be accessed here. Organizations like Cochrane and The Campbell Collaboration synthesize evidence to educate the public and inform health-care policy by following predefined methodologies in ways that minimise bias. Such processes are very time-consuming (upwards of 2 years in some cases), but they are renowned for scientific rigour and for generating reports that are comprehensive and reputable.
- Transparency
Sources of evidence, databases used, search terms, and how evidence is graded should all be publicly available and transparent to end-users. Although study methodology should follow a pre-defined process, areas of difficulty may arise in whether to include or exclude certain studies. Accounts should be kept of decisions, the reason for the difficulty or disagreement, and why a consensus could not be reached, as this may be important in future updates or policy debates.
- Open Access
Evidence summaries in plain language that are accessible and available online are critical to their wider acceptance by relevant communities, policy makers and the general population. Depending on the range of potential end-users, multiple reports of the synthesized evidence may be necessary. Infographics or interactive online demonstrations that inform and educate the public will go a long way towards gaining support for, and successful implementation of, policies informed by this process. Timing is also critical, and updating the knowledge base regularly on topics of local and regional importance will reduce reliance on inaccurate or outdated information in an emergency.
On a global scale, evidence synthesis is critical to coordinated responses to disease outbreaks. Costs of such efforts are often borne by countries with the ability to fund them, and stakeholders need to be convinced to participate in a unified effort. During the 2014 Ebola epidemic in West Africa, SAGE convened a wide range of experts from around the world. It needs to be irrefutable that the universal benefit of such an undertaking far outweighs the self-interest of any particular country.
A limitation of the current best practice of systematic reviews is that existing studies may be of too low quality, or too variable in their findings, to produce reliable results. An alternative is subject-wide evidence synthesis, which involves extracting and collating relevant information from many different sources. This has been done in a project called Conservation Evidence, which provides summary information on the effects of conservation interventions for all species and habitats worldwide. Although this approach is quite different from systematic reviews, it provides a valuable searchable database that can be used in combination with systematic review methods to address new research questions in conservation.
The subject-wide approach is not limited to conservation or environmental sciences, but may be useful in public health questions where a particular outcome is so rare that even large well-powered studies are not sufficient to shed light on risk factors. I recently did an analysis of uterine cancer among women who were treated with tamoxifen for prior breast cancer. Although the risks associated with tamoxifen treatment are well documented, our data suggested that a high proportion of women treated with tamoxifen subsequently developed a rather nasty type of uterine tumour known to have very poor prognosis. This has also been documented in several case studies. While case studies tend to be viewed as anecdotal and insufficient evidence to guide health policy changes, it does suggest the need for improved surveillance of women treated with tamoxifen.
Decisions and critical assertions by public officials and politicians on matters that affect people’s lives—whether indigenous affairs, climate change, energy policies, or healthcare—cannot be made on a whim or in a rush to placate political constituencies and win votes. The importance of a comprehensive assessment of all evidence, to ensure that policies are indeed evidence-based, should not be side-lined in favour of ‘quick and dirty’ alternatives.
At SugarApple Communications we can help you find the best way to analyse and interpret your important data, and communicate it to your intended audience. Get in touch today and let’s talk.
-
It matters how you write
July 30th, 2018 | by Sharon Johnatty

Writing a scientific paper or thesis can be a daunting task for most of us. It reminds me of a classic poem I learnt as a child: “Maria intended a letter to write, but could not begin, as she thought to indite…”. The rest of the poem was advice from her mother to think of it as though speaking to the person, but with her pen.
Oh that it could be so simple getting your manuscripts written and published! In this article I will outline some general principles that I’ve gleaned over the course of 30 years in academic research and publishing in a range of scientific journals, editing student theses and research grants, and as a peer reviewer for various biomedical journals.
In the simplest of terms, the entire paper should consist of the context set out in the Introduction, the content presented in the Results, and the conclusion brought together in the Discussion.
An important overriding principle of scientific writing is clarity. Keep the message clear and accessible. Think about your driving hypothesis, and phrase it in the simplest of terms without compromising accuracy.
“If you write in a way that is accessible to non-specialists, you are not only opening yourself up to citations by experts in other fields, but you are also making your writing available to laypeople, which is especially important in the biomedical fields.” (Stacy Konkiel in ‘The write stuff’ Nature 2018)
Some journals require a brief statement of the main findings written in language that is accessible to all readers. This is an opportunity to encourage the reader to want to know more about your work. This also applies to the Abstract, which should focus on the study question, why it is important, how you have addressed the question, and the broader implications of your work.
Keep in mind that PubMed searches and e-alerts will bring up only the Title and Abstract, which is all most people will ever read. Write the Abstract so that those outside your field get the big picture and are enticed to access the full paper.
The Introduction should not be a comprehensive, long-winded overview of the topic. It should be concise, capturing just enough of the broad scope of the field, and the deficiencies in current knowledge, to clarify why you undertook your research.
Choose references that are fairly recent, scientifically sound, and published in reputable journals. Finish the Introduction with a brief paragraph on your hypothesis and what you are about to present, which should logically flow from the information outlined in the preceding paragraphs.
The Methods section should be quite straightforward. For some it is the easiest section to write, and it can be written up prior to obtaining results. Experiments can take months to complete, and analysing data and writing up your results require good record keeping. For this reason, lab books are not only critical to your manuscript; they are also legal documents in both academic and industry research.
The same applies to large-scale data analysis; you may spend weeks or months cleaning and organizing large datasets and performing quality checks before beginning the analysis. Keeping a log of what you did will pay dividends when you begin to write up your methods.
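If it helps, this kind of log can be generated automatically. Below is a minimal sketch in Python (a suggestion rather than a prescription; the file and column names are hypothetical) that records how many records survive each cleaning step, so the Methods section can later state exactly what was excluded and why.

```python
# A minimal sketch of logging data-cleaning steps so the Methods section
# can be reconstructed later. File names and column names are hypothetical.
import logging

import pandas as pd

logging.basicConfig(
    filename="cleaning_log.txt",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

def log_step(df: pd.DataFrame, description: str) -> pd.DataFrame:
    """Record the row count remaining after a cleaning step."""
    logging.info("%s -- %d rows remain", description, len(df))
    return df

df = pd.read_csv("raw_cohort.csv")  # hypothetical raw dataset
df = log_step(df.drop_duplicates(), "removed duplicate records")
df = log_step(df.dropna(subset=["age"]), "dropped records with missing age")
df = log_step(df[df["age"].between(18, 99)], "kept adults aged 18-99")
df.to_csv("clean_cohort.csv", index=False)
```

A plain-text log like this also doubles as an audit trail for co-authors and, if need be, reviewers.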
It is often a good exercise to review how methods are written for other publications in the target journal, and the level of detail that is acceptable. This is often where acronyms abound and can lead to statements that are incomprehensible. I recently read a thesis where a simple two-word expression used only three times in the entire document was turned into an acronym. It reminded me of the scene in the movie Good Morning Vietnam where Adrian Cronauer (Robin Williams) said “Seeing as how the VP is such a VIP, shouldn’t we keep the PC on the QT? ‘Cause if it leaks to the VC he could end up MIA, and then we’d all be put on KP” (I just about fell out of my seat laughing!).
Acronyms have their place, but use them sparingly and only where they help to avoid verbosity. Stick with acronyms already in use in the published literature rather than coining new ones, which invites confusion, particularly for terms commonly used as keywords in PubMed.
The Results section should describe your findings as factually and as clearly as possible. Avoid the temptation to insert justification or interpretation here. Some journals allow a combined Results and Discussion section, which helps to avoid repeating the main findings at the start of the Discussion. However, the same principle applies: the Results should describe the main findings based on the stated aims detailed in the Methods, while the Discussion interprets those results in the broader context of the topic and how they fit with the existing literature.
Avoid introducing results that are not specifically outlined as part of your Methods, or making claims that are not consistent with the evidence obtained, especially if so-called ‘exploratory analyses’ were undertaken. Likewise, any analysis outlined in the Methods should be reported in the Results. Supplementary Material is usually a good place to provide additional data or analyses, as long as they are part of the research undertaken. Large-scale genome-wide association studies, for example, often use the Supplementary Information to report all effect estimates that reach a certain significance threshold, even if they are not all discussed in the main paper.
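As an illustration, pulling such estimates into a supplementary table is often a one-step filter. The sketch below uses Python/pandas; the file name, column names and threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of filtering association results for a supplementary table.
# The file name, column names and threshold are illustrative only.
import pandas as pd

results = pd.read_csv("association_results.csv")  # hypothetical results file

THRESHOLD = 1e-5  # illustrative 'suggestive' cut-off; adjust to your study design

supplementary = (
    results[results["p_value"] < THRESHOLD]   # keep estimates below the threshold
    .sort_values("p_value")                   # most significant first
    .loc[:, ["variant", "effect_size", "std_error", "p_value"]]
)
supplementary.to_csv("supplementary_table_S1.csv", index=False)
```

Keeping this step scripted means the supplementary table can be regenerated whenever the analysis is updated.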
The Discussion can be the most challenging section to write, and requires considerable knowledge of the existing literature. Good reviewers will be well informed on the topic, and will highlight deficiencies in the Discussion or aspects of the topic that should be considered. Conclusions should be confidently stated and evidence-based. In general, a good Discussion leaves no loose ends; it shows that you have considered alternative explanations for your findings, and addresses strengths and weaknesses of the research and the reasons for them. As your research is unlikely to be the final chapter on the topic, include a statement about unanswered questions that may form the basis of ongoing work.
Finally, give serious consideration to crafting a Title that stands out. ‘Punchy’ titles have their place in scientific writing, but for manuscripts published in scientific journals, aim for one that provides a clear, informative statement highlighting the most important finding of the study and that sets it apart from others in the field. Avoid boring titles that begin with “Studies of XYZ in ABC…” or “Characterization of crumple-horned snorkaks in…”. A recently published study assessing the title characteristics of health care articles concluded that easy-to-understand, declarative titles were more likely to be picked up by the popular press than those with more uncommon words.
The cover letter is the final document you will draft before submitting. When you consider the months or years it took to produce your paper―an important stepping stone in your scientific career―what level of importance should you place on the cover letter? I would say very high! Regardless of how amazing your work is, the cover letter could determine whether your paper is considered for review or rejected outright. It should be formally written by the corresponding author and addressed directly to the editor-in-chief, so take the time to look up his/her name. It should be no more than a single page, summing up in a few sentences the main highlight of the study, how it fills an existing gap, and why it warrants publication in your target journal. Most journals provide guidelines on what to include in a cover letter, such as suggestions for reviewers and their contact information, or those to exclude. Give this careful thought.
As a final pre-submission step, check the literature one last time for relevant papers that have come online while you were preparing the final draft. As the lead author, you are responsible for EVERYTHING in the paper. Read it again (and again!) for grammar and spelling errors, formatting, references, line spacing, and the multitude of non-scientific content such as acknowledgements and affiliations. Check that tables and figures are correct and correctly formatted. Scan the references for any glitches introduced by your referencing software—they sometimes have a mind of their own! Careless mistakes are a major annoyance to reviewers, suggesting that the paper was rushed out and sloppily done. Your aim should be to make life easier for a potential reviewer and your paper a pleasure to read.
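Some of these referencing glitches can be caught with a quick scan before submission. The sketch below is a hypothetical example in Python; the patterns are illustrative and should be adapted to the temporary-citation format your own referencing software uses (curly-brace placeholders are one common style).

```python
# A minimal sketch of a pre-submission scan for referencing-software glitches.
# The file name and patterns are illustrative assumptions.
import re

with open("manuscript.txt", encoding="utf-8") as fh:
    text = fh.read()

patterns = {
    "possible unformatted citation": r"\{[^}]*#\d+[^}]*\}",  # e.g. {Smith, 2010 #58}
    "empty citation brackets": r"\[\s*\]",
    "doubled punctuation": r"[;,]{2,}",
}

for label, pattern in patterns.items():
    for match in re.finditer(pattern, text):
        start = max(0, match.start() - 30)
        # Print the match with a little surrounding context for review
        print(f"{label}: ...{text[start:match.end() + 10]}...")
```

Expect some false positives; the point is a cheap final check, not a substitute for reading the paper again.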
There is a lot to be said for avoiding the dry, boring traditional writing styles that are accessible only to a select few. Scientific writing should be factual and evidence-based, yet creative enough to draw the reader in. The balance to strike is engaging the reader with uncomplicated, accessible language without slipping into emotive language that sensationalizes the science.
It’s great to be able to say my grandmother who is not a scientist can tell her friends about my work. Scientific writing is not about ‘dumbing it down’; it’s about telling a great story with clarity, confidence and conviction. And yes! You can!
At SugarApple Communications our writers are long-standing authors with experience in all stages of the publication process, including data management, statistical analysis, and ethical publication practice. Get in touch today and let’s discuss your next publication.
-
The 1918 flu pandemic — Australia’s experience
June 19th, 2018 | by Sharon JohnattyThe 1918 influenza pandemic was one of the worst natural disasters recorded in history. Known as the ‘Spanish’ flu because it was first reported by Spain, it infected ~500 million people, and killed as many as 100 million people worldwide, including healthy adults under the age of 40. When the disease hit Australia in 1919, maritime quarantine measures put in place helped to curb the death toll, but the social impact was significant.
This pandemic spread across the globe in three distinct waves; the first was in March 1918 and spread through the US, Europe and Asia over the next 6 months. The second wave spread across both the Northern and Southern hemispheres from September to November of 1918 and had about five times the death toll of the first wave. The third wave came in early 1919 and killed more people than the first, but it was not as severe as the second wave. More on the global impact of this pandemic can be found in my previous blog.
Australia was spared the ravages seen in other countries. Although the mortality rate was the lowest on record—233 per 100,000 of the general Australian population, compared with 430 in England and 500 in the non-indigenous New Zealand population—there were ~15,000 recorded deaths. Indigenous populations were more severely affected; among Aboriginal Australians, some communities were almost entirely wiped out. Infection rates were high: up to 40% of the general population, and 50% in Aboriginal communities.
Like most countries, Australia was not prepared to cope with this disaster: the war had disrupted social and economic life, and key medical personnel were abroad. During the first wave in March 1918, Australia remained free of infection while the Australian Quarantine Service monitored the spread of the pandemic. After learning of outbreaks in New Zealand and South Africa, the first line of defence was maritime quarantine, which came into force on 17th October 1918. But the first infected ship arrived in Darwin the very next day, and over the next six months ~50% of intercepted vessels were found to be carrying the infection.
The second line of defence was to establish a consistent response to handling and containing influenza outbreaks. A national influenza planning conference was held in November 1918 with all State and Commonwealth health authorities, and a thirteen-point plan was agreed upon, six points of which involved interstate quarantine. It was agreed that the Federal government would be responsible for declaring infected States and enacting more stringent quarantine, while the States would be responsible for local medical and emergency services as well as public awareness of the potential for outbreaks. These measures delayed the entry of the virus into Australia, by which time its virulence had lessened.
The first severe case of ‘Spanish’ flu occurred in Melbourne in January 1919. Confusion over other, milder cases led to a delay in confirming that there was indeed an outbreak in Victoria, and it was not until a case was diagnosed in New South Wales that the Victorian authorities officially notified the Director of Quarantine. The New South Wales government viewed Victoria’s delay as a breach of the national influenza planning agreements. Although both States were declared infected, New South Wales unilaterally closed its border with Victoria because the first diagnosed case was a soldier travelling by train from Melbourne. This led to a general breakdown of the Federal systems agreed upon in November 1918. Individual States then made their own decisions regarding border control and the handling and containment of outbreaks, and in February 1919 the Commonwealth withdrew temporarily from the November 1918 agreements.
The measures put in place did not prevent the spread of the disease, but they slowed its movement. Although the cause of influenza was not known at the time, an experimental vaccine against pneumonia had been developed by Commonwealth Serum Laboratories (CSL). Once the New South Wales government closed its borders, the city of Sydney closed schools and some public places, and implemented mask-wearing and vaccine programs. Even so, there were three waves of outbreaks in Sydney with many deaths. The first cases of ‘Spanish’ flu in Queensland were recorded at the Kangaroo Point Hospital in Brisbane in May 1919, and by the end of June there were over 20,600 reported cases throughout the Sunshine State. The relative isolation of Perth and State border controls proved effective, as the ‘Spanish’ flu did not appear in Western Australia till June 1919.
There were various degrees of maritime quarantine enforced, depending on the extent of infection on a vessel. If a ship arrived with a single infected individual, everyone on board was inoculated and forced to wear a mask for the quarantine period, which could be up to fourteen days. This did not sit well with returning troops: family reunions and victory parades were delayed, causing a sense of rejection and divisiveness to fester. As expected, there were instances of troops breaking quarantine. One significant break occurred in South Australia, leading to a court martial in March 1919, a charge of inciting mutiny, and sixty days’ detention. A more spectacular break occurred in New South Wales in February 1919, when 1000 men broke out of their snake-infested campsite at North Head, where their ship had landed. The North Head quarantine station is now the longest-operating quarantine station in Australia, with a rich history dating back to the 1800s.
Once the Commonwealth efforts had broken down, each State implemented its own interstate travel regulations in an attempt at self-preservation, imposing restrictions as it saw fit. Over the next six months further ‘mayhem’ ensued, with considerable disruptions to commerce and tourism between States, the impounding of the Trans-Australian railway, which had opened only a year earlier, as well as political fallout for the Nationalist Government.
Although these disputes had no positive impact on the control of the disease, at least 25% of the New South Wales population had been inoculated against pneumonia by the end of 1919. There were few trained doctors around at the time, because many were still on overseas service. Given that little was known about the cause of this pandemic, and that the available doctors could not suggest anything substantive, people turned to their own methods of diagnosis and treatment. Quack remedies came from all quarters, including some in the medical establishment, many of whom were seen arguing in the press about the nature, causes and treatment of the disease. Advertisers saw opportunities to claim preventive powers for their products, and pipe-smoking motor cyclists with false teeth could expect maximum protection!!
As the epidemic progressed, hospitals were overwhelmed with patients. Additional staff were employed at well-earned wage increases, and by the time the first wave had abated, citizens’ committees had been organized to do volunteer work ranging from the equivalent of ‘meals on wheels’ to accommodating children whose mothers were hospitalized. These Good Samaritan efforts were not without negative consequences, both in terms of contracting illnesses and violence at the hands of distraught relatives of those in their care.
Many lessons can be learnt from Australia’s experience with this pandemic, particularly for outbreaks for which there are no existing medical remedies or measures to contain the disease. Cooperation between Federal and State governments in imposing quarantine measures is of paramount importance in controlling the spread of disease. Public health preparedness, and awareness of the impact of such a disaster on the health care system, will also be important, as will the role of the media in reporting outbreaks in a manner that does not incite chaos and fear. Medical journals later accused the daily newspapers of ‘fanning the flame of panic’ with attention-grabbing headlines, including words like ‘plague’ and ‘black death’, raising alarms that muted any appeals for calm and measured responses.
1918 also saw many other notable historical events that cannot be overlooked in terms of their impact on the pandemic. World War I was not yet over, and there were concerns that the turmoil of the previous years, combined with the quarantine restrictions at home, would lead to further disquiet among returning troops. The Bolshevik revolution in Russia was well in progress, and threatened to spread revolution all over the world. While conservative governments feared ‘copycat’ uprisings taking hold in the name of social revolution, reporters saw an opportunity to link epidemics of disease with epidemics of social disorder under the name of ‘Bolshevism’.
As the Australian winter wears on and the flu season progresses, it is worth remembering what we have learnt from past experience. Guidelines implemented to safeguard public health and focus resources where they are most needed will only be as good as individual compliance with them. We are further along the curve of public health awareness, with access to reputable information about disease control and the value of coordinated responses. Let us harness what we have learnt from our past, and not regress to a time when the ‘every man/State for himself/itself’ principle reigned supreme.
References:
P. Curson and K. McCracken. An Australian perspective of the 1918–1919 influenza pandemic. http://www.phrp.com.au/wp-content/uploads/2014/10/NB06025.pdf
H. McQueen. The ‘Spanish’ Influenza Pandemic in Australia, 1918–1919. In ‘Social Policy in Australia’. Cassell Australia Ltd. 1976. http://honesthistory.net.au/wp/wp-content/uploads/SpanishFlu-1919.pdf
-
Reputation and ethics ― shadow or substance
June 5th, 2018 | by Sharon JohnattyReputation, ethics and actual value should go hand in hand. In the corporate world, success depends, among other things, on whether people are willing to buy or recommend your product, and this is usually driven by the perception that the company can be trusted and will deliver on promises made in advertising. The question arises: is this perception based on any real substance, or is it merely a shadow cast by an unjustified REPUTATION?
Where consumers have a choice between essentially similar products, the decision to buy is based on what the company is about, and this ultimately comes down to whether the company name is associated with a core of ethics.
Many fail to grasp the connection between reputation and ethics, as evidenced by the rise in reports of unethical behaviour and loss of credibility both at the individual and corporate level.
“It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.” (Warren Buffett)
I have heard so much in the past year about ‘reputation’ both in the media and from those who work in the corporate sector that I often wonder if ‘reputation’ has become a substitute for a resume or CV, or any real evidence of value. Those who claim to have a ‘reputation’ assume this can be leveraged regardless of whether there is any substance to their claims. Some even use ‘protecting reputation’ as justification for unethical behavior.
It is tempting to get caught up with the perception and not the practice of ethical conduct, and to define reputation only by how you are perceived by others. If in fact you are ethical, is there really a need to speak of having a reputation? Surely your ethical conduct speaks volumes about you in the workplace without needing to utter a word about it!
The value of a company or organization is tied up not only in physical or financial capital, but in human capital ― individuals who interact to produce outcomes. It is the perceptions of these outcomes that drive reputation and therefore long-term success.
“Ethics are frameworks for human conduct that relate to moral principle and attempt to distinguish right from wrong” (Miesing and Preble, 1985).
Reputation “is a perceptual representation of a company’s past actions and future prospects that describe the firm’s overall appeal to all of its key constituents when compared with other leading rivals” (C.J. Fombrun, 1996).
Beyond the workplace laws legislated by governments, each company or institution will develop a set of guidelines and policies about workplace conduct with the aim of building value, whether financial or social. Social value is measured in terms of trust; where there is trust, stakeholders are more likely to give the corporation the benefit of any doubt in the face of a crisis or controversy.
We’ll now look at what constitutes a good reputation in two essentially different industries.
Reputation in the Corporate Sector
At the corporate level, issues of reputation may be tied up with legal and ethical issues, but the two do not always coincide: not all that is legal is necessarily ethical. In some scenarios the legal solution may not be the most ethical one, and choosing the ethical solution over the legal one may in fact pay dividends to a company’s reputation despite the cost in real dollars.
In an ideal world, the pharmaceutical and medical device sectors would comprise individuals who spend the best years of their working lives inspired by the prospect of improving human health, quality of life and longevity. However, these corporations are answerable to their shareholders, and their motivations are not entirely altruistic. For society to benefit, there has to be a balance between good science and good money. Where it comes unstuck is when money becomes the primary goal, as evidenced by lawsuits, trials and fines related to unethical behavior.
The prevalence of unethical behavior and the betrayal of public trust in the financial sector have dominated news headlines since the start of the Australian Banking Royal Commission. Initially, the concern of some politicians was that the investigation would “damage the banking sector’s reputation internationally”—until day after day of public testimony highlighted the egregious behavior of those under investigation. Some involved claimed they had concealed relevant details of financial products in order to preserve their reputations!!
Clearly ethical conduct was side-stepped in the hope that ‘reputations’ would remain intact. Among other things, the lack of honesty and transparency by these institutions and the individuals who represent them, all in the name of reputation, highlights the gaping chasm between ethics and reputation in this sector.
This and other similar cases highlight the need to change unethical behavior that clearly impacts company reputation. Once a decision is made, whether conscious or sub-conscious, to compromise ethics in favor of financial gain, it sets in motion a domino effect that may be impossible to stop before it reaches the brink of disaster. People’s lives are ruined, and monetary settlements are merely a band-aid on the greater festering problem of greed and self-interest.
It takes moral courage and discipline at the leadership level to ensure that stakeholder trust is not compromised for financial gain, and to recognize that the intangible and invaluable thing called ‘reputation’ is not just a shadow without substance, but is firmly rooted in a core of ethics.
To quote another blogger on this topic, “A company’s commitment to ethics is the most effective means to preserve and protect the company’s reputation. Frankly, they go hand in hand and mutually reinforce each other.”
Reputation in Academia
Reputation in academia is highly dependent on ‘getting it right’, not necessarily on ‘being right’. The motivation to do academic science tends to be personal, and the choice of field may closely follow something the individual identifies with. In the current environment, academic reputation is defined primarily by the quality of a researcher’s publications and their ability to convince the scientific community of the veracity of their findings, and secondarily by the value of their work to the general public.
Recent concerns in the academic community about reproducibility and replication are perhaps the greatest threat to academic reputation, because they call into question the intrinsic value of the research and threaten acceptance by one’s peers. I have written previously on the challenges of reproducible research and some steps that can be taken to ensure reproducibility.
In a recently published article, ~4700 individuals from varied backgrounds were asked to choose between two extremes in their expectations of scientists. Overwhelmingly they chose ‘boring but certain’ (i.e. very reproducible) over ‘exciting but uncertain’ (i.e. not very reproducible). Although respondents saw the ‘exciting’ researcher as the more creative and therefore more celebrated, they linked reputation and success to ethics and truth, and cared more that the scientist pursued certainty and reproducibility in their work.
The main message from this survey was that the pursuit of truth and ethical conduct in academia was more valuable to reputation than the research outcome itself.
Building a positive reputation
- Learn to accept criticism gracefully. Anyone in a regular working environment, whether academic or corporate, will face it sooner or later. It is a mark of personal development to be honest with yourself and consider whether there is room for improvement, rather than firing off a nasty response.
- Before you act in any situation, stop and consider it from the other person’s point of view. How would you like to be treated if you were in their shoes?
- Do not ignore emails from colleagues, collaborators, or those below you on the career ladder. Respond in a timely manner, even if it’s a brief ‘I’ll get back to you on that’. And if you say that, follow through. Few things foster mistrust more than someone who says they’ll do something but never gets around to it.
- For those in publishing, everything that you put your name to should be taken seriously, and carefully checked to ensure that you are willing to take responsibility for the published work. You should never agree to be an author or offer authorship to someone where contributions do not meet the ICMJE criteria for authorship.
- Be open and transparent about conflicts of interest. Consider opting out if asked to review the work of a competitor, or provide advice on a product you stand to benefit financially from.
- Maintain your integrity and be true to your values. It is easy to take shortcuts hoping no one will notice, but any level of misconduct will eventually catch up with you, and has been known to ruin careers. For more on integrity, see my recent article on integrity in science and why it matters.
Developing and maintaining a code of ethics and strictly adhering to it is central to a good reputation. As stated earlier, individuals collectively interact to produce outcomes that affect external stakeholders and reflect company image, whether corporate or academic. Building a reputation therefore must start with ethical conduct at the individual level.
At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Get in touch today and let’s talk.
-
The 1918 flu pandemic in perspective
May 10th, 2018 | by Sharon JohnattyAs Australia prepares for the winter months and the upcoming flu season, some facts about influenza are worth remembering 100 years on from the 1918 flu pandemic. We live in an environment today where there is a growing trend of discounting the importance of vaccines. This article is not about vaccines, nor does it advise for or against them; it is a reminder of the history of influenza, what led to global surveillance of this disease, and the big question: could it happen again?
The 1918 influenza pandemic infected 500 million people — one third of the world’s population at the time — and killed as many as 100 million — 3–5% of the world’s population. Think about that for a minute: roughly four times the population of Australia, two-thirds of the UK population, or one-third of the US population wiped out. The virus also killed more people than World Wars I and II combined. Most unusual was the fact that, unlike other flu epidemics, this one claimed the lives of healthy young adults — about half the deaths were in adults aged 20–40.
This tragedy came in the final days of World War I. Accounts from various sources tell of the horrific manner in which those infected died, and the utter confusion it created.
“Two hours after admission they have the mahogany spots over the cheek bones, and a few hours later you can begin to see the cyanosis extended from the ears and spreading all over the face…It is only a matter of a few hours then until death comes…We have been averaging about 100 deaths per day.” (Grist 1979, Fort Devens MA).
“Because coffins were in short supply, many were buried in blankets in mass graves” (Phillips 1978, Cape Town).
“Visiting nurses often walked into scenes resembling the plague years of the 14th century. They drew crowds of supplicants – or people would shun them for fear of the white gowns and gauze masks they often wore. One nurse found a husband dead in the same room where his wife lay with newly born twins. It had been 24 hours since the death and the births, and the wife had no food but an apple which happened to lie within reach.” (Crosby 1976, Philadelphia PA).
There was no immunization against influenza in 1918; indeed the culprit, the H1N1 viral strain, was not isolated until 15 years later. This virus is known to mutate rapidly, and it likely evolved over time into less lethal strains, as all apparent descendants of the H1N1 virus cause less fatal disease. There is also the matter of a virus killing off its host too fast for its own survival: highly lethal strains naturally work their way out of the population because they destroy their best ally — the human body.
So why was this particular flu outbreak so deadly? The virus is thought to have hijacked the immune system, so that healthy people in the prime of their lives succumbed. Their lungs filled up with fluid and could not absorb oxygen. Their skin lost its normal pink colour ― a sign that the blood is oxygenated ― and instead turned a dusky purple or black. This is why it was called the ‘black death’.
The word ‘pandemic’ literally means an epidemic that involves all people. The 1918 outbreak turned into a pandemic because of the unusually high movement of people around the world during World War I, with soldiers living in close proximity in barracks. There are reports that the winter of 1917–1918 was particularly cold due to a La Niña event; these conditions meant more people stayed indoors, making the perfect breeding ground for a virus and allowing it to spread very fast. At the time there were also limitations on communicating information, particularly about illness, for fear of damaging morale during the war. In areas wracked by war, people were not very healthy, and food was limited for both soldiers and civilians. The media were under strict censorship in the countries at war, in an effort to conceal any vulnerability from the enemy, and as a result US newspapers reported this as just the ordinary flu, with nothing to fear if precautions were taken. Together, these factors created the perfect conditions for the virus to ‘win’ on all fronts.
There are many myths surrounding the 1918 pandemic, a common one being that it was called the ‘Spanish flu’ because it originated in Spain. Not so. It was dubbed the Spanish flu only because Spain was the first to report it. Being neutral in the war, Spain had no reason to conceal the ravages of the epidemic in their country, while countries involved in the war kept it under wraps, as stated, for fear of seeming vulnerable and lowering the morale of their troops.
What is known about the virus responsible for the 1918 pandemic is based on archival evidence and the accounts of trained observers present at the time. The geographic origin of the virus has been disputed; although there are theories that it originated in China, sufficient evidence points to the American Mid-West.
This pandemic spread in three distinct waves. The death rate subsided after the initial wave but rose sharply again in the second, suggesting a more lethal version of the virus. The first wave occurred in March 1918 and spread through the US, Europe and Asia over the next 6 months. The second wave spread across both the Northern and Southern hemispheres from September to November of 1918, with a death toll about five times that of the first. The third wave came in early 1919 and again had a higher fatality than the first, but was not as severe as the second. This pattern of successive waves, and the relatively short intervals between them, was unprecedented and is still somewhat of a mystery.
In 2005, the influenza virus responsible for the 1918 pandemic was sequenced from virus recovered from the body of a victim buried in the Alaskan permafrost. This, together with other data including the fact that pigs and humans were simultaneously infected in the 1918 pandemic, provides evidence that the virus crossed from birds to humans.
Influenza has been under scrutiny for centuries, and under formal global surveillance for decades. Vaccine development began in the 1940s, and it became apparent very quickly that changes in the virus required updating the vaccine for it to remain effective. In 1952 the WHO Global Influenza Surveillance Network (GISN) was established to monitor circulating viruses around the world throughout the year. In 2011 it was renamed the Global Influenza Surveillance and Response System (GISRS); it consists of five WHO collaborating centres, 142 National Influenza Centres in 115 countries, and 16 laboratories. Their aim is to monitor the global emergence of influenza viruses, make recommendations on laboratory diagnostics and vaccines, and serve as a global alert system for the emergence of viruses with pandemic potential.
Surveillance is the key to keeping abreast of viral activity. Flu vaccines, as we know, offer protection against only a few strains, and given the propensity of the virus to mutate and come back with a vengeance, a recurrence cannot be ruled out if conditions resemble those that led to the 1918 pandemic.
So could a flu pandemic similar to that of 1918 happen again? Leading experts say it is “possible, even probable”. Interestingly, the collective memory of this worldwide tragedy seems suppressed, possibly by choice because of how dreadful the disease was. The implications that it could recur seem to have been buried in the past, partly because the pandemic was upstaged by the war, came while the world was dealing with the tragedy of that war, and little was really known about its cause until decades later.
A final thought. There were no vaccines in 1918. Many factors determine whether we should or should not be vaccinated, but by choosing not to be vaccinated, are we recreating one of the very conditions that made the 1918 pandemic possible?
ABOUT INFLUENZA
Influenza is an acute respiratory illness caused by the influenza virus. It has been around since the 16th century, but much of what we know about it came from 1933 onwards when the first influenza virus was isolated and cultured.
There are 3 types of influenza viruses: A, B, and C. Influenza A and B are responsible for seasonal outbreaks, while C generally causes mild disease. Influenza A is further classified according to two surface proteins, haemagglutinin (H) and neuraminidase (N), which are targeted by the influenza vaccine. There are 16 H subtypes (H1 to H16) and nine N subtypes (N1 to N9). These subtypes have been isolated from birds, and are endemic in many species of birds and water fowl. Influenza viruses also circulate within domestic poultry species and pigs.
Although there is a species barrier to infection, human influenza viruses have been found in pigs, suggesting that the barrier in pigs is not very high. Viruses that normally circulate in distinct species can undergo reassortment in pigs to produce novel viral strains, hence pigs have been labelled the ‘mixing vessel’.
New flu strains have been traced to China and Asia, where live-bird markets abound even in over-populated cities, and many tend to be in close proximity to poultry and pig farms. At least two of the four influenza pandemics of the last century, the 1957 Asian flu (H2N2) and the 1968 Hong Kong flu (H3N2), originated in China.
Drift, Shift, and Reassortment
Antigenic drift refers to small changes in the genes of the virus that happen slowly over time as it replicates. No sooner do we get one strain under control than the virus escapes immune detection by creating another version of itself. These changes are slow to take effect and are more often associated with seasonal outbreaks.
Antigenic shift is an abrupt, major change in influenza A, resulting in new H and N proteins entering the population. This occurs when at least two different influenza viruses infect the same cell; a mechanism called reassortment, or ‘gene swapping’, causes a genetically different virus to emerge. True pandemics are believed to arise from genetic reassortment with animal influenza viruses. Such viruses tend to be more infectious than existing ones because there is no prior immunity in the population.
With the global movement of people through travel, a virus resulting from reassortment can spread around the world faster than we can become aware of its existence, let alone try to control it. Three of the four major pandemics of the last century were caused by reassortment of animal and human influenza viruses.
References:
- Taubenberger JK and Morens DM. 1918 Influenza: the Mother of All Pandemics. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291398/
- Kilbourne ED. Influenza pandemics of the 20th century. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291411/
- Kilbourne ED. Influenza. New York: Plenum Medical Book Co; 1987.
- The flu that changed the world. The ABC Radio National series on influenza. http://www.abc.net.au/radionational/features/the-flu-that-changed-the-world/