October 20th, 2018 | by Sharon Johnatty
The debate over whether glyphosate causes lymphoma has been raging on many fronts since a landmark decision by a California court found in favour of the plaintiff in Dewayne Johnson v. Monsanto. Whether glyphosate (sometimes misspelled ‘glycophosphate’), the active ingredient in the well-known herbicide Roundup®, causes lymphoma has been the subject of public discourse and discussion forums. These questions deserve a thoughtful, common-sense, honest and public response, one that shows the company to be ethical and transparent in promoting this vital product to farmers worldwide. By ethical we mean adopting standards and advice that go beyond the mere assertion of legal or scientific criteria defining ‘safe’, to practical, on-the-land advice. This would clearly identify manufacturers who have the moral courage to do the right thing for the needs and well-being of their customers.
Regarding the science behind the claims, the most comprehensive evidence synthesis to date was published by the International Agency for Research on Cancer (IARC) in 2015. The IARC report concluded that there was ‘limited evidence in humans for the carcinogenicity of glyphosate’. This is the basis for its position statement that glyphosate is ‘probably carcinogenic to humans’. The IARC subsequently came under attack for its evaluation of glyphosate; it stood its ground and responded earlier this year in an open and transparent manner.
“The epidemiological evidence that glycophosphates are associated with an increased risk of lymphoma is very weak. This is why IARC class them as possibly carcinogenic. Furthermore if there were a risk it is modest and would not be big enough to conclude that it is more likely than not that in any given individual with lymphoma who was exposed to glycophosphates that the exposure was cause of their cancer.” (Paul Pharoah, Professor of cancer epidemiology, University of Cambridge)
Several studies on glyphosate and lymphoma can be accessed on PubMed, including four systematic reviews relevant to lymphoma and glyphosate, one of which also reviewed the IARC evaluation. Overwhelmingly, these studies reported that the overall body of literature was ‘limited’ and ‘inconsistent’, that ‘associations were weak’, and that a causal link between exposure to glyphosate and lymphoma therefore could not be established.
While opinions on the science abound, what concerns me is the extrapolation from ‘limited evidence’ to the conclusion that the chemical is safe. This is the classic fallacy upon which alternative remedies are promoted as efficacious: placing the burden of proof on the ‘no’ camp rather than where it belongs, on those promoting the product. One article published in The Conversation touted its safety, stating, “Establishing whether a chemical can cause cancer in humans involves demonstrating a mechanism in which it can do so.” Respectfully, I disagree. We do not need to demonstrate a mechanism before we establish causality. We can hypothesize about the mechanism, and this helps to strengthen the argument, but it is not a necessary requirement. A well-known example is the thalidomide disaster that shocked the world in the 1960s. Did we know thalidomide’s mechanism of action before we acknowledged that it caused birth defects? No. All we had was a very high degree of specificity between thalidomide and some very rare birth defects, and that was enough to establish causality and withdraw the drug from further use.
So what does the science tell us about glyphosate? Without a doubt the scientific evidence to support an association between lymphoma and glyphosate is suggestive but weak. Those who state that the California ruling shows ignorance of the science clearly do not understand that the ‘science’ on its own is inconclusive, and the legal ruling was evidently based on more than just the science.
This leads to my next concern: why did evidence surface that the company took steps to systematically attack any science or scientist suggesting its product was not safe? If the product was “as safe as table salt”, then claims that the company was trying to cover up evidence to the contrary should have had no bearing on the case. Lack of transparency and attempts to cover up information are generally a telling indicator of a company’s ethical standards. Another concern: can a chemical designed to kill something really be considered less dangerous to humans than table salt?
I have worked with companies to help them understand the science behind drug-related adverse events that led to litigation, and have reviewed the published literature for scientific evidence to support expert testimony by those hired to appear in court. I have also reviewed company documents obtained by the law firms involved in the litigation. In some instances, the scientific evidence from the published literature overwhelmingly supported a causal link between the drug and the adverse event in question. One case I worked on produced minimal evidence from the scientific literature to support the plaintiff’s claim, although I was expected to trawl through thousands of published articles. (Needless to say, I could not in good faith continue, because that is not how evidence for causality works.) There was, however, evidence in company documents of an internal culture of covering up adverse event reports, and regulatory authorities had levied fines for failure to report adverse events, which tends to raise red flags. In that case, although the scientific evidence was lacking, the jury found in favour of the plaintiff because the evidence of failure to be transparent about adverse events took centre stage in court proceedings.
Jurors are typically not scientists. They are down-to-earth, common-sense folk like you and me. Any impression of a willingness to blatantly deceive the public for profit, as we recently witnessed in the Australian banking inquiry, will inevitably come with a hefty price tag.
As with any product, regulatory authorities need to weigh the benefits against the risks and state them in relative terms rather than in definitive terms like ‘safe’ or ‘harmless’, because the public hears such terms and takes little or no precaution. The ultimate responsibility, however, rests with the manufacturer, who knows the product better than any independent scientist.
It is critical that business giants, whether in the financial, pharmaceutical or agricultural sector, recognize that they stand to lose more if they lack the moral courage to do the right thing by their customers.
As individuals we need to take personal responsibility and assess potential long-term health risks, because when it comes to our health, we are in control; failing to do so can have grave consequences, not only for ourselves but for those we love. If the risk is 1 in 10,000, that ‘one’ may be you, and for you the probability of a bad outcome is no longer 0.01% but 100%. As consumers, we need to be aware that the use of chemicals on crops is a fact of life and observe the ‘all things in moderation’ approach. Foods we consume may be ‘safe’, but there are elements in anything that, if consumed in large enough quantities, will render them ‘unsafe’, not necessarily in and of themselves, but potentially in combination with genetic and environmental factors that may interact to trigger a disease or health condition.
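The risk arithmetic above can be made concrete with a short sketch. Note that the 1-in-10,000 figure is the illustrative number from the text, not a measured risk for any particular chemical:

```python
# Illustrative arithmetic only: 1 in 10,000 is the hypothetical risk figure
# used in the text, not a measured risk estimate.
population_risk = 1 / 10_000          # chance for a random individual
print(f"{population_risk:.2%}")       # prints 0.01%

# For the one person the outcome actually befalls, the event is certain
# in hindsight: probability collapses from 0.01% to 100%.
affected = True
individual_probability = 1.0 if affected else population_risk
print(f"{individual_probability:.0%}")  # prints 100%
```

The point is simply that a small population-level probability offers no comfort to the individual it strikes.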
Safe and effective weed control has huge benefits for the global agricultural industry, but along with the many occupational hazards of this industry, farmers using these products in large quantities and with long-term exposure have the most to lose, both in adverse health outcomes and in lost income from failed crops. If you are a farmer using glyphosate or any other chemical, it would be wise to take all possible precautions: suit up if necessary, and wear masks and gloves to minimise physical contact with chemicals.
Those tasked with reviewing the science need to take a responsible, balanced approach and realize that the ‘stop worrying and trust the evidence’ advice is naïve and simplistic. Frankly, the research to date on glyphosate is saying only one thing: THE SCIENCE IS INCONCLUSIVE, and we cannot claim the product is universally safe. We also don’t know whether individual factors such as genetic mutations, combined with the cumulative effects of long-term exposure to glyphosate, can trigger cancer. So until there are known markers that allow genetic testing for susceptibility, a common-sense measure is to take all available precautions to minimise physical exposure.
Those with a responsibility to the public, whether for-profit companies or government regulators, need first and foremost to do what is true, honest, just and, as assessed by the man in the street, commendable. Point-of-sale distributors who work closely with farmers are often very knowledgeable about products and can offer guidance and advice on safety. Clear and accessible guidelines from manufacturers on how to minimise exposure and risk are imperative with all ‘poisonous’ products, because absence of evidence is NOT evidence of absence.
At SugarApple Communications, our mission is to adhere to the highest ethical standards in the promotion of high quality research. Get in touch today and let’s talk.
June 19th, 2018 | by Sharon Johnatty
The 1918 influenza pandemic was one of the worst natural disasters in recorded history. Known as the ‘Spanish’ flu because Spain was the first country to report it, it infected ~500 million people and killed as many as 100 million worldwide, including healthy adults under the age of 40. When the disease reached Australia in 1919, the maritime quarantine measures put in place helped to curb the death toll, but the social impact was significant.
This pandemic spread across the globe in three distinct waves; the first was in March 1918 and spread through the US, Europe and Asia over the next 6 months. The second wave spread across both the Northern and Southern hemispheres from September to November of 1918 and had about five times the death toll of the first wave. The third wave came in early 1919 and killed more people than the first, but it was not as severe as the second wave. More on the global impact of this pandemic can be found in my previous blog.
Australia was spared the ravages suffered by other countries in this pandemic. Although the mortality rate was among the lowest on record, 233 per 100,000 of the general Australian population compared with 430 in England and 500 in the non-indigenous New Zealand population, there were ~15,000 recorded deaths. Indigenous populations were more severely affected: among Aboriginal Australians, some communities were almost entirely wiped out. Infection rates were quite high, up to 40% of the general population and 50% in Aboriginal communities.
Like most countries, Australia was not prepared to cope with this disaster, given that the war had disrupted social and economic life, and key medical personnel were abroad. During the first wave in March 1918, Australia remained free of infection, and the Australian Quarantine Service monitored the spread of the pandemic. After learning of outbreaks in New Zealand and South Africa, a first line defence was to implement maritime quarantine, which came into force on 17th October 1918. But the first infected ship arrived the very next day in Darwin. Over the next six months, ~50% of intercepted vessels were found to be carrying the infection.
A second line of defence was to establish a consistent response for handling and containing influenza outbreaks. A national influenza planning conference was held in November 1918 with all State and Commonwealth health authorities, and a thirteen-point plan was agreed upon, six points of which involved interstate quarantine. It was agreed that the Federal government would be responsible for declaring infected States and enacting more stringent quarantine, while States would be responsible for local medical and emergency services as well as public awareness of the potential for outbreaks. These measures limited the entry of the virus into Australia, by which time its virulence had lessened.
The first severe case of ‘Spanish’ flu occurred in Melbourne in January 1919. Confusion about other milder cases led to a delay in confirming that there was indeed an outbreak in Victoria, and it was not until a case was diagnosed in New South Wales that the Victorian authorities officially notified the Director of Quarantine. The New South Wales government viewed Victoria’s delay as a breach of the national influenza planning agreements. Although both States were declared infected, New South Wales unilaterally closed their border with Victoria because the first diagnosed case was a soldier travelling by train from Melbourne. This led to a general breakdown in the Federal systems agreed upon in November 1918. Individual States then made their own decisions regarding border control and handling and containing outbreaks, and in February 1919, the Commonwealth withdrew temporarily from the November 1918 agreements.
The measures put in place did not prevent the spread of the disease, but slowed its movement. Although the cause of influenza was not known at the time, an experimental vaccine to treat pneumonia had been developed by Commonwealth Serum Laboratories (CSL). Once the New South Wales government closed its borders, the city of Sydney closed schools and some public places, and implemented the use of masks and vaccine programs. However, there were three waves of outbreaks in Sydney with many deaths. The first cases of the ‘Spanish’ flu in Queensland were recorded at the Kangaroo Point Hospital in Brisbane in May 1919, and by the end of June there were over 20,600 reported cases throughout the Sunshine State. The relative isolation of Perth and State border controls proved effective, as the ‘Spanish’ flu did not appear in Western Australia till June 1919.
There were various degrees of maritime quarantine enforced, depending on the extent of infection on a vessel. If a ship arrived with a single infected individual, everyone on board was inoculated and required to wear a mask for the quarantine period, which could be up to fourteen days. This did not sit well with returning troops, as family reunions and victory parades were delayed, causing a sense of rejection and divisiveness to fester among the troops. As expected, some troops broke quarantine. One significant breach occurred in South Australia, leading to a court martial in March 1919, a charge of inciting mutiny and sixty days’ detention. A more spectacular breach occurred in New South Wales in February 1919, when 1,000 men broke out of their snake-infested campsite at North Head, where their ship had landed. The North Head quarantine station is now the longest-operating quarantine station in Australia, with a rich history dating back to the 1800s.
Once the Commonwealth efforts had broken down, each State then implemented interstate travel regulations in an attempt at self-preservation, and imposed restrictions as they saw fit. Over the next six months, further ‘mayhem’ ensued as a result, with considerable disruptions in commerce and tourism between States, the impounding of the Trans-Australian railway which had opened one year prior, as well as political fallout for the Nationalist Government.
Although these disputes had no positive impact on control of the disease, at least 25% of the New South Wales population had been inoculated against pneumonia by the end of 1919. There were few trained doctors around at the time, because many were still on overseas service. Given that little was known about the cause of the pandemic, and that the available doctors could suggest nothing substantive, people turned to their own methods of diagnosis and treatment. Quack remedies came from all quarters, including some in the medical establishment, many of whom were seen arguing in the press about the nature, causes and treatment of the disease. Advertisers saw opportunities to claim preventive powers for their products, and pipe-smoking motorcyclists with false teeth could expect maximum protection!
As the epidemic progressed, hospitals were overwhelmed with patients. Additional staff were employed at well-earned wage increases, and by the time the first wave had abated, citizens’ committees had been organized to do volunteer work ranging from the equivalent of ‘meals on wheels’ to accommodating children whose mothers were hospitalized. These Good Samaritan efforts were not without negative consequences, both in contracting illnesses and in violence at the hands of distraught relatives of those in their care.
Many lessons can be learnt from Australia’s experience with this pandemic, particularly for outbreaks for which there are no existing medical remedies or measures to contain the disease. Cooperation between Federal and State governments in imposing quarantine measures is of paramount importance in controlling the spread of disease. Public health preparedness, and awareness of the impact of such a disaster on the health care system, will also be important, as will the role of the media in reporting outbreaks in a manner that does not incite chaos and fear. Medical journals later accused daily newspapers of ‘fanning the flame of panic’ with attention-grabbing headlines, using words like ‘plague’ and ‘black death’ and raising alarms that muted any appeals for calm and measured responses.
1918 also saw many other notable historical events that cannot be overlooked in terms of their impact on the pandemic. World War I was not yet over and there were concerns that the turmoil of the previous years, combined with the quarantine restrictions at home, would lead to further disquiet among returning troops. The Bolshevik revolution in Russia was well in progress at the time, and threatened to spread revolution all over the world. While conservative governments feared ‘copycat’ uprisings taking hold in the name of social revolution, reporters saw opportunity to link epidemics of disease with epidemics of social disorder under the name of ‘Bolshevism’.
As the Australian winter wears on and flu season progresses, it is worth remembering what we have learnt from our past experiences. Guidelines implemented to safeguard public health and focus resources where they are most needed will only be as good as the individual responses in heeding these restrictions. We are further along the curve of public health awareness and access to reputable information about disease control and the value of coordinated responses. Let us harness what we have learnt from our past, and not regress to a time when the ‘every man/State for himself/itself’ principle reigned supreme.
P. Curson and K. McCracken. An Australian perspective of the 1918–1919 influenza pandemic. http://www.phrp.com.au/wp-content/uploads/2014/10/NB06025.pdf
H. McQueen. The ‘Spanish’ Influenza Pandemic in Australia, 1918–1919. In ‘Social Policy in Australia’. Cassell Australia Ltd. 1976. http://honesthistory.net.au/wp/wp-content/uploads/SpanishFlu-1919.pdf
May 10th, 2018 | by Sharon Johnatty
As Australia prepares for the winter months and the upcoming flu season, some facts about influenza are worth remembering 100 years on from the 1918 flu pandemic. We live in an environment today where there is a growing trend of discounting the importance of vaccines. This article is not about vaccines, nor does it advise for or against them; it is a reminder of the history of influenza, what led to global surveillance of this disease, and the big question: can it happen again?
The 1918 influenza pandemic infected 500 million people, one third of the world’s population at the time, and killed as many as 100 million, some 3–5% of the world’s population. Think about that for a minute. About four times the population of Australia, or two-thirds of the UK population, or one-third of the US population, was wiped out. The virus also killed more people than World Wars I and II combined. Most unusual was the fact that, unlike other flu epidemics, this one claimed the lives of healthy young adults: about half the deaths were in young adults aged 20–40.
This tragedy came in the final days of World War I. Accounts from various sources tell of the horrific manner in which those infected died, and the utter confusion it created.
“Two hours after admission they have the mahogany spots over the cheek bones, and a few hours later you can begin to see the cyanosis extending from the ears and spreading all over the face…It is only a matter of a few hours then until death comes…We have been averaging about 100 deaths per day.” (Grist 1979, Fort Devens MA).
“Because coffins were in short supply, many were buried in blankets in mass graves” (Phillips 1978, Cape Town).
“Visiting nurses often walked into scenes resembling the plague years of the 14th century. They drew crowds of supplicants – or people would shun them for fear of the white gowns and gauze masks they often wore. One nurse found a husband dead in the same room where his wife lay with newly born twins. It had been 24 hours since the death and the births, and the wife had no food but an apple which happened to lie within reach.” (Crosby 1976, Philadelphia PA).
There was no immunization against influenza in 1918; indeed, the culprit in this pandemic, the H1N1 viral strain, was not isolated until 15 years later. The virus is known to mutate rapidly, and it is likely that over time it evolved into less lethal strains, as all apparent descendants of the H1N1 virus cause less fatal disease. There is also the matter of killing off the host too fast for the virus itself to survive: highly lethal strains naturally work their way out of the population because they kill off their best ally, the human body.
So why was this particular flu outbreak so deadly? The virus appears to have hijacked the immune system. Healthy people in the prime of life succumbed to it. Their lungs filled with fluid and could not absorb oxygen. Their skin lost its normal pink colour, a sign that the blood is oxygenated, and instead turned a dusky purple or black. This is why the disease was called the ‘black death’.
The word ‘pandemic’ literally means an epidemic that involves all people. The 1918 outbreak became a pandemic partly because of the unusually high movement of people around the world during World War I. Soldiers were also living in barracks in close proximity to one another. There are reports that the winter of 1917–1918 was particularly cold owing to a La Niña event; these conditions meant more people stayed indoors, making a perfect breeding ground for the virus and allowing it to spread very fast. At the time there were limits on communicating information, particularly about illness, for fear of damaging morale during the war. In areas wracked by war, people were not very healthy, and food was limited for both soldiers and civilians. The media were under strict censorship in the countries at war, in an effort to conceal any vulnerability from the enemy; as a result, US newspapers reported this as just the ordinary flu, with nothing to fear if precautions were taken. Together, these factors created perfect conditions for the virus to ‘win’ on all fronts.
There are many myths that surround the 1918 pandemic, a common one being that it was called the ‘Spanish flu’ because it originated in Spain. Not so. It was dubbed the Spanish flu only because Spain was the first to report it. Being neutral in the war, they had no reason to conceal the ravages of this epidemic in their country, while other countries involved in the war kept it under wraps, as stated, for fear of seeming vulnerable and lowering the morale of their troops.
What is known about the virus responsible for the 1918 pandemic is based on archival evidence and the accounts of trained observers present at the time. The geographic origin of the virus is disputed; although there are theories that it originated in China, substantial evidence points to the American Mid-West.
This pandemic spread in three distinct waves. The death rate subsided after the initial wave, but rose sharply again in the second wave, suggesting a more lethal version of the virus. The first wave occurred in March 1918 and spread through the US, Europe and Asia over the next six months. The second wave spread across both the Northern and Southern hemispheres from September to November of 1918, and had a death toll about five times that of the first wave. The third wave came in early 1919 and again had a higher fatality rate than the first, but was not as severe as the second wave. This pattern of successive waves and the relatively short intervals between them was unprecedented and remains somewhat of a mystery.
In 2005, the influenza virus responsible for the 1918 pandemic was sequenced from virus recovered from the body of a victim buried in the Alaskan permafrost. This and other sources of data including the fact that pigs and humans were simultaneously infected in the 1918 pandemic, provide evidence that the virus crossed from birds to humans.
Influenza has been recorded for centuries, but systematic global surveillance is more recent. Vaccine development began in the 1940s, and it quickly became apparent that changes in the virus required updating the vaccine for it to remain effective. In 1952 the WHO Global Influenza Surveillance Network (GISN) was established to monitor circulating viruses around the world throughout the year. In 2011 it was renamed the Global Influenza Surveillance and Response System (GISRS); it consists of five WHO collaborating centres, 142 National Influenza Centres in 115 countries, and 16 laboratories. Its aims are to monitor the global emergence of influenza viruses, make recommendations on laboratory diagnostics and vaccines, and serve as a global alert system for viruses with pandemic potential.
Surveillance is the key to keeping abreast of viral activity. Flu vaccines, as we know, offer protection against a few strains, but given the propensity of the virus to mutate and come back with a vengeance, the likelihood of a similar occurrence cannot be ruled out if conditions are similar to those that led to the 1918 pandemic.
So could a flu pandemic similar to the 1918 one happen again? Leading experts say it is “possible, even probable”. Interestingly, the collective memory of this worldwide tragedy seems suppressed, possibly by choice because of how dreadful the disease was. The implication that it could recur seems to have been buried in the past, partly because the pandemic was upstaged by the war at the time; it came when the world was dealing with the tragedy of the war, and little was known about its cause until decades later.
A final thought: there were no vaccines in 1918. Many factors determine whether we should or should not be vaccinated, but by choosing not to be vaccinated, are we recreating a significant condition that led to the 1918 pandemic?
Influenza is an acute respiratory illness caused by the influenza virus. It has been around since the 16th century, but much of what we know about it came from 1933 onwards when the first influenza virus was isolated and cultured.
There are three types of influenza virus: A, B, and C. Influenza A and B are responsible for seasonal outbreaks, while C generally causes mild disease. Influenza A is further classified according to two surface proteins, haemagglutinin (H) and neuraminidase (N), which are targeted by the influenza vaccine. There are 16 H subtypes (H1 to H16) and nine N subtypes (N1 to N9). These subtypes have been isolated from birds and are endemic in many species of birds and waterfowl. Influenza viruses also circulate in domestic poultry and pigs.
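As a back-of-the-envelope illustration of why this naming scheme allows for so much diversity, the H and N counts above imply a large space of possible influenza A subtype labels (a simple sketch using only the numbers in this paragraph):

```python
# Influenza A subtypes are named by their H and N surface proteins.
h_subtypes = [f"H{i}" for i in range(1, 17)]   # H1..H16
n_subtypes = [f"N{j}" for j in range(1, 10)]   # N1..N9

# Every H/N pairing is a distinct subtype label, e.g. H1N1, H3N2, H5N1.
subtypes = [h + n for h in h_subtypes for n in n_subtypes]
print(len(subtypes))                       # 16 * 9 = 144 possible labels
print("H1N1" in subtypes, "H3N2" in subtypes)
```

Only a fraction of these combinations have ever been observed circulating in humans, but the arithmetic shows why surveillance has so many variants to track.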
Although there is a species barrier to infection, human influenza viruses have been found in pigs, suggesting that the species barrier in pigs is not very high. Viruses that normally circulate in distinct species can undergo reassortment in pigs to produce novel viral strains, hence the reason pigs have been labelled the ‘mixing vessel’.
New flu strains have been traced to China and Asia, where live-bird markets abound even in over-populated cities, and many tend to be in close proximity to poultry and pig farms. At least two of the four influenza pandemics of the last century, the 1957 Asian flu (H2N2) and the 1968 Hong Kong flu (H3N2) originated in China.
Drift, Shift, and Reassortment
Antigenic drift refers to small changes in the genes of the virus that accumulate slowly over time as the virus replicates. No sooner do we get one strain under control than the virus escapes immune detection by creating another version of itself. These changes are slow to take effect and are more often associated with seasonal variation.
Antigenic shift is an abrupt, major change in influenza A, resulting in new H and N proteins entering the population. It occurs when at least two different influenza viruses infect the same cell; a mechanism called reassortment, or ‘gene swapping’, causes a genetically different virus to emerge. True pandemics are believed to arise from genetic reassortment with animal influenza viruses. Such viruses tend to be more infectious than existing viruses because there is no prior immunity in the population.
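Reassortment can be pictured as segment-level shuffling. Influenza A carries its genome on eight separate RNA segments (a detail not stated in this paragraph, but well established), so a cell co-infected by two viruses can package any mix of segments from either parent. A toy sketch, with segment labels invented purely for illustration:

```python
import random

random.seed(0)  # for a repeatable illustration

# Two hypothetical parent viruses, each with 8 genome segments
# labelled by origin; the names are illustrative, not real gene names.
human_virus = [f"human_seg{i}" for i in range(1, 9)]
avian_virus = [f"avian_seg{i}" for i in range(1, 9)]

def reassort(parent_a, parent_b):
    """Draw each of the eight segments independently from either parent."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

progeny = reassort(human_virus, avian_virus)
print(progeny)   # a mixed genome: some human-derived, some avian-derived segments
print(2 ** 8)    # 256 possible segment combinations from one co-infection
```

The sketch shows why a single co-infection event can yield a virus genuinely new to the human immune system: the progeny is not a blend of its parents but a novel combination of intact segments.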
With the movement of people globally through travel, the virus resulting from reassortment can spread globally faster than we can become aware of its existence, let alone try to control it. Three of the last four major pandemics of the last century were caused by reassortment of animal and human influenza viruses.
- Taubenberger JK and Morens DM. 1918 Influenza: the Mother of All Pandemics. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291398/
- Kilbourne ED. Influenza pandemics of the 20th century. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291411/
- Kilbourne ED. Influenza. New York: Plenum Medical Book Co; 1987.
- The flu that changed the world. The ABC Radio National series on influenza. http://www.abc.net.au/radionational/features/the-flu-that-changed-the-world/