• Choosing the right journal – think before you submit!

June 20th, 2017

    Deciding on the journal best suited to your research can be difficult, particularly for interdisciplinary research. Reputable journals are increasingly rejecting sound, well-written manuscripts without review, and with the recent explosion in new journals, deciding where to publish is an ongoing challenge.

    Only you and your collaborators can assess the value of your research findings and decide where to submit your work, but some basic journal sleuthing will be a good start. Distinguishing legitimate journals from predatory ones can be difficult and may take some time, but it is well worth the effort.

    My last blog outlined strategies to avoid predatory journals. In this article I continue the theme of journal selection with additional thoughts that also apply to avoiding predatory journals. A combination of these approaches will yield the best results.

    1. Develop a list of journals suitable for your subject area. This should ideally be done at the start of the write-up process. Research the journals relevant to your specialty area and develop a list of candidate journals for submission. I usually start by looking at articles that I have referenced or used to generate my hypothesis. Check with your co-authors, collaborators or colleagues about their experiences with the journals on your list, including response times. In many large collaborative groups this information is readily shared. Update your list as you progress, or if your work is not accepted by your first-choice journal.
    2. Read the aims and scope of the journals on your list. Beyond evidence of transparent and accessible peer-review, editorial and fee policies, consider whether the journal is a good fit for your topic. Check that the journal's Aims and Scope fit your research, particularly if your topic is narrowly focused on a specific scientific question. Likewise, if you notice that a journal's aims are so broad that it will publish just about anything, look at some of its recently published articles before making a decision.
    3. Think about your target audience. Journals you find interesting and relevant to your field are also likely to be accessed by your peers and other researchers in your field. Compare articles that are similar in design and methodology to yours, and check whether your target journal publishes such articles. This helps if you need to refute a journal's claim that 'we do not publish' these types of articles. I recently had a journal's Editor-in-Chief take 15 weeks to respond, stating that they did not publish our particular methodology, even though it was spelled out in the title of the article! If in doubt, consider a presubmission inquiry; the response may also highlight the journal's legitimacy and save you precious time.
    4. Impact factor isn't everything. Publishing in journals with a high impact factor was the second most common reason for journal selection, according to a recent opinion poll. Despite concerns that they can be manipulated, impact factors continue to play a key role in academic career progression. Journal impact factors and other journal metrics are available from Thomson Reuters Web of Science and can be used in conjunction with the SCImago Journal Rankings based on Scopus. Review changes in journal rankings and metrics over the past few years: a drop in journal metrics may signal potential problems, but an increase may simply be the result of a decreasing number of articles published by the journal. A recent systematic analysis explored contributors to changes in journal impact factors and recommends caution in interpretation, as the number of articles published by the journal is part of the impact factor calculation (a small worked example follows this list).
    5. Check whether your university or institution has an approved list of journals. Many institutions, in cooperation with the Directory of Open Access Journals (DOAJ) and other publishers of 'whitelists', may already have resources in place as part of their career-advancement schemes and to encourage researchers to publish in credible journals. In the past three to four years, a number of online forums and websites that aim to protect against deceptive publication practices have been launched. For additional guidance and advice, visit the Think Check Submit website.
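
    To make point 4 concrete, here is a minimal sketch, in Python with made-up numbers, of the standard two-year impact factor calculation. Because the number of published articles sits in the denominator, a journal that publishes fewer articles can see its impact factor rise without attracting a single extra citation.

    ```python
    # Illustrative only: the two-year impact factor, with hypothetical numbers.
    def impact_factor(citations: int, citable_items: int) -> float:
        """Citations received this year to items published in the previous
        two years, divided by the number of citable items from those years."""
        return citations / citable_items

    # Same citation count, but the journal published half as many articles:
    print(impact_factor(400, 200))  # 2.0
    print(impact_factor(400, 100))  # 4.0 -- the score doubles with no new citations
    ```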

    As part of large international consortia, I have been fortunate enough in my career to have co-authored articles published in high-impact journals, as well as in a number of respectable journals with lower impact factors. In my experience, an honest appraisal of your research, both in terms of interpreting your results and assessing methodological flaws, should guide your decision on the appropriate home for the work.

    It is important to find the right balance when deciding if your research is ready for publication. Submitting weak or incomplete findings for the sake of your publication record, or holding out too long for what you hope will be Nobel-prize-winning results, can prove demoralising. Applying a 'Goldilocks' approach in deciding when and where to publish can save time and money, and avoid frustration and disappointment.

    Please feel free to contact us with additional thoughts on this topic. We welcome your comments and suggestions.

  • How to dodge predatory journals – choose carefully!

June 14th, 2017

    Scientific publishing has become increasingly contaminated by 'noise' resulting from the explosion in new journals eager to gain market share. In 2014 alone, 1,000 new journals were launched. Even with the rise of investigative reports in reputable journals such as Science and Nature highlighting this problem, it is becoming a daunting task just to sort the unethical journals from the legitimate ones.

    In the first of a two-part series on strategies to select journals that are appropriate for your work while avoiding predatory journals, I have listed a number of approaches (in no particular order of importance) that, used together, will yield the best results.

    1. Read the journal's policies and the articles it publishes
      1. Legitimate journals should have clear and transparent peer-review, editorial and fee policies that are accessible on their website.
      2. Publication fees are not uncommon for open access journals but should be paid only when the article is accepted for publication.
      3. You should not need to pay ‘submission’ or ‘handling’ fees.
      4. Check the publication guidelines on whether the author or the journal retains copyright. Reputable open access journals tend to let authors retain copyright under a Creative Commons licence, and you should not be expected to transfer copyright before the article is accepted for publication.
      5. Look at the quality of articles published by the journal in question. If there are obvious typos, it is poorly written or the science does not stack up, avoid it like the plague! As a writer and editor, I have real problems with journals that publish poorly written/edited manuscripts. You should too.
    2. Verify the claimed journal impact factor. Some predatory journals have been known to use contrived impact factors on their websites or in emails to appear credible. Journal impact factors published by Thomson Reuters Web of Science are the most widely accepted journal metrics. Most universities and research institutes make these available through their online library access.
    3. Check the journal's membership of recognised best-practice professional organisations. The Committee on Publication Ethics (COPE) was established in 1997 as a forum for editors of peer-reviewed journals to exchange views and advice on how to deal with research and publication misconduct. It currently has a membership of over 10,000 worldwide. Other professional organisations include the International Association of Scientific, Technical, & Medical Publishers (STM) and the Open Access Scholarly Publishers Association (OASPA). However, no resource is completely error-free, so use this information judiciously.
    4. Check whether the journal is listed in the Directory of Open Access Journals (DOAJ). Although predatory journals have been found on such 'whitelists', DOAJ implemented a stringent review process two years ago and has delisted over a third of its journal listings in an effort to crack down on this scam. As with 'blacklists', a comprehensive approach that includes your own review of 'whitelists' will yield the best result, as these resources are dynamic in nature.
    5. Check online journal comparison tools and resources. These allow researchers not only to select the best journals for their research, but also to filter on various criteria, including performance and prestige, and to report their publishing experiences. Many are free to use, while some require a small fee. A review of these tools can be found here. Another new resource, scheduled to launch on 15th June 2017, requires a paid subscription to a 'blacklist' of 3,900 predatory journals developed using 65 criteria that the curators say will be reviewed quarterly. The utility of this resource, and how many institutions will sign up for it, remains to be seen.
    6. Are the listed members of the editorial board contactable by phone or email? Cross-check claims about editorial board members against their own or their affiliated institutions' websites regarding their activities and involvement with the journal. A recent sting operation uncovered major retractions traced to fabricated editors.

    Many of us in academia are bombarded almost daily with emails from journals inviting us to submit manuscripts for publication or abstracts for scientific conferences. These journal names are deceptively similar to those we are familiar with, and it is easy to fall prey to their invitations, particularly for academics in developing countries, where competition for research funding and pressure to publish can be intense. Many legitimate start-up journals, including those focused on sub-specialties important to regional research in developing countries, can be unfairly tarnished by this trend. However, some common sense when reviewing journal policies, together with the above guidelines, will go a long way towards allaying concerns about legitimacy.

    As part of our publications management service at SugarApple Communications, we provide advice to our clients on journal selection as well as submission management and follow-up. We understand that publishing in recognised, credible journals is critical to your success.

    Regarding those pesky unsolicited email invitations to submit an article or sign up for a conference: if a quick look reveals poor grammar, spelling mistakes, or overly flattering language about your research, hit the 'Delete' button!

    Please feel free to contact us with additional thoughts on this topic. We welcome your comments and suggestions.

  • Beware of predatory journals

June 5th, 2017

    Yes, predatory journals are out there, and their aim is to cash in on a potentially lucrative publication market by extracting fees from authors anxious to publish. Jeffrey Beall, professor and librarian at the University of Colorado, began his investigations into 'predatory open access publishing' (his term) in 2008, when he began receiving emails from unfamiliar journals asking him to serve on their editorial boards. A major tip-off was that these emails contained numerous grammatical errors. He developed a list of 'potential, possible or probable predatory scholarly open-access publishers', which grew from 18 entries in 2011 to 923 in 2016. He estimated that such journals publish about 5-10 percent of all open access articles.

    The entire content of his website was removed on 15th January 2017, along with his faculty page. After much speculation on social media as to why the list was removed, the University of Colorado stated that the decision was a personal one by Beall himself. Academics regarded this as a disaster, because the list represented an extremely important resource. Nevertheless, he had made a considerable mark on the scientific community and on publishing houses, which undertook their own investigations. For many years open access was regarded as a thorn in the side of publishers, while researchers welcomed the concept, since paying for access to articles funded by research grants seems exorbitant. However, this is the arena in which predatory journals have flourished.

    In 2013, Science published the results of a sting operation in which a paper of essentially 'fake' science, written by a fictitious author at a non-existent research institute, was submitted to 304 open-access journals. More than half of the journals accepted the paper without noticing glaring experimental flaws, including errors that a high-school chemistry student would spot. Beyond this, the operation uncovered the convoluted schemes in place to conceal the identities, locations and financial paper trails of the publishing companies that prey upon researchers. More worrying was the fact that some reputable journals hosted by industry giants such as Sage and Elsevier accepted this fake paper!

    The question of fake editors and reviewers has also been investigated recently, with the results published in Nature in March 2017. A fake application for the position of editor was submitted to 360 journals, a mix of legitimate titles and suspected predatory journals. Along with the application, the sting operation created accounts for the applicant on academia.edu, Google+ and Twitter, plus a profile and CV with experience and interests hopelessly inadequate for the role of an editor. The operation applied sufficiently stringent criteria to code each journal's response to the fake application as 'accepted', 'rejected' or 'no response'. None of the 120 journals with an official impact factor indexed in Journal Citation Reports accepted the fake application, compared with 7% of journals listed in the Directory of Open Access Journals and 33% of the predatory journals on Beall's list.

    In May 2017, Science published some astounding statistics on papers retracted by Tumor Biology, a former Springer journal, citing evidence of a journal editorial board consisting of fake reviewers. Springer tried to scapegoat agencies specialising in manuscript editing by suggesting that these agencies were proposing fabricated reviewer names. But as was later discovered, seven members of the editorial board could not be contacted, some did not work at the listed institutes, and one who recognised the scam and tried to remove his name as a board member continued to be listed until the journal was recently taken over by Sage. Even if an author or agency proposes reviewers for a manuscript, which is not uncommon, it is the responsibility of the journal to evaluate potential reviewers and, if necessary, engage a 'real' one, as most reputable journals will. Springer has since publicly stated that it will develop tools for its remaining journals to make the peer-review process more robust.

    At SugarApple Communications we believe ethical publication practice is of paramount importance to scientific endeavours in all fields. We make journal recommendations to our clients based on whether journal policies and author fees are available on the journal website. We also check that editorial board members are listed with their affiliations, and are recognised experts in the relevant scientific discipline.

    In our next instalment in this series, we will provide additional details on this ongoing problem and ways to identify and avoid these predatory journals.

    Please feel free to contact us with additional thoughts on this topic. We welcome your comments and suggestions.

  • Let the data speak truthfully: chance findings

April 28th, 2017

    The dreaded topic of statistics has both confounded and fascinated me as a scientist who enjoyed research and discovery but tolerated the 'number-crunching' as a necessary evil. I have had to come to terms with a subject I avoided back in my high-school days. In fact, I still remember absolutely loathing it, but as with so many of life's ironies, it came back to haunt me: in scientific research, if you are 'fair dinkum' about your work, you need to get acquainted with different statistical tests, what they tell us, and how to translate them into something meaningful.

    During my PhD candidacy I had a prominent and well-respected statistics professor, feared and respected in equal measure by students and faculty for his candour, bluntness and militant adherence to scientific rigour and discipline in health sciences research. His example is still the one I draw upon when considering the analysis output of any project I am preparing for publication. I will not bore you with statistics-speak in this article, but will try to put into ordinary, everyday language what the main statistics mean when we write articles for any audience, whether our scientific peers or the general public.

    In medical research, unless you are able to identify every single individual with the condition you are studying, everywhere in the world, obtain their consent to join your study, and collect every piece of information you need (including information you don't know you need but suspect you might), your research is essentially sample-based.

    As a researcher, after you have decided what the medical condition is that you wish to study, and what new knowledge is needed, you then need to decide who you will study. You may have a wide range of subsets of the population with the medical condition that you can draw from, and your choice will depend on the research question.

    A major compromise of sample-based research is that the individuals you choose to study (collectively, your study sample) could have characteristics that differ widely from those of samples studied by other researchers, and could therefore generate different results. This variability, the differences in measurements across different samples of the same general population, is the start of a problem called 'sampling error'. The key question, then, is: how do I select a study sample that minimises sampling error?

    When a researcher chooses a sample to study, she can use a number of schemes to select it. She can impose any number of restrictions according to gender, ethnicity, age, geographical location, etc. But it is important to realise that the only purpose in studying a sample is to represent the larger population we are interested in. So we must decide at the start of the research what relationships we want to identify between patient characteristics and the medical condition we're studying. If our sample is distorted in any way and not representative of the larger population, then our results will likewise be distorted.

    We must therefore choose a sample that provides the clearest view of the population we want to study. Random sampling tends to be the least biased selection process: it means applying a scheme that gives each eligible individual the same chance of being selected for the study. However, it is not foolproof, and even the best random sampling scheme can deal us a 'bad hand', meaning that what we see in the sample is not reflective of the general population. How do we decide whether we have been dealt a 'bad hand'? The sketch below illustrates the idea.
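
    Here is a minimal sketch, in Python with hypothetical numbers, of simple random sampling: every individual has the same chance of selection, yet each draw still produces a slightly different sample mean. That scatter is sampling error, and an unlucky draw is the 'bad hand' described above.

    ```python
    # Hypothetical example: 100,000 'blood pressure' readings as our population.
    import random

    random.seed(1)
    population = [random.gauss(120, 15) for _ in range(100_000)]
    print(round(sum(population) / len(population), 1))  # the true population mean

    # Five random samples of 50 -- each individual has an equal chance of selection.
    for _ in range(5):
        sample = random.sample(population, 50)
        print(round(sum(sample) / len(sample), 1))  # sample means scatter around the truth
    ```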

    The p-value answers this question. It is a measure that we see in almost every research report. Put simply, the p-value is the probability of observing a relationship at least as strong as the one we found between characteristics of our sample and the medical condition under study if, in truth, no such relationship existed and chance alone were at work. A small p-value makes chance an unlikely explanation; a large one warns that our study sample may have misled us. In statistics-speak, the threshold we compare the p-value against is the alpha level, the accepted probability of a type I error (a false-positive finding).
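
    The idea can be made concrete with a short simulation (all numbers hypothetical). Two groups are drawn from the same population, so any difference between them arises purely by chance; the empirical p-value is simply the share of chance-only runs that produce a difference at least as large as the one observed in a study.

    ```python
    import random

    random.seed(42)

    def chance_difference() -> float:
        """Difference in means between two groups drawn from the SAME population."""
        a = [random.gauss(0, 1) for _ in range(30)]
        b = [random.gauss(0, 1) for _ in range(30)]
        return abs(sum(a) / 30 - sum(b) / 30)

    observed = 0.5  # a hypothetical difference reported by one real study
    trials = 10_000
    extreme = sum(chance_difference() >= observed for _ in range(trials))
    print(f"empirical p ~ {extreme / trials:.3f}")  # chance-only runs at least this extreme
    ```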

    The threshold most studies use to decide that a finding is statistically significant is 5%, i.e. p<0.05 is considered significant. Most analysis software will automatically generate this statistic along with other measures of the relationship, which we will deal with in another article. The 0.05 cut-off is quite arbitrary, more a tradition that goes back to the days of Sir Ronald Fisher (1890-1962). However, a researcher can, and should, set this threshold independently during the design stage, taking into consideration the number of statistical tests she intends to carry out and what she plans to do with her research findings, i.e. apply them to medical practice or use them as a clue to search further in a larger sample to confirm the initial findings. One simple way to adjust the threshold for multiple tests is sketched below.
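
    As a minimal sketch of that adjustment (the Bonferroni correction, one common approach among several; the p-values here are made up), the overall significance level is divided equally across all planned tests:

    ```python
    def bonferroni_threshold(alpha: float, n_tests: int) -> float:
        """Split the overall significance level equally across planned tests."""
        return alpha / n_tests

    p_values = [0.030, 0.004, 0.049]                        # hypothetical results of three tests
    threshold = bonferroni_threshold(0.05, len(p_values))   # 0.05 / 3 ~ 0.0167
    for p in p_values:
        print(p, "significant" if p < threshold else "not significant")
    ```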

    P-values are only one element of what the sample tells us, and by no means the most important. They simply gauge how likely it is that our study sample has misled us. Their interpretation also depends on whether we followed the study protocol as outlined at the start of the study.

    Overall, the estimates our analysis gives us must be interpreted carefully in light of what we set out to look for, assuming that we did not decide mid-way through the study to shift course because of how interesting the data looked. In the latter case, our p-value would in actual fact be uninterpretable, riddled with random error, and essentially meaningless. This is where study rigour and discipline are paramount to the validity of the research findings.

    We will deal with the other aspects of data analysis that are relevant to research in upcoming articles.

Unfog the science…ensure quality, clarity and accuracy.
