Irene M. Hames
Independent Editorial and Publishing Consultant,
York, UK
Twitter: @irenehames
Beilstein Magazine 2016, 2, No. 7 doi:10.3762/bmag.7
published: 4 October 2016
Researchers cherry-picking results to fit their hypotheses, altering images inappropriately to create cleaner and more convincing data, plagiarizing other researchers’ papers and ideas – such stories are now common (as even a quick look at the Retraction Watch blog will show [1]). Reports are also appearing in the mainstream media, sometimes under sensationalized headlines. The scale of fabrication and falsification can in some cases be staggering, affecting a large proportion of some researchers’ career output [2]. These cases are worrying, and damaging not only to the whole research enterprise – research is wrongly informed, and resources and human effort are wasted – but also to the public’s perception of the reliability of research. What is the reality of the situation? Are we witnessing an explosion in research misconduct? And if we are, what can be done to address it?
Firstly, it’s important to put the numbers into perspective. Each year, around 2.5 million articles are published in peer-reviewed journals, but only around 0.02% (i.e. 1 in 5000) are retracted. The number of retractions is, however, increasing faster than the rate at which the literature is growing [3]. Is this because more misconduct is taking place? Is research becoming less rigorous, with more errors occurring? Are misconduct and errors simply being picked up more easily? Are journals retracting papers more quickly? We don’t really know, but all of these factors are thought to play a part.
The numbers don’t, however, add up in other ways. Surveys show that around 2% of scientists admit to having fabricated, falsified or modified results at least once [4], and up to a third admit to a range of questionable research behaviours [5]. These are worrying figures, but what is even more concerning is that when researchers are asked how many have observed colleagues engaging in these behaviours, the proportion rises very substantially, suggesting that researchers considerably under-report their own conduct in surveys. As even the self-reported levels are higher than the rate of retraction, there is speculation that we may be seeing just the ‘tip of the iceberg’, with many problems in the scholarly literature lying undiscovered. A recent large-scale analysis of over 20,000 published papers for inappropriate image duplication has heightened this concern: around 4% of the papers contained problematic figures, and in at least half of these the manipulation appeared to be deliberate [6]. The editors of many of the journals in which those articles appeared have been notified, and a large number of corrections and retractions are expected to follow. The analysis covered 40 reputable journals in the fields of microbiology and immunology, cancer biology, and general biology. Whether it reflects the situation in other areas and disciplines is being debated, but it seems unlikely that only the journals studied are affected.
The findings have also highlighted another issue – considerable failure on the editorial side of the publication process, as these problem images were missed by the reviewers, editors and editorial staff of those journals. This has fuelled criticism of peer review and the ongoing debate on whether current processes are adequate. Editorial systems and resources are, however, being stretched as more and more checking of manuscript submissions is required. The use of ‘plagiarism checking’ software is now commonplace, although the term is rather misleading: what the software picks up is textual duplication, and a human being has to analyze the reports and determine whether plagiarism has occurred or the duplication is acceptable. Many editors report seeing increasing numbers of problem cases, which is straining resources in terms of both time and money. The systems are also unable to detect more sophisticated plagiarism, such as paraphrasing without appropriate attribution, or the appropriation of ideas. Image checking is not currently the norm but, in the light of the problems found in the study mentioned above [6], it – or certainly better guidance for editorial staff and reviewers – may need to be introduced at more journals. Automated checking of manuscripts in other areas, for example compliance with reporting guidelines and data-sharing policies, is being investigated, and will undoubtedly help editorial screening become less demanding and more consistent in the future.
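To illustrate why such tools report textual duplication rather than plagiarism as such, here is a minimal sketch, in Python, of the kind of word n-gram overlap measure that underlies text matching. It is purely illustrative and not how any particular commercial service works – real systems compare submissions against vast full-text databases and use far more sophisticated matching – but it makes the key limitation concrete: a similarity score cannot, by itself, distinguish plagiarism from quotation or routine methods wording, and exact matching is defeated by paraphrasing.

```python
# Illustrative sketch only: flags shared word n-grams between two texts.
# A human must still judge whether flagged overlap is plagiarism or
# acceptable duplication (e.g. quoted material, standard methods text).

import re

def word_ngrams(text: str, n: int = 5) -> set:
    """Return the set of n-word sequences in lower-cased text."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub, src = word_ngrams(submission, n), word_ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

submission = "The samples were incubated at 37 degrees for two hours before analysis."
source = "All samples were incubated at 37 degrees for two hours and then washed."
print(f"Overlap: {overlap_score(submission, source):.0%}")
# A high score flags duplication; it cannot say whether that duplication
# is plagiarised, properly quoted, or routine wording. Paraphrasing
# without attribution defeats exact n-gram matching entirely.
```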
Many problems with research integrity don’t come to light until the work is submitted for publication or published. Anything that helps reduce their incidence will benefit the research effort, editorial workloads, and the soundness of the scholarly literature. Honesty and trust are central to research – we trust that researchers are carrying out and reporting studies accurately and honestly, and that they are acting with integrity. Research integrity guidelines are important, and there are many. Most, however, although they include principles and worthy statements, don’t actually provide advice on what should be done, making it difficult for researchers to know how to put them into practice. Even the most basic principles may not always be known to researchers, especially those at an early career stage, or they may not be understood, for example by researchers whose first language isn’t the one in which the guidelines are written.
Research norms and practices also vary, from discipline to discipline and from country to country. Beyond the three practices that are almost universally recognized as research misconduct – fabrication, falsification, and plagiarism – what else is considered misconduct or viewed as questionable research practice varies, even among the countries of Europe. Guidance on research integrity varies as well, as does how research misconduct is dealt with, which has led to calls for the harmonization of research integrity guidelines and a unified approach within Europe [7]. Publication of the European Code of Conduct for Research Integrity is generally viewed as a step in the right direction [8].
Why might we be witnessing more research-integrity problems? Researchers are clearly under increasing pressure and facing greater competition for jobs, research funding, and permanent research positions – a theme that comes up more and more in surveys and reports. High on the list of suggested causes for lapses in research integrity and standards is the pressure to publish, and to publish in the most prestigious journals. Despite recognition of the problems of using the journal Impact Factor as a proxy for article importance and potential future impact, it remains a prominent indicator for many. Alternative ways of assessing influence and impact (‘altmetrics’) are, however, being introduced. Is research misconduct taken seriously enough, or dealt with appropriately? There are concerns that it isn’t. We are, however, beginning to see researchers found guilty of scientific fraud receiving custodial sentences.
Authorship is often referred to as the ‘currency’ of academia. Because of this, authorship issues and disputes are among the most common problems journals see, and many researchers will face an authorship-related situation at some point in their careers. There is a growing feeling that existing authorship schemes are inadequate for today’s complex and multi-faceted research environment. More accurate and granular ways of assigning credit, and of acknowledging new types of contribution to research projects, are therefore being explored [9]. The whole scholarly publishing world is undergoing considerable change and upheaval, with imaginative new research dissemination initiatives and publishing platforms appearing.
In this new and fast-moving research landscape, it is more important than ever to distinguish between intentional bad behaviour and inadequate experimental and reporting practices or lack of knowledge. In my experience, many researchers aren’t receiving adequate or sufficiently up-to-date training in research integrity and publication ethics. We need not only to better prepare our young researchers to deal with the whole range of integrity and ethical issues, but also to create a culture in which such issues are not treated as mere box-ticking exercises, one where researchers are not afraid to admit to honest errors and are rewarded for acting ethically and with integrity. We also need mechanisms for keeping more senior researchers abreast of new issues, especially the more complex and nuanced ones (such as the current discussions surrounding reproducibility [10]), and for rewarding them for creating a culture of research integrity in their own research groups, departments and institutions. How do we do this?
We need what I like to think of as ‘the pyramid of research integrity’ (Figure 1). At the top there should be good, concise, easily understandable global guidelines, suitable for all disciplines and taking into account new or emerging issues, such as those that have come from the World Conferences on Research Integrity [11,12] (Figure 2). Next come national guidelines, which can build on the global guidelines, setting them in a national context and providing the frameworks for good research conduct and governance within nations, for example the UK’s Concordat to Support Research Integrity [13]. Institutions play a vital part in the scheme: aside from having good and rigorous policies and processes, they can provide researchers at all career stages with appropriate training and resources (Figure 3).
Research group leaders are responsible for providing their trainees and junior colleagues with training in all aspects of research, including integrity and ethics, and across all stages of the research cycle. They are the people guiding research, setting the standards and expectations, and establishing the culture in their groups. They are also in direct contact with those carrying out research and can have an enormous impact on how well their researchers understand ethical principles and know how to put them into practice. Making time for ethics and integrity discussions will not only help educate their trainees, it will also provide the opportunity to introduce them to important new initiatives such as ORCID [14], and to find out whose knowledge of integrity and ethical issues is lacking so that this can be addressed at an early stage. At the bottom of the pyramid we have the largest group – individual researchers, who are responsible for ensuring that the studies and experiments they undertake are carried out appropriately and ethically, and that the results are recorded accurately and reported honestly. Funding agencies, learned societies, journals and publishers can all have a positive impact at every level of the pyramid, providing not only guidelines, resources and training opportunities, but also relevant specialist expertise (Figure 1).
We are living in an increasingly competitive and complex world, one in which researchers face mounting pressures. It is also a world where highly unethical services are offered for sale by unscrupulous third-party providers – authorship on papers can be bought, as can data – to help build a research publication record [15]. In this sometimes muddy environment it is more important than ever that integrity in research and publication is taught, valued and rewarded.
This article is based on a presentation given at the STM Publication Ethics and Research Integrity meeting held in London on 3 December 2015. A video recording is available at http://www.stm-assoc.org/events/publication-ethics-and-research-integrity/?presentations. Irene Hames worked with the University of Dundee, UK, to produce its research integrity online resource and received remuneration for this. Details of the resource can be found at http://www.dundee.ac.uk/opd/otheropportunities/researchintegrityonlinemodules/.
[1] Retraction Watch blog. http://retractionwatch.com/ (accessed 30 September 2016).
[2] Retraction Watch. The Retraction Watch leaderboard. http://retractionwatch.com/the-retraction-watch-leaderboard/ (accessed 30 September 2016).
[3] Van Noorden, R. Nature 2011, 478, 26-28. doi:10.1038/478026a
[4] Fanelli, D. PLOS ONE 2009, 4(5), e5738. doi:10.1371/journal.pone.0005738
[5] Martinson, B. C.; Anderson, M. S.; de Vries, R. Nature 2005, 435, 737-738. doi:10.1038/435737a
[6] Bik, E. M.; Casadevall, A.; Fang, F. C. mBio 2016, 7(3), e00809-16. doi:10.1128/mBio.00809-16. Preprint published on bioRxiv 20 April 2016. doi:10.1101/049452
[7] Godecharle, S.; Nemery, B.; Dierickx, K. The Lancet 2013, 381, 1097-1098. doi:10.1016/S0140-6736(13)60759-X
[8] European Science Foundation (ESF) and All European Academies (ALLEA). The European Code of Conduct for Research Integrity, 2011. ISBN 978-2-918428-37-4. http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf (accessed 30 September 2016).
[9] Allen, L.; Scott, J.; Brand, A.; Hlava, M.; Altman, M. Nature 2014, 508, 312-313. doi:10.1038/508312a
[10] Academy of Medical Sciences. Reproducibility and Reliability of Biomedical Research: Improving Research Practice. Symposium report, October 2015. http://www.acmedsci.ac.uk/viewFile/56314e40aac61.pdf
[11] Singapore Statement on Research Integrity, 2010. http://www.singaporestatement.org/ (accessed 30 September 2016).
[12] Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations, 2013. http://www.researchintegrity.org/Statements/Montreal%20Statement%20English.pdf (accessed 30 September 2016).
[13] Universities UK (UUK). The Concordat to Support Research Integrity, 2012. ISBN 978-1-84036-273-2. http://www.universitiesuk.ac.uk/policy-and-analysis/reports/Documents/2012/the-concordat-to-support-research-integrity.pdf (accessed 30 September 2016).
[14] ORCID. http://orcid.org/ (accessed 30 September 2016).
[15] Hvistendahl, M. Science 2013, 342, 1035-1039. doi:10.1126/science.342.6162.1035