A critical step is the initial decision regarding a submitted manuscript. In principle, the judgement of whether to publish is made impartially, on the basis of scientific merit alone, by unpaid (traditionally anonymous) peer reviewers selected from the scientific community for their expertise.
However, editors have a vested interest in maximizing the perceived 'quality' of their journals and often exercise personal judgement to decide whether a manuscript will be rejected immediately rather than enter the formal refereeing system. Early rejection without peer input is practiced to varying degrees by many journals – according to the editor of one 'prestigious' journal, nearly 30% of the manuscripts it received fell into this category (personal communication from an editor not wishing to be identified). A major reason given for this practice – a highly subjective judgement made at the administrative level rather than an informed, expert decision – is simply to reduce workload. Some journals allow authors to 'pre-select' editors, which again introduces the possibility of bias into the acceptance process. Cosy relationships between editors wanting to favor authors with known high citation rates and authors wanting their papers in 'high impact' journals with minimal fuss do a disservice to innovative science and to the career prospects of young and less prominent scientists.
While much has been written about peer reviewing, little examination has been made of the role of editors. Noble aspirations and numerous guidelines [98-100] exist for editors, but there are few studies of practice. Writing on integrity in scientific publishing, Rennie (Editor, New England Journal of Medicine) noted that no profession could be trusted to self-regulate. He observed that "while demanding ever-higher standards from their authors, editors themselves have only too often been content to operate in an evidence-free zone concerning their own outcomes, while at the same time asserting the virtues of their procedures". Südhof argued that "emerging flaws in the integrity of the peer review system are [...] driven by economic forces and enabled by a lack of accountability of journals, editors, and authors". He considered that "editors should be named as part of the published reviews and should be held accountable if papers fail to meet basic quality and reproducibility standards".
With editors being paid for their services and sometimes recruited on a freelance basis, there needs to be an examination of the extent to which remuneration may compromise their independence, their loyalty to the field and their capacity to withstand the dictates of the publisher, even when these are detrimental to the discipline or to scientists' interests. The role of the (paid) Editorial Board also needs a great deal more scrutiny.
The selection of referees can also be heavily skewed. For example: 1) it is difficult to defend the practice of asking authors to propose referees, since they are unlikely to propose reviewers they think will reject their work and may suggest friends or colleagues who will do a favor and expect one in return – the only excuse, again, is to reduce workload in the editorial office by not making the effort to seek out well-qualified and impartial referees; and 2) editors may choose referees more likely to accept or reject a particular manuscript, including as part of the gaming of prestige and impact factors.
Neylon has commented that "there are a few studies that suggest peer review is somewhat better than throwing a dice and a bunch that say it is much the same". A sculpture unveiled in Moscow in 2017, entitled 'Monument to the Anonymous Peer Reviewer', presents five faces of a large stone dice labelled 'Accept', 'Minor Changes', 'Major Changes', 'Revise and Resubmit' and 'Reject'. As discussed here, the dice itself may be loaded!
These problems lead to the question: To whom are the editors accountable – to the publishers, authors or discipline?
While not nearly as old as many believe, the tradition of anonymous review by peers has come to be regarded as the gold standard in the publishing of research findings. But is peer review fit for purpose in the 21st century? In recent years, the process has become increasingly questioned, criticized and discredited – including by some prominent journal editors such as Richard Smith. The argument that the traditional peer review process ensures quality and validity and filters out incorrect material, plagiarism and scientific fraud is demonstrably not always true [107,109-111], and studies show that the unreliability of papers is greater in higher-ranking journals. Moreover, given that refereeing work goes unrecognized by the performance-measurement process, it has been questioned whether the effort expended on peer review is justified.
Unsurprisingly, unscrupulous operators have emerged who are willing to take advantage of the situation for profit. Pressure on scientists to publish has led to a situation where "any paper, however bad, can now be printed in a journal that claims to be peer-reviewed".
The Royal Society's 2015 conference on publishing noted that "It's extraordinary that universities and other institutions have effectively outsourced the fundamental process of deciding which of their academics are competent and which are not doing so well". The overall view from the conference round table on this subject was that the principle of 'review by peers' (as distinct from 'peer review' as usually practiced) was necessary and valuable, but should be organized in a different way. In general, participants felt that the opportunities offered by new technologies and the web had not yet been fully exploited. They saw a role for learned societies and funders in encouraging innovation and driving the necessary changes.
The question of alternatives to the traditional model of peer review organized by the journals is increasingly being discussed. Broadly speaking, three major kinds of problems must be addressed: the shortage of peer reviewers in an era of massively increased numbers of manuscripts; the need to ensure competent review; and the need for transparency that demonstrates the absence of bias and encourages reviewers to accept greater responsibility for their decisions.
To date, two main kinds of modifications have been proposed. Responding to evidence that many reviewers do a poor job of spotting shortcomings in the papers they are critiquing, one report suggested that a solution is to make peer review more desirable and less of a duty by paying the reviewers. However, while this may attract more referees to work within the existing model, it is not evident that a profit motive will lead to more careful or objective refereeing and, conversely, it may create a new perverse incentive that reinforces journal gaming practices to achieve high impact factors.
The second type of modification responds to the opportunities of the globally connected, digital era, asking "should we conduct peer review in much the same way as when manuscripts were delivered by postal workers with horses?" Variants in the answer to this question comprise a family of 'open peer review' approaches [118,119], many of which utilize the opportunity for papers to be placed online, in restricted or open spaces, and reviews invited or enabled.
Kriegeskorte proposed an 'open evaluation' system, in which papers are evaluated post-publication in an ongoing fashion by means of open peer review and rating, using newly defined 'paper evaluation functions' (PEFs). In this model:
1) The paper is instantly published to the entire community and reviewing commences. Although anyone can review the paper, peer-to-peer editing by a named editor helps encourage a balanced set of reviewers to get the process started.
2) Reviews and numerical ratings are linked to the paper and made accessible as 'open letters to the community'.
3) Rating averages can be viewed with error bars that tend to shrink as ratings accumulate.
4) Defined PEFs combine a paper's evaluative information into a single score.
5) The evaluation process is ongoing: important papers will accumulate more evaluations (both reviews and ratings) over time, since the review phase is open ended, thus providing an increasingly reliable evaluative signal.
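The aggregation at the heart of steps 3 and 4 can be sketched in a few lines. The rating scale, the weighting scheme and the function names below are illustrative assumptions for this sketch, not part of Kriegeskorte's proposal; a real PEF could combine the evaluative information in many other ways.

```python
import math

def rating_summary(ratings):
    """Mean rating with its standard error; the error bar shrinks as ratings accumulate."""
    n = len(ratings)
    mean = sum(ratings) / n
    if n < 2:
        return mean, float("inf")  # a single rating carries no precision estimate
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(var / n)

def pef_score(ratings_by_criterion, weights):
    """A toy 'paper evaluation function': weighted combination of per-criterion means."""
    total = sum(weights.values())
    return sum(
        weights[c] * (sum(rs) / len(rs))
        for c, rs in ratings_by_criterion.items()
    ) / total

# Hypothetical ratings on a 1-10 scale for two assumed criteria
ratings = {"rigor": [7, 8, 6, 9], "novelty": [5, 6, 7]}
score = pef_score(ratings, {"rigor": 0.7, "novelty": 0.3})
```

Because the review phase is open ended, each new rating simply extends the lists above, and the shrinking standard error provides the "increasingly reliable evaluative signal" the model envisages.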
An extremely open model involves 'crowd-sourcing', in which a platform is provided for depositing unrefereed research papers for open peer reviewing. Harnad has predicted that crowdsourcing will provide an excellent supplement to classical peer review, but not a substitute for it.
An editor of the chemistry journal Synlett reported an experiment with 'intelligent crowd reviewing'. A forum-style commenting system allowed 100 recruited referees to comment anonymously on submitted papers and also on each other's comments. The result was judged to be more effective than traditional reviewing of the same papers, conducted for comparison. Although the sample on which these conclusions were drawn is small, Synlett is reported to be moving to the new system for all its papers [122,123]. It remains to be seen whether innovations like this are sustainable, since enthusiasm for and engagement in such initiatives may soon die down.
Fraud in scientific publishing
In a number of areas of scientific publishing, the line has been crossed between gaming the publishing system and outright deceit. Two aspects, in particular, are of current concern: the falsification of results by authors and the creation of fake journals by publishers.
Bad practice on the part of authors can cover a spectrum from plagiarism and self-plagiarism [50,124] and the 'massaging' of data (e.g., chemists will be familiar with the practice of reporting yields of materials that were not rigorously dried to constant weight, or of ignoring/suppressing inconvenient peaks in spectra or chromatograms that may signal the presence of impurities) to the false attribution of work and the complete invention of experiments, results and product characteristics.
The precise extent to which falsification of results occurs across science is difficult to estimate but it certainly happens, as revealed by a number of sting operations taking in both high- and low-prestige journals of both traditional and open access varieties and showing the ease with which false papers can gain acceptance and fictional individuals can be appointed to editorial boards [125-129].
Retraction Watch tracks and publicizes retractions of papers that are shown to contain flawed claims, and monitors and highlights other unethical practices by authors, journals and manuscript editing companies. One study found that two-thirds of retractions were due to misconduct, while only a fifth were attributable to error; the percentage of scientific articles retracted because of fraud has increased by an order of magnitude since 1975. Disturbingly, the most prestigious journals have the highest rates of retraction, and fraud and misconduct are greater sources of retraction in these journals than in less prestigious ones. The practices of allowing authors to propose referees and to 'pre-select' editors contribute to the risks. There has been much publicity of recent cases in which Springer retracted 107 papers from one journal after discovering they had been accepted on the basis of fake peer reviews [134,135].
The publication of falsified results has much broader implications than for the reputations of individual scientists and journals – it affects the credibility of the entire scientific enterprise [136,137] and therefore science's capacities both to attract support from the public purse and to influence decision-making on issues of vital importance to society and to the world. For example, it enables the science underlying such important global challenges as climate change to be dismissed by some as unreliable and unsuitable as a basis for government policies [138,139]. Rigorous reviewing, the exposure of unethical practices and severe sanctions against the perpetrators are essential and all actors in the science community must take responsibility for stamping out this evil.
The large profits available from scientific publishing, especially in the new era of online-only publishing of e-journals, have opened the gates to a range of bad practices by journals, on a scale that extends from poor standards to outright fraud. New journals constantly appear that recruit editorial board members with dubious qualifications, or bona fide but naïve scientists lured by flattering invitations; undertake aggressive or 'predatory' practices to secure manuscripts, which are accepted for publication with little or no refereeing; take fees for publications that never appear; and disappear after a short period, with concomitant loss of access to the papers they have accepted [140-143]. This has become a large-scale enterprise: one librarian has counted several thousand of these predatory/fake journals, and blacklists of predatory journals that falsely claim to be peer-reviewed have been developed, including by a commercial enterprise [145,146]. Predatory and fake scientific meetings have also become an increasing problem.
The challenge of open access
According to HEFCE, "open access is about making the products of research freely accessible to all. It allows research to be disseminated quickly and widely, the research process to operate more efficiently, and increased use and understanding of research by business, government, charities and the wider public". With growing recognition of the desirability of open access [149-153], many funders of research increasingly require that the work they support be published through open access channels. HEFCE describes two complementary mechanisms that authors can use to achieve open access, known as the 'gold' and 'green' routes:
- Gold – The journal publisher forgoes subscription or access charges for the online user, making the paper immediately accessible to everyone electronically and free of charge. Publishers can recoup their costs through a number of mechanisms, including through APCs, advertising, donations or other subsidies.
- Green – The author deposits the final peer-reviewed research output in a 'repository' – an electronic archive which may be run by the researcher's institution or an independent group and which may cover one or a number of disciplines. Permission needs to be given by the publisher who holds copyright on the paper and access to the research output can be granted either immediately or after an agreed embargo period.
The Directory of Open Access Journals, launched in 2003, currently contains about 9,000 open access journals covering all areas of science, technology, medicine, social science and humanities. Of particular note are the Beilstein Journal of Organic Chemistry and the Beilstein Journal of Nanotechnology, which, thanks to their endowment, are able to offer 'platinum' open access (meaning that neither authors nor readers are charged any fees), and the open access Arkivoc (Archive for Organic Chemistry), which was established as a not-for-profit entity in 2000 through a personal donation.
While many publishers have supported the 'gold' approach, which preserves their opportunity for income (especially from authors), funding agencies have increasingly moved towards insisting that their grantees take up the 'green' route. arXiv.org is the world's oldest and one of the largest open access archives (others include individual institutional archives as well as subject archives such as bioRxiv, ChemXSeer and PubMedCentral), with participation in self-archiving of preprints approaching 100 percent in some sub-disciplines, such as high energy physics.
Driven by the requirements of key research funders (including many in Europe and North America, such as the Bill and Melinda Gates Foundation) that publications based on research they support are rapidly placed on open access, many university libraries now operate document repositories across disciplines to support self-archiving. Considering the flaws in the current publication model, Bachrach proposed greatly reducing the number of journals, with the vast majority of papers being placed on open access within institutional repositories, with an emphasis on 'open data' – making data freely available to all, with no restrictions on re-use, in a format that gives the reader ready and direct access for complete reuse. He did not, however, consider how this would create an even stronger gradient for aspiring authors under the prevailing system of 'publish or perish'. Brembs et al. went further, suggesting that "abandoning journals altogether, in favour of a library-based scholarly communication system, will ultimately be necessary". This new system would use modern ICT "to vastly improve the filter, sort and discovery functions of the current journal system".
A 2012 Nature editorial emphasized that, with the increasing adoption of open access approaches, important questions remained to be answered: who will pay, and how much, to supply what to whom? It noted that "the Finch report rightly concludes that universities will need to set up dedicated funds for APCs".
However, an alternative model emerged later the same year, with the launch of the new online journal eLife. This was funded by three highly prestigious research funding bodies – the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust – to publish outstanding science under an open-access license [160,161]. Initial funding of £18 million was provided, and a further £25 million for 2017–2022 was announced in June 2016. However, the journal declared in September 2016 that it would begin charging fees of US$2,500 for all accepted papers, removing its most distinctive feature.
Harnad has argued for strong institutional pressure to adopt self-archiving practices. He has suggested that universal adoption of green open access may eventually make subscriptions unsustainable, and has been a strong advocate of the use of online open access repositories in the 'post-Gutenberg' era.
As part of a broader program for 'open science', in 2016 the European Union initiated a strong move towards open access, with the Competitiveness Council (a gathering of ministers of science, innovation, trade and industry) agreeing on the target of making all publicly funded scientific papers published in Europe free by 2020 [166,167]. A study for the European Commission (EC) noted that "market forces alone are not sufficient to deliver widespread access to scientific information" and that current policy interventions in Europe were sufficient neither to deliver the goal of immediate open access by 2020 nor to significantly improve market competitiveness. The EC Directorate for Research and Innovation had already made open access an obligation for grantees in its Horizon 2020 research program and has been investigating the possibility of funding a new, non-compulsory platform for Horizon 2020 beneficiaries to publish open access, in addition to the currently existing options. It appears to be moving rapidly towards establishing such a platform, in line with similar recent initiatives by the Bill and Melinda Gates Foundation and the Wellcome Trust.
3. THE WAY FORWARD
Because of the highly integrated nature of the elements in the sub-systems that constitute scientific publishing, piecemeal solutions that address the flaws in each component in isolation are unlikely to provide more than temporary palliatives. A comprehensive overhaul is required that simultaneously drives all components towards a system that genuinely serves the interests of science, scholars and society. This system must ensure equitable opportunity for all researchers – without regard to their prior scientific reputation, location or gender – to make their findings public, gain credit for the quality of their contributions and have open access to all the published work of others; and it must provide a high level of assurance to scientists, policy makers and the public about the reliability of the information accessed.
Achieving this transformation will require persistent effort along several intersecting axes. Reaching many of the objectives will be made possible by leveraging the capabilities of 21st-century ICT, combined with recognizing that self-regulation can no longer be relied upon to sustain the integrity and reputation of science publishing and that a robust, enforceable system of oversight and penalties is needed. The implications for each of the scientific publishing sub-systems are considered here and key inter-linkages are highlighted.
Financial system: The central question is: who pays, and how much, to achieve the most equitable and open access that is sustainable?
Fully open access, in which neither authors nor users pay fees, can be regarded as an ideal for scientific publishing, ensuring that all researchers – without regard to their prior scientific reputation, location or gender – are able to make their findings public, gain credit for the quality of their contributions and have free access to all the published work of others. Such an ideal cannot, however, be achieved through either 'gold' or 'green' models, even if all journals are published in electronic format. The costs of organizing and managing a traditional peer-reviewing system, of processing manuscripts, of creating and mounting issues of periodicals, and of regularly updating electronic hardware and software are not zero.
However, the more these costs – and the consequent fees charged to authors or users – can be reduced, the lower the barriers to publication and access will be. Greatly reducing profit margins should also make the field less attractive to operators of predatory and fraudulent journals. Ultimately, the lowest costs would be achieved if publishing were managed efficiently by not-for-profit entities.
Promising initiatives in recent years have been the growth of open access archives into which authors deposit their papers, in either the final refereed and corrected version or the final formatted and published version; the support given to the new biological sciences journal eLife by a group of prestigious research funding bodies, which in its first few years enabled the journal to dispense with author publication charges; and the support that Beilstein has given to fully open access journals in organic chemistry and nanotechnology.
There needs to be a serious debate, led by science academies and professional organizations and engaging scientists, policy makers, industry, science funders and foundations, about the best way to move open access forward sustainably.
The success of a fully 'open access' model will depend critically on its linkages with two key components: the refereeing element of the science advancement system and the evaluation of scientific quality and contribution at the core of the reputational system.
Science advancement system: The traditional peer-reviewing system is evidently not sustainable, since too many papers are being published; no longer reliable, since examples are emerging of reviewers failing to detect false data; and no longer trusted, owing to its lack of transparency and to evidence of bias and randomness in its outcomes. The traditional system is also temporally frozen, producing a judgement at one point in time, while the fast pace of development in many areas of science may mean that the assessment of a paper's quality and value quickly becomes out of date.
Recent explorations of open evaluation have demonstrated, in principle, the potential for review by larger groups of scientists on the web, in either fully open or semi-structured modes. Such reviews can be ongoing, adding perspective to the correctness and value of the work. However, there are questions about the sustainability of such models once the initial phase of enthusiasm subsides. Further examination of these models and development of a universal approach is needed, through the joint effort of scientists, their institutions, archive centers and research funders.
In any system that relies on oversight by editors and conscientious application by peer reviewers, the integrity and fairness of decision-making needs to be robustly ensured through the rigorous application of scrutiny, adjudication and sanctions. The penalties faced by scientists who deliberately distort or falsify data must also be well defined, publicized and rigorously enforced. Self-regulation should not be considered as an option. The confidence that the public and policy-makers, as well as scientists, have in published science results must be a primary consideration and must be guaranteed by scrutiny that is independent of the scientists' institutions and the publishers.
Reputational system: Current practices in the evaluation of scientific merit drive many of the worst features of the present scientific publishing system, placing excessive emphasis on metrics of publication numbers, the citation rates of papers and the status of the journals in which they appear. These metrics are used inappropriately for evaluating the extent of authors' contributions to the field and for judgements about career advancement, rather than employing qualitative judgements based on expert assessment. A system that would preclude a scientist like Peter Higgs from developing the work that led to the discovery of the fundamental particle that carries his name cannot be considered fit for purpose in the 21st century.
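The mechanical character of these metrics can be made concrete with a short sketch of two of the most widely used: the h-index and the journal impact factor. The function names and sample data below are illustrative assumptions; real bibliometric services apply additional rules (document types, citation windows, self-citation handling) not modelled here.

```python
def h_index(citations):
    """Largest h such that h of the author's papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # 'rank' papers each have at least 'rank' citations
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Impact factor for year Y: citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical author and journal
author_h = h_index([10, 8, 5, 4, 3])   # -> 4
journal_if = impact_factor(200, 100)   # -> 2.0
```

The simplicity of these formulas is exactly the problem: a single extra paper, or editorial practices that inflate the citation numerator, shifts the numbers directly, which is why such metrics are so readily gamed and why they ignore any qualitative judgement of a contribution's value.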
An important step towards countering these bad practices was made with the formulation of the San Francisco Declaration on Research Assessment (DORA). However, DORA does not go far enough. It emphasizes that funders and institutions should acknowledge that "the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published" and that publishers should "greatly reduce emphasis on the journal impact factor as a promotional tool". A 'DORA 2' is now needed that eradicates the use of all publication metrics in evaluating authors' scientific contributions and the use of 'impact factors' as an indicator of journal quality. Academic institutions, funding agencies and bodies representing professional scientists should engage to generate a 'DORA 2' and to vigorously promote its universal application.
As presently constituted and operated, the scientific publishing system is highly flawed. The three sub-systems that relate to science advancement, finance and reputation are influenced by historical legacies and contemporary forces that drive diverse actors to employ gaming strategies and sometimes unethical or fraudulent practices for their own benefit rather than the good of science.
Major flaws that are seen include:
- the sub-division of bodies of research into multiple papers, salami-sliced to reduce them to the smallest publishable unit;
- the operation of biased and non-transparent editorial and refereeing processes, producing results that in some cases appear to be unreliable and in others little better than the throw of a dice;
- the pursuit of models of 'open access' that present financial barriers to authors and work against the interests of scientists in poorer countries;
- the use, by a range of evaluators, of metrics based on the mechanical extraction of publication and citation data, which is fundamentally skewed, readily gamed by authors and publishers, and ignores qualitative assessment of the intrinsic value of the contribution made by each scientist to their discipline; and
- exploitation of the system by authors seeking academic credit through falsified results and by operators of predatory and fake journals motivated only by financial returns.
The overall result of these deep flaws and fissures in the scientific publishing system is damage to the careers of many (especially young) scientists and to the reputation of the field of science publishing – and, by association, to the reputation of the content of the publications. In consequence, harm is done to science as a whole, whose advancement is slowed by the fracturing of bodies of work and by uncertainties about the veracity of data and the reliability of information that may be used for major policy decisions.
Moving forward will require champions and leaders to overcome resistance by those with a vested interest in the current, flawed system. At the level of disciplines, learned societies (e.g., chemistry societies) can play a key role in championing the cause. At the level of science as a whole, the National Academies need to take up the issue through a global initiative. Much could be achieved by building on the 2015 conference and report of the Royal Society, e.g., by convening an international meeting of academies and other stakeholders to consider and form a consensus around new models and to develop an action plan.
The writing of this article was initiated at a workshop hosted at the School of Chemistry, University of Hyderabad, India in March 2017, supported by the International Organization for Chemical Sciences in Development (IOCD).