last updated 2022-12-01
JEDI members are jointly developing a collection of resources for journal editors in the social sciences. We are very grateful to JEDI members who have contributed resources for this page (see below for a list) and warmly welcome further suggestions. These can be made either by posting them to the mailing list, or by directly emailing Priya Silverstein at [email protected]. Corresponding links will then be added to this page.
Please check back here from time to time, as we hope this page will be updated regularly!
If you are a journal editor and are not yet a member, please join JEDI.
Table of Contents
- Incoming editors
- Diversifying social science research
- Open science
- Reconsideration of previously rejected submissions
- Improving the quality of reviews
- Incentivising reviews
- Reviewer and editorial collusion
- Detecting overlap between submitted and existing manuscripts
- Submission types
- Peer review innovations
- Limitations of peer review
- Editorial secrets
- Persistence and preservation
- List of contributors to this page
Publishers often offer guidelines and associated resources relating to editorial practices. While typically provided for editors of the journals that they publish, these resources may also be more generally helpful. For some examples see:
If you are just starting out as a journal editor, you might find the Committee on Publication Ethics’ short guide to ethical editing for new editors helpful, as well as this glossary of publishing and editing terms.
The PKP school also offers a free course in becoming an editor, focusing on how to perform the major tasks required of an editor for a scholarly journal, how to analyze and solve common problems that may arise when editing a scholarly journal, how to assist other members of the journal team, and where to look for help with difficult issues.
The Council of Science Editors has sample correspondence for an editorial office that you can customize to suit your journal, and also offers short courses in journal and manuscript editing. The European Association of Science Editors (EASE) offers training and webinars, and a guide on how to select reviewers.
Onboarding of associate/action editors varies hugely across journals. Most journals provide some written guidelines, and some also offer opportunities for incoming editors to check that they have understood these guidelines. PCI Registered Reports does this (see their guide and test). Some journals have also created an onboarding video (see this video introduction for editors and reviewers at Collabra: Psychology).
The Committee on Publication Ethics has many resources to help journal editors deal with ethical issues, including guidelines and case studies. For example, they have guidelines on publication manipulation.
The Council of Science Editors has a white paper on publication ethics including a guide to editor roles and responsibilities.
Although some ethical issues are clearly defined (e.g. that authors should not fabricate data), there are other areas where the lines of editorial responsibility are debated. For example, Nature Human Behaviour has put in place a new research ethics policy that addresses potential harms for human population groups who do not participate in research but may be harmed by its publication (see the editorial introducing the policy here). However, there have been some criticisms of the policy, suggesting that its vagueness could be open to abuse and stand in the way of “truth”. For one discussion of whether and how (identity- and ideology-based) diversity and scientific values should impact the research that gets conducted and published in psychology, see Conry-Murray & Silverstein (2022).
Editors publishing in their own journals
There is some controversy surrounding the question of whether editors should be allowed to publish in the journals that they edit. Where this is permitted, journals should have clear processes in place to evaluate articles submitted by editors to their own journals.
Helgesson et al. (2022) conducted a systematic review of the prevalence of this practice. They found large variability in self-publishing across fields, journals, and editors, but also that in some situations levels of self-publication are very high. This is corroborated by Bishop (2020), who found that in some cases the most prolific author in a journal is the editor-in-chief! In addition, among biomedical journals where a few authors were responsible for a disproportionate number of publications, the most prolific author was part of the editorial board in over 60% of cases (Scanff et al., 2021).
Diversifying social science research
An argument can be made that social science disciplines are characterized by different kinds of systemic inequality. For example, elitism is one source of inequality – Kawa et al. (2019) found that the top 5 or so anthropology departments in the USA all hire each other's graduates. Another example is authors’ limited geographical distribution. Of the authors who published articles in the top five developmental psychology journals between 2006 and 2010, fewer than 3% were from countries in Central or South America, Africa, Asia, or the Middle East (Nielsen et al., 2017). This limitation also extends to journal editors (Altman & Cohen, 2021). This is reflected in citation patterns too – this article points to the trend of inadequately citing or omitting scholarship by women and people of color and suggests some best practices journals can adopt to avoid this.
Some fields have developed local resources that could be adapted to other fields. For example, Roberts et al. (2020) examine racial inequality in psychological research to date and offer recommendations for editors and authors for working towards research that benefits from diversity in editing, writing, and participation. Buchanan et al. (2020) also discuss strategies for upending racism in psychological science. In a different paper, Buchanan et al. (2021) propose a Diversity Accountability Index for Journals (DAI-J) in order to increase awareness and establish accountability across psychology journals. The American Psychological Association provides a detailed Equity, Diversity, and Inclusion Toolkit for Journal Editors.
There are several areas where editors may have an opportunity to encourage a more inclusive use of language when communicating about science (Khan, 2021).
For example, when discussing open peer review, one might use the term “masked” instead of “blind”; many journals have also adopted this change in wording. See this American Psychological Association blog post for an explanation of why.
Similarly, “go-list” / “allow-list” / “green-list” and “stop-list” / “block-list” / “red-list” can be used instead of “white-list” and “black-list” when talking about predatory journals. See Houghton (2018) for an explanation of why.
Collecting demographic data
One step towards diversifying social science research is to have a better picture of who constitutes a journal’s authors, reviewers, and editors. Collecting demographic data may help efforts to increase diversity.
Editors at Personal Relationships have created a “Diversity matrix” template that can be used and adapted to collect data from authors, reviewers, editorial board members, editors, etc.
However, it is important to acknowledge that collecting demographic data may not be straightforward. For example, there may be geographical differences in how (and whether) to require authors to specify the demographic characteristics of their samples or themselves; see Jugert et al. (2021) and Juang et al. (2021) for discussions of these tensions within a European context.
Some journals and/or societies choose to adopt a sociocultural policy in order to contextualize samples and study findings (see for example this policy from the Society for Research in Child Development).
Authors, reviewers, and editors
Proceedings of the Python in Science Conferences undertook a survey of the demographics of their authors and reviewers; see blog posts summarizing their results from 2020 and 2021.
This blog post summarizes the American Geophysical Union’s (AGU’s) efforts to increase diversity in their reviewers and editors.
Another way to encourage authors to reflect on and report demographic characteristics, and how these may have affected their approach to the research, is to require positionality statements or to offer authors the opportunity to provide them. Positionality statements are made following a process of reflexivity, whereby authors examine the “conceptual baggage” that they bring to the research. Positionality statements also have many benefits unrelated to diversity (beyond the scope of this resource).
Who is responsible for advancing diversity?
Research communities which adopt diversity as one of their goals should avoid over-burdening under-represented scholars with diversity-related tasks. Under-represented scholars are sometimes expected to play a disproportionate role in advancing diversity and inclusion in institutions (Jimenez et al., 2019, or read this summary). This may also be the case in journal editorial teams, and hence it is likely worthwhile to keep a note of who is advancing diversity and inclusion in your team to ensure a fair distribution of effort.
English language editing
If your journal publishes in English, you can consider offering an additional free editing service for manuscripts submitted by those for whom English is a second language. For example, see the International Section guidelines at Personal Relationships and this article about how the International Section is contributing to elevating international scholarship.
Retroactive name changes
Although retroactive name changes may seem strangely placed in a section on diversity, the historical inability to retroactively change names in publications is likely to disproportionately impact marginalized individuals. Women are historically more likely than men to change their name after marriage and/or divorce. Although many authors who have changed their names after marriage or divorce have no problem authoring publications under different names, for others viewing their previous name will be traumatic (e.g., where a marriage ended because of domestic violence). In addition, transgender individuals often choose to change their names, and being referred to by their previous name ("deadnaming") can be traumatic for them.
This trauma can be avoided by implementing a name change policy. These policies can be implemented at the publisher level (e.g., see these from Wiley and Elsevier), the society level (e.g., see these from the American Psychological Association and the American Chemical Society), or the individual journal level.
Considering citation disparities
Gender (Chatterjee & Werner, 2021; Dworkin et al., 2020; Kozlowski et al., 2022) and racial (Chakravartty et al., 2018; Kozlowski et al., 2022) citation disparities have been found across a number of fields.
There have since been calls for more awareness, transparency, and communication about potential biases in citation practices (summarized in Kwon, 2022). Tools have been created to assess the gender balance (e.g. GBAT) and the racial/ethnic diversity (e.g. this Citation Audit Template) of the authors cited in bibliographies (although see Syed (2022) for a critique of these tools, what they imply, and the technology that underpins them).
Collectives such as the “Cite Black Women” movement go beyond these tools and instead focus on reading, appreciating, and acknowledging the scientific contributions of marginalized scholars.
As editors, you may consider suggesting or encouraging authors to reflect on possible citation disparities in their work at either the submission or review stage.
A strong consensus is emerging in the social sciences and cognate disciplines that knowledge claims are more understandable and evaluable if scholars describe the research processes in which they engaged to generate them. Citing and showing the evidence on which claims rest (when this can be done within ethical and legal constraints), discussing the processes through which evidence was garnered, and explicating the analysis that produced the claims facilitate expression, interpretation, reproduction, and replication. The Committee on Publication Ethics has a list of principles of transparency and best practice in scholarly publishing.
Nosek et al. (2015) presents an overview of the Transparency and Openness Promotion (TOP) Guidelines for journals, which have been used to generate the journal-level TOP Factor and provide a clear view of areas in which editors can consider steps towards more open science at their journals. See this summary table and example policy text for achieving a TOP level of 1, 2, or 3 for different standards. For an example of how to explicitly signal adherence to TOP guidelines at your journal, see this example policy and author checklist from Cortex. You can even contact the policy team at the Center for Open Science to talk through your journal’s specific needs with regards to editing your policies and practices in line with TOP Guidelines. A similar initiative is the DA-RT Journal Editors’ Transparency Statement (JETS), a product of the American Political Science Association’s 2010 Data Access and Research Transparency initiative.
Resources for authors
Aczel et al. (2020) present a consensus-based checklist to improve and document the transparency of research reports in social and behavioral research along with an online application that allows users to complete the checklist and generate a report that they can submit with their manuscript or post to a public repository.
Data and code
A set of stable core practices has begun to emerge with regard to data management and sharing. While all are readily available, some require more effort on the part of journals than others. For example, requiring authors to share data via a trusted digital repository (and not e.g. via a personal website) is easier than checking that the author's code runs. Iain Hrynaszkiewicz (Public Library of Science (PLOS)) has written a chapter outlining “Publishers’ Responsibilities in Promoting Data Quality and Reproducibility” that describes practical approaches being taken by publishers to promote rigor and transparency in data practices.
Journals are increasingly adopting data and code availability policies. The Research Data Alliance has developed a Research Data Policy Framework for all journals and publishers including template policy texts which can be implemented by journals in their Information for Authors and publishing workflows.
There are many repositories to choose from, and different repositories will have benefits for different fields and needs. For lists of recommended repositories, see for example: section 2.2.1 in F1000 Research’s data guidelines; Springer Nature’s list of social science repository examples; and PLOS One’s recommended repositories. The Society for Personality and Social Psychology has created a matrix of different trusted repositories and what they offer, and re3data provides a searchable database of repositories and their attributes, categorized by subject, content type, and country.
Goodman et al. (2014) state that “A proper, trustworthy archive will: (1) assign an identifier such as a “handle” (hdl) or “digital object identifier” (doi) (see the DOI handbook); (2) require that you provide adequate documentation and metadata; and (3) manage the “care and feeding” of your data by employing good curation practices”. The Research Data Alliance has created 10 Things for Curating Reproducible and FAIR Research, and there are also the CoreTrustSeal Trustworthy Data Repositories Requirements. See also the FAIR Guiding Principles for data management and stewardship.
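To make point (2) concrete, the fragment below sketches the kind of minimal metadata record a trusted repository might collect when minting a DOI. It loosely follows the DataCite metadata schema, but the field names and values here are illustrative rather than authoritative – consult the schema documentation for the actual requirements.

```json
{
  "identifier": {"identifierType": "DOI", "identifier": "10.1234/example.dataset"},
  "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
  "titles": [{"title": "Survey responses on editorial practices, 2022"}],
  "publisher": "Example Data Repository",
  "publicationYear": "2022",
  "resourceType": {"resourceTypeGeneral": "Dataset"}
}
```

Even this small set of fields is enough to make a deposit findable and citable, which a file on a personal website is not.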
In some fields (e.g. economics), it is common practice to have on the editorial team a specific “data editor”. These data editors are responsible for creating, adapting, and implementing data and code sharing policies at their respective journals. Data editors often share important information that can be useful for other editors looking to adopt or adapt their existing data and code sharing policies. For example, the data editor websites for the American Economic Association, The Review of Economic Studies, and The Economic Journal all offer a wealth of information and advice. See, for example, the American Economic Association policy on revisions of data and code deposits in the AEA data and code repository.
It is important to note that there will be cases where data cannot be shared, and that it is important to be “as open as possible, as closed as necessary”. See Meyer (2018) for an excellent guide on “ethical data sharing”.
FORCE11 and COPE have developed some recommendations for the handling of ethical concerns relating to the publication of research data. See their blog post and the recommendations themselves here.
There may be particular types of data that are more difficult to share – e.g., data on sensitive topics. For a case study that includes challenges, tools, and future directions of sharing data in these cases, please see Joel et al. (2018). For a case study that discusses the redaction of sensitive data, see Casadevall et al. (2013). Many repositories offer restricted access to data, e.g. the repositories organized in Data-PASS (e.g. ICPSR, QDR, Odum, Databrary), and almost all repositories organized in CESSDA.
Mandating or encouraging authors to share data alongside their manuscripts means that reviewers (and later, readers) can:
- See the structure of the data more clearly
- Run additional analyses
- Use the data to answer new questions
In addition, there may be a citation advantage for authors sharing data (Colavizza et al., 2020).
Mandating or encouraging authors to share code alongside their manuscripts means that reviewers (and later, readers) can:
- See the analyses that were conducted more clearly
- Check whether the code runs on another (similarly structured) dataset
Authors can also give reviewers the opportunity to check basic code functionality by providing synthetic data. Dan Quintana has done a lot of work promoting the sharing of synthetic datasets and providing resources to help authors do so – see his YouTube video, blog post, and Quintana (2020).
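As a rough illustration of the idea (not Quintana's own workflow – dedicated tools exist for this, such as the R package synthpop), a reviewer-facing synthetic dataset can be as simple as resampling each column from its observed distribution. The hypothetical `synthesize` helper below sketches this for numeric data:

```python
import numpy as np
import pandas as pd

def synthesize(df, n=None, seed=42):
    """Create a crude synthetic copy of a numeric DataFrame.

    Each column is drawn independently from a normal distribution with
    that column's mean and standard deviation, so the synthetic data
    preserves the marginals (but not the correlations) of the original.
    That is often enough for a reviewer to confirm that analysis code
    runs end to end without ever seeing the real data.
    """
    rng = np.random.default_rng(seed)
    n = n or len(df)
    return pd.DataFrame(
        {col: rng.normal(df[col].mean(), df[col].std(), size=n)
         for col in df.columns}
    )
```

Real synthetic-data tools preserve joint distributions and handle categorical variables; this sketch only shows the basic principle.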
Open data and code
Mandating or encouraging authors to share both data and code alongside their manuscripts has all the above benefits of sharing either separately, but also means that reviewers (and later, readers) can check the computational reproducibility of the results (i.e., does running the code on the data produce the same results as those reported in the paper?).
The American Economic Association provides helpful guidance on implementation of their data and code availability policy that could easily be applied to other journals and fields. The Berkeley Initiative for Transparency in the Social Sciences (BITSS) offers a handy guide on conducting reproducibility checks.
For a discussion of the impact of journal data policy strictness on the code re-execution rate (i.e., how likely the code is to run without errors) and a set of recommendations for code dissemination aimed at journals, see Trisovic et al. (2020).
Pre-publication verification of analyses
Some journals have now adopted a policy whereby data and code are not only required for publication in the journal, but must be checked before publication to ensure that the analyses are reproducible – that the results in the manuscript match the results that are produced when someone who is not one of the authors re-runs the code on the data. This is called pre-publication “verification of analyses”, “data and code replication”, or “reproduction of analyses”. See Willis & Stodden (2020) for a useful overview of how to leverage policies, workflows, and infrastructure to ensure computational reproducibility in publication.
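At its core, a verification check re-executes the submitted code and compares the regenerated results with the submitted ones. The sketch below (with hypothetical file names for the analysis script and results file) illustrates the strictest, byte-for-byte version of that comparison:

```python
import hashlib
import subprocess
import sys
from pathlib import Path

def sha256_of(path):
    """Hash a results file so two runs can be compared byte for byte."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify(script, output, reported_hash):
    """Re-run the authors' analysis script and check whether the results
    file it writes matches the one submitted with the manuscript."""
    subprocess.run([sys.executable, script], check=True)  # re-execute the pipeline
    return sha256_of(output) == reported_hash
```

In practice, verifiers usually compare the extracted numbers within a tolerance rather than raw bytes, since floating-point results can differ slightly across platforms and library versions.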
For more information on how to implement a policy like this at your journal, see the Data and Code Guidance by Data Editors developed by Lars Vilhuber and colleagues, which is used by the American Economic Association journals, Canadian Journal of Economics, the Review of Economic Studies, and the Economic Journal as a reference.
Several journals in political science also require pre-publication verification. See for example State Politics & Policy Quarterly, Political Analysis, and the American Journal of Political Science.
The social sciences are increasingly adopting the use of permanent identifiers, such as digital object identifiers (DOIs), for datasets, making it easier to find and cite data sources. There may be additional stylistic guidelines that are discipline-specific, for example this guide to dataset references from the American Psychological Association.
Many guides for data citation exist that can be shared with authors and/or adopted as a policy at your journal:
- TOP Guidelines (section on Citation Standards)
- Joint Declaration of Data Citation Principles
- Guidance on Data Citations (including a widget for computing citations)
- DataCite (including DataCite Metadata Schema)
- How to Cite Data and Statistics
- Citing data sources – Why is it good and how to do it?
DataSeer is intended to help journals flag references to data in manuscripts, which could be used to ensure all data referenced are being properly cited.
Heterogeneous data and analytic materials
While this section is framed in terms of numeric data and computer code, it is worth noting that cognate considerations arise in all circumstances where authors use a combination of data and analytic reasoning to make their findings. For example, authors can also make qualitative data and materials available for case studies that used process tracing analyses and relied on interviews and archival data.
Open science badges
One way of incentivizing open science is to offer open science badges to signal and reward when underlying data, materials, or preregistrations are available. Implementing badges is associated with an increasing rate of data sharing, as seeing colleagues practice open science signals that new community norms have arrived. See the guidance on badges by the Center for Open Science for more information on how to implement badges at your journal. However, it is important to note that receiving a badge for sharing data and code does not necessarily mean that analyses are reproducible.
Verification Reports (VRs) are an article format focusing specifically on computational reproducibility and analytic robustness. VRs meet this objective by repeating the original analyses or reporting new analyses of original data. In doing so they provide the verifiers conducting the investigation with professional credit for evaluating one of the most fundamental forms of credibility: whether the claims in previous studies are justified by their own data. Chris Chambers has introduced this format at Cortex (see his introductory editorial and materials for journals). For examples of the first two VRs published by Cortex, see Chalkia et al. (2020) and Mirman et al. (2021). If you’re interested in including VRs as an article type at your journal, Cortex’s author guidelines provide more information on this format.
Preregistering research means specifying a research plan in advance of a study and submitting it to a registry. Preregistration separates hypothesis-generating (exploratory) from hypothesis-testing (confirmatory) research. See the Center for Open Science’s guide to preregistration for more information. Journals can choose to encourage, reward, or require preregistration for confirmatory research.
There are several independent, institutional registries for preregistering research; for example, ClinicalTrials.gov, the American Economic Association RCT Registry, the Open Science Framework, the EGAP Registry, and the Registry for International Development Impact Evaluations.
There are many templates available for preregistering different types of research. For example, the Center for Open Science has several options available here, including more general templates and templates specific to social psychology, qualitative research, and secondary data. See van den Akker et al. (2021) for a systematic review preregistration template.
If you choose to encourage, reward, or require preregistration for confirmatory research at your journal, you can think about whether to suggest or require that authors use a specific registry or template for studies submitted to your journal.
Registered Reports (RRs) are a publishing format, used by over 250 journals, that emphasizes the importance of the research question and the quality of the methodology by conducting peer review prior to data collection. High-quality protocols are then provisionally accepted for publication, provided the authors follow through with the registered methodology. This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings. Although RRs are usually reserved for hypothesis-testing research, a version for exploratory research – Exploratory Reports – is now also being offered.
RRs are a new and evolving format, and so there are many people working to improve RRs for authors, reviewers, and editors. To read a summary of these conversations, or add to them, please see this working document.
Resources for editors
See the resources for editors by the Center for Open Science for more information on implementing Registered Reports at your journal. More advice for reviewers and editors can be found in Box 2 and 3 of this preprint by Chris Chambers and colleagues. For an example of an RR policy, see the RR policy for Cortex.
Resources for authors and reviewers
The Journal of Development Economics has developed a website dedicated to guidance for authors and reviewers on Registered Reports. Eike Rinke and colleagues at the Journal of Experimental Political Science have also created a great FAQ page for authors.
PCI Registered Reports
An exciting new initiative, Peer Community In Registered Reports (PCI-RR), offers free and transparent pre- and post-study recommendations, managing the peer review of Registered Report preprints. The peer review is independent of journals but is endorsed by a growing list of journals that accept PCI-RR recommendations. Read about PCI-RR and see the PCI-RR Journal Adopter FAQ for more information.
Open Peer Review
“Open review” means different things depending on what is open and to whom. Open review is a good example of a situation where the most open version may not be the fairest or best option, as there are many factors to take into account. It is also important to distinguish between open identities and open reviews (both of which, confusingly, can be called open reviewing). For an overview of the various definitions of Open Peer Review, see Ross-Hellauer (2017). For a review of the benefits and limitations of open peer review (some outlined below), see Besançon et al. (2020). For guidelines on implementing open peer review, see Ross-Hellauer and Görögh (2019).
Journals will have a policy on whether editors, reviewers, and authors know each other’s identities. Transpose (TRANsparency in Scholarly Publishing for Open Scholarship Evolution) offers a database that includes information about which type of peer review different journals are currently using.
In fully open review, everyone knows everyone's identity. This is argued to increase the accountability of reviewers, leaving less scope for biased or unjustified judgements. Godlee et al. (2002) offers a good introduction to the benefits of making reviewer identities open.
In a single-masked system, only the reviewers are anonymous. The aim is to be as open as possible while ensuring that reviewers are not treated unfairly (or do not fear being treated unfairly) for giving unfavorable reviews. Indeed, it has been found that reviewers are less likely to express criticism (Mulligan et al., 2013; Ross-Hellauer et al., 2017) and less likely to recommend rejection (Bravo et al., 2019; Bruce et al., 2016; Sambeek & Lakens, 2021; Walsh et al., 2000) if their identity is known to authors.
In a double-masked system, the editor knows the identities of reviewers and authors, reviewers and authors both know the identity of the editor, but authors and reviewers do not know each other's identity. This attempts to eliminate reviewers' personal biases (e.g. those based on gender, seniority, reputation, and affiliation); see Tomkins et al. (2017) for evidence that single-masked review is biased towards papers with famous authors and authors from high-prestige institutions. See Nature’s editorial adopting an optionally double-masked system. Kern-Goldberger et al. (2022) conducted a systematic review of the impact of double-masked peer review on gender bias in scientific publishing, and the studies show mixed results. When given the choice, corresponding authors from less prestigious institutions are more likely to choose double-masked review (McGillivray & De Ranieri, 2018).
In a triple-masked system, the identities of the editor, authors, and reviewers are all masked from each other. This acknowledges the possible personal biases of editors as well as reviewers. For some examples of triple-masked review in practice, see the guidelines for authors from Comparative Political Studies and Perspectives on Politics.
There is no one-size-fits-all solution, but it is important to think carefully about which policy makes sense for your journal. If you choose a single-masked or double-masked system, you also need to decide what to do if reviewers choose to sign their reviews: some journals remove these signatures, while others allow reviewers to sign if they wish. If your field has a high percentage of desk rejections (e.g. political science: Garand & Harman, 2021), then you may wish to consider triple-masked review, as editor bias will have a bigger impact in your field.
It is also possible to make the reviews themselves openly available alongside published manuscripts (with or without the reviewers being identified). This can help to give context to the published article. See this blog post discussing the launch of open reviews at Quantitative Science Studies. EASE provides a history of why some journals are making reviews open as well as a guide on how reviewers can publish their own review reports. The Publish Your Reviews initiative encourages peer reviewers to publish their reviews alongside the preprint of an article.
Open access (making research outputs freely available to all) is important for disseminating and sharing scientific results with scientists and members of the public around the world. See The Turing Way’s guide to open access here for an explanation of different versions of Open Access. Essentially, journals can publish open access articles in a hybrid model (where not all articles are open access, but some are), or be fully open access (where all articles are open access). Journals and publishers decide how to fund this – usually through charging authors for making their articles open access.
Although many journals are now open access from the outset, many journals that previously did not offer open access have made or are making the transition to some form of open access. For example, the editorial team from the journal Lingua broke off from Elsevier and launched a fully open access journal, Glossa; their journey is described in the next section.
Becoming an independent open access journal
There are some outstanding examples of existing journals shifting from a commercial publisher or university press to an independent, fully open access journal, and of new independent, fully open access journals being founded. See some of the examples below for inspiration! It is also important to consider whether and how your journal will be indexed, for example in Web of Science (see here for their evaluation process and selection criteria).
The editorial team from the journal Lingua broke off from Elsevier and launched Glossa. Here are some slides on the transition, a blog post by one of the previous Executive Editors, and an interview with one of the previous associate editors.
The editorial team from the Journal of Informetrics also broke off from Elsevier and launched Quantitative Science Studies. See this article about the transition.
The Journal of Privacy and Confidentiality made the shift to being an independent journal – see these 2018 editorials on their relaunch (Vilhuber, 2018) and future (Dwork, 2018). They are now working on creating a non-profit organization to house the journal.
Willis and Stodden (2020) highlight nine decision points for journals looking to improve the quality and rigor of computational research and suggest that journals reporting computational research aim to include “assessable reproducible research artifacts” along with published articles.
The American Journal of Political Science Verification Policy provides a role-model example of how computational research can be made more rigorous and error-free through additional steps in the editorial process – but it also shows how this requires resources on top of the procedures editors have become accustomed to over decades.
Encouraging authors to acknowledge limitations
It is important that authors are transparent about and own the limitations of their work. Hoekstra and Vazire (2020) provide a set of recommendations on how to increase intellectual humility in research articles that can be used as both author and reviewer guidelines. In addition, editors who want to incentivize intellectual humility in their journals can implement policies that make it clear to authors and reviewers that owning the limitations of one’s research will be considered a prerequisite for publication, rather than a possible reason to reject a manuscript. For an example of such a policy, see the reviewing policies for Management and Organization Review. See also these two editorials from Nature Human Behaviour for short overviews of issues and solutions (“Tell it like it is” and “Not the first, not the best”).
Many journals now have policies on publishing replication studies. Under the “pottery barn rule”, a journal agrees to publish a direct replication of any study previously published in that journal. Some journals go beyond this, agreeing to publish a replication of any study published in a major journal. To ensure that replications are assessed on the quality of their design rather than their results, a replication policy can include results-masked review and/or accept replications only as Registered Reports (see above).
Royal Society Open Science provides a great example of a replication policy that adopts the (extended) pottery barn rule and offers two tracks for review (results-masked or Registered Report). See this blog post introducing their policy.
The Institute for Replication has a compilation of sample replication reports from various Economics and Political Science journals, replication instructions, and educational material on replication and open science that can be shared with authors.
Editors might consider suggesting or requiring that authors abide by discipline-specific reporting standards that include transparency requirements. For example, the American Psychological Association maintains a list of Reporting Standards for Research in Psychology. Specific reporting guidelines also exist for certain methodologies; e.g. the EQUATOR Network has a list of reporting guidelines for the main study types in health research, including the PRISMA guidelines for systematic reviews.
You may consider adding requirements for specific article types in your submission guidelines, e.g. see the examples from JAMA.
One topic on which editors might consider providing specific guidance is statistics. Hardwicke et al. (2022) found large gaps and inconsistent coverage in the statistical guidance provided by top-ranked journals across scientific disciplines, and the number of journals including statistical guidance for authors has changed little since the 1990s (Hardwicke & Goodman, 2020). When creating statistical guidance, it is important to consider both what authors are being asked to report and how they are being asked to interpret statistics. Fidler et al. (2004) found that even though instructing authors to provide confidence intervals in their papers increased this practice, authors very rarely used these intervals to inform their discussion of the results. It is also important that editorial policies not selectively favor one side in controversies about statistical significance tests (Mayo, 2021).
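To make the reporting point concrete, guidance that asks authors to report interval estimates alongside point estimates might expect something like the following. This is a minimal Python sketch, not a recommendation of any particular procedure: the function name is invented, and it uses a normal approximation (z = 1.96), whereas small samples would call for a t-based interval.

```python
import math
import statistics

def mean_with_ci(sample, z=1.96):
    """Return the sample mean with an approximate 95% confidence interval.

    Uses a normal approximation (z = 1.96); for small samples a
    t-based interval is more appropriate.
    """
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, (m - z * se, m + z * se)
```

For the sample [1, 2, 3, 4, 5] this yields a mean of 3 with an interval of roughly [1.61, 4.39] – the kind of "M = 3.0, 95% CI [1.61, 4.39]" statement that guidance can ask authors to report and, as Fidler et al. note, actually discuss.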
Preprints are a publicly available version of any type of scientific manuscript/research output preceding formal publication (Parsons et al., 2022). Journals can support preprints by including links to preprint versions of the paper in the final published version, providing integrated workflows that enable authors to easily post a preprint when they submit their work to a journal (e.g. see SciPost), supporting peer review organized around preprint platforms, or even making posting of preprints mandatory before review.
Reconsideration of previously rejected submissions
It is important to have a policy on how to handle appeals to reconsider previously rejected submissions. However, many journals do not currently have detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). For an example of a reproducible and established appeal policy, see the Infant and Child Development section on “Decision Appeals”.
Improving the quality of reviews
Although academics are expected to peer-review articles as part of their job, they often receive little (or no) formal training for this. Early career researchers are often keen to be involved in reviewing papers, but without having had many (or any) of their own papers reviewed, they don’t know what a review should look like. Here are some how-to guides from different fields that editors can share with their reviewers in order to help increase the quality of the reviews they receive.
- General: Berk et al. (2015), Faff (2018), Haggerty (2012), Hames (2016), Heddle et al. (2009), Lucey (2013), McPeek et al. (2009), Miller et al. (2013), Ngiam (2022), PLOS (2020)
- Accounting: Dalton et al. (2016), Kachelmeier (2004), Oler et al. (2016)
- Computer Science: Cormode (2009)
- Ecology: Scrimgeour et al. (2016)
- Economics: Berk et al. (2017)
- Health: Alexander (2005)
- Information Systems: Hirschheim (2008)
- Management: Carpenter (2009), Lee (1995), Lepak (2009), Leung et al. (2014)
- Biomedical Science: Christensen et al. (2010), Heddle & Ness (2009), Hoppin (2002), Mayden (2012), Rosenfeld (2010)
- Physiology: Benos et al. (2003), Guilford (2001), Seals et al. (2000)
- Political Science: Esarey (2015), Hall et al. (2019), Krupnikov & Levine (2015), Miller et al. (2013), Nyhan (2015)
- Psychology: Epstein (1995)
In addition, there are papers covering what not to do as a reviewer – for example, humiliating the authors (Comer et al., 2014) or being adversarial (Cormode, 2009).
Unique issues arise for reviewing interdisciplinary research – view and contribute to these conversations via this working document.
For a more complete bibliography organized alphabetically, see the list here.
EASE also provides several guides for reviewers in their Peer Review Toolkit, including one on how to write a review report and a list of peer review training options.
One way editors can help reviewers is to make explicit the values and requirements of the journal so that reviewers can reflect these in their responses. For example, Language Development Research makes it explicit in their reviewer guidelines that they wish to publish any research that meets their rigor criteria, without regard to the perceived novelty or importance of the findings.
Incentivising reviews
The peer review system can be conceptualized as a “gift economy”, i.e. a system in which previously published authors repay the labor that other scholars, acting as editors and reviewers, have invested in their manuscripts by reviewing for others (Kaltenbrunner et al., 2021). Despite this, it is becoming increasingly difficult to find reviewers for papers.
One way to incentivize reviews is to implement a way for authors to receive credit for the reviews they complete, and there are several ways to do this (see EASE's guide to rewarding reviewers). One of the most widely known and used is Publons – a way to track an individual’s verified peer review history. Reviewer Credits is a similar initiative, except individuals can use their credits in a “Reward Center” to get actual rewards (ranging from discounted Article Processing Charges to online courses). Alternatively, reviewers can choose to record their peer reviews on their ORCID profile. All of these require partnership with individual journals in order to be used by reviewers.
Reviewer and editorial collusion
Although hopefully very rare, there have been some reports of reviewer collusion and fraud. In some cases, authors set up fake accounts for suggested reviewers where they either impersonate actual researchers or create fictional academic characters in order to review their own papers (favorably, of course). In other cases, authors form “collusion rings” whereby they agree to suggest each other as reviewers and give favorable reviews to each other.
Sometimes editors themselves are complicit in such schemes. This includes the phenomenon of paper mills (see here for a blog post introduction) – organizations that, for a fee, submit manufactured manuscripts to journals on behalf of researchers in order to provide them with an easy publication, or that offer authorship for sale.
Educating editors and reviewers to make them aware of these issues is the first step to tackling them. Publishers can also set up control systems that check for suspicious behavior such as private email addresses (although this is not foolproof, as it will also flag innocent academics who use non-institutional addresses), and can ensure that papers always have at least one reviewer who was not suggested by the authors themselves.
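As a toy illustration of the kind of automated check a publisher might run, the sketch below flags free-mail addresses for a closer manual look. The domain list is a hypothetical sample, and – as noted above – a flag never implies wrongdoing, since many genuine academics use personal addresses.

```python
# Hypothetical sketch: flag reviewer email addresses at common free-mail
# domains for closer manual checking. The domain list is illustrative only.
FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def flag_for_manual_check(email: str) -> bool:
    """True if the address uses a free-mail domain and merits a second look."""
    domain = email.rsplit("@", 1)[-1].strip().lower()
    return domain in FREEMAIL_DOMAINS
```

Any flagged address would then go to a human – for example, to verify that a suggested reviewer’s identity matches a real, independent researcher.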
Detecting overlap between submitted and existing manuscripts
It can be useful to have tools and strategies to help your journal detect when a submitted manuscript violates your originality guidelines (e.g. a significant overlap with other work by the same author/s, self-plagiarism, or text recycling).
It is worth reaching out to your publisher to check whether they already offer any services to this end as part of the publishing platform and, if not, whether they would be willing to do so. For example, your journal (e.g. if you are with Taylor & Francis) may already be able to activate Crossref Similarity Check in Manuscript Central, which produces a report showing the sources of any overlapping text. You can also opt to send manuscripts to iThenticate, which is integrated into ScholarOne.
Sometimes authors will attempt to re-submit work that has already been rejected by your journal. Editorial Manager runs a similarity check based on title, abstract, and authors, which can flag these cases.
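For intuition, a crude version of such a resubmission check – comparing titles and abstracts – might look like the following minimal Python sketch. The function name and equal weighting are invented for illustration; real systems such as iThenticate or Editorial Manager’s duplicate check use far more sophisticated matching.

```python
import difflib

def resubmission_similarity(title_a, abstract_a, title_b, abstract_b):
    """Crude 0-1 similarity between two submissions' titles and abstracts."""
    t = difflib.SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio()
    a = difflib.SequenceMatcher(None, abstract_a.lower(), abstract_b.lower()).ratio()
    return 0.5 * t + 0.5 * a  # equal weighting is an arbitrary choice
```

A score near 1.0 would prompt an editor to compare the new submission against the earlier, rejected one before sending it out for review.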
Submission types
As well as traditional manuscript types (e.g., empirical articles, reviews and meta-analyses, and commentaries), several innovative new submission types are emerging. It may be worth considering which submission types make sense for your journal, and whether it would be useful to add a new one!
Some journals offer Datasets / Data Descriptors as their own submission type (akin to the type of manuscripts published in Nature Scientific Data). Journals offering this submission type will need to think through all the same decisions as they do when implementing a data and code availability policy for traditional articles (see our extensive section on this above), including ensuring data and code are hosted in a trusted repository.
Academy of Management Discoveries has implemented “Discoveries Through Prose”, where authors are invited to present their empirical findings in prose rather than in more traditional “academese”. This can include (but is not limited to) continuous anecdotes or guiding examples, placing methods/thick descriptions in appendices, no traditional academic headings, and incorporating data/findings throughout. See Kahn (2021) for an example manuscript of this type.
Publishing scientific criticism
In this context, “scientific criticism” means a formal written response to a published finding. Journals can handle scientific criticism in different ways, e.g., publishing letters, commentaries, or online comments (post-publication peer review, PPPR). Hardwicke et al. (2022) assessed PPPR policies and practices at journals and found that most journals don’t currently offer options for submitting PPPR, and those that do tend to impose strict length/timing limits. They also found that even at journals that do offer PPPR, very few PPPRs actually end up being published. Hardwicke and colleagues make several recommendations for journals that are interested in adopting PPPR in Box 2 of their article.
Peer review innovations
Peer review is the subject of a sustained and multifaceted debate, with many attempts underway to improve its perceived effectiveness, objectivity, transparency, and long-term sustainability. Many of these have already been discussed above, e.g. deanonymizing reviews and/or reviewers, acknowledging and incentivizing reviewing, etc. There are many more – for an analytical overview of ongoing peer review innovations, see Kaltenbrunner et al. (2022). They categorize peer review innovations by the object of peer review (what is being reviewed), the aim of peer review (why review is being performed), the role of peer review actors (who performs review), the nature of peer review (how review is performed), and the openness/transparency of peer review (what is available to whom during and after review).
Limitations of peer review
It is important to acknowledge that peer review is an imperfect method for ensuring the credibility of published science. In fact, peer review became more popular after World War II partly for logistical reasons – as a means to manage increasing submissions (Burnham, 1990).
Although not a concrete piece of advice or “resource” as such, it can be important for journal editors to keep in mind debates surrounding the limitations of peer review. Some useful papers include “Peer review: a flawed process at the heart of science and journals” (Smith, 2006), “The limitations to our understanding of peer review” (Tennant & Ross-Hellauer, 2020), “Is Peer Review a Good Idea?” (Heesen & Bright, 2021), and “Academic urban legends” (Rekdal, 2014).
One specific concern is that peer review may not be reliable or consistent (even taking into account having multiple reviewers per paper). Two papers showing this are “The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation” (Cicchetti, 1991) and “Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment” (Cortes & Lawrence, 2021). It is debated whether this is a “feature or a bug”, and whether reliable and consistent peer review is even achievable. One potential solution is collaborative review, whereby groups of reviewers rate manuscripts on multiple dimensions and then gather to discuss and adjust their ratings. This model has not yet been applied to article peer review, but it has been used successfully in the repliCATS project (part of the SCORE program).
Others challenge the “thinking against” approach to peer review, and suggest that peer review needs to be reorganized into a “thinking with” model, or “care review” as Allegra Lab put it. Read about this idea in this Emergent Conversation series on “Peer Review as Intellectual Accompaniment”, including the introduction article “Beyond the Courtroom Model of Intellectual Exchange in Peer Review”. You can also watch a digital roundtable discussion on the same topic. A supportive and collaborative model of peer review like this better enables a communal rather than competitive/combative academic culture, which is important for increasing the participation of those from historically excluded groups (Murphy et al., 2020).
Some feel that there should be different norms for how papers can/should be written. For example, Mastroianni & Ludwin-Peer (2022) published a preprint summarizing a line of work with multiple studies. This preprint got a lot of attention because of its style – in the authors’ own words: “The paper you just read could never be published in a scientific journal. The studies themselves are just as good as the ones Ethan and I have published in fancy journals, but writing about science in this way is verboten...For instance, in a journal you're not allowed to say things like "we don't know why this happens." You're not allowed to admit that you forgot why you ran a study...You're supposed to be very serious; a reviewer once literally told me that my paper was "too fun" and that I should make it more boring. You're supposed to pack your paper with pointless citations because reviewers might like your paper more if they see their name in it. And if reviewers don't like your paper, they'll reject it and nobody will ever see it…”.
There are several schools of thought with regards to how to improve scientific peer review. Waltman et al. (2022) have proposed four: The Quality & Reproducibility school, the Democracy & Transparency school, the Equity & Inclusion school, and the Efficiency & Incentives school. Check out this blog post summarizing the preprint.
Editorial secrets
The publishing system is not as rigid as it seems from the outside. Some know this and take advantage of it, which is a source of inequities in publishing. Moin Syed wrote a great blog post sharing some of these “editorial secrets” with authors. However, each journal will have different secrets, so you may consider making yours explicit to authors in your submission guidelines. Some examples are outlined below.
The opportunity for authors to appeal against rejected manuscripts provides an important step in ensuring that high-quality and credible science is not incorrectly rejected from publication. However, there are considerable variations in appeal processes amongst journals, with little evidence of any detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). You may consider adopting a formal appeals policy that authors are made aware of so that this can be applied universally.
How important is it to include a cover letter with a manuscript submission? It seems that opinions differ. Nature Immunology considers cover letters to be “a dialog between the authors and the editors” where authors can “present their cases in a one- to two-page cover letter”. You may consider outlining in your submission guidelines what is expected in a cover letter, in order to ensure that authors do not spend unnecessary time on a letter that is never read (or worse, have their manuscript rejected for not spending enough time on one).
This infographic is a helpful guide for authors; if it corresponds well to what your journal expects from a cover letter, you may consider sharing it with authors in your submission guidelines.
Revise & resubmit workflows
Most journals leave it up to authors to determine what to include in a response to a revise & resubmit. Some authors write a very brief response, mostly referring reviewers back to the revised manuscript for a full re-review, whereas some write detailed breakdowns replying to each comment, copying excerpts from the revised manuscript with corresponding page numbers. It is fine to leave this up to author discretion, but if there is a preferred format you would like to receive the response in, you can consider creating a template that authors fill out, or at the least sharing some helpful guidelines with authors. Moin Syed has outlined one workflow/format in a blog post.
Formatting requirements need to balance the time and effort of authors, reviewers, editors, and publishers. Some journals choose specific guidelines that are most conducive to translating articles into the journal’s own formatting. Other journals allow format-free submissions and make the relevant changes later, once an article is accepted for publication. The European Association of Science Editors has developed a Quick-Check Table for Submissions in an attempt to standardize instructions across journals and identify key submission requirements (it is available in several languages).
Persistence and preservation
It is important to take steps towards the preservation of digital-only journals and other non-article-based digital scholarly outputs (e.g. interactive notebooks, video-based narratives, etc.). The Internet Archive’s Wayback Machine and CLOCKSS go some way towards this, but some digital-only journals are no longer findable even with these tools (see Laakso et al., 2020). The persistence of “computational” articles is even more important to consider, as a lack of persistence can easily occur through technological obsolescence and/or neglect (Pimentel et al., 2019; Wang et al., 2020).
List of contributors to this page (alphabetical by first name)
- Abel Brodeur
- Alison Ledgerwood
- Ana Trisovic
- Andrew Foster
- Arie Lewin
- Ashley Randall
- Barbara McGillivray
- Ben Ambridge
- Chase Harrison
- Chris Chambers
- Christopher Aberson
- Colin Elman
- Corine de Ruiter
- Crystal Steltenpohl
- David Mellor
- Dessi Kirilova
- Diana Kapiszewski
- Dillon Niederhut
- Eike Rinke
- Elena Naumova
- Elizabeth Chin
- Esther Plomp-Peterson
- Evelyn Ersanilli
- Felix Elwert
- Geoff Hodgson
- Ingo Rohlfing
- Jadranka Stojanovski
- James Green
- Jonathan Adler
- Katie Corker
- Kevin Arceneaux
- Kevin Rockman
- Koraly Perez-Edgar
- Lars Vilhuber
- Loukia Tzavella
- Ludo Waltman
- Maitreyee Shilpa Kishor
- Margaret Levenstein
- Mario Malicki
- Melissa Curran
- Michael Waibel
- Michael Weiss
- Moin Syed
- Olivia Lowrey
- Patrizio Tressoldi
- Pippa Smart
- Priya Silverstein
- Rick Gilmore
- Sebastian Deterding
- Sebastian Karcher
- Simine Vazire
- Sindiso Mnisi Weeks
- Tess Neal
- Thomas Rhys Evans
- Thu-Mai Christian
- Tom Hardwicke
- Tony Ross-Hellauer
- Wolfgang Kaltenbrunner