last updated 2021-09-01
JEDI members are jointly developing a collection of resources for journal editors in the social sciences. We are very grateful to JEDI members who have contributed resources for this page (see below for a list) and warmly welcome further suggestions. These can be made either by posting them to the mailing list, or by directly emailing Priya Silverstein at [email protected]. Corresponding links will then be added to this page.
Please check back here from time to time, as we hope this page will be updated regularly!
If you are a journal editor and are not yet a member, please join JEDI.
Table of Contents
- Incoming editors
- Diversifying social science research
- Open science
- Reconsideration of previously rejected submissions
- Improving the quality of reviews
- List of contributors to this page
Incoming editors
Publishers often offer guidelines and associated resources relating to editorial practices. While typically provided for editors of the journals that they publish, these resources may also be more generally helpful, so it is worth checking what your journal's publisher provides.
If you are just starting out as a journal editor, you might find the Committee on Publication Ethics’ short guide to ethical editing for new editors helpful, as well as this glossary of publishing and editing terms.
The PKP school also offers a free course in becoming an editor, focusing on how to perform the major tasks required of an editor for a scholarly journal, how to analyze and solve common problems that may arise when editing a scholarly journal, how to assist other members of the journal team, and where to look for help with difficult issues.
The Council of Science Editors has sample correspondence for an editorial office that you can customize to suit your journal, and also offers short courses in journal and manuscript editing. The European Association of Science Editors (EASE) offers training and webinars.
How associate/action editors are onboarded varies hugely across journals. Most journals provide some written guidelines, and some also offer opportunities for incoming editors to check that they have understood these guidelines. PCI Registered Reports does this (see their guide and test). Some journals have also created an onboarding video (see this video introduction for editors and reviewers at Collabra: Psychology).
The Committee on Publication Ethics has many resources to help journal editors deal with ethical issues, including guidelines and case studies. For example, they have guidelines on publication manipulation.
Diversifying social science research
A strong case can be made that systemic inequality exists within social science research. Roberts et al. (2020) examine racial inequality in psychological research to date and offer recommendations for editors and authors for working towards research that benefits from diversity in editing, writing, and participation. One recommendation is to require or offer the opportunity for authors to provide positionality statements, which are statements made following the process of reflexivity, whereby authors examine the “conceptual baggage” that they are bringing to the research.
However, it is important to acknowledge the geographical differences in how (and whether) to require authors to specify the demographic characteristics of their samples or themselves. For example, see Jugert et al. (2021) and Juang et al. (2021) for discussions of these tensions within a European context.
Some journals and/or societies choose to adopt a sociocultural policy in order to contextualise samples and study findings (see for example this policy from the Society for Research in Child Development).
If your journal publishes in English, you can consider offering an additional free editing service for manuscripts submitted by those for whom English is a second language. For example, see the International Section guidelines at Personal Relationships.
Open science
A strong consensus is emerging in the social sciences and cognate disciplines that knowledge claims are more understandable and evaluable if scholars describe the research processes in which they engaged to generate them. Citing and showing the evidence on which claims rest (when this can be done within ethical and legal constraints), discussing the processes through which evidence was garnered, and explicating the analysis that produced the claims facilitate expression, interpretation, reproduction, and replication. The Committee on Publication Ethics has a list of principles of transparency and best practice in scholarly publishing.
Nosek et al. (2015) present an overview of the Transparency and Openness Promotion (TOP) Guidelines for journals, which have been used to generate the journal-level TOP Factor and provide a clear view of areas in which editors can consider steps towards more open science at their journals. For an example of how to explicitly signal adherence to TOP guidelines at your journal, see this example policy from Cortex. A similar initiative is the DA-RT Journal Editors’ Transparency Statement (JETS).
Resources for authors
Aczel et al. (2020) present a consensus-based checklist to improve and document the transparency of research reports in social and behavioral research along with an online application that allows users to complete the checklist and generate a report that they can submit with their manuscript or post to a public repository.
Data and code
A set of stable core practices has begun to emerge with regard to data management and sharing. While all are well established, some require more effort on the part of journals than others. For example, requiring authors to share data via a trusted digital repository (and not, e.g., via a personal website) is easier than checking that the authors' code runs.
Journals are increasingly adopting data and code availability policies. The Research Data Alliance has developed a Research Data Policy Framework for all journals and publishers including template policy texts which can be implemented by journals in their Information for Authors and publishing workflows.
Mandating or encouraging authors to share data alongside their manuscripts means that reviewers (and later, readers) can:
- See the structure of the data more clearly
- Run additional analyses
- Use the data to answer new questions
It is important to note that there will be cases where data cannot be shared, and that it is important to be “as open as possible, as closed as necessary”. See Meyer (2018) for an excellent guide on “ethical data sharing”.
Mandating or encouraging authors to share code alongside their manuscripts means that reviewers (and later, readers) can:
- See the analyses that were conducted more clearly
- Check whether the code runs on another (similarly structured) dataset
Authors can also give reviewers the opportunity to check basic code functionality by providing synthetic data. Dan Quintana has done a lot of work promoting the sharing of synthetic datasets and providing resources to help authors do so -- see his YouTube video, blog post, and Quintana (2020).
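At its simplest, a synthetic dataset is a file that mirrors the real data's columns, types, and plausible value ranges, so that reviewers can confirm the analysis code runs end-to-end without seeing any real participants' data. The sketch below (Python, standard library only) illustrates the idea; the column names and ranges are hypothetical, and the resources above describe more sophisticated approaches that also preserve the statistical properties of the original data.

```python
# Minimal sketch: generate a synthetic dataset mimicking the structure of a
# (hypothetical) real dataset -- same columns and plausible ranges -- so that
# reviewers can check that analysis code runs. Dedicated synthetic-data tools
# additionally preserve correlations between variables; this sketch does not.
import csv
import random

random.seed(1)  # make the synthetic data reproducible

rows = []
for participant in range(1, 101):
    rows.append({
        "id": participant,
        "age": random.randint(18, 65),
        "condition": random.choice(["control", "treatment"]),
        "score": round(random.gauss(50, 10), 2),
    })

with open("synthetic_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "age", "condition", "score"])
    writer.writeheader()
    writer.writerows(rows)
```

Sharing a file like this alongside the analysis script lets a reviewer run the full pipeline even when the real data cannot be shared.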
Open data and code
Mandating or encouraging authors to share both data and code alongside their manuscripts has all the above benefits of sharing either of these separately, but also means that reviewers (and later, readers) can check the computational reproducibility of the results (i.e., does running the code on the data produce the same results that are reported in the paper).
The American Economic Association provides helpful guidance on implementation of their data and code availability policy that could easily be applied to other journals and fields.
For a discussion of the impact of journal data policy strictness on the code re-execution rate (i.e., how likely the code is to run without errors) and a set of recommendations for code dissemination aimed at journals, see Trisovic et al. (2020).
Pre-publication verification of analyses
Some journals have now adopted a policy whereby data and code are not only required for publication in the journal, but must be checked before publication to ensure that the analyses are reproducible -- that the results in the manuscript match the results that are produced when someone who is not one of the authors re-runs the code on the data. This is called pre-publication “verification of analyses”, “data and code replication”, or “reproduction of analyses”.
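At its core, such a check amounts to re-running the submitted code and comparing the regenerated results to the archived ones. The Python sketch below illustrates that comparison; the toy deterministic "analysis" and the file names are hypothetical stand-ins for the authors' real script and outputs.

```python
# Minimal sketch of a verification check: regenerate the results and compare
# them byte-for-byte (via hashes) to the results archived with the submission.
# A toy deterministic analysis stands in for the authors' real script; in
# practice the verifier re-runs the submitted code on the submitted data.
import hashlib
from pathlib import Path

def analysis(data):
    # Toy analysis: the mean of the data, written as a one-row CSV.
    return f"mean,{round(sum(data) / len(data), 3)}\n"

data = [4.1, 5.0, 6.2, 5.5]

# Output archived with the submission.
Path("archived_results.csv").write_text(analysis(data))

# The verifier independently re-runs the analysis.
Path("regenerated_results.csv").write_text(analysis(data))

def digest(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

reproducible = digest("archived_results.csv") == digest("regenerated_results.csv")
print("reproducible" if reproducible else "MISMATCH: manual review needed")
```

Real verification workflows add steps this sketch omits (pinning software versions, logging the environment, and handling analyses with legitimate randomness), but the re-run-and-compare loop is the common core.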
For more information on how to implement a policy like this at your journal, see the Data and Code Guidance by Data Editors developed by Lars Vilhuber and colleagues, which the American Economic Association journals, the Canadian Journal of Economics, the Review of Economic Studies, and the Economic Journal use as a reference.
The social sciences are increasingly adopting the use of permanent identifiers, such as digital object identifiers (DOIs), for datasets, making it easier to find and cite data sources.
See the Joint Declaration of Data Citation Principles for a set of guiding principles for citing data that can be shared with authors or adopted as policy at your journal. The Social Science Data Editors also provide Guidance on Data Citations for authors and data editors.
Heterogeneous data and analytic materials
While this section is framed in terms of numeric data and computer code, it is worth noting that cognate considerations arise in all circumstances where authors use a combination of data and analytic reasoning to make their findings. For example, authors can also make qualitative data and materials available for case studies that used process tracing analyses and relied on interviews and archival data.
Open science badges
One way of incentivizing open science is to offer open science badges to signal and reward when underlying data, materials, or preregistrations are available. Implementing badges is associated with an increasing rate of data sharing, as seeing colleagues practice open science signals that new community norms have arrived. See the guidance on badges by the Center for Open Science for more information on how to implement badges at your journal. However, it is important to note that receiving a badge for sharing data and code does not necessarily mean that analyses are reproducible -- for this we turn to pre-publication verification of analyses.
Verification Reports
Verification Reports (VRs) are an article format focusing specifically on computational reproducibility and analytic robustness. VRs meet this objective by repeating the original analyses or reporting new analyses of original data. In doing so, they provide the verifiers conducting the investigation with professional credit for evaluating one of the most fundamental forms of credibility: whether the claims in previous studies are justified by their own data. Chris Chambers has introduced this format at Cortex (see his introductory editorial). For examples of the first two VRs published by Cortex, see Chalkia et al. (2020) and Mirman et al. (2021). If you’re interested in including VRs as an article type at your journal, Cortex’s author guidelines provide more information on this format.
Registered Reports
Registered Reports (RRs) are a publishing format, used by over 250 journals, that emphasizes the importance of the research question and the quality of the methodology by conducting peer review prior to data collection. High-quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology. This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings. Although RRs are usually reserved for hypothesis-testing research, a version for exploratory research -- Exploratory Reports -- is now also being offered.
RRs are a new and evolving format, and so there are many people working to improve RRs for authors, reviewers, and editors. To read a summary of these conversations, or add to them, please see this working document.
Resources for editors
See the resources for editors by the Center for Open Science for more information on implementing Registered Reports at your journal. More advice for reviewers and editors can be found in Box 2 and 3 of this preprint by Chris Chambers and colleagues. For an example of an RR policy, see the RR policy for Cortex.
Resources for authors and reviewers
The Journal of Development Economics has developed a website dedicated to guidance for authors and reviewers on Registered Reports. Eike Rinke and colleagues at the Journal of Experimental Political Science have also created a great FAQ page for authors.
Open Peer Review
“Open review” means different things depending on what is open and to whom. Open review is a good example of a situation where the most open version may not be the fairest or best option, as there are many factors to take into account. It is also important to distinguish between open identities and open reviews (both of which, confusingly, can be called open reviewing). For a review of the benefits and limitations of open peer review (some outlined below), see Besançon et al. (2020). For guidelines on implementing open peer review, see Ross-Hellauer and Görögh (2019).
Journals will have a policy on whether editors, reviewers, and authors know each other’s identities.
In a single-blind system the editor knows the identities of reviewers and authors, and reviewers know the identity of the authors, but authors do not know the identity of the reviewers. Concealing reviewer identities is intended to allow reviewers to give candid assessments without fear of reprisal.
In a double-blind system reviewers are also blind to the identity of the authors, in an attempt to reduce personal biases of reviewers (e.g. those based on gender, seniority, reputation, and affiliation); in some implementations even the editor is blind to the authors' identities, taking into account the possible personal biases of the editor. See Nature’s editorial adopting an optionally double-blind system and Tomkins et al. (2017) for evidence that single-blind review is biased towards papers with famous authors and authors from high-prestige institutions.
In open reviewing everyone knows the identity of everyone else. This is argued to be more ethical, in that it increases the accountability of the reviewer, leaving less scope for biased or unjustified judgements. Godlee et al. (2002) offer a good introduction to the benefits of making reviewer identities open. However, some worry that reviewers will be less likely to express criticism (Mulligan et al., 2013; Ross-Hellauer et al., 2017) and less likely to recommend rejecting articles (Bravo et al., 2019; Bruce et al., 2016; Sambeek & Lakens, 2021; Walsh et al., 2000) if their identity is known to authors.
There is no one-size-fits-all solution, but it is important to think carefully about which policy makes sense for your journal. If you choose a single-blind or double-blind system, you also need to decide what to do if reviewers choose to sign their reviews. Some journals remove these signatures, and some allow reviewers to sign if they wish.
It is also possible to make the reviews themselves openly available alongside published manuscripts (with or without the reviewers being identified). This can help to give context to the published article.
Willis and Stodden (2020) highlight nine decision points for journals looking to improve the quality and rigor of computational research and suggest that journals reporting computational research aim to include “assessable reproducible research artifacts” along with published articles.
The American Journal of Political Science Verification Policy provides a model example of how computational research can be made more rigorous and error-free through additional steps in the editorial process – but it also shows that this requires resources on top of the procedures editors have become accustomed to over decades.
Encouraging authors to acknowledge limitations
It is important that authors are transparent about and own the limitations of their work. Hoekstra and Vazire (2020) provide a set of recommendations on how to increase intellectual humility in research articles that can be used as both author and reviewer guidelines. In addition, editors who want to incentivize intellectual humility in their journals can implement policies that make it clear to authors and reviewers that owning the limitations of one’s research will be considered a prerequisite for publication, rather than a possible reason to reject a manuscript. For an example of a policy like this, see the reviewing policies for Management and Organization Review.
Many journals now have policies on publishing replication studies. Subscribing to the “pottery barn rule” means that a journal agrees to publish a direct replication of any study it has previously published. Other journals go beyond this and agree to publish a replication of any study published in a major journal. To ensure that replications are assessed on the quality of their design rather than their results, a replication policy can include results-blind review and/or accept replications only as Registered Reports (see above).
Royal Society Open Science offers a great example of a replication policy that adopts the (extended) pottery barn rule, and offers two tracks for review (results blind or Registered Report). See this blog post introducing their policy.
Reconsideration of previously rejected submissions
It is important to have a policy on how to handle appeals to reconsider previously rejected submissions. However, many journals do not currently have detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). For an example of a reproducible and established appeal policy, see the Infant and Child Development section on “Decision Appeals”.
Improving the quality of reviews
Although academics are expected to peer-review articles as part of their job, they often receive little (or no) formal training for this. Early career researchers are often keen to be involved in reviewing papers, but without having had many (or any) of their own papers reviewed, they may not know what a review should look like. Here are some how-to guides from different fields that editors can share with their reviewers in order to help increase the quality of the reviews they receive.
- General: Berk et al. (2015), Faff (2018), Hames (2016), Lucey (2013), McPeek et al. (2009)
- Accounting: Dalton et al. (2016), Kachelmeier (2004), Oler et al. (2016)
- Computer Science: Cormode (2009)
- Ecology: Scrimgeour et al. (2016)
- Economics: Berk et al. (2017)
- Health: Alexander (2005)
- Information Systems: Hirschheim (2008)
- Management: Carpenter (2009), Lee (1995), Lepak (2009), Leung et al. (2014)
- Biomedical Science: Christensen et al. (2010), Heddle & Ness (2009), Hoppin (2002), Mayden (2012), Rosenfeld (2010)
- Physiology: Benos et al. (2003), Guilford (2001), Seals et al. (2000)
- Political Science: Esarey (2015), Hall et al. (2019), Krupnikov & Levine (2015), Miller et al. (2013), Nyhan (2015)
- Psychology: Epstein (1995)
Unique issues arise for reviewing interdisciplinary research -- view and contribute to these conversations via this working document.
For a more complete bibliography organized alphabetically, see the list here.
Reviewer collusion and fraud
Although hopefully very rare, there have been some reports of reviewer collusion and fraud. In some cases, authors set up fake accounts for suggested reviewers where they either impersonate actual researchers or create fictional academic characters in order to review their own papers (favourably, of course). Sometimes the editor is even in on this scheme. In other cases, authors form “collusion rings” whereby they agree to suggest each other as reviewers and give favourable reviews to each other.
In order to avoid this, publishers can set up control systems that check for suspicious behavior such as private email addresses (although note that this is not foolproof as it will also flag innocent academics using non-institutional email addresses), as well as ensuring that papers always have at least one reviewer that has not been suggested by the authors themselves.
The publishing system is not as rigid as it seems from the outside. Some know this and take advantage of it, which is a source of inequities in publishing. In order to share these “editorial secrets” with authors, Moin Syed wrote a great blog post. However, each journal will have different secrets, and so you may consider making these explicit with authors in your submission guidelines. Some examples are outlined below.
The opportunity for authors to appeal against rejected manuscripts provides an important step in ensuring that high-quality and credible science is not incorrectly rejected from publication. However, there are considerable variations in appeal processes amongst journals, with little evidence of any detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). You may consider adopting a formal appeals policy that authors are made aware of so that this can be applied universally.
How important is it to include a cover letter with a manuscript submission? It seems that opinions differ. Nature Immunology considers cover letters to be “a dialog between the authors and the editors” where authors can “present their cases in a one- to two-page cover letter”. You may consider outlining in your submission guidelines what is expected in a cover letter, in order to ensure that authors do not spend unnecessary time on a letter that is never read (or worse, have their manuscript rejected for not spending enough time on one).
This infographic is a helpful guide to authors, so if this corresponds well to what your journal expects from a cover letter, you may consider sharing this with authors in your submission guidelines.
Revise & resubmit workflows
Most journals leave it up to authors to determine what to include in a response to a revise & resubmit. Some authors write a very brief response, mostly referring reviewers back to the revised manuscript for a full re-review, whereas some write detailed breakdowns replying to each comment, copying excerpts from the revised manuscript with corresponding page numbers. It is fine to leave this up to author discretion, but if there is a preferred format in which you would like to receive responses, you can consider creating a template for authors to fill out, or at least sharing some helpful guidelines with authors. Moin Syed has outlined one workflow/format in a blog post.
List of contributors to this page (alphabetical by first name)
- Ana Trisovic
- Andrew Foster
- Arie Lewin
- Ashley Randall
- Chris Chambers
- Colin Elman
- Crystal Steltenpohl
- Diana Kapiszewski
- Dillon Niederhut
- Eike Rinke
- Elena Naumova
- Ingo Rohlfing
- Kevin Arceneaux
- Lars Vilhuber
- Loukia Tzavella
- Melissa Curran
- Michael Weiss
- Moin Syed
- Patrizio Tressoldi
- Pippa Smart
- Priya Silverstein
- Simine Vazire
- Tess Neal