Resources

last updated 2021-11-01

JEDI members are jointly developing a collection of resources for journal editors in the social sciences. We are very grateful to JEDI members who have contributed resources for this page (see below for a list) and warmly welcome further suggestions. These can be made either by posting them to the mailing list, or by directly emailing Priya Silverstein at [email protected]. Corresponding links will then be added to this page.

Please check back here from time to time, as we hope this page will be updated regularly!

If you are a journal editor and are not yet a member, please join JEDI.

General

Publishers often offer guidelines and associated resources relating to editorial practices. Though typically aimed at editors of their own journals, these resources may also be helpful more generally. For some examples, see:

Incoming editors

If you are just starting out as a journal editor, you might find the Committee on Publication Ethics’ short guide to ethical editing for new editors helpful, as well as this glossary of publishing and editing terms.

The PKP School also offers a free course in becoming an editor, focusing on how to perform the major tasks required of a scholarly journal editor, how to analyze and solve common problems that arise when editing a journal, how to assist other members of the journal team, and where to look for help with difficult issues.

The Council of Science Editors has sample correspondence for an editorial office that you can customize to suit your journal, and also offers short courses in journal and manuscript editing. The European Association of Science Editors (EASE) offers training and webinars.

Onboarding practices for associate/action editors vary hugely across journals. Most journals provide some written guidelines, and some also offer opportunities for incoming editors to check their understanding of these guidelines. PCI Registered Reports does this (see their guide and test). Some journals have also created an onboarding video (see this video introduction for editors and reviewers at Collabra: Psychology).

Ethics

The Committee on Publication Ethics has many resources to help journal editors deal with ethical issues, including guidelines and case studies. For example, they have guidelines on publication manipulation.

The Council of Science Editors has a white paper on publication ethics including a guide to editor roles and responsibilities.

Diversifying social science research

A good case can be made that social science disciplines are characterized by different kinds of systemic inequality. For example, elitism is one source of inequality -- Kawa et al. (2019) found that the top five or so anthropology departments in the USA all hire each other's graduates.

Some fields have developed local resources that could be adapted to other fields. For example, Roberts et al. (2020) examine racial inequality in psychological research to date and offer recommendations to editors and authors for working towards research that benefits from diversity in editing, writing, and participation. Buchanan et al. (2020) also discuss strategies for upending racism in psychological science. Buchanan et al. (2021) propose a Diversity Accountability Index for Journals (DAI-J) to increase awareness and establish accountability across psychology journals.

Collecting demographic data

One step towards diversifying social science research is to have a better picture of who constitutes a journal’s authors, reviewers, and editors. Collecting demographic data may help efforts to increase diversity.

Editors at Personal Relationships have created a “Diversity matrix” template that can be used and adapted to collect data from authors, reviewers, editorial board members, editors, etc.

However, it is important to acknowledge that collecting demographic data may not be straightforward. There may, for example, be geographical differences in how (and whether) to require authors to specify the demographic characteristics of their samples or themselves; see Jugert et al. (2021) and Juang et al. (2021) for discussions of these tensions within a European context.

Participants

Some journals and/or societies choose to adopt a sociocultural policy in order to contextualise samples and study findings (see for example this policy from the Society for Research in Child Development).

Authors, reviewers, and editors

Proceedings of the Python in Science Conferences undertook a survey of the demographics of their authors and reviewers; see blog posts summarizing their results from 2020 and 2021.

This blog post summarises the American Geophysical Union’s (AGU’s) efforts to increase diversity in their reviewers and editors.

Another way to encourage authors to reflect on and report demographic characteristics, and how these may have affected their approach to the research, is to require or invite positionality statements. Positionality statements are made following a process of reflexivity, whereby authors examine the “conceptual baggage” they bring to the research. Positionality statements also have many benefits unrelated to diversity (beyond the scope of this resource).

Who is responsible for advancing diversity?

Research communities that adopt diversity as one of their goals should avoid over-burdening under-represented scholars with diversity-related tasks. Under-represented scholars are sometimes expected to play a disproportionate role in advancing diversity and inclusion in institutions (Jimenez et al., 2019, or read this summary). The same can happen in journal editorial teams, so it is worth keeping track of who is doing diversity and inclusion work in your team to ensure a fair distribution of effort.

English language editing

If your journal publishes in English, consider offering a free editing service for manuscripts submitted by authors for whom English is a second language. For example, see the International Section guidelines at Personal Relationships.

Open science

A strong consensus is emerging in the social sciences and cognate disciplines that knowledge claims are more understandable and evaluable if scholars describe the research processes in which they engaged to generate them. Citing and showing the evidence on which claims rest (when this can be done within ethical and legal constraints), discussing the processes through which evidence was garnered, and explicating the analysis that produced the claims facilitate expression, interpretation, reproduction, and replication. The Committee on Publication Ethics has a list of principles of transparency and best practice in scholarly publishing.

Nosek et al. (2015) present an overview of the Transparency and Openness Promotion (TOP) Guidelines for journals, which have been used to generate the journal-level TOP Factor and provide a clear view of areas in which editors can consider steps towards more open science at their journals. For an example of how to explicitly signal adherence to the TOP guidelines at your journal, see this example policy from Cortex. A similar initiative is the DA-RT Journal Editors’ Transparency Statement (JETS).

Resources for authors

Aczel et al. (2020) present a consensus-based checklist to improve and document the transparency of research reports in social and behavioral research along with an online application that allows users to complete the checklist and generate a report that they can submit with their manuscript or post to a public repository.

Data and code

A set of stable core practices has begun to emerge around data management and sharing. While all of these practices are readily adoptable, some demand more effort from journals than others: requiring authors to share data via a trusted digital repository (rather than, e.g., a personal website) is easier than checking that an author's code runs. Iain Hrynaszkiewicz (Public Library of Science (PLOS)) has written a chapter outlining “Publishers’ Responsibilities in Promoting Data Quality and Reproducibility” that describes practical approaches publishers are taking to promote rigor and transparency in data practices.

Journals are increasingly adopting data and code availability policies. The Research Data Alliance has developed a Research Data Policy Framework for all journals and publishers including template policy texts which can be implemented by journals in their Information for Authors and publishing workflows.

There are many repositories to choose from, and different repositories will suit different fields and needs. For lists of recommended repositories, see for example: section 2.2.1 in F1000 Research’s data guidelines; Springer Nature’s list of social science repository examples; and PLOS One’s recommended repositories. Similarly, the Society for Personality and Social Psychology has created a matrix of different trusted repositories and what they offer. Goodman et al. (2014) state that “A proper, trustworthy archive will: (1) assign an identifier such as a “handle” (hdl) or “digital object identifier” (doi); (2) require that you provide adequate documentation and metadata; and (3) manage the “care and feeding” of your data by employing good curation practices”.

In some fields (e.g. economics), it is common practice to have a dedicated “data editor” on the editorial team. Data editors are responsible for creating, adapting, and implementing data and code sharing policies at their respective journals, and they often share information that can be useful for other editors looking to adopt or adapt existing policies. For example, the data editor websites for the American Economic Association, The Review of Economic Studies, and The Economic Journal all offer a wealth of information and advice.

Data ethics

It is important to note that there will be cases where data cannot be shared, and that it is important to be “as open as possible, as closed as necessary”. See Meyer (2018) for an excellent guide on “ethical data sharing”.

FORCE11 and COPE have developed some recommendations for the handling of ethical concerns relating to the publication of research data. See their blog post and the recommendations themselves here.

There may be particular types of data that are more difficult to share -- e.g., data on sensitive topics. For a case study that includes challenges, tools, and future directions of sharing data in these cases, please see Joel et al. (2018). For a case study that discusses the redaction of sensitive data, see Casadevall et al. (2013).

Open data

Mandating or encouraging authors to share data alongside their manuscripts means that reviewers (and later, readers) can:

  • See the structure of the data more clearly
  • Run additional analyses
  • Use the data to answer new questions

Open code

Mandating or encouraging authors to share code alongside their manuscripts means that reviewers (and later, readers) can:

  • See the analyses that were conducted more clearly
  • Check whether the code runs on another (similarly structured) dataset

Authors can also give reviewers the opportunity to check basic code functionality by providing synthetic data. Dan Quintana has done a lot of work promoting the sharing of synthetic datasets and providing resources to help authors do so -- see his YouTube video, blog post, and Quintana (2020).
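
To make this concrete, here is a minimal sketch in Python (not Quintana’s actual tooling, which builds on the R package synthpop) of generating a shareable synthetic stand-in for a simple continuous dataset; the file and column names are hypothetical:

    # Minimal sketch: build a synthetic dataset that preserves the means and
    # covariance of the real (unshareable) data by sampling from a fitted
    # multivariate normal. File and column names are hypothetical.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2021)  # fixed seed so the file is reproducible

    real = pd.read_csv("real_data.csv")    # sensitive data; never shared
    cols = ["age", "score_a", "score_b"]   # continuous variables only
    X = real[cols].dropna()

    synthetic = pd.DataFrame(
        rng.multivariate_normal(X.mean().values, X.cov().values, size=len(X)),
        columns=cols,
    )
    synthetic.to_csv("synthetic_data.csv", index=False)  # safe to share

Reviewers can then run the authors’ analysis code against synthetic_data.csv to confirm that it executes end to end, even though the substantive results will differ from those in the manuscript.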

Open data and code

Mandating or encouraging authors to share both data and code alongside their manuscripts has all the above benefits of sharing either of these separately, but also means that reviewers (and later, readers) can check the computational reproducibility of the results (i.e., does running the code on the data produce the same results that are reported in the paper).
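
At its simplest, such a check amounts to re-running the authors’ analysis script and comparing a key statistic against the manuscript. The Python sketch below illustrates the idea; the file names, results format, and reported value are all hypothetical:

    # Minimal sketch of a computational reproducibility check. Assumes the
    # authors' script writes its key results to results.json; all names and
    # values here are hypothetical.
    import json
    import subprocess

    subprocess.run(["python", "analysis.py"], check=True)  # authors' own code

    with open("results.json") as f:
        reproduced = json.load(f)

    reported = 0.42  # effect size copied from the manuscript
    if abs(reproduced["effect_size"] - reported) < 1e-3:
        print("Reported effect size reproduced.")
    else:
        print(f"Mismatch: manuscript {reported}, re-run {reproduced['effect_size']}")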

The American Economic Association provides helpful guidance on implementation of their data and code availability policy that could easily be applied to other journals and fields.

For a discussion of the impact of journal data policy strictness on the code re-execution rate (i.e., how likely the code is to run without errors) and a set of recommendations for code dissemination aimed at journals, see Trisovic et al. (2020).
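
In the same spirit, a journal could estimate a re-execution rate over its replication packages with a loop like the Python sketch below. The directory layout is hypothetical, and real studies (including Trisovic et al.) control the software environment far more carefully:

    # Minimal sketch: attempt to re-run every replication script and report
    # the fraction that finish without errors. Directory layout is hypothetical.
    import pathlib
    import subprocess

    scripts = sorted(pathlib.Path("replication_packages").glob("*/run.py"))
    succeeded = 0
    for script in scripts:
        try:
            result = subprocess.run(
                ["python", "run.py"], cwd=script.parent,
                capture_output=True, timeout=600,
            )
            ok = result.returncode == 0
        except subprocess.TimeoutExpired:
            ok = False  # treat a hung script as a failed re-execution
        succeeded += ok
        if not ok:
            print(f"{script.parent.name}: failed to re-execute")

    if scripts:
        print(f"Re-execution rate: {succeeded / len(scripts):.0%}")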

Pre-publication verification of analyses

Some journals have now adopted a policy whereby data and code are not only required for publication in the journal, but must be checked before publication to ensure that the analyses are reproducible -- that the results in the manuscript match the results that are produced when someone who is not one of the authors re-runs the code on the data. This is called pre-publication “verification of analyses”, “data and code replication”, or “reproduction of analyses”. See Willis & Stodden (2020) for a useful overview of how to leverage policies, workflows, and infrastructure to ensure computational reproducibility in publication.

For more information on how to implement such a policy at your journal, see the Data and Code Guidance by Data Editors developed by Lars Vilhuber and colleagues, which is used as a reference by the American Economic Association journals, the Canadian Journal of Economics, the Review of Economic Studies, and the Economic Journal.

Several journals in political science also require pre-publication verification. See for example State Politics & Policy Quarterly, Political Analysis, and the American Journal of Political Science.

Data citation

The social sciences are increasingly adopting the use of permanent identifiers, such as digital object identifiers (DOIs), for datasets, making it easier to find and cite data sources.

See the Joint Declaration of Data Citation Principles for a set of guiding principles for citing data that can be shared with authors or adopted as policy at your journal. The Social Science Data Editors also provide Guidance on Data Citations for authors and data editors.
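
As an illustration, a data citation following these principles typically names the creator, publication year, title, version, repository, and a persistent identifier. The example below is entirely hypothetical:

    Author, A. N. (2021). Replication data for: An example study
    (Version 1.0) [Data set]. Open Science Framework.
    https://doi.org/10.17605/OSF.IO/XXXXX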

Heterogeneous data and analytic materials

While this section is framed in terms of numeric data and computer code, cognate considerations arise whenever authors use a combination of data and analytic reasoning to reach their findings. For example, authors can also make qualitative data and materials available for case studies that used process tracing analyses and relied on interviews and archival data.

Open science badges

One way of incentivizing open science is to offer open science badges that signal and reward when underlying data, materials, or preregistrations are available. Implementing badges is associated with an increased rate of data sharing, as seeing colleagues practice open science signals that new community norms have arrived. See the guidance on badges by the Center for Open Science for more information on how to implement badges at your journal. However, it is important to note that receiving a badge for sharing data and code does not necessarily mean that analyses are reproducible -- for that we turn to pre-publication verification of analyses.

Verification Reports

Verification Reports (VRs) are an article format focusing specifically on computational reproducibility and analytic robustness. VRs meet this objective by repeating the original analyses or reporting new analyses of original data. In doing so, they provide the verifiers conducting the investigation with professional credit for evaluating one of the most fundamental forms of credibility: whether the claims in previous studies are justified by their own data. Chris Chambers has introduced this format at Cortex (see his introductory editorial). For examples of the first two VRs published by Cortex, see Chalkia et al. (2020) and Mirman et al. (2021). If you’re interested in including VRs as an article type at your journal, Cortex’s author guidelines provide more information on this format.

Registered Reports

Registered Reports (RRs) are a publishing format, now used by over 250 journals, that emphasizes the importance of the research question and the quality of the methodology by conducting peer review prior to data collection. High-quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology. This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings. Although RRs are usually reserved for hypothesis-testing research, a version for exploratory research -- Exploratory Reports -- is now also offered.

RRs are a new and evolving format, and many people are working to improve them for authors, reviewers, and editors. To read a summary of these conversations, or to add to them, please see this working document.

Resources for editors

See the resources for editors by the Center for Open Science for more information on implementing Registered Reports at your journal. More advice for reviewers and editors can be found in Boxes 2 and 3 of this preprint by Chris Chambers and colleagues. For an example of an RR policy, see the RR policy for Cortex.

Resources for authors and reviewers

The Journal of Development Economics has developed a website dedicated to guidance for authors and reviewers on Registered Reports. Eike Rinke and colleagues at the Journal of Experimental Political Science have also created a great FAQ page for authors.

PCI Registered Reports

An exciting new initiative, Peer Community In Registered Reports (PCI-RR), offers free and transparent pre- and post-study recommendations, managing the peer review of Registered Report preprints. The peer review is independent of journals but is endorsed by a growing list of journals that accept PCI-RR recommendations. Read about PCI-RR and see the PCI-RR Journal Adopter FAQ for more information.

Open Peer Review

“Open review” means different things depending on what is open and to whom. Open review is a good example of a situation where the most open version may not be the fairest or best option, as there are many factors to take into account. It is also important to distinguish between open identities and open reviews (both of which, confusingly, can be called open reviewing). For a review of the benefits and limitations of open peer review (some outlined below), see Besançon et al. (2020). For guidelines on implementing open peer review, see Ross-Hellauer and Görögh (2019).

Open Identities

Journals will have a policy on whether editors, reviewers, and authors know each other’s identities.

In fully open reviewing, everyone knows everyone’s identity. This is argued to increase the accountability of reviewers, leaving less scope for biased or unjustified judgements. Godlee et al. (2002) offer a good introduction to the benefits of open reviewer identities.

In a single-masked system, only the reviewers are anonymous. The aim is to be as open as possible while ensuring that reviewers are not treated unfairly (or do not fear being treated unfairly) for giving unfavourable reviews. Indeed, it has been found that reviewers are less likely to express criticism (Mulligan et al., 2013; Ross-Hellauer et al., 2017) and less likely to recommend rejection (Bravo et al., 2019; Bruce et al., 2016; Sambeek & Lakens, 2021; Walsh et al., 2000) if their identity is known to authors.

In a double-masked system, the editor knows the identities of the reviewers and authors, and reviewers and authors both know the identity of the editor, but authors and reviewers do not know each other’s identities. This is an attempt to eliminate the personal biases of reviewers (e.g. those based on gender, seniority, reputation, and affiliation); see Tomkins et al. (2017) for evidence that single-masked review favours papers with famous authors and authors from high-prestige institutions. See also Nature’s editorial adopting an optionally double-masked system.

In a triple-masked system, the identities of the editor, authors, and reviewers are all masked from each other. This acknowledges the possible personal biases of editors as well as reviewers. For some examples of triple-masked review in practice, see the guidelines for authors from Comparative Political Studies and Perspectives on Politics.

There is no one-size-fits-all solution, but it is important to think carefully about which policy makes sense for your journal. If you choose a single-masked or double-masked system, you also need to decide what to do when reviewers choose to sign their reviews: some journals remove these signatures, while others allow reviewers to sign if they wish. If your field has a high percentage of desk rejections (e.g. political science: Garand & Harman, 2021), you may wish to consider masking author identities from editors too (triple-masked review), as editor bias would have a bigger impact in your field.

Note that the discussion above uses the term “masked” instead of “blind” in order to be more inclusive; many journals have also adopted this change in wording. See this American Psychological Association blog post for an explanation of why.

Open Reviews

It is also possible to make the reviews themselves openly available alongside published manuscripts (with or without the reviewers being identified). This can help to give context to the published article.

Computational research

Willis and Stodden (2020) highlight nine decision points for journals looking to improve the quality and rigor of computational research and suggest that journals reporting computational research aim to include “assessable reproducible research artifacts” along with published articles.

The American Journal of Political Science Verification Policy provides a role-model example of how computational research can be made more rigorous and error-free through additional steps in the editorial process -- but it also shows how this requires resources on top of the procedures editors have become accustomed to over decades.

Encouraging authors to acknowledge limitations

It is important that authors are transparent about and own the limitations of their work. Hoekstra and Vazire (2020) provide a set of recommendations on how to increase intellectual humility in research articles that can be used as both author and reviewer guidelines. In addition, editors who want to incentivize intellectual humility in their journals can implement policies that make it clear to authors and reviewers that owning the limitations of one’s research will be considered a prerequisite for publication, rather than a possible reason to reject a manuscript. For an example of a policy like this, see the reviewing policies for Management and Organization Review. See also these two editorials from Nature Human Behaviour for short overviews of issues and solutions (“Tell it like it is” and “Not the first, not the best”).

Replication studies

Many journals now have policies on publishing replication studies. Subscribing to the “pottery barn rule” means that a journal agrees to publish a direct replication of any study previously published in that journal. Other journals go further and agree to publish a replication of any study published in a major journal. To ensure that replications are assessed on the quality of their design rather than their results, a replication policy can include results-masked review and/or accept replications only as Registered Reports (see above).

Royal Society Open Science offers a great example of a replication policy that adopts the (extended) pottery barn rule and offers two tracks for review (results-masked or Registered Report). See this blog post introducing their policy.

Reconsideration of previously rejected submissions

It is important to have a policy on how to handle appeals to reconsider previously rejected submissions. However, many journals do not currently have detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). For an example of a reproducible and established appeal policy, see the Infant and Child Development section on “Decision Appeals”.

Improving the quality of reviews

Although academics are expected to peer-review articles as part of their job, they often receive little (or no) formal training for this. Early career researchers are often keen to be involved in reviewing papers, but without having had many (or any) of their own papers reviewed, they may not know what a review should look like. Here are some how-to guides from different fields that editors can share with their reviewers to help increase the quality of the reviews they receive.

In addition to this, there are papers covering what not to do as a reviewer -- for example, humiliating the authors (Comer et al., 2014) or being adversarial (Cormode, 2009).

Unique issues arise for reviewing interdisciplinary research -- view and contribute to these conversations via this working document.

For a more complete bibliography organized alphabetically, see the list here.

Reviewer collusion and fraud

Although hopefully very rare, there have been some reports of reviewer collusion and fraud. In some cases, authors set up fake accounts for suggested reviewers where they either impersonate actual researchers or create fictional academic characters in order to review their own papers (favourably, of course). Sometimes the editor is even in on this scheme. In other cases, authors form “collusion rings” whereby they agree to suggest each other as reviewers and give favourable reviews to each other.

To guard against this, publishers can set up control systems that check for suspicious signals such as private email addresses (not foolproof, as this will also flag innocent academics who use non-institutional email addresses), and can ensure that every paper has at least one reviewer who was not suggested by the authors themselves.
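
As a concrete (and deliberately crude) illustration, the email check might start from something like the Python sketch below; the domain list and reviewer records are hypothetical, and, as noted above, any flag needs human follow-up:

    # Minimal sketch: flag suggested reviewers whose addresses use common
    # free-mail domains. Real editorial systems are more sophisticated;
    # the domain list and reviewer records here are hypothetical.
    FREEMAIL_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

    def flag_suspicious(reviewers):
        """Return suggested reviewers whose email domain is a free-mail host."""
        return [
            r for r in reviewers
            if r["email"].rsplit("@", 1)[-1].lower() in FREEMAIL_DOMAINS
        ]

    suggested = [
        {"name": "A. Researcher", "email": "a.researcher@university.edu"},
        {"name": "B. Reviewer", "email": "b.reviewer@gmail.com"},
    ]
    for r in flag_suspicious(suggested):
        print(f"Check the identity of {r['name']} ({r['email']})")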

Limitations of peer review

Although not a concrete piece of advice or “resource” as such, it can be important for journal editors to keep in mind debates surrounding the limitations of peer review. Some useful papers include: “Peer review: a flawed process at the heart of science and journals” (Smith, 2006), “The limitations to our understanding of peer review” (Tennant & Ross-Hellauer, 2020), and “Is Peer Review a Good Idea?” (Heesen & Bright, 2021).

One specific concern is that peer review may not be reliable or consistent (even accounting for multiple reviewers per paper). Two papers showing this are “The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation” (Cicchetti, 1991) and “Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment” (Cortes & Lawrence, 2021). It is debated whether this is a “feature or a bug”, and whether reliable and consistent peer review is even achievable.

Editorial secrets

The publishing system is not as rigid as it seems from the outside. Some know this and take advantage of it, which is a source of inequities in publishing. Moin Syed wrote a great blog post sharing these “editorial secrets” with authors. However, each journal will have different secrets, so you may consider making yours explicit in your submission guidelines. Some examples are outlined below.

Appeals

The opportunity for authors to appeal rejected manuscripts provides an important check, ensuring that high-quality and credible science is not incorrectly rejected from publication. However, appeal processes vary considerably among journals, with little evidence of any detailed, reproducible, or established appeal policies in operation (Dambha-Miller & Jones, 2017). You may consider adopting a formal appeals policy, made known to authors, so that it can be applied consistently.

Cover letters

How important is it to include a cover letter with a manuscript submission? It seems that opinions differ. Nature Immunology considers cover letters to be “a dialog between the authors and the editors” where authors can “present their cases in a one- to two-page cover letter”. You may consider outlining in your submission guidelines what is expected in a cover letter, in order to ensure that authors do not spend unnecessary time on a letter that is never read (or worse, have their manuscript rejected for not spending enough time on one).

This infographic is a helpful guide to authors, so if this corresponds well to what your journal expects from a cover letter, you may consider sharing this with authors in your submission guidelines.

Revise & resubmit workflows

Most journals leave it up to authors to determine what to include in a response to a revise & resubmit. Some authors write a very brief response, mostly referring reviewers back to the revised manuscript for a full re-review, whereas others write detailed breakdowns replying to each comment, copying excerpts from the revised manuscript with corresponding page numbers. It is fine to leave this to author discretion, but if there is a preferred format in which you would like to receive responses, consider creating a template for authors to fill out, or at least sharing some helpful guidelines. Moin Syed has outlined one workflow/format in a blog post.

List of contributors to this page (alphabetical by first name)