
Resources


last updated 2021-07-01

JEDI members are jointly developing a collection of resources for journal editors in the social sciences. We are very grateful to JEDI members who have contributed resources for this page (see below for a list) and warmly welcome further suggestions. Suggestions can be made either by posting to the mailing list or by emailing Priya Silverstein directly at [email protected]. Corresponding links will then be added to this page.

Please check back here from time to time, as we hope this page will be updated regularly!

If you are a journal editor and are not yet a member, please join JEDI.

General

Publishers often offer guidelines and associated resources relating to editorial practices. While typically provided for editors of the journals that they publish, these resources may also be more generally helpful. For some examples see:

Incoming editors

If you are just starting out as a journal editor, you might find the Committee on Publication Ethics’ short guide to ethical editing for new editors helpful, as well as this glossary of publishing and editing terms.

The PKP School also offers a free course on becoming an editor, covering how to perform the major tasks required of a scholarly journal editor, how to analyze and solve common problems that arise when editing a journal, how to assist other members of the journal team, and where to look for help with difficult issues.

The Council of Science Editors has sample correspondence for an editorial office that you can customize to suit your journal.

Ethics

The Committee on Publication Ethics has many resources to help journal editors deal with ethical issues, including guidelines and case studies. For example, they have guidelines on publication manipulation.

The Council of Science Editors has a white paper on publication ethics including a guide to editor roles and responsibilities.

Diversifying social science research

A strong case can be made that systemic inequality exists within social science research. Roberts et al. (2020) examine racial inequality in psychological research to date and offer recommendations for editors and authors working towards research that benefits from diversity in editing, writing, and participation. One recommendation is to require, or at least offer, the opportunity for authors to provide positionality statements: statements written following a process of reflexivity in which authors examine the “conceptual baggage” they bring to the research.

Some journals and/or societies choose to adopt a sociocultural policy in order to contextualise samples and study findings (see for example this policy from the Society for Research in Child Development).

Open science

A strong consensus is emerging in the social sciences and cognate disciplines that knowledge claims are more understandable and evaluable if scholars describe the research processes in which they engaged to generate them. Citing and showing the evidence on which claims rest (when this can be done within ethical and legal constraints), discussing the processes through which evidence was garnered, and explicating the analysis that produced the claims facilitate expression, interpretation, reproduction, and replication. The Committee on Publication Ethics has a list of principles of transparency and best practice in scholarly publishing.

Nosek et al. (2015) present an overview of the Transparency and Openness Promotion (TOP) Guidelines for journals, which have been used to generate the journal-level TOP Factor and which give editors a clear view of areas where they can take steps towards more open science at their journals. A similar initiative is the DA-RT Journal Editors’ Transparency Statement (JETS).

Resources for authors

Aczel et al. (2020) present a consensus-based checklist to improve and document the transparency of research reports in social and behavioral research along with an online application that allows users to complete the checklist and generate a report that they can submit with their manuscript or post to a public repository.

Data and code

A set of stable and easily adoptable core practices has begun to emerge with regard to data citation and management. For example, the social sciences are increasingly adopting permanent identifiers, such as digital object identifiers (DOIs), for research products including articles and datasets. Similarly, there is now a strong consensus that sharing data via trusted digital repositories is preferable to doing so via personal websites.
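
Because DOIs resolve through a central resolver (https://doi.org), editorial staff can check programmatically that identifiers cited in a data availability statement actually resolve. The sketch below is a minimal illustration only; the placeholder DOI and the idea of a standalone check are assumptions, not part of any specific journal's workflow.

    import urllib.request

    def doi_resolves(doi: str) -> bool:
        """Return True if the DOI resolves via the doi.org resolver."""
        request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=10) as response:
                return response.status < 400
        except Exception:
            return False

    if __name__ == "__main__":
        # Placeholder DOI for illustration; substitute one from a manuscript's
        # data availability statement.
        print(doi_resolves("10.1234/example-doi"))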

Journals are increasingly adopting data and code availability policies. The Research Data Alliance has developed a Research Data Policy Framework for all journals and publishers including template policy texts which can be implemented by journals in their Information for Authors and publishing workflows. The American Economic Association also provides helpful guidance on implementation of their data and code availability policy that could easily be applied to other journals and fields.

For a discussion of the impact of journal data policy strictness on the code re-execution rate (i.e., how likely the code is to run without errors) and a set of recommendations for code dissemination aimed at journals, see Trisovic et al. (2020).
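
To make the notion of a re-execution rate concrete, the following is a minimal Python sketch (the directory layout, script naming, and time limit are all assumptions for illustration, not the pipeline used by Trisovic et al.): it attempts to run every script in a replication package and reports the share that finishes without error.

    import subprocess
    import sys
    from pathlib import Path

    TIMEOUT_SECONDS = 600  # arbitrary per-script limit

    def re_execution_rate(package_dir: Path) -> float:
        """Run each Python script in the package and return the fraction that exits cleanly."""
        scripts = sorted(package_dir.glob("*.py"))
        if not scripts:
            return 0.0
        successes = 0
        for script in scripts:
            try:
                result = subprocess.run(
                    [sys.executable, script.name],
                    cwd=package_dir,
                    timeout=TIMEOUT_SECONDS,
                )
                if result.returncode == 0:
                    successes += 1
            except subprocess.TimeoutExpired:
                pass  # a script that hangs counts as a failed re-execution
        return successes / len(scripts)

    if __name__ == "__main__":
        # Assumed layout: all analysis scripts sit in a "replication_package" folder.
        print(f"Re-execution rate: {re_execution_rate(Path('replication_package')):.0%}")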

Open science badges

One way of incentivizing open science is to offer open science badges to signal and reward when underlying data, materials, or preregistrations are available. Implementing badges is associated with an increased rate of data sharing, as seeing colleagues practice open science signals that new community norms have arrived. See the guidance on badges by the Center for Open Science for more information on how to implement badges at your journal. However, it is important to note that receiving a badge for sharing data and code does not necessarily mean that analyses are reproducible -- for this we turn to pre-publication verification of analyses.

Pre-publication verification of analyses

Some journals have now adopted a policy whereby data and code are not only required for publication in the journal, but must also be checked before publication to ensure that the analyses are reproducible -- that is, the results in the manuscript match the results produced when someone who is not one of the authors re-runs the code on the data. This is called pre-publication “verification of analyses”, “data and code replication”, or “reproduction of analyses”. For more information on how to implement such a policy at your journal, see the Data and Code Guidance by Data Editors developed by Lars Vilhuber and colleagues, which is used as a reference by the American Economic Association journals, the Canadian Journal of Economics, the Review of Economic Studies, and the Economic Journal.
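
At its core, a verification check re-runs the authors' deposited code on the deposited data and compares the output with the values reported in the manuscript. The sketch below is purely illustrative: the file name, the variable names, the statistic, and the tolerance are all hypothetical and do not come from any journal's actual workflow.

    import numpy as np
    import pandas as pd

    DATA_FILE = "deposited_data.csv"   # hypothetical file from the replication package
    REPORTED_ESTIMATE = 0.42           # value copied from the manuscript (hypothetical)
    TOLERANCE = 1e-3                   # allow for rounding in the published table

    def reproduce_estimate(path: str) -> float:
        """Re-run the (hypothetical) analysis: a correlation between two columns."""
        data = pd.read_csv(path)
        return float(np.corrcoef(data["x"], data["y"])[0, 1])

    if __name__ == "__main__":
        estimate = reproduce_estimate(DATA_FILE)
        if abs(estimate - REPORTED_ESTIMATE) <= TOLERANCE:
            print(f"Verified: recomputed {estimate:.3f} matches reported {REPORTED_ESTIMATE}")
        else:
            print(f"Discrepancy: recomputed {estimate:.3f} vs reported {REPORTED_ESTIMATE}")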

Unshareable data

When data cannot be shared (e.g. because they are sensitive), synthetic data can often be shared instead, which still allows pre-publication verification of analyses to take place. Dan Quintana has done a lot of work promoting the sharing of synthetic datasets and providing resources to help authors do so -- see his YouTube video, blog post, and Quintana (2020).
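
Quintana's resources demonstrate dedicated synthetic-data tooling; as a very rough Python illustration of the underlying idea, the sketch below fits a multivariate normal distribution to the numeric variables of a (hypothetical) sensitive dataset and samples new rows from it, preserving means and covariances but not any individual's records. Real synthetic-data workflows are considerably more careful than this.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(seed=2021)  # fixed seed so the synthetic file is reproducible

    def make_synthetic(real: pd.DataFrame, n_rows: int = 0) -> pd.DataFrame:
        """Sample synthetic rows from a multivariate normal fitted to the real (numeric) data."""
        n_rows = n_rows or len(real)
        values = real.to_numpy(dtype=float)
        mean = values.mean(axis=0)
        cov = np.cov(values, rowvar=False)
        synthetic = rng.multivariate_normal(mean, cov, size=n_rows)
        return pd.DataFrame(synthetic, columns=real.columns)

    if __name__ == "__main__":
        # Hypothetical sensitive dataset containing numeric variables only.
        real = pd.read_csv("sensitive_data.csv")
        make_synthetic(real).to_csv("synthetic_data.csv", index=False)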

Verification Reports

Verification Reports (VRs) are an article format focusing specifically on computational reproducibility and analytic robustness. VRs meet this objective by repeating the original analyses or reporting new analyses of original data. In doing so they provide the verifiers conducting the investigation with professional credit for evaluating one of the most fundamental forms of credibility: whether the claims in previous studies are justified by their own data. Chris Chambers has introduced this format at Cortex (see his introductory editorial). For examples of the first two VRs published by Cortex, see Chalkia et al. (2020) and Mirman et al. (2021). If you’re interested in including VRs as an article type at your journal, Cortex’s author guidelines provide more information on this format.

Registered Reports

Registered Reports is a publishing format used by over 250 journals that emphasizes the importance of the research question and the quality of methodology by conducting peer review prior to data collection. High quality protocols are then provisionally accepted for publication if the authors follow through with the registered methodology. This format is designed to reward best practices in adhering to the hypothetico-deductive model of the scientific method. It eliminates a variety of questionable research practices, including low statistical power, selective reporting of results, and publication bias, while allowing complete flexibility to report serendipitous findings. Although Registered Reports are usually reserved for hypothesis-testing research, a version for exploratory research -- Exploratory Reports -- is now also being offered.

Resources for editors

See the resources for editors by the Center for Open Science for more information on implementing Registered Reports at your journal. More advice for reviewers and editors can be found in Boxes 2 and 3 of this preprint by Chris Chambers and colleagues.

Resources for authors and reviewers

The Journal of Development Economics has developed a website dedicated to guidance for authors and reviewers on Registered Reports. Eike Rinke and colleagues at the Journal of Experimental Political Science have also created a great FAQ page for authors.

Open Peer Review

“Open review” means different things depending on what is open and to whom. Open review is a good example of a situation where the most open version may not be the fairest or best option, as there are many factors to take into account. It is also important to distinguish between open identities and open reviews (both of which, confusingly, can be called open reviewing). For a review of the benefits and limitations of open peer review (some outlined below), see Besançon et al. (2020). For guidelines on implementing open peer review, see Ross-Hellauer and Görögh (2019).

Open Identities

Journals will have a policy on whether editors, reviewers, and authors know each other’s identities. In a single-blind system, reviewers know the identity of the authors, but authors do not know the identity of the reviewers. In a double-blind system, authors and reviewers do not know each other’s identities; this attempts to reduce personal biases of reviewers (e.g. those based on gender, seniority, reputation, and affiliation). Some journals go further and keep even the editor blind to the identity of the authors, guarding against the possible personal biases of the editor themself. See Nature’s editorial adopting an optionally double-blind system, and Tomkins et al. (2017) for evidence that single-blind review is biased towards papers with famous authors and authors from high-prestige institutions. In open review, everyone knows everyone’s identity, which is intended to encourage more civil and thoughtful reviewer comments. There is no one-size-fits-all solution, but it is important to think carefully about which policy makes sense for your journal.

Open Reviews

It is also possible to make the reviews themselves openly available alongside published manuscripts (with or without the reviewers being identified). This can help to give context to the published article.

Computational research

Willis and Stodden (2020) highlight nine decision points for journals looking to improve the quality and rigor of computational research and suggest that journals reporting computational research aim to include “assessable reproducible research artifacts” along with published articles.

The American Journal of Political Science Verification Policy provides a model example of how computational research can be made more rigorous and error-free through additional steps in the editorial process – but it also shows how this requires resources on top of the procedures editors have become accustomed to over decades.

Replication studies

Many journals now have policies on publishing replication studies. Subscribing to the “pottery barn rule” means that a journal agrees to publish a direct replication of any study previously published in that journal. Other journals go further and agree to publish a replication of any study published in a major journal. To ensure that replications are assessed on the quality of their design rather than their results, a replication policy can include results-blind review and/or accept replications only as Registered Reports (see above).

Royal Society Open Science provides a great example of a replication policy that adopts the (extended) pottery barn rule and offers two tracks for review (results-blind or Registered Report). See this blog post introducing their policy.

Improving the quality of reviews

Although academics are expected to peer-review articles as part of their job, they often receive little (or no) formal training for this. Early career researchers are often keen to be involved in reviewing papers, but without having had many (or any) of their own papers reviewed, they don’t know what a review should look like. Here are some how-to guides from different fields that editors can share with their reviewers in order to help increase the quality of the reviews they receive.

In addition to this, there are papers covering what not to do as a reviewer, for example humiliating the authors (Comer et al., 2014) or being adversarial (Cormode, 2009).

For a more complete bibliography organized alphabetically, see the list here.

Reviewer collusion and fraud

Although hopefully very rare, there have been some reports of reviewer collusion and fraud. In some cases, authors set up fake accounts for suggested reviewers where they either impersonate actual researchers or create fictional academic characters in order to review their own papers (favourably, of course). Sometimes the editor is even in on this scheme. In other cases, authors form “collusion rings” whereby they agree to suggest each other as reviewers and give favourable reviews to each other.

To help prevent this, publishers can set up control systems that check for suspicious signals such as private email addresses (although this is not foolproof, as it will also flag innocent academics who use non-institutional email addresses), and can ensure that every paper has at least one reviewer who was not suggested by the authors themselves.
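
As a crude illustration of this kind of check, the sketch below flags suggested reviewers whose email addresses use free webmail domains. The domain list is a tiny assumed sample, and, as noted above, the heuristic also flags legitimate academics who use personal addresses, so flagged entries are prompts for editorial scrutiny rather than verdicts.

    # Hypothetical, deliberately short list of free webmail domains; a real system
    # would use a maintained list and combine this with other checks.
    FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

    def flag_suspicious_reviewers(suggested):
        """Return (name, email) pairs whose email domain is not institutional."""
        flagged = []
        for name, email in suggested:
            domain = email.rsplit("@", 1)[-1].lower()
            if domain in FREE_MAIL_DOMAINS:
                flagged.append((name, email))
        return flagged

    if __name__ == "__main__":
        reviewers = [("A. Reviewer", "a.reviewer@university.edu"),
                     ("B. Reviewer", "b.reviewer@gmail.com")]
        for name, email in flag_suspicious_reviewers(reviewers):
            print(f"Check manually: {name} <{email}>")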

List of contributors to this page (alphabetical by first name)