Format

Journal Club Format (Proposed)

Caution

This is a provisional version of the proposed journal club format and is subject to change based on feedback from its initial attendees, who will shape the direction of the club going forward. I (Richard, the initial proposer of the format) am hoping that club members will take ownership of the format and adapt it to fit what they want to get out of it. I'm also looking to 'start at home' and dogfood the format by first working with pre-prints and papers that we've produced at the institute. The goal is to ensure that our feedback is delivered in a constructive, effective, and empathetic manner that can be usefully acted upon by a colleague who has volunteered their work for scrutiny, before we potentially turn our attention to unsuspecting third parties.

The journal club meets every 2 weeks and tackles 1 paper every 4 weeks, i.e. 2 sessions per paper. In the first session we examine the paper figure by figure, in groups, assessing the reproducibility of its results. Between sessions we may send the corresponding author(s) of the paper a message containing the questions that we generated during the first session. In the second session we attempt to visualize the results of the paper in different ways. If we cannot obtain the original data underlying a plot, we will attempt to source similar public data, or make up similarly structured data for the purposes of the visualization exercise; whenever we do this, it will be clearly labelled. We will try to improve on the original graphics, largely ignoring journal formatting constraints and focusing on how best to visually convey the meaning of the data. When done, we share our results on this blog and with the original authors.

Definitions

Figure: a grid classifying research along two axes (same vs. different data, same vs. different analysis), giving four characteristics: reproducible (same data, same analysis); replicable (different data, same analysis); robust (same data, different analysis); and generalisable (different data, different analysis).

Figure from The Turing Way project (fig. 5), used under a CC-BY 4.0 licence.

Our assessment focuses on the reproducibility of a paper in the narrow sense: getting the same results from the same analysis of the same data. We also ask whether there is sufficient methodological detail to carry out the experiments in the same way as the original authors. We don't have the time or resources to attempt replications; rather, we assess whether, in principle, we would have the information needed to reproduce and/or replicate the results in the figures of the papers that we cover.

We will also ask what could be done better in a systemic sense. Some of our suggestions therefore go above and beyond what is currently the norm and are aspirational in nature, so they should not necessarily be taken as a critique of a specific piece of work. They may be directed at the pursuit of cultural changes in the field, or at policy changes by institutional actors. That said, individuals taking up our suggestions on how to go above and beyond current expectations may help pave the way to future cultural and policy changes.

Procedure

  • Draw names from the pool of participants to lead a session and pick a paper for each of the sessions in the planning period.
  • Selected session leads can choose a paper however they like, for example by picking it themselves, putting a shortlist to a vote, rolling a die, or finding someone else willing to do it.
  • The session lead should create a draft page for the paper on the blog.
  • The session lead should make a list of the figures in the paper and ensure that each figure has someone to cover it.
    • Two people should not choose the same figure while another figure has no one; three should not join a figure that already has two while other figures have only one; and so on. Where possible, this ensures that all figures are covered, and by groups of similar size (see the sketch after this list).
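
For illustration, a minimal sketch of one way to balance sign-ups across figures. This is a hypothetical helper, not part of any existing club tooling, and the names and figure labels are made up:

```python
import itertools

def assign_figures(participants, figures):
    """Round-robin assignment so group sizes differ by at most one."""
    assignment = {fig: [] for fig in figures}
    # itertools.cycle walks Fig 1, Fig 2, ..., Fig N, Fig 1, ... so no
    # figure receives a second person until every figure has one.
    for person, fig in zip(participants, itertools.cycle(figures)):
        assignment[fig].append(person)
    return assignment

# Example: 5 participants covering 3 figures
print(assign_figures(["Ana", "Ben", "Cat", "Dev", "Eli"],
                     ["Fig 1", "Fig 2", "Fig 3"]))
# {'Fig 1': ['Ana', 'Dev'], 'Fig 2': ['Ben', 'Eli'], 'Fig 3': ['Cat']}
```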

Session 1

In the first session on a paper, everyone working on a given figure should get together to ask the question: do I have all the information that I would need to reproduce the results in this figure?

If no, describe:

  • What additional information do I need?

  • How could this be done better? What would make it easier to find this information? (Keep it constructive.)

  • Are there journal/pre-print server practices which may have been an impediment to or failed to encourage working reproducibly?

    • How could the journal/pre-print server in question improve their processes to better facilitate reproducible working? How could authors work around the current policy gaps?
  • Is the data deposited in a public repository?

    • Is the repository suitable to the type/subject of the data?
    • Can improvements be made to the metadata?
  • Is any code used in the analysis publicly available?

  • Are any data and code licensed appropriately to allow re-use?

  • Is any new software packaged, documented, and distributed appropriately?

  • Do the data, code, etc. that are shared match up with any statements or checklists pertaining to data sharing submitted with the manuscript?

  • Consider what the authors have done well and state this prior to suggesting improvements.

  • Write at least a paragraph about the reproducibility of your group's figure and commit it to the draft blog page.

  • Groups may send the authors a request asking for any extra information they would need to reproduce the work.

Session 2

In the second session, using the information gleaned from the previous session's examination of the paper's reproducibility, each group will attempt to redesign the figures in the paper, ideally using the data underlying the actual figures. Where this data cannot be obtained, fake data with suitable values should be generated for the purposes of devising improved approaches to visualization (see the sketch below).
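
As a minimal sketch of what generating clearly labelled fake data might look like, assuming (purely for illustration) that the original figure compared a measured value across three conditions; all column names and distribution parameters here are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed so the fake data is itself reproducible

# Entirely made-up data mimicking a hypothetical figure's structure:
# three conditions, ten replicates each, log-normal-ish positive values.
fake = pd.DataFrame({
    "condition": np.repeat(["control", "treatment_a", "treatment_b"], 10),
    "value": rng.lognormal(mean=1.0, sigma=0.4, size=30),
})
fake["is_fake"] = True  # flag it so any plot can be labelled "simulated data"

print(fake.groupby("condition")["value"].describe())
```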

  • Were any data visualized with a sub-optimal plot type?
  • Were there any design choices which interfere with the accessibility of the figures e.g. poor colour palettes?
  • Were there any journal/pre-print server policies/practices which were an impediment to best graphical practice?
  • Were any useful visualizations missing?
  • Were there any superfluous visualisations?
  • Are the titles and axis labels appropriate and descriptive?
  • Are the scales well chosen for the data type and range?
  • What would constitute good alt text for this figure, to make it accessible via a screen reader?
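
To make a few of these points concrete, a minimal sketch with made-up data: a colour-blind-safe Okabe-Ito palette, redundant marker shapes, descriptive axis labels, and alt text embedded as SVG metadata (matplotlib supports a Description field for SVG output). The data, units, and filename are all invented for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 11)
series = {
    "control": x + rng.normal(0, 0.5, 10),        # made-up values
    "treated": 1.5 * x + rng.normal(0, 0.5, 10),  # made-up values
}
# Okabe-Ito colours: distinguishable under common colour-vision deficiencies.
colours = {"control": "#0072B2", "treated": "#E69F00"}
markers = {"control": "o", "treated": "s"}  # shape as well as colour helps in greyscale

fig, ax = plt.subplots()
for label, y in series.items():
    ax.plot(x, y, color=colours[label], marker=markers[label], label=label)
ax.set_xlabel("Dose (hypothetical units)")
ax.set_ylabel("Response (hypothetical units)")
ax.set_title("Simulated data, for illustration only")
ax.legend(frameon=False)

# Alt text: matplotlib embeds a Description field in SVG output,
# which screen readers can surface.
fig.savefig("figure.svg", metadata={
    "Description": "Line chart of response versus dose for two simulated "
                   "groups; the treated group rises about 1.5 times faster "
                   "than the control group."})
```

Note that good alt text describes the trend the figure shows, not just its title.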

The reproducibility critiques and attempts at improved or alternative graphics should be posted on the journal club's blog as our coverage of that paper, and a link sent to the corresponding author(s) of the papers covered.

Contributorship

Warning

This is not yet configured

This blog is a collaboratively authored work, and anyone who contributes to the journal club will be listed as a contributor on the Zenodo repository using the CRediT system. The Zenodo repository contains snapshots of this site and is updated every time we publish a new post, with a versioned DOI. This makes your contributions to this journal club citable.

For technical details of contributing to this site, please refer to the project README file.

Change Starts at Home

Papers by friends and colleagues who volunteer to have their work scrutinized in this fashion will be a good place to start. This is a fairly in-depth critique, and it often holds papers to a higher standard of reproducibility than their publishers expect. Our critiques are intended to be constructive and useful to the original authors, but friendly intent can be difficult to convey remotely, hence this initial focus.

If you have a pre-print that you would like to improve, having the journal club scrutinize it could be a great way to do that.

This blog aims to embody open-science and reproducibility best practices. It is built and hosted using open tools and reproducible compute environments.