Reproducibility of experiments, one of the foundation stones of science, ought to be easy in computational science, given that computers are deterministic, and do not suffer from the problems of inter-subject and trial-to-trial variability that make reproduction of biological experiments more challenging.
In general, however, it is not easy: neither the gold-standard case of reproduction by an independent researcher using independent code, nor the much simpler case of an individual scientist or team replicating their own results some months or years later. In the second case, the reasons include the complexity of our code and our computing environments, and the difficulty of capturing, with existing tools such as spreadsheets, version control systems and paper notebooks, every essential piece of information needed to reproduce a computational experiment.
In other areas of science, particularly in applied science laboratories with high-throughput, highly-standardised procedures, electronic lab notebooks are in widespread use, but none of these tools seem to be well suited for tracking simulation experiments or computational analyses. In developing something like an electronic lab notebook for computational science, there are a number of challenges:
- different researchers have very different ways of working and different workflows: command line, GUI, batch-jobs (e.g. in supercomputer environments), or any combination of these;
- some projects are essentially solo endeavours, while others are collaborations, possibly distributed geographically;
- as much as possible should be recorded automatically; if recording critical details is left to the researcher, there is a risk that some will be omitted, particularly under the pressure of deadlines.
I present here a tool, Sumatra, for automated recording of: (i) the code that was run, (ii) any parameter files and command line options, (iii) the platform on which the code was run. Sumatra consists of a core library implemented as a Python package, together with a series of interfaces that build on top of this: currently a command-line interface and a web interface. Each of these interfaces enables (i) launching simulations/analyses with automated recording of provenance information; and (ii) managing a project: browsing, viewing, deleting simulations and analyses.
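To make the command-line interface concrete, a typical session might look like the following sketch. The command names (`smt init`, `smt configure`, `smt run`, `smt list`, `smt comment`, `smtweb`) follow the Sumatra documentation, but the script name, parameter file and exact options here are illustrative and may differ between versions:

```shell
# Create a new Sumatra project in the current directory,
# which should be a working copy under version control.
smt init MyProject

# Set the default executable and main script for this project
# (simulate.py is a hypothetical user script).
smt configure --executable=python --main=simulate.py

# Launch the computation; Sumatra records the code version,
# the parameter file, command-line options and platform details.
smt run default.param

# Review the recorded runs, and annotate the most recent one.
smt list
smt comment "baseline run before changing the model"

# Browse and manage the project through the web interface.
smtweb
```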
Sumatra is intended to be usable with any command-line-launched simulation or analysis tool, not just Python. If your code is written in Python, however, you can use the Sumatra package directly within your own code to enable provenance recording, then launch computations in your usual way. Sumatra is distributed as open-source software (http://neuralensemble.org/trac/sumatra/), so its functionality can be incorporated into other tools, and it is developed using a community model: anyone is welcome to get involved with its development.
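For the in-script route, the sketch below shows how provenance capture might be driven from within a Python program. The pattern (`load_project()`, `Project.new_record()`, `find_new_data()`) is taken from the Sumatra documentation, but the details should be treated as indicative and checked against the installed version; `main()` stands for the user's own computation:

```python
# Sketch of in-script provenance capture with the Sumatra API
# (assumes a Sumatra project already exists in this directory).
import time
from sumatra.projects import load_project
from sumatra.parameters import build_parameters

parameters = build_parameters("default.param")
project = load_project()  # locate the existing Sumatra project

# Create a record *before* running, capturing the code version,
# parameters and platform information.
record = project.new_record(parameters=parameters,
                            main_file="simulate.py",
                            reason="testing in-script capture")

start = time.time()
main(parameters)  # the actual computation (user-defined, hypothetical)
record.duration = time.time() - start

# Register any output files the computation created, then save.
record.output_data = record.datastore.find_new_data(record.timestamp)
project.add_record(record)
project.save()
```

The design point this illustrates is that recording brackets the computation: the record is created before the run, so context is captured even if the run itself fails.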