Results reproducibility

Introduction

Results reproducibility is a critical issue in science. It has already been noted that reproducing your own results even after a few months (the typical time scale of the referee process) may be challenging. In most cases, having the same version of the code is not sufficient: you also need precise knowledge of the input parameters that were used, and the same input data must be provided. Since the standard methodology in science is based on trial and error, the researcher typically ends up with many datasets. Only a few of them are released for publication, while the others serve as experimental runs. Under such conditions, tracking the changes introduced to the codes during the research process becomes problematic. W-SLDA implements a methodology that does this automatically and allows for reproduction of the results (up to machine precision). Namely, the generated results are always accompanied by a reproducibility pack, which contains the complete information needed to reproduce them.

[figure: reproducibility]

For the meaning of each file, see here.

W-SLDA mechanism of results reproducibility

The developers of the W-SLDA Toolkit recognize the need for built-in support that simplifies the process of reproducing results. To meet this requirement, the following mechanism, called a reproducibility pack, has been implemented:

  1. Each file generated by the W-SLDA Toolkit contains in its header basic information about the code version that was used; for example, the header of a wlog file may look like:
# CREATION TIME OF THE LOG: Sun Feb  7 15:29:44 2021
# EXECUTION COMMAND       : ./st-wslda-2d input.txt
# CODE NAME               : "W-SLDA-TOOLKIT"
# VERSION OF THE CODE     : 2021.01.27
# COMPILATION DATE & TIME : Feb  7 2021, 15:19:57
  2. When executing the code, all user-definable files are recreated and attached to the output set. For example, if the user sets outprefix to test, then among the output files there will be:
test_input.txt             # input file used for calculations
test_predefines.h          # predefines selected at compilation stage
test_problem-definition.h  # user's definition of the problem 
test_logger.h              # user's logger
test_machine.h             # machine configuration that was used in calculations
test.stdout                # standard output generated by the code
test_checkpoint.dat.init   # checkpoint file that was used as input (st codes only)
test_extra_data.dat        # Binary file with the extra_data array (if provided)
test_reprowf.tar           # reproducibility pack for restoring wave-functions that were used as input (td codes only)

This provides the full information required to reproduce your results (up to machine precision).

Good practices

  1. For each project, use a separate folder; do not mix results from various projects in the same folder. Use a meaningful name for the folder.
  2. Use meaningful outprefix names.
  3. Do not modify the output files, except for the wtxt file. This file is designed to store various metadata, including your comments. The wtxt file is easy to regenerate if you accidentally destroy it, which is not the case for the other files. Add your comments/remarks/etc. as comment lines starting with #.
  4. When copying results to a new location/machine, copy all files associated with the run. The simplest way is to execute the command (for more info, see here):
scp outprefix* new_location
  5. When printing messages to stdout, use the functions:
// prints to stdout and to file outprefix.stdout
void wprintf( const char * format, ... );       

// prints to stream (like stdout or stderr) and to file outprefix.stdout         
void wfprintf(FILE *stream,  const char * format, ... );

These are analogs of printf and fprintf, with the difference that the message is also appended to outprefix.stdout.
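For illustration, here is a minimal usage sketch; the helper report_iteration and its variable names are hypothetical, and the declarations simply restate the signatures listed above (in a real setup they come from the W-SLDA headers):

#include <stdio.h>

// Declarations restating the signatures above; normally provided by the W-SLDA headers.
void wprintf(const char *format, ...);
void wfprintf(FILE *stream, const char *format, ...);

// Hypothetical helper: every message reaches both the console and
// outprefix.stdout, so the log stored with the reproducibility pack stays complete.
void report_iteration(int iter, double energy, int converged)
{
    // Progress message: printed to stdout and appended to outprefix.stdout.
    wprintf("# iteration=%d: energy=%f\n", iter, energy);

    // Diagnostic message: printed to stderr and also appended to outprefix.stdout.
    if (!converged)
        wfprintf(stderr, "# WARNING: iteration %d has not converged yet\n", iter);
}

This way, even messages routed to stderr end up in the outprefix.stdout file that is shipped with the reproducibility pack.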

To learn more about good practices related to results reproducibility, see:

  • Creating Reproducible Data Science Projects