# Introduction
Results reproducibility is a critical issue in science. It has already been noted that, in many cases, reproducing your own results even after a few months (the typical time scale of the referee process) can be challenging. Having the same code version is usually not sufficient: you also need precise knowledge of the input parameters that were used, and the same input data must be provided. Since research typically proceeds by trial and error, a researcher usually ends up with many datasets, only a few of which are released for publication, while the rest serve as experimental runs. Under such conditions, tracking the changes introduced to the codes during the research process becomes problematic. W-SLDA implements a methodology that does this automatically and allows the results to be reproduced (up to machine precision). Namely, the generated **results** are always accompanied by a **reproducibility pack**, which contains the complete information needed to reproduce them.
![reproducibility](uploads/8ee07ac131aa636f0b6e041cc0948cac/reproducibility.png)
For the meaning of each file, see [here](https://gitlab.fizyka.pw.edu.pl/wtools/wslda/-/wikis/Output%20files).
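To make the idea concrete, here is a minimal conceptual sketch of what such a pack must preserve: the exact input file, the code revision, and a checksum of the input data, stored next to the results. This is only an illustration, not the actual W-SLDA implementation; the file names (`input.txt`, `manifest.json`), the directory layout, and the assumption that the code version is tracked with git are hypothetical placeholders.

```python
# Conceptual sketch only (NOT the W-SLDA implementation).
# File names and directory layout below are hypothetical placeholders.
import hashlib
import json
import shutil
import subprocess
from pathlib import Path


def make_reproducibility_pack(run_dir: Path, input_file: Path, code_dir: Path) -> Path:
    """Bundle the information needed to reproduce a run next to its results."""
    pack_dir = run_dir / "reproducibility-pack"
    pack_dir.mkdir(parents=True, exist_ok=True)

    # 1. Preserve the exact input parameters that produced the results.
    shutil.copy2(input_file, pack_dir / input_file.name)

    # 2. Record which code version was used (assumes the code lives in a git repository).
    commit = subprocess.run(
        ["git", "-C", str(code_dir), "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # 3. Store a checksum so the input data can be verified later.
    digest = hashlib.sha256(input_file.read_bytes()).hexdigest()

    manifest = {
        "code_commit": commit,
        "input_file": input_file.name,
        "input_sha256": digest,
    }
    (pack_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return pack_dir


# Example call with hypothetical paths:
# make_reproducibility_pack(Path("runs/run-001"), Path("input.txt"), Path("wslda"))
```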
# W-SLDA mechanism for results reproducibility
The developers of the W-SLDA Toolkit recognize the need for built-in support that simplifies the process of reproducing results. To meet this requirement, the following mechanism, called the reproducibility pack, has been implemented:
......