# Introduction

The priorities for W-SLDA Toolkit developers are as follows (from highest to lowest):

1. Correctness of computation.
2. Performance.
3. User-friendly interface.

The highest priority is computation correctness, which is the fundamental issue.

# W-SLDA Toolkit quality

Within the W-SLDA Toolkit, we apply the following strategies to ensure a high level of confidence in the computation correctness:

## Level 1: Inspection of the results

It is the most standard method (and in many groups, the only one). Namely,<br>

_results are correct if they look correct_.<br>

While it may look like a rather weak method of testing computation correctness, in practice it works very well. Typically, we know what to expect as the results (more or less), at least at the level of qualitative behavior. Physics is equipped with various methods, experiments, etc., and making extensive comparisons with them leads to the identification of discrepancies (if present). From the point of view of the implementation, the W-SLDA Toolkit delivers extensive support for results analysis.

Actions taken within the W-SLDA Toolkit to support the level 1 quality checks:

* [W-data format](https://gitlab.fizyka.pw.edu.pl/wtools/wdata): a conceptually simple data format that allows a variety of tools/languages to be used for data analysis.
* [Integration with VisIt](https://gitlab.fizyka.pw.edu.pl/wtools/wdata/-/wikis/Visit-integration): an advanced platform for data visualisation & analysis.
* [Auxiliary tools & Extensions](https://gitlab.fizyka.pw.edu.pl/wtools/wslda/-/wikis/home#extensions): we provide various examples of tested codes for data analysis.

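As a toy illustration of such an inspection (this is not toolkit code; the file name, grid size, and storage layout are assumptions made for the example), a Level 1 style sanity check of a raw binary result field could look like:

```python
# Hypothetical Level-1 sanity inspection of a binary result file.
# Assumptions (NOT W-SLDA specifics): a density field stored as raw
# little-endian float64 values on an nx*ny grid in "density.bin".
import numpy as np

nx, ny = 8, 8

# For a self-contained demo, first write a synthetic "result" to disk.
rho = 1.0 + 0.1 * np.random.default_rng(0).standard_normal((nx, ny))
rho.astype("<f8").tofile("density.bin")

# Load the raw data and apply simple checks a researcher might run by eye.
data = np.fromfile("density.bin", dtype="<f8").reshape(nx, ny)
assert np.isfinite(data).all()   # no NaN/Inf anywhere in the field
assert (data > 0).all()          # a particle density must be positive
print(f"mean density = {data.mean():.3f}")
```

Checks of this kind only catch qualitative problems (wrong sign, divergence, missing data), which is exactly the limitation addressed by the next level.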
## Level 2: Internal tests

While Level 1 actions are sufficient to detect most of the errors that lead to qualitatively incorrect results, they are not sufficient to assure the correctness of the computation at the quantitative level. The next level of testing is<br>

_checking the correctness of the computation for cases where solutions are known_.<br>

Developers of the W-SLDA Toolkit constantly seek cases that can be used as test cases.

Actions taken within the W-SLDA Toolkit to support the level 2 quality checks:

* [Testsuite](https://gitlab.fizyka.pw.edu.pl/wtools/wslda/-/wikis/Testsuite): an automated system for executing test cases that checks the correctness of computation at the quantitative level. The automatic tests are executed routinely after each new commit to the main engine of the code.

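As a minimal sketch of the idea behind such tests (this is not the actual Testsuite; the problem, units, and tolerance are illustrative assumptions), one compares a numerical result against an analytically known solution and applies a quantitative pass/fail criterion:

```python
# Level-2 style check: numerical vs. analytically known solution.
# Example problem: ground-state energy of a particle in a box of
# length L (units hbar = m = 1), where E_n = n^2 * pi^2 / (2 L^2).
import numpy as np

L, N = 1.0, 500                  # box length, interior grid points
dx = L / (N + 1)

# Finite-difference Hamiltonian -1/2 d^2/dx^2 with hard walls:
# diagonal 1/dx^2, off-diagonal -1/(2 dx^2).
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E_num = np.linalg.eigvalsh(H)[0]
E_exact = np.pi**2 / (2 * L**2)
rel_err = abs(E_num - E_exact) / E_exact
print(f"relative error = {rel_err:.2e}")
assert rel_err < 1e-4            # quantitative pass/fail criterion
```

The essential point is the final assertion: the test fails automatically when the computed value drifts outside the tolerance, instead of relying on a human to notice.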
## Level 3: Open-source & Transparency

Review by external researchers is a crucial aspect of science. It is standard practice for scientific publications. In the context of scientific codes, it is less popular, since it requires making your code available to others. However, here too we observe that the open-source/open-access movement gets stronger with time in computational physics. It is the next-level method that supports the implementation of high-quality code. It stems from Linus's law:<br>

_given enough eyeballs, all bugs are shallow_.

Actions taken within the W-SLDA Toolkit to support the level 3 quality checks:

* The [W-SLDA Toolkit](https://wslda.fizyka.pw.edu.pl/) is open-source code.
* [Results reproducibility](https://gitlab.fizyka.pw.edu.pl/wtools/wslda/-/wikis/Results%20reproducibility): Together with publications, we deliver reproducibility packs. Using them, an independent researcher may inspect the computation process from beginning to end. For example, see [arXiv:2201.07626](https://arxiv.org/abs/2201.07626).

# Links to interesting articles related to code & results quality

* [Does your code stand up to scrutiny?](https://www.nature.com/articles/d41586-018-02741-4)