Reproducibility of results is a crucial issue in science. It has often been noted that reproducing even one's own results after a few months (a typical time scale of the referee process) can be challenging. This is because, in most cases, it is not sufficient to have the same version of the code: you also need precise knowledge of the input parameters that were used and of how the computation process was organized.
Since the standard methodology in science is based on _trial and error_, we typically end up with many datasets, of which only a few are ultimately released for publication, while the others serve as _experimental runs_. Tracking the settings that were used for the various runs then becomes a problem. W-SLDA implements an automatic framework that allows results to be reproduced: the generated **results** are always accompanied by a **reproducibility pack**: