Comparing the results of replications in software engineering
Authors: Santos, Adrian¹; Vegas, Sira²; Oivo, Markku¹
1University of Oulu, Finland
2Universidad Politécnica de Madrid, Spain
Persistent link: http://urn.fi/urn:nbn:fi-fe2021051429836
Publish Date: 2022-02-02
Context: It has been argued that software engineering replications are useful for verifying the results of previous experiments. However, there is not yet an agreed way to check whether results hold across replications. In addition, some authors suggest that replications that do not verify the results of previous experiments can be used to identify the contextual variables causing the discrepancies.
Objective: To study how to assess the (dis)similarity of the results of software engineering replications when they are compared to verify the results of previous experiments, and to understand how to identify whether contextual variables are influencing results.
Method: We run simulations to learn how different ways of comparing replication results behave when verifying the results of previous experiments. We illustrate how to deal with context-induced changes. To do this, we analyze three groups of replications from our own research on test-driven development and testing techniques.
Results: The direct comparison of p-values and effect sizes does not appear to be suitable for verifying the results of previous experiments and examining the variables possibly affecting the results in software engineering. Analytical methods such as meta-analysis should be used to assess the similarity of software engineering replication results and identify discrepancies in results.
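As an illustrative sketch of the meta-analytic approach the abstract refers to (the technique is standard inverse-variance pooling; the effect sizes, variances, and study labels below are hypothetical and not taken from the paper), a fixed-effect meta-analysis combines a baseline experiment and its replications into one pooled estimate with a confidence interval, rather than comparing p-values study by study:

```python
# Hypothetical example: fixed-effect, inverse-variance-weighted
# meta-analysis of a baseline experiment and two replications.
# All numbers are made up for illustration.
import math

# (effect size d, variance of d) per study: baseline, replication 1, replication 2
studies = [(0.45, 0.04), (0.10, 0.05), (0.30, 0.03)]

# Each study is weighted by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))             # standard error of pooled effect
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval

print(f"pooled d = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Under this view, each replication shifts the pooled estimate and narrows (or widens) its interval, which matches the paper's conclusion that a baseline result is one piece of evidence among many rather than a fixed target to reproduce. A random-effects model would additionally account for between-study heterogeneity.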
Conclusion: The results achieved in baseline experiments should no longer be regarded as a result that needs to be reproduced, but as a small piece of evidence within a larger picture that only emerges after assembling many small pieces to complete the puzzle.
Journal: Empirical Software Engineering
Type of Publication: A1 Journal article – refereed
Field of Science: 113 Computer and information sciences
This research was developed with the support of project PGC2018-097265-B-I00, funded by FEDER/Spanish Ministry of Science and Innovation – Research State Agency.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature 2021. This is a post-peer-review, pre-copyedit version of an article published in Empirical Software Engineering. The final authenticated version is available online at: https://doi.org/10.1007/s10664-020-09907-7.