Empirical Studies with Humans: Student Participants, Crowdsourcing and How It All Affects Our Oracle Assessment and Improvement Study
Empirical studies, often in the form of controlled experiments, have been widely adopted in software engineering research as a way to evaluate the merits of new software engineering approaches. However, controlled experiments involving human participants are still rare, and when they are conducted, some suffer from serious validity issues. In this seminar I will discuss two concerns in empirical studies with humans (based on the papers below):
- the realism of results obtained with student participants and the applicability of these results to the software industry;
- the opportunities and challenges of using crowdsourcing.
We are currently conducting experiments with humans for our Oracle Assessment and Improvement approach. I will therefore also present how the above-mentioned forms of participant recruitment affect the design of our experiments, along with the current state of our results.
- Iflaah Salman, Ayse Tosun Misirli, Natalia Juristo: "Are students representatives of professionals in software engineering experiments?", ICSE (2015).
- Thomas D. LaToza, André van der Hoek: "Crowdsourcing in Software Engineering: Models, Motivations, and Challenges", IEEE Software, Volume 33, Issue 1 (2016).
- Kathryn T. Stolee, Sebastian Elbaum: "Exploring the Use of Crowdsourcing to Support Empirical Studies in Software Engineering", ESEM (2010).