
Ranci Ren, John W. Castro, Adrián Santos, Sara Pérez-Soler, Silvia T. Acuña, and Juan de Lara. 2020. Collaborative Modelling: Chatbots or On-Line Tools? An Experimental Study. In Proceedings of the Evaluation and Assessment in Software Engineering (EASE '20). Association for Computing Machinery, New York, NY, USA, 260–269. DOI: https://doi.org/10.1145/3383219.3383246

Collaborative Modelling: Chatbots or On-Line Tools? An Experimental Study

Author: Ren, Ranci¹; Castro, John W.²; Santos, Adrián³; Pérez-Soler, Sara¹; Acuña, Silvia T.¹; de Lara, Juan¹
Organizations: ¹Universidad Autónoma de Madrid, Madrid, Spain
²Universidad de Atacama, Copiapó, Chile
³University of Oulu, Oulu, Finland
Format: article
Version: accepted version
Access: open
Online Access: PDF Full Text (PDF, 0.8 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2020110689447
Language: English
Published: Association for Computing Machinery, 2020
Publish Date: 2020-11-06
Description:

Abstract

Modelling is a fundamental activity in software engineering, and it is often performed in collaboration. For this purpose, on-line tools running on the cloud are frequently used. However, recent advances in Natural Language Processing have fostered the emergence of chatbots, which are increasingly used for all sorts of software engineering tasks, including modelling. To evaluate the extent to which chatbots are suitable for collaborative modelling, we conducted an experimental study with 54 participants in which we assessed the usability of a modelling chatbot called SOCIO, comparing it with the on-line tool Creately. We employed a within-subjects cross-over design with two sequences and two periods. Usability was assessed in terms of efficiency, effectiveness, satisfaction and quality of the results. We found that SOCIO saved time and reduced communication effort compared with Creately, and it satisfied users to a greater extent, while both tools achieved similar effectiveness. With respect to diagram quality, SOCIO outperformed Creately in terms of precision, whereas solutions built with Creately had better recall and perceived success. In terms of accuracy and error scores, both tools were similar.


ISBN Print: 978-1-4503-7731-7
Pages: 260–269
DOI: 10.1145/3383219.3383246
OADOI: https://oadoi.org/10.1145/3383219.3383246
Host publication: EASE '20: Proceedings of the Evaluation and Assessment in Software Engineering
Conference: Evaluation and Assessment in Software Engineering
Type of Publication: A4 Article in conference proceedings
Field of Science: 113 Computer and information sciences
Funding: Work funded by the Spanish Ministry of Science (project MASSIVE, RTI2018-095255-B-I00) and the R&D programme of Madrid (project FORTE, P2018/TCS-4314).
Copyright information: © 2020 Copyright held by the owner/author(s). This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in EASE '20: Proceedings of the Evaluation and Assessment in Software Engineering, https://doi.org/10.1145/3383219.3383246.