Author(s): Kristen Goebel, Jasmine Obas
Mentor(s): Kevin Moran, Computer Science
Abstract
Many bug reports written by end users lack key information, such as the steps needed to reproduce the bug. Without this information, it can be difficult to reproduce and assess the reported bug, increasing the time developers spend resolving bugs and leaving some reported problems unfixed because they cannot be reproduced. Our work this summer focused on analysis for BURT, a bug reporting chatbot that aims to improve the quality of bug reports. We used various tools and metrics to assess the quality of existing bug reports and app reviews, creating a baseline for the current quality of reports and highlighting areas for improvement. In particular, we used a tool that applies neural sentence classification and linear support vector machines to identify whether important information is included in a given bug report. We also analyzed the readability of bug report prose through three complementary metrics: spelling mistakes, grammar mistakes, and language regularity. Additionally, we assisted with collecting data for and setting up a user study that will evaluate BURT against existing bug reporting systems. Our research this summer lays the groundwork for illustrating the benefits that interactive bug reporting systems can have for both end users and developers, leading to more informative bug reports for developers with low effort required from end users.
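To give a rough sense of how sentence-level classification of bug reports can work, the sketch below pairs a linear SVM with simple TF-IDF sentence features standing in for neural embeddings, so the example stays self-contained. The labels, training sentences, and report text are hypothetical illustrations, not the actual tool's data or taxonomy.

```python
# Minimal sketch: classify each sentence of a bug report by the kind of
# information it carries, then flag reports missing reproduction steps.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled sentences (labels are illustrative: S2R = steps to reproduce,
# OB = observed behavior, EB = expected behavior).
sentences = [
    "Tap the login button, then rotate the screen.",   # S2R
    "The app crashes with a blank screen.",            # OB
    "The profile page should load after login.",       # EB
    "Open settings and disable notifications.",        # S2R
]
labels = ["S2R", "OB", "EB", "S2R"]

# TF-IDF features feed a linear SVM; a neural variant would swap in
# sentence embeddings as the feature representation.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)

# Predict a label per sentence of a new report; a report with no "S2R"
# prediction would be flagged as lacking reproduction steps.
report = ["The screen goes black.", "Press the back button twice."]
print(clf.predict(report))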
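Two of the readability metrics can likewise be sketched with stand-in implementations. Below, spelling mistakes are approximated by misses against a tiny hand-made frequency table (a real analysis would use a dictionary or large corpus), and language regularity by the average commonness of a report's words; grammar checking, which would require a rule-based tool, is omitted. All word lists and values here are illustrative assumptions.

```python
# Minimal sketch of spelling-error counting and a language-regularity score.
from collections import Counter
import re

# Hypothetical word-frequency table; a real study would build this from
# a large reference corpus of bug reports or general English text.
REFERENCE_VOCAB = Counter({
    "the": 1000, "app": 120, "crashes": 40, "when": 300,
    "i": 500, "tap": 30, "button": 60, "screen": 50,
})

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def spelling_errors(text: str) -> int:
    # Words absent from the reference vocabulary are treated as misspelled.
    return sum(1 for w in tokenize(text) if w not in REFERENCE_VOCAB)

def language_regularity(text: str) -> float:
    # Average relative frequency of known words: higher means the report
    # uses more common, "regular" language.
    words = tokenize(text)
    total = sum(REFERENCE_VOCAB.values())
    known = sum(REFERENCE_VOCAB[w] / total for w in words if w in REFERENCE_VOCAB)
    return known / max(len(words), 1)

print(spelling_errors("The ap crashes when I tap the buton"))        # -> 2
print(round(language_regularity("The app crashes when I tap the button"), 4))
```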