Jasmine Obas URSP Presentation

Author(s): Jasmine Obas

Mentor(s): Kevin Moran, Volgenau School of Engineering

Abstract
Automating Bug Reporting in Mobile Applications

The purpose of this research is to provide a middle ground between developers and their users. Bug reporting is one of the most important channels for improving software and increasing user satisfaction. The underlying issue is that users without a technical background do not know how to communicate issues to developers. It is a language barrier, and we cannot simply require users to "become more technical"; they need assistance.

The problem we are solving is helping users develop bug reports through automation: a tool that walks them through the steps of creating a strong bug report by covering the three main components developers look for and need. Those components are Observed Behavior (OB), Expected Behavior (EB), and Steps to Reproduce (S2R). The tool we are developing is an automated chatbot, currently called BURT, that can communicate with the user and give them user-friendly ways to write and submit their report. To develop this chatbot, we are using machine learning techniques such as text semantics, image recognition, and more to collect data on bug reports from various applications. Identifying the weakest areas allows us to train the bot in those areas, along with user-testing feedback. We are also analyzing user reviews and bug reports for mobile applications, both to provide evidence of the problem and to establish what our chatbot needs to do.
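To illustrate the idea of tagging reports by component, here is a minimal rule-based sketch. The cue phrases and the function name are purely hypothetical stand-ins; the research described above uses trained machine learning models, not keyword matching.

```python
# Hypothetical cue phrases for each bug-report component (OB, EB, S2R).
# These lists are illustrative assumptions, not the project's actual features.
COMPONENT_CUES = {
    "OB": ["crashes", "freezes", "error", "doesn't work", "fails"],
    "EB": ["should", "expected", "supposed to"],
    "S2R": ["steps", "when i tap", "after i", "first", "then"],
}

def tag_components(report_text):
    """Return the set of components whose cue phrases appear in the text."""
    text = report_text.lower()
    return {name for name, cues in COMPONENT_CUES.items()
            if any(cue in text for cue in cues)}
```

For example, a review like "The app crashes when I tap save; it should store the file" would be tagged with all three components, whereas "The screen goes blank and the app freezes." only describes observed behavior, matching the imbalance the study found in real user reviews.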

Audio Transcript
Hi guys, my name is Jasmine Obas. I am currently a senior at George Mason University majoring in Applied Computer Science with a concentration in Computer Game Design. I'm a participant in the OSCAR URSP program 2022. Now, funny enough, just as long as my major, my research is in Automating Bug Reports in Mobile Applications. This is all done under the direction of my mentor, Dr. Kevin Moran. Now, before I go into the details of our research, I did want to explain what a bug report is for those of you who may not know. A bug report is on the developer side, and when we're talking mobile applications, we're talking about your favorite apps like Facebook, Instagram, TikTok. All of them have developers that get bug reports from people like you, the users. It's a place where you can report bugs and other issues, as well as areas of improvement, but unfortunately, bug reports for developers are just as pesky and annoying as mosquitoes at the summer barbecue, and standing motionless is not going to make these bugs go away. Of course, that's where our research comes in: we don't want to just automate bug reports, we want to automate good bug reports. So of course we had to figure out (chuckles) what makes a bad bug report, and we found out that developers are looking for three key components in a bug report: the observed behavior, the expected behavior, and the steps to reproduce. So we collected data sets of user reviews and bug reports and were able to tag them based on the components they used. We found observed behavior was used an excessive amount while the other two were not used nearly as much. We also used machine learning models to catch spelling mistakes and missing punctuation, even scoring the understandability and readability of the text with perplexity, which is basically a score from a machine learning model meant to represent human understanding.
So, if it gave a sentence a high score, that meant the sentence was completely surprising and the model had no idea what you were trying to say, while a low score means it was a well-structured sentence. We've been able to use this data to develop a bot; that's where the automation comes in. We have a bot named B.U.R.T., and we have a demo that we've been able to use with live users. They've been able to go through a full bug report process with the chatbot, and it's been able to get them to cover all three of the components that developers are looking for. Not only that, we actually added a nice friendly feature using images for the steps to reproduce: instead of typing out each step, users can get live images of the GUI components of the application they're making the report for and build step-by-step images to reproduce the problem. With that, I would like to thank again my mentor, Dr. Kevin Moran, and OSCAR for this opportunity. Thank you for listening in.
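The perplexity scoring described in the transcript can be sketched with a toy bigram language model. This is a simplified stand-in under stated assumptions (word-level bigrams with add-one smoothing), not the actual model used in the research; real systems use far larger pretrained language models.

```python
import math
from collections import Counter

def train_bigram_model(corpus):
    """Count bigrams and unigrams over a list of sentences, with
    <s>/</s> boundary markers. Returns (bigrams, unigrams, vocab_size)."""
    tokens = []
    for sentence in corpus:
        tokens += ["<s>"] + sentence.lower().split() + ["</s>"]
    return Counter(zip(tokens, tokens[1:])), Counter(tokens), len(set(tokens))

def perplexity(sentence, bigrams, unigrams, vocab_size):
    """Lower perplexity = the model finds the sentence more predictable."""
    words = ["<s>"] + sentence.lower().split() + ["</s>"]
    log_prob = 0.0
    for prev, cur in zip(words, words[1:]):
        # Add-one (Laplace) smoothing gives unseen bigrams a small,
        # nonzero probability instead of breaking the log.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
    # Perplexity is the exponentiated average negative log-probability.
    return math.exp(-log_prob / (len(words) - 1))
```

Trained on a handful of bug-report-style sentences, a fluent sentence scores a lower perplexity than a scrambled one, which is exactly the readability signal described above: high perplexity flags text the model finds surprising.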
