Using Semantic Similarity in Crawling-based Web Application Testing (IEEE 2017).

ABSTRACT:

To automatically test web applications, crawling-based techniques are usually adopted to mine the behavior models, explore the state spaces, or detect the violated invariants of the applications. However, their broad use is limited by the manual configurations required for input value selection, GUI state comparison and clickable detection. In existing crawlers, these configurations are usually string-matching rules that look for tags or attributes of DOM elements, and they are often application-specific. Moreover, in input topic identification, it can be difficult to determine which rule suggests a better match when several rules match an input field to more than one topic. This paper presents a natural-language approach based on semantic similarity to address the above issues. The proposed approach represents DOM elements as vectors in a vector space formed by the words used in the elements. The topics of input fields encountered during crawling can then be inferred from their similarities with those in a labeled corpus. Semantic similarity can also be applied to suggest whether a GUI state is newly discovered and whether a DOM element is clickable, under an unsupervised learning paradigm. We evaluated the proposed approach on input topic identification with 100 real-world forms and on GUI state comparison with real data from industry. Our evaluation shows that the proposed approach performs comparably to or better than the conventional techniques. Experiments on input topic identification also show that the accuracy of the rule-based approach can be improved by up to 22% when integrated with our approach.

INTRODUCTION:

Web applications nowadays play important roles in our financial, social and other daily activities. Testing modern web applications is challenging because their behaviors are determined by the interactions among programs written in different languages and running concurrently in the front-end and the back-end. To avoid dealing with these complex interactions separately, test engineers treat the application as a black box and abstract the DOMs (Document Object Models) presented to the end user in the browser as states. The behaviors of the application can then be modeled as a state transition diagram on which model-based testing can be conducted. Since manual state exploration is often labor-intensive and incomplete, crawling-based techniques [1], [5], [6], [7], [11], [19], [27], [28], [30], [33] are introduced to systematically and automatically explore the state spaces of web applications. Although such techniques automate the testing of complicated web applications to a great extent, their broad use is limited by the manual configurations required for each application under test (AUT). First, many web applications need specific values in their input fields in order to access the pages and functions behind the current forms. To achieve proper coverage of the state space of the application, a user of existing crawlers has to set up rules in advance for identifying the topics of encountered input fields, so that appropriate input values can be fed at run time. Typical rules are string-matching based, mapping the DOM representations of input fields to their topics. For example, Fig. 1 illustrates an input field requesting a last name, i.e., a value of topic last_name. To identify the topic of the input field, the values of its attributes such as id and name have to be compared with a feature string “last_name”, and an appropriate value can then be determined from the identified topic.
Because a web page may request input values of many different topics, such as email, URL and password, this manual configuration has to be repeated for every topic.
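To make the limitation concrete, the rule-based scheme described above can be sketched as follows. The regular expressions, topic names and canned values here are illustrative stand-ins for the kind of application-specific rules a crawler user would configure by hand, not rules taken from any particular crawler:

```python
import re

# Hypothetical string-matching rules: each maps a pattern over an input
# field's attribute values to a topic and a canned input value.
RULES = [
    (re.compile(r"last[_\-]?name", re.I), "last_name", "Smith"),
    (re.compile(r"e-?mail", re.I), "email", "user@example.com"),
    (re.compile(r"pass(word)?", re.I), "password", "P4ssw0rd!"),
]

def identify_topic(attributes):
    """Return (topic, value) for the first rule matching any attribute value."""
    for pattern, topic, value in RULES:
        for attr_value in attributes.values():
            if pattern.search(attr_value):
                return topic, value
    return None, None

# The field of Fig. 1: attributes id and name both mention "last_name".
field = {"id": "last_name", "name": "last_name", "type": "text"}
print(identify_topic(field))  # -> ('last_name', 'Smith')
```

Every new topic, and often every new application, requires extending this rule list by hand, which is exactly the configuration burden the paper targets.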

APPROACH:

A novelty of this paper, compared with other crawling-based techniques for web application testing, is that we consider not only the attributes but also the nearby labels or descriptions of DOM elements for input topic identification. Algorithm 1 shows how this is achieved. First, we specify DOM attributes relevant to input topic identification, such as id, name, placeholder and maxlength, in an attribute list, and the values of matched attributes of the DOM element are put into the feature vector (lines 2 to 4). Moreover, to find the corresponding descriptions, we search the siblings of the DOM element for tags such as span and label in a tag list and put the texts enclosed by those tags into the feature vector (lines 11 to 18). If no such tags are found, the search continues on the DOM element's parent, recursively, several times (line 20). In addition, we apply several normalizations, such as special-character filtering and lowercase conversion, to the words in the extracted feature vector.
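The steps above can be sketched as follows. This is a minimal reconstruction of Algorithm 1, not the paper's exact code: the attribute list, label-tag set, and the number of ancestor levels to climb (MAX_CLIMB) are assumed parameter choices, and a well-formed XML snippet stands in for a real DOM:

```python
import re
import xml.etree.ElementTree as ET

ATTRIBUTES = ["id", "name", "placeholder", "maxlength"]
LABEL_TAGS = {"span", "label"}
MAX_CLIMB = 2  # assumed bound on how far the sibling search climbs

def normalize(text):
    # Special-character filtering and lowercase conversion.
    return [w for w in re.split(r"[^a-z0-9]+", text.lower()) if w]

def extract_features(root, element):
    # ElementTree has no parent pointers, so build a child-to-parent map.
    parent_of = {child: parent for parent in root.iter() for child in parent}
    features = []
    # 1) Values of the attributes of interest (lines 2 to 4 of Algorithm 1).
    for attr in ATTRIBUTES:
        if attr in element.attrib:
            features += normalize(element.attrib[attr])
    # 2) Texts of label-like siblings (lines 11 to 18), climbing to
    #    ancestors if none are found at the current level (line 20).
    node, climbed = element, 0
    while node in parent_of and climbed <= MAX_CLIMB:
        parent = parent_of[node]
        labels = [s for s in parent if s is not element and s.tag in LABEL_TAGS]
        if labels:
            for sib in labels:
                features += normalize(sib.text or "")
            break
        node, climbed = parent, climbed + 1
    return features

form = ET.fromstring(
    '<form><div><label>Last name</label>'
    '<input id="ln" name="last_name" type="text"/></div></form>'
)
field = form.find(".//input")
print(extract_features(form, field))  # -> ['ln', 'last', 'name', 'last', 'name']
```

The resulting feature vector mixes attribute words with nearby description words, which is what lets the semantic-similarity step work even when attributes alone are uninformative.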

IMPLEMENTATION:

We implemented the proposed method in Python 2.7. A Python library, gensim [23], is used for vector space related operations such as vector transformations and similarity calculation. Interaction with web applications is supported by Selenium WebDriver [31], and BeautifulSoup [9] is used to parse and manipulate DOMs.
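The vector-space operation at the core of the approach can be illustrated without gensim. The sketch below builds bag-of-words vectors over feature words and scores candidate topics by cosine similarity against a labeled corpus; the corpus entries are illustrative, and a real deployment would use gensim's transformation and similarity facilities as the paper does:

```python
import math
from collections import Counter

# Hypothetical labeled corpus: topic -> feature words seen in training forms.
corpus = {
    "last_name": ["last", "name", "surname", "family", "name"],
    "email": ["email", "mail", "address", "e", "mail"],
}

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def infer_topic(features):
    """Return the corpus topic most similar to the extracted feature words."""
    return max(corpus, key=lambda topic: cosine(features, corpus[topic]))

print(infer_topic(["ln", "last", "name"]))  # -> 'last_name'
```

Unlike the string-matching rules, this scoring degrades gracefully: a field whose attributes match no rule can still be assigned the topic whose corpus vector it most resembles.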

EVALUATION:

To assess the efficacy of the proposed approach, we conducted two experiments, on input topic identification and GUI state comparison, respectively. In the first experiment, we collected and labeled input fields from 100 real-world forms and divided them into training and validation data to evaluate the performance of the proposed and rule-based approaches. In the second experiment, we used GUI states collected from real tests at QNAP, a software company in Taiwan, to evaluate the effectiveness of the proposed technique and of different abstraction mechanisms.

CONCLUSION:

In this paper, we proposed a natural-language technique to improve the effectiveness of crawling-based web application testing. By considering semantic similarities between a training corpus and a DOM element to be inferred, input topic identification, GUI state comparison and clickable detection can be performed with the proposed approach. In the future, we plan to evaluate how the proposed techniques impact overall crawling efficacy with more data and other topic model alternatives such as LDA. Moreover, the proposed feature extraction algorithm could be improved with more information about DOM elements such as comments.