Mohsen Vakilian's Blog

September 27, 2007

Using a Goal-driven Approach to Generate Test Cases for GUIs

Filed under: testing — mohsenvakilian @ 5:44 pm

The title of this post is the title of a paper we’ve been assigned for cs527, and the following is my report on it.

The point that I enjoyed about this work was the notion of modeling hierarchical structures in the GUI. That is, the tool extracts a high-level model of the GUI and refines it to derive the actual test cases. This makes regression testing much easier. I had this problem with Selenium, where a small change in the GUI would break a lot of test cases. With hierarchical structures, however, one can still benefit from the higher-level parts of the model, which don’t change with a small change to the GUI.
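To make the idea concrete, here is a minimal sketch of the plan-refinement separation, with invented operator names and event strings (this is not the paper's actual tool): test cases are written as sequences of abstract operators, and only the refinement table has to change when widget details change.

```java
import java.util.*;

public class PlanRefinement {
    // High-level plan: stable across small GUI changes.
    static final List<String> PLAN = List.of("OpenFile", "EditText", "SaveFile");

    // Mapping from abstract operators to concrete event sequences;
    // only this table needs updating when widget details change.
    static List<String> refine(String op) {
        switch (op) {
            case "OpenFile": return List.of("click File", "click Open", "type name.txt", "click OK");
            case "EditText": return List.of("click editor", "type hello");
            case "SaveFile": return List.of("click File", "click Save");
            default: throw new IllegalArgumentException("unknown operator: " + op);
        }
    }

    public static void main(String[] args) {
        // The concrete test case is derived by refining the abstract plan.
        List<String> testCase = new ArrayList<>();
        for (String op : PLAN) testCase.addAll(refine(op));
        System.out.println(testCase);
    }
}
```

If a dialog's layout changes, only `refine` is edited; every test case written against `PLAN` survives untouched.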

One bad point about the work is that the test designer has to go through the tedious process of writing pre- and post-conditions for the operators derived by the tool.

One project idea that came to mind was to develop a GUI tester using the method presented in this paper for a specific GUI toolkit such as Qt. The problem with general GUI testers such as the one introduced in this paper is that they don’t know much about the specific structure of the GUI components under test and might not be able to trigger all the capabilities of the GUI. If we restrict ourselves to a specific GUI toolkit, the tester may be able to perform some static analysis to better understand how the GUI is supposed to behave.

Around Figure 2, the paper talks about combining parts of two figures to obtain a new figure. Can the tool verify that the goal state has been achieved? Can it analyze figures?!

How powerful is the tool’s language for writing pre- and post-conditions of operators?


Filed under: general — mohsenvakilian @ 5:22 pm

As you might have noticed from my recent posts, I’ve begun writing posts about several papers. Most of these reports are for the cs527 class I’m taking this semester at UIUC. We have to write reports for the papers assigned to us in this class, and I decided to publish my reports as blog posts. It’s a good way to keep the blog active!

We have to mention good points and bad points about each paper in addition to our overall opinion of it. This is my first experience of criticizing research papers, and I don’t mean to condemn the authors. However, I have to think about the bad points of a paper as well as its good points.

September 20, 2007

EXE: A Symbolic Executor

Filed under: testing — mohsenvakilian @ 7:07 pm

EXE is introduced in a paper titled EXE: Automatically Generating Inputs of Death.

It’s interesting to see that EXE could find bugs in widely used and well-tested libraries and programs such as FreeBSD’s BPF, udhcpd, pcre, and famous Linux file systems such as ext2, ext3, and JFS. Besides, unlike other bug-finding tools, EXE doesn’t generate any false positives.

While reading the paper, I had trouble figuring out the exact scope of EXE, that is, what types of programs it can check for bugs. I think it cannot find bugs in non-deterministic programs, as the path that EXE finds and the concrete values that STP generates may not trigger the bug again in future executions. However, EXE has been used to find bugs in large programs, which are probably multi-threaded!

Another problem was that it wasn’t stated explicitly what the bounds are on the size of the programs EXE can check.

One idea to improve their work would be to eliminate make_symbolic() calls so that the programmer won’t have to change the code he wants to test. That is, the tool itself should figure out the inputs to the program and make those inputs symbolic. Another idea might be to compare EXE with RANDOOP to see whether the EXE authors’ claims about the shortcomings of random testing are valid or not.

My question is whether the conditions used for terminating EXE can be improved or not. Currently, EXE checks for (1) null or out-of-bounds memory references or (2) a division or modulo by zero. If we find more conditions signaling errors, we might be able to find more bugs.
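As a toy illustration of what “generating inputs of death” means, here is a sketch in Java (EXE itself works on C code, and uses path constraints and the STP solver rather than the brute-force enumeration used here; the function name and bug are made up):

```java
public class InputsOfDeath {
    // Function under test with a bug: division by zero when x == 7,
    // one of the error conditions EXE checks for.
    static int f(int x) {
        int d = x - 7;
        return 100 / d;
    }

    // Toy stand-in for symbolic execution plus a constraint solver:
    // enumerate inputs and return the first one that triggers an error.
    // EXE instead solves the path constraint (x - 7 == 0) directly.
    static Integer findCrashingInput() {
        for (int x = 0; x < 100; x++) {
            try {
                f(x);
            } catch (ArithmeticException e) {
                return x;   // the "input of death"
            }
        }
        return null;   // no crashing input found in the range
    }

    public static void main(String[] args) {
        System.out.println("input of death: " + findCrashingInput());
    }
}
```

The point of symbolic execution is that the solver derives the crashing input from the branch conditions along a path, rather than stumbling on it by enumeration as this sketch does.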

September 18, 2007

Refactoring Using Type Constraints

Filed under: refactoring — mohsenvakilian @ 7:02 pm

Several refactorings of Eclipse are based on the results of this paper by Frank Tip.

Although the main idea behind the paper is simple, it definitely takes a lot of effort to apply it to all the subtleties and various combinations of Java language elements. I think the implementation and wide use of these results is the best evaluation of the work. A good point is that he has extended his work to cover various complex uses of Java generics.
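The core of the idea, as I understand it, is a constraint like: a declaration’s type may be generalized to a supertype T only if every member accessed on the variable is available on T. A small sketch (my own example, not taken from the paper):

```java
import java.util.ArrayList;
import java.util.List;

public class GeneralizeDeclaredType {
    // Type-constraint check: the only members used on 'names' here are
    // add() and size(), both declared in the List interface, so a caller's
    // declaration can safely be generalized from ArrayList<String> to
    // List<String>. Calling an ArrayList-only method such as trimToSize()
    // would add a constraint forcing the declaration to stay ArrayList.
    static int countAfterAdds(List<String> names) {
        names.add("a");
        names.add("b");
        return names.size();
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();   // generalized declaration
        System.out.println(countAfterAdds(names)); // prints 2
    }
}
```

The refactoring tool’s job is to collect such constraints over the whole program and pick the most general type that satisfies all of them.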

A bad point about the paper might be that it is not clear whether he has considered all the constructs of Java or has ignored some. For example, it is not easy to find out whether or not he supports rarely used constructs such as anonymous classes, or how the final keyword’s effect on subclassing is handled. It is not even clear whether his method supports the extract interface or generalize declared type refactorings on generics. Maybe he has discussed the limitations of his work in other papers.

An idea that came to mind while reading the paper is whether or not these refactorings are useful for dynamically typed languages such as Ruby. As these refactorings just manipulate class hierarchies and types, it seems that they won’t be useful for dynamic languages. However, with the growth of dynamic languages such as JRuby and Groovy, which are compatible with Java, these refactorings might need further consideration.

As I said before, some of Eclipse’s refactorings are based on the methods introduced in this paper. My question is how other IDEs such as NetBeans and IntelliJ IDEA have dealt with the issue. Have they used the same concepts? If not, how do their methods compare with the method presented in this paper?

September 13, 2007

Java Path Finder

Filed under: testing — mohsenvakilian @ 7:01 pm

Recently, I read a paper on JPF (Model Checking Programs), which was the topic of discussion in our CS 527 class. JPF is basically a system to verify executable Java bytecode programs. This post contains some of my thoughts on the paper.

One good point that interested me while reading the paper was the strong motivation. As the authors mention, the formal methods community should consider analyzing programs written in real programming languages, rather than just their own special-purpose languages. The basic idea of how JPF works is also interesting to me: JPF first runs the program in simulation mode to find the critical threads and then applies the model checker to them.
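To illustrate what a model checker buys you over ordinary testing, here is a tiny sketch (my own toy, not JPF’s algorithm): it exhaustively explores every interleaving of two “threads” that each do a read-then-write increment, and shows that some interleavings lose an update, something a single test run might never hit.

```java
import java.util.HashSet;
import java.util.Set;

public class TinyModelChecker {
    // Each "thread" performs two steps on a shared counter:
    // step 0: read the counter into a local; step 1: write local + 1.
    // A model checker explores every interleaving of these four steps.
    static Set<Integer> finalCounterValues() {
        Set<Integer> results = new HashSet<>();
        // All orderings of the two threads' steps (each thread's own
        // steps stay in program order).
        int[][] orders = {
            {0,0,1,1}, {0,1,0,1}, {0,1,1,0}, {1,0,0,1}, {1,0,1,0}, {1,1,0,0}
        };
        for (int[] order : orders) {
            int counter = 0;
            int[] local = new int[2];   // each thread's read value
            int[] step = new int[2];    // next step index per thread
            for (int t : order) {
                if (step[t] == 0) local[t] = counter;   // read
                else counter = local[t] + 1;            // write
                step[t]++;
            }
            results.add(counter);
        }
        return results;   // contains 1 (a lost update) as well as 2
    }

    public static void main(String[] args) {
        System.out.println("possible final values: " + finalCounterValues());
    }
}
```

A real checker like JPF does this over actual bytecode and a far larger state space, which is exactly why the optimizations the paper describes matter so much.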

One bad point about JPF is that although the authors have performed lots of optimizations such as symmetry reduction, abstract interpretation, and static and dynamic analysis, they are still limited to analyzing programs in the 1,000 to 5,000 line range.

Several coverage metrics have been developed in software testing, and there is a need for analogous metrics to measure how good a specific model checker is. Further performance improvements through state-space reduction are also possible, for example by feeding the results of model checking and static analysis into each other. Another area of future work is developing a similar tool for other languages.

One question about JPF is whether there is any chance that it changes the actual behavior of the program under test such that some bugs don’t show up at all. I guess there is some chance of losing errors this way, but the tool tries to minimize the number of such missed bugs by performing static analysis.
