Methodologies and Tools for Testing Java Applications

Agile Day - May 21

Agile methodology promotes iterative development, in which software testing plays an essential role: every version of the software must be tested carefully and frequently against the constant feedback from everyone who holds a vision of the final product. While manual test-case generation is popular in practice, it can become very costly: designing tests for corner-case behaviors requires vast knowledge and experience, and yet whole classes of bugs may still be missed.

To address this problem, various tools that generate useful test cases fully automatically have been developed over the past decade. They are based on different approaches, namely directed random testing, bounded exhaustive testing, and concolic testing, and offer different guarantees. In this presentation, we provide a brief overview of such tools for Java programs. These tools are complementary, so some or all of them can be used depending on the nature of the software under test. All of them, however, are easy for programmers to use and support practical software features.

Directed Random Testing. Traditional random testing is quick, easy, and scalable, but it tends to produce many illegal or redundant inputs. Directed random testing combines random generation with runtime guidance: each new test is executed and labeled as redundant, illegal, error-revealing, or useful. Failing tests reveal errors, and passing ones are kept for regression testing. We present Randoop as a successful tool in this category.
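As a toy sketch of this feedback loop (not Randoop's actual implementation; the class under test and the classification rules here are purely illustrative), one can randomly extend call sequences on a data structure, execute each sequence, and classify it by its outcome:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;
import java.util.Random;

// Toy sketch of feedback-directed random testing (not Randoop itself):
// randomly build call sequences on a Deque under test and classify each
// executed sequence as useful (kept for regression) or illegal (discarded).
public class DirectedRandomSketch {
    // Runs 'trials' random call sequences; returns {useful, illegal} counts.
    static int[] runTrials(int trials, Random rnd) {
        int useful = 0, illegal = 0;
        for (int t = 0; t < trials; t++) {
            Deque<Integer> deque = new ArrayDeque<>();
            try {
                for (int op = 0; op < 5; op++) {
                    if (rnd.nextBoolean()) {
                        deque.push(rnd.nextInt(10));
                    } else {
                        deque.pop();  // throws on an empty deque
                    }
                }
                useful++;             // sequence kept as a regression test
            } catch (NoSuchElementException e) {
                illegal++;            // sequence discarded as illegal
            }
        }
        return new int[] { useful, illegal };
    }

    public static void main(String[] args) {
        int[] counts = runTrials(100, new Random(42)); // fixed seed
        System.out.println("useful=" + counts[0] + " illegal=" + counts[1]);
    }
}
```

The key point is the feedback: each executed sequence informs which sequences are worth extending or keeping, instead of generating inputs blindly.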

Bounded Exhaustive Testing. This approach targets programs that take structurally complex data as input, and produces all valid input structures up to a given size. Users specify the validity properties of input structures as an imperative predicate in Java; the tool then systematically searches the bounded state space and produces tests by running the predicate on candidate inputs. We present the tool Korat for this category.
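For illustration, such an imperative validity predicate might look as follows (a hypothetical `repOk` for an acyclic linked list; the class is invented for this sketch, and Korat itself would be the component that enumerates candidate structures within bounds and keeps those for which the predicate returns true):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a Korat-style imperative validity predicate (repOk).
// The tool enumerates candidate structures up to a size bound and runs
// this predicate on each one; only valid candidates become test inputs.
public class LinkedListCandidate {
    static class Node { Node next; int value; }

    Node header;   // first node, may be null
    int size;      // claimed number of nodes

    // Returns true iff the list is acyclic and 'size' matches
    // the actual number of reachable nodes.
    boolean repOk() {
        Set<Node> visited = new HashSet<>();
        Node cur = header;
        while (cur != null) {
            if (!visited.add(cur)) return false;  // cycle detected
            cur = cur.next;
        }
        return visited.size() == size;
    }
}
```

Writing such a predicate is usually far easier than enumerating valid structures by hand, which is exactly the effort the tool automates.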

Concolic Testing. This approach aims at increasing the path coverage of the test suite. It combines concrete and symbolic execution to generate test cases that drive the program down all distinct feasible execution paths, while avoiding redundant tests and false warnings. To check a program against its correctness conditions, it uses runtime monitors: small software units woven into the code that check whether the specification is violated.
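A tiny illustrative target (invented for this sketch, not taken from any tool's distribution) shows why this matters: random integer inputs almost never satisfy the branch condition below, but a concolic engine collects the path constraint `2*x == y + 10` from a concrete run and solves it to cover the rare branch.

```java
// Illustrative target for concolic testing. Random inputs hit the first
// branch with negligible probability; a concolic engine would record the
// symbolic path constraint 2*x == y + 10 and solve it (e.g. x=7, y=4)
// to generate an input that covers it.
public class ConcolicTarget {
    static String classify(int x, int y) {
        if (2 * x == y + 10) {
            return "rare-branch";    // reached only when the constraint holds
        }
        return "common-branch";
    }

    public static void main(String[] args) {
        System.out.println(classify(7, 4)); // solver-derived input: 2*7 == 4+10
        System.out.println(classify(0, 0)); // typical random input
    }
}
```
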

Recent advances in automatic test generation, such as the approaches described above, hold promise for low-cost, high-quality software. They can create many more inputs than an engineer could write by hand, revealing important scenarios that manual efforts may miss. Certain concepts, such as specification-based testing, support test-driven development and can be applied where appropriate.

Dr. Mana Taghdiri

Promatis GmbH

Dr. Mana Taghdiri is a lead software developer at Promatis GmbH. She received her Master's and PhD degrees in Computer Science (Software Design) from MIT in Massachusetts, USA. She worked as a senior software engineer on the design of a just-in-time compiler at MathWorks Inc., and later joined KIT as a junior professor in Informatics, where she supervised the Automated Software Analysis group. She has published extensively on automatic software checking, modular analysis, and specification analysis. She has received two ACM SIGSOFT Distinguished Paper Awards for her work on scalable software analysis, as well as research grants for her ideas on automating software verification.