LDRA Introduces LDRAunit for Automating Unit Tests

LDRA LDRAunit integrated framework for automating the generation and management of unit tests

LDRA announced the LDRAunit integrated framework for automating the generation and management of unit tests. It is a comprehensive and cost-effective host/target test generation and management solution for the C, C++, Ada, and Java languages. The tool automates the tedious, error-prone process of manually developing a test harness: it analyzes the code, generates tests, and applies a range of parameters that guard against conditions that can cause unexpected results.

LDRA LDRAunit Features

  • Automated test driver / harness generation with no manual scripting requirement
  • High levels of test throughput via intuitive graphical and command-line interface options
  • Sophisticated automated analysis facilities which reduce test effort, freeing up developers and empowering testers
  • Storage and maintenance of test data and results for fully automated regression analysis
  • Automated detection and documentation of source code changes
  • Tool-driven test vector generation
  • Facilitates execution of tests in host, target and simulator environments
  • Automated generation of test case documentation including pass/fail and regression analysis reports

LDRAunit takes the smallest piece of testable software in an application, isolates it from the remainder of the code, and determines whether it behaves as expected. LDRAunit tests code units separately before they are integrated into modules and then systems, simplifying identification of which part of the code might be failing to deliver expected results.

LDRAunit automatically generates tests in the application language (C, C++, Ada, Java) and makes them capable of executing on the host or target. LDRAunit also automates stub generation for artifacts such as methods, constructors, system calls, and packages, all managed within a user interface. In addition, through eXtreme Testing capabilities, LDRAunit applies a range of return and global parameter values to the managed stubs to fully exercise stub behavior, and provides configurable exception handling to ensure that all code can be tested, minimizing the need for manual intervention.

By storing groups of tests as sequences, LDRAunit contains all of the information required to rerun test cases and store the results for regression verification and requirements-based testing. LDRAunit can also measure and report structural coverage metrics including procedure call, statement, branch/decision, modified condition/decision coverage (MC/DC), and linear code sequence and jump (LCSAJ). Coverage data can be presented through a combination of built-in reports, custom reports using a results Application Programming Interface (API), and flow and call graph displays. Developers can use results to populate compliance reports that give overall pass/fail metrics for industry standards, such as DO-178B/C, with line-by-line views that detail specific statements, branches, and conditions executed by individual tests and combinations of tests.

More info: LDRA LDRAunit