4 Testing Code Against Model
The automatic test generation and execution offered by Reactis enable
engineers to easily check whether an implementation conforms to the behavior
specified in a model.
The benefits of model debugging and validation have been discussed above. A
question that immediately presents itself is: How can the effort expended on
these activities be “reused” to support the testing of system implementations?
This is the question addressed in this section.
4.1 Software Testing
A crucial aspect of the tests generated by Reactis Tester is that they record
the model's outputs as well as its inputs. These tests therefore encode all
the information needed to check that model-derived source code conforms to its
model. Reactis-driven source-code testing proceeds as follows (a minimal C
sketch of this loop is given after the list):
- For each test in the suite, execute the software using the input values
contained in the test.
- Compare the output values produced by the software with those stored in the
test.
- Record any discrepancies.
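The following C sketch shows one way such a comparison loop might look. The
Test layout, the NUM_STEPS and TOLERANCE constants, and the step() entry point
of the software under test are illustrative assumptions, not the Reactis test
format or API; the translation unit would be linked against the generated code
that provides step().

```c
/* Minimal sketch of a back-to-back comparison loop.  The Test layout,
 * NUM_STEPS, TOLERANCE, and the step() entry point are illustrative
 * assumptions, not the Reactis test format or API.  Link against the
 * generated code that provides step(). */
#include <math.h>
#include <stdio.h>

#define NUM_STEPS 100    /* simulation steps recorded in one test    */
#define TOLERANCE 1e-6   /* allowed numeric deviation from the model */

typedef struct {
    double input[NUM_STEPS];            /* input stimulus for each step      */
    double expected_output[NUM_STEPS];  /* model output stored with the test */
} Test;

/* One execution step of the software under test (the generated code). */
extern double step(double input);

/* Runs one test and returns the number of steps whose output deviates
 * from the model output recorded in the test. */
int run_test(const Test *t)
{
    int discrepancies = 0;
    for (int i = 0; i < NUM_STEPS; i++) {
        double actual = step(t->input[i]);
        if (fabs(actual - t->expected_output[i]) > TOLERANCE) {
            printf("step %d: expected %g, got %g\n",
                   i, t->expected_output[i], actual);
            discrepancies++;
        }
    }
    return discrepancies;
}
```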
This methodology is referred to as model-based software testing or
back-to-back testing. Its key advantage is that the model serves as an
“oracle” for testing purposes: the outputs produced by the model provide the
reference against which the outputs generated by the software are judged.
Because the model has already been validated, a disagreement between the
software and the model points the developer to a problem in the source code.
The net effect of model-based testing with Reactis is better-quality software at
lower cost. Because good test data is generated and run automatically, less
engineer time is required to create and run tests. Because the tests are
thorough, the chances of exposing bugs are high. Because the test suites
are compact, they may be run quickly. In sum, Reactis dramatically reduces the
cost of testing embedded control software.
Figure 7 illustrates how the Reactis tool suite can provide
advanced model-based testing of source code. As the figure indicates, the
model-based testing protocol supported by Reactis is as follows:
Figure 7: Testing for conformance of code to model with Reactis.
- The developer provides as input to Reactis an .slx file
representing the validated Simulink/Stateflow model of the system under
development.
- Reactis Tester is used to automatically generate a test suite that
thoroughly exercises the given model according to the various coverage metrics
supported by Reactis.
- The developer may deploy Reactis Simulator to visualize test execution and
to fine-tune the tests in the test suite to further improve model coverage.
- The test suite and the software implementing the model are fed as inputs
into a test harness that automates the source-code testing process (a
simplified, hypothetical harness driver is sketched after this list). Note
that if Reactis for C is used as the test harness, it will read the native
test-suite format, execute the tests on the C code, flag any runtime errors,
track coverage within the C code, and compare the outputs computed by the C
code against those generated by the model and stored in the tests.
- By comparing the outputs produced by the software against the model-generated
outputs stored in the tests, deviations of the source code's behavior from the
model are readily detected, helping the developer ensure that the code
conforms to the model.
- Testing concludes when the source code passes all the tests in the test
suite.
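To illustrate the final step of the protocol, the sketch below drives every
test in a suite and reports an overall verdict: the code is deemed conformant
only when all tests pass. It builds on the per-test comparison routine
sketched in Section 4.1; the Test type, run_test(), and run_suite() are
hypothetical placeholders, not the Reactis for C API.

```c
/* Sketch of a suite-level harness driver.  Test, run_test(), and
 * run_suite() are illustrative placeholders, not the Reactis for C API. */
#include <stdio.h>

/* Per-test comparison routine from the earlier sketch; returns the
 * number of output discrepancies found for one test. */
typedef struct Test Test;
int run_test(const Test *t);

/* Runs every test in a suite; returns 0 only if all of them pass. */
int run_suite(const Test *const *tests, int num_tests)
{
    int failed = 0;
    for (int i = 0; i < num_tests; i++) {
        int d = run_test(tests[i]);
        printf("test %d: %s (%d discrepancies)\n",
               i + 1, d == 0 ? "PASS" : "FAIL", d);
        if (d > 0)
            failed++;
    }
    return failed == 0 ? 0 : 1;
}
```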
4.2 System Testing
After software testing, the next step in certifying a system is to compile the
code and test the resulting executable on the platform on which it will
eventually be deployed, including the target microprocessor and associated
system software. Such testing is often referred to as system, or
integration, testing.
System testing typically involves the use of hardware-in-the-loop (HIL)
simulation tools. These HIL tools are expensive, so using them as
efficiently as possible can yield significant cost savings.
Provided that system-level models are given in Simulink/Stateflow, Reactis can
greatly facilitate system testing. As in the case of software testing, test
engineers can use Reactis Tester and Reactis Simulator to generate thorough yet
compact test suites from these models and feed the test data into their HIL
environments in order to check system behavior against model behavior. The
compactness of Reactis-generated tests means that expensive HIL hardware need
not be tied up with long test runs in order to obtain precise insights into
system behavior.
How Reactis-generated test data may be used in HIL testing will in general
depend on the HIL environment. HIL tools typically provide a scripting
facility for defining test runs. Reactis exports test data in several
easy-to-parse formats (comma-separated values, CSV, for example) to simplify
the writing of scripts that read Reactis-generated test data into an HIL
environment.
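As a rough illustration, the following C routine reads such a CSV file and
streams each row of input values to a test rig, one simulation step at a time.
The assumed CSV layout (a header row of signal names followed by one row of
values per step) and the hil_apply_inputs()/hil_step() hooks are assumptions
made for the sketch; the actual Reactis export layout and the HIL tool's API
should be taken from their respective documentation.

```c
/* Sketch of replaying CSV test data on a test rig.  The CSV layout and the
 * hil_* hooks are illustrative assumptions, not a specific tool's API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_LINE 4096
#define MAX_COLS 64

/* Placeholder hooks into the HIL environment (assumptions). */
void hil_apply_inputs(const double *values, int n);
void hil_step(void);

int replay_csv(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return -1; }

    char line[MAX_LINE];
    int row = 0;
    while (fgets(line, sizeof line, f)) {
        if (row++ == 0)
            continue;                       /* skip the header row of names */

        double values[MAX_COLS];
        int n = 0;
        for (char *tok = strtok(line, ",\r\n");
             tok && n < MAX_COLS;
             tok = strtok(NULL, ",\r\n")) {
            values[n++] = atof(tok);        /* one value per input signal */
        }
        hil_apply_inputs(values, n);        /* drive the rig for one step */
        hil_step();
    }
    fclose(f);
    return 0;
}
```

A complete script would also capture the rig's outputs at each step and
compare them against the expected values recorded in the same test data,
exactly as in the software-testing loop sketched earlier.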