FAQ: Reactis for Simulink - Reactis Tester
What is Reactis Tester?
Reactis Tester automatically generates test
suites from Simulink®/Stateflow® models of embedded control
software. The test suites provide comprehensive coverage of different
test-quality criteria, while at the same time minimizing redundancy in the generated tests.
Can I restrict the values a top-level inport assumes during a test?
Is it possible to generate tests only for a subsystem instead of the whole model?
Yes. In brief, do the following:
I have added/removed top-level inports from my model. How do I make these changes show up in the Port Type Editor in Reactis?
Whenever you add/remove top-level inports to/from your model, or change the type of a top-level inport, you should:
I have added/removed top-level inports or outports to/from my model. How do I reuse old test suites created for the model that contain a different set of inputs and outputs?
Whenever you add/remove top-level inports or outports to/from your model, you can convert old test suites to the new input/output signature as follows:
A workspace data item is used to specify one of several configurations of my model. Depending on the value of this item, various parts of my model are unreachable; however, all parts of the model are reachable when all configurations of the model are considered. Is it possible for Reactis to generate tests for all the different configurations and give cumulative coverage information?
Yes, Reactis uses a facility called configuration variables to do this. The idea is that you may tag a workspace data item as a configuration variable, and specify the different values it may assume. When generating tests, Reactis will change the values of configuration variables only between tests (not during a test). To tag a workspace data item as a configuration variable, do the following:
I created a test suite from a model using Reactis, but when I run the tests within Reactis, the output values are different from those stored in the tests. Why is this?
The probable explanation is that your model was modified after the tests were generated but before they were run. In this case, the input values from a test might yield different responses from the model. To update a test suite so that the new model responses are recorded in the tests, load the model and test suite in Simulator and then select Test Suite -> Update Outputs...
Another possible explanation for the differences is that the model
contains an S-function that maintains its own state from one call to
the next. (For example, the S-function might modify global or
statically allocated variables.) During the test-generation process
Reactis repeatedly restarts simulations from intermediate model
configurations. In order to restore configurations for S-functions
appropriately, Reactis must be able to extract any persistent state
information the function might have. Reactis can do this if the
S-function adheres to the constraints described in the chapter
Preparing Models for Use with Reactis of the Reactis User's Guide and
only uses storage allocated by mdlInitializeSizes() to store
information between calls.
When I run Reactis Tester multiple times on the same model, I get different test suites. Why is this?
The test-generation algorithm employed by Tester includes a
pseudo-random component: during guided simulation, some inputs are
selected using a random number generator. Different test steps
therefore result when different random numbers are generated.
Is it possible to make Tester generate the same test suite on different runs on the same model? In other words, is it possible to get reproducible results with Tester?
Yes, you can do this by specifying that the random number generator use the same seed for different runs. For a model foo.mdl, this is done as follows:
Reactis can also automatically save all relevant Tester parameters (including
the random seed) to a file when generating a test suite. To enable this feature,
select "File -> Settings -> General" in Reactis and make sure the "When creating a
test suite, also create a parameters file (extension '.rtp')" option is enabled.
To use the .rtp files generated by this feature, click the "Load" button in the
Tester launch dialog and select the .rtp created by an earlier Tester run.
Why do test suites with the same coverage level have different numbers of steps?
This is due to two factors. First, Tester employs a random number
generator to select some inputs. Second, Tester includes a "pruning"
phase during which test steps that do not improve coverage are
eliminated. So even though two test suites achieve the same level of
coverage, they can contain different steps, and therefore different
numbers of steps.