Papers 2018
Automation of Requirements-Based Testing (Automatisierung von anforderungsbasiertem Testen)
Date | 6 Dec 2018 |
---|---|
Event | Embedded Software Engineering Kongress 2018 |
Location | Sindelfingen, Germany |
Manual requirements-based testing is time-consuming: input data must cover the
requirements, and observed output data must be checked for compatibility with
the requirements. Test cases can also be generated automatically from test
models; however, these models first have to be established manually. In
contrast, the approach presented here uses simpler ways of formalizing
requirements in order to automatically map test data, generated for automatic
robustness testing by massive stimulation, to the requirements, and to check
the results for correctness.
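A minimal sketch of this idea, with a hypothetical function under test (`saturating_add`) and two invented requirements expressed as precondition/postcondition pairs; none of these names or bounds come from the paper:

```python
import random

# Hypothetical function under test (illustrative only).
def saturating_add(a, b, limit=100):
    return min(a + b, limit)

# A "formalized requirement" sketched as (name, precondition, postcondition):
# if the precondition matches a stimulus, the output must satisfy the
# postcondition. Both requirements below are invented examples.
requirements = [
    ("REQ-1: sum below limit is exact",
     lambda a, b: a + b < 100,
     lambda a, b, out: out == a + b),
    ("REQ-2: sum at or above limit saturates",
     lambda a, b: a + b >= 100,
     lambda a, b, out: out == 100),
]

def check_against_requirements(trials=1000, seed=0):
    random.seed(seed)
    covered, failures = set(), []
    for _ in range(trials):
        a, b = random.randrange(120), random.randrange(120)
        out = saturating_add(a, b)
        for name, pre, post in requirements:
            if pre(a, b):
                covered.add(name)        # this stimulus maps to the requirement
                if not post(a, b, out):  # and its result is checked against it
                    failures.append((name, a, b, out))
    return covered, failures

covered, failures = check_against_requirements()
```

Randomly generated stimuli are thus mapped to whichever requirements their precondition matches, and the observed outputs are checked automatically, without a hand-built test model.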
Generating Random Telecommand Test Data Using Genetic Algorithms
Date | 29 May 2018 - 31 May 2018 |
---|---|
Event | DASIA 2018 |
Location | Oxford, United Kingdom |
Generating useful test data is one of the big challenges in automatic software
testing. While random test data generation is the easiest method, the test
inputs it produces may fail to exercise the software under test properly if
the internal structure of the data is unknown to the generator and at the same
time relevant for the decisions taken in the code. The handling of telecommands
in spacecraft onboard software is one example where this is the case. We
investigate a method of generating test data for these cases using genetic
algorithms.
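The following is only a toy illustration of the general technique, not the paper's implementation: the parser stand-in `branches_hit`, the magic byte, and the GA parameters are all invented, and a real setup would use coverage instrumentation of the onboard software as the fitness signal.

```python
import random

# Stand-in for a telecommand parser: counts how many nested decisions an
# input byte stream exercises (a proxy for real coverage feedback).
def branches_hit(tc: bytes) -> int:
    hits = 0
    if len(tc) >= 4:
        hits += 1
        if tc[0] == 0x1A:                  # invented "magic byte" of the header
            hits += 1
            if tc[1] in (0x01, 0x02):      # invented command IDs
                hits += 1
                if sum(tc[2:4]) % 7 == 0:  # checksum-like condition
                    hits += 1
    return hits

def mutate(tc: bytes, rate: float = 0.2) -> bytes:
    # Replace each byte with a random value at the given probability.
    return bytes(random.randrange(256) if random.random() < rate else b
                 for b in tc)

def crossover(a: bytes, b: bytes) -> bytes:
    # Single-point crossover of two byte streams.
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def evolve(pop_size=40, length=8, generations=50):
    random.seed(0)
    pop = [bytes(random.randrange(256) for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branches_hit, reverse=True)
        parents = pop[: pop_size // 2]     # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=branches_hit)

best = evolve()
```

The point of the sketch: purely random bytes rarely satisfy structural conditions such as the magic byte, while selection pressure on a coverage-like fitness steers the population toward inputs that penetrate deeper into the decision structure.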
Verification of the C++ Operating System RODOS in the Context of a Small Satellite
Date | 10 Apr 2018 |
---|---|
Event | 2nd Workshop on Computer Architectures in Space (CompSpace'18) |
Location | Braunschweig, Germany |
Within the small satellite mission TechnoSat of Technische Universität Berlin,
a verification strategy based on dynamic analysis has been applied to the C++
operating system RODOS, using automated massive stimulation of the
software-under-test. This approach aims at evaluating the robustness of the
software and at deriving feedback on the implemented messaging scheme of the
on-board process chain. For fault detection and for recording the message
exchange, the code is automatically instrumented with application-independent
indicators that flag anomalies. Manual fault analysis is limited to the
reported issues highlighting fault potential, in contrast to the usual reviews
of the full code. The suggested reviews were extended to similar code, an
approach which turned out to be effective. For the verification of the
messaging scheme, observed functional and performance properties were
evaluated. The verification strategy targets the reduction of verification
costs and risks. Within this paper, the different verification steps are
described and examples of reported issues are given.
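As a rough, invented sketch of the massive-stimulation idea (not the actual RODOS/TechnoSat setup): a message handler is driven with large numbers of random inputs, and an application-independent indicator flags out-of-range results for later manual review. All names and bounds below are assumptions.

```python
import random

# Application-independent indicator: any value outside broad, generic bounds
# is recorded as a potential anomaly for manual fault analysis.
def indicator(name, value, low, high, report):
    if not (low <= value <= high):
        report.append((name, value))

# Hypothetical message handler standing in for one step of an on-board
# process chain; the -1 branch is a deliberately planted anomaly.
def handle_message(payload: int) -> int:
    return payload * 2 if payload < 1000 else -1

def massive_stimulation(n=10_000, seed=1):
    random.seed(seed)
    report = []
    for _ in range(n):
        result = handle_message(random.randrange(2000))
        # The indicator knows nothing about the handler's semantics;
        # it only checks a generic plausibility range.
        indicator("result_range", result, 0, 4000, report)
    return report

issues = massive_stimulation()
```

Manual analysis then starts from the recorded `issues` rather than from a review of the full code, which is what keeps the effort bounded.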
Evaluating Automated Software Verification Tools
Date | 10 Apr 2018 - 12 Apr 2018 |
---|---|
Event | ICST 2018 |
Location | Västerås, Sweden |
Automated software verification tools support developers in detecting faults
that may lead to runtime errors. A fault in critical software that slips into
the field, e.g., into a spacecraft, may have fatal consequences. However,
there is an enormous variety of free and commercial tools available. Suppliers
and customers of software need a clear understanding of which tools suit the
needs and expectations in their domain. We selected six tools (Polyspace,
QA C, Klocwork, and others) and applied them to real-world spacecraft
software. We collected reports from all the tools and manually verified
whether they were justified. In particular, we clocked the time needed to
confirm or disprove each report. The result is a profile of true and false
positive and negative reports for each tool. We investigate questions
regarding the effectiveness and efficiency of different tools and their
combinations, what the best tool is, whether it makes sense at all to apply
automated software verification to well-tested software, and whether tools
with many or few reports are preferable.
Evaluating Test Data Generation for Untyped Data Structures Using Genetic Algorithms
Date | 9 Apr 2018 |
---|---|
Event | 1st IEEE Workshop on NEXt level of Test Automation 2018 (NEXTA2018) |
Location | Västerås, Sweden |
Untyped data such as the byte streams used in communications between
spacecraft and ground stations present a specifically challenging field for
automatic test data generation. We investigate variations of genetic
algorithms to improve test data generation, and present measurements and
preliminary results obtained using our prototype. The future goal is to extend
our white-box random testing tool DCRTT with these methods and thus apply the
approach to industry-grade software.