Seminar: Model-based Verification - Fundamentals and Industrial Practice
Reliability, security, accuracy, and robustness of software are becoming increasingly important. Mistakes in system development, including critical ones, arise from logical errors in the specification and/or in the implementation of the software itself. Compilers and the programming languages used also play an important role here. To avoid errors, the programming languages used are often restricted to specific language constructs. This prevents dynamic malfunctions (memory overflows, memory leaks, etc.) and also simplifies the analysis and verification of these software components.
Fuzz Testing: Fuzz testing, or fuzzing, is a black-box testing method in which the system under test (SUT) is executed with inputs that are not foreseen by the SUT's specification. The SUT is then checked for generally undesired behaviors, such as memory access violations. The inputs used for fuzzing must be semi-valid, i.e. inputs that are not valid according to the specification, but not entirely invalid either, so that the software processes the incoming data instead of rejecting it as malformed input. Starting from an approach that defines a coverage criterion for fuzz testing, the presentation should give an overview of fuzzing, its benefits, and its limitations.
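The idea of semi-valid inputs can be illustrated with a minimal mutation-based fuzzer: starting from a valid seed input, a few bytes are flipped so the result stays mostly well-formed. The `mutate` and `fuzz` helpers below are an illustrative sketch, not taken from any particular fuzzing tool.

```python
import random

def mutate(seed: bytes, n_flips: int = 1) -> bytes:
    """Create a semi-valid input by XOR-flipping a few bytes of a valid seed."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] ^= random.randrange(1, 256)  # non-zero XOR guarantees a change
    return bytes(data)

def fuzz(sut, seed: bytes, iterations: int = 100):
    """Run the SUT on mutated inputs and collect every input that crashes it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            sut(candidate)
        except Exception as exc:  # any unhandled exception counts as a finding
            crashes.append((candidate, exc))
    return crashes
```

Real fuzzers additionally use coverage feedback to steer mutation; this sketch only shows the mutate-execute-observe loop.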
Model-based testing: The idea of model-based testing is to compare the I/O behavior of an explicit behavior model with that of a system under test (SUT). This requires the model to be valid. If the model is a simplification of the SUT, it is easier to check the model and use it for subsequent test case generation than to check the SUT directly. In that case, the different levels of abstraction must be bridged. Not surprisingly, experience shows that choosing the right level of abstraction is crucial to the success of model-based testing. The presentation should include an overview of the classification of the different abstraction levels and of their benefits and disadvantages.
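The core comparison loop can be sketched in a few lines: run the same inputs through the model and the SUT and compare the outputs, with an abstraction function bridging the different levels of detail. The names below (`conformance_check`, `abstract`) are illustrative and not taken from a specific tool.

```python
def conformance_check(model, sut, test_inputs, abstract=lambda out: out):
    """Compare the model's expected outputs with the SUT's (abstracted) outputs.

    `abstract` bridges the abstraction gap by mapping concrete SUT outputs
    onto the model's level of detail.
    """
    failures = []
    for inp in test_inputs:
        expected = model(inp)
        actual = abstract(sut(inp))
        if expected != actual:
            failures.append((inp, expected, actual))
    return failures
```

For example, an abstract model may predict only the parity of a computation's result, while the SUT computes the full value; `abstract` then reduces the concrete output to its parity before comparison.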
Survey of test case generation methods from different sources: Generating test cases automatically is one of the most important challenges in testing. Depending on the sources available for test case generation, different requirements of the system under test (SUT) can be checked. This presentation should give an overview of the different sources from which test cases can be (semi-)automatically extracted and of the state-of-the-art methods used for each source.
Test case selection/reduction: Automatically generated test suites usually contain redundant test cases and are often impractical due to the huge number of test cases that can be generated. One major challenge in this context is reducing the test suite without reducing its effectiveness. One approach is to use a similarity function. The presentation should include an overview of this method and a comparison with random and other selection strategies.
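One plausible sketch of similarity-based reduction is a greedy "farthest-first" selection using Jaccard similarity over the coverage items (e.g. executed steps) of each test case; the concrete similarity function varies between approaches, so this is an assumption for illustration.

```python
def jaccard(a, b):
    """Jaccard similarity of two test cases viewed as sets of coverage items."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def reduce_suite(suite, k):
    """Greedily keep k test cases that are maximally dissimilar to each other."""
    selected = [suite[0]]
    while len(selected) < min(k, len(suite)):
        remaining = [t for t in suite if t not in selected]
        # pick the candidate whose closest already-selected neighbour is farthest away
        best = min(remaining, key=lambda t: max(jaccard(t, s) for s in selected))
        selected.append(best)
    return selected
```

Given three test cases where two cover almost the same items, the reducer keeps one of the near-duplicates and the dissimilar one, which is exactly the redundancy-removal behavior described above.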
Fault Localization Metrics: Fault localization metrics are applied to localize bugs in software. They analyze and compare specific features of executed software: test runs without bugs are compared with test runs that trigger a bug. For example, they monitor which source lines are covered in the different runs. Source lines that are executed in a failing run, but not in any passing run, may contain the root cause of the failure. The goal of this topic is to present and compare different fault localization metrics.
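As one concrete example, the widely used Ochiai metric scores a line by how strongly its coverage correlates with failing runs. The sketch below assumes coverage is given as one set of executed line numbers per run; a line covered by all failing runs and no passing run receives the maximum score of 1.0.

```python
import math

def ochiai_scores(passing, failing):
    """Rank source lines by Ochiai suspiciousness.

    passing/failing: lists of sets of line numbers covered by each run.
    Score per line: ef / sqrt(total_failing * (ef + ep)), where ef/ep count
    the failing/passing runs that executed the line.
    """
    lines = set().union(*passing, *failing)
    scores = {}
    for line in lines:
        ef = sum(line in run for run in failing)  # failing runs covering the line
        ep = sum(line in run for run in passing)  # passing runs covering the line
        denom = math.sqrt(len(failing) * (ef + ep))
        scores[line] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Other metrics such as Tarantula follow the same pattern with a different formula, which is why they are easy to compare side by side in the presentation.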
Monitoring Frameworks: To detect the root causes of failures in embedded software, the software must be monitored during execution. The monitoring results in a trace file, which can be analyzed automatically (e.g., using fault localization metrics) to detect anomalies or root causes of failures. Various monitoring techniques and tools exist in the state of the art; they can be categorized into source-code instrumentation, binary-code instrumentation, and debugging tools. The goal of this topic is to present different monitoring tools and compare them with regard to their practical application to embedded software.
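At the source-code-instrumentation end of this spectrum, the idea can be illustrated with a simple Python decorator that logs calls and returns into an in-memory trace. This is only a sketch: real monitoring tools write a trace file and instrument at compile or load time rather than via a decorator.

```python
import functools

TRACE = []  # stands in for the trace file a real monitoring tool would write

def monitored(func):
    """Source-level instrumentation: log every call to and return from `func`."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        TRACE.append(("call", func.__name__, args))
        result = func(*args, **kwargs)
        TRACE.append(("return", func.__name__, result))
        return result
    return wrapper
```

The resulting event log is exactly the kind of input a fault localization metric can consume when comparing passing and failing runs.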
Multi-Threaded Replay Debugging for Embedded Software: Replay debugging traces software during operation in the real world. A traced execution that triggers a bug can then be replayed in the laboratory for debugging. To this end, the I/O inputs of the software are captured during operation and injected during replay. However, multi-threaded software may behave differently during replay because of mismatched thread schedules. This presentation should cover different approaches for replaying thread schedules that can be applied to embedded software.
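The scheduling half of the problem can be sketched with a small replayer that lets a thread perform its next shared-state operation only when the recorded schedule names that thread. This is an illustrative sketch under the assumption that every shared-state access goes through `step`; real replay tools intercept at the scheduler or instruction level instead.

```python
import threading

class ScheduleReplayer:
    """Replay a recorded thread schedule: a thread may only execute its next
    shared-state action when the schedule names it."""

    def __init__(self, schedule):
        self.schedule = list(schedule)  # e.g. ["A", "B", "A", "B"]
        self.pos = 0
        self.cond = threading.Condition()

    def step(self, name, action):
        with self.cond:
            # block until the recorded schedule names this thread
            self.cond.wait_for(lambda: self.schedule[self.pos] == name)
            action()  # run the shared-state operation in schedule order
            self.pos += 1
            self.cond.notify_all()
```

Because the action runs while the lock is held and strictly in schedule order, two replays with the same recorded schedule produce the same interleaving, which is the property replay debugging needs.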
16.04.2015, 17:00 c.t. (Room: A104)
The presentations will take place on 30 July 2015, 16:00-19:00 (Room: A104).
General Conditions for the Seminar
Participants are expected to work through their respective topic largely on their own, including finding and researching the relevant literature. Pointers to the literature will of course be provided, and supervision on demand is ensured.
Presentation Length and Structure
The presentation should last 25 minutes and have the following structure:
Requirements for the Seminar Certificate
To obtain the certificate, one must
The seminar work is graded according to the following criteria:
LaTeX and Word Templates
Tips on Preparing the Presentation and the Written Report