A minor change to a Simulink model can have unexpected consequences, so the updated model usually has to be rerun and retested, which increases development cost and time. Compared with the reference model, only the changed parts of the updated model can cause a failure at the outputs. Therefore, a two-stage model reduction algorithm is designed to isolate the changed parts, which speeds up both neutrality testing and fault localization. The first reduction is based on the changed parts; the second is based on the bad outputs. The changed parts and the bad outputs are the blocks of interest of the reduction: blocks related to the blocks of interest are retained, and all others are deleted. To find the related blocks, the thesis proposes a conversion of the Simulink model into a digraph based on extended data dependence. After the model reduction, the faults are located with the help of signal comparison.
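The core of both reduction stages is a reachability slice over the dependence digraph: starting from the blocks of interest, follow dependence edges forward and backward and keep everything visited. The following is a minimal sketch of that idea; the graph representation, block names, and function names are illustrative and not taken from the thesis implementation.

```python
# Sketch of the two-stage reduction idea: keep only blocks related
# (via dependence edges) to the blocks of interest. All names here
# are illustrative, not the thesis implementation.
from collections import defaultdict

def reachable(adjacency, seeds):
    """All nodes reachable from `seeds` by following `adjacency` edges."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        node = stack.pop()
        for succ in adjacency.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return seen

def reduce_model(edges, blocks_of_interest):
    """Retain blocks lying on a dependence path through the blocks of interest."""
    fwd = defaultdict(set)   # block -> blocks that depend on it
    bwd = defaultdict(set)   # block -> blocks it depends on
    for src, dst in edges:
        fwd[src].add(dst)
        bwd[dst].add(src)
    # Blocks influenced by the blocks of interest, plus blocks influencing them.
    return reachable(fwd, blocks_of_interest) | reachable(bwd, blocks_of_interest)

# Stage 1 would slice on the changed blocks; stage 2 on the bad outputs.
edges = [("A", "B"), ("B", "Out1"), ("C", "Out2")]
kept = reduce_model(edges, {"B"})
# kept == {"A", "B", "Out1"}; the unrelated chain C -> Out2 is deleted
```

Running the same function a second time with the bad outputs as seeds gives the second-stage reduction described in the abstract.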
Author: A. Morozov
The efficiency comparison of the PRISM and Storm probabilistic model checkers for error propagation analysis tasks
The Dual-graph Error Propagation Model (DEPM) is a stochastic framework developed in our lab that captures system properties relevant to error propagation processes, such as control and data flow structures and the reliability characteristics of individual components. The DEPM helps to estimate the impact of a fault in a particular component on overall system reliability. The probability of a system failure, Mean Time Between Failures, Mean Time to Repair, and the expected number of failures are the quantitative results of DEPM-based analysis. In addition, the DEPM provides access to powerful techniques from Probabilistic Model Checking (PMC) that allow the evaluation of advanced, highly customizable, time-related reliability properties, e.g., “the probability of a system failure during certain time units after the occurrence of a certain combination of errors in certain system components”.
For this purpose, DEPM models are automatically transformed into discrete-time Markov chain (DTMC) models and analyzed using the state-of-the-art probabilistic model checker PRISM. However, a promising new model checker, Storm, has recently been released.
This paper presents the results of an efficiency comparison of these two model checkers on an automatically generated set of DTMC models that are typical for error propagation analysis tasks. Several computation engines supported by both model checkers have been taken into account. The paper also gives a general description of the DEPM workflow with a focus on the PMC interface.
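The time-bounded property quoted above corresponds, in PRISM/Storm terms, to a query such as P=? [ F<=T "failure" ] evaluated on the generated DTMC. The following toy sketch shows what such a query computes on a hand-written three-state chain; the transition probabilities are invented for illustration and are not from any DEPM model.

```python
import numpy as np

# Toy DTMC with states: 0 = ok, 1 = erroneous, 2 = failure (absorbing).
# Illustrates the kind of time-bounded reachability property a
# probabilistic model checker answers, e.g. P=? [ F<=T "failure" ]:
# the probability of reaching the failure state within T steps.
P = np.array([
    [0.90, 0.09, 0.01],   # ok -> ok / erroneous / failure
    [0.20, 0.70, 0.10],   # an erroneous state may recover or fail
    [0.00, 0.00, 1.00],   # failure is absorbing
])

def prob_failure_within(P, start, failure_state, T):
    """Transient analysis: probability mass in `failure_state` after T steps."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    for _ in range(T):
        dist = dist @ P
    return dist[failure_state]

p = prob_failure_within(P, start=0, failure_state=2, T=50)
```

Because the failure state is absorbing, the computed probability is non-decreasing in T, which is why the model checkers can answer "within T time units" questions by transient analysis of the DTMC.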
Recently, we proposed a new method for error propagation analysis of safety-critical software that transforms the source code into the Dual-graph Error Propagation Model (DEPM) using the Low-Level Virtual Machine (LLVM) compiler framework, which enables the automatic analysis of C code or any other language with an LLVM front-end. The source code is compiled into the LLVM Intermediate Representation and instrumented in order to extract the control and data flow structures of the software and the control flow transition probabilities between basic blocks. Based on this information, a DEPM for further analysis is generated. The DEPM is a stochastic model that captures system properties relevant to error propagation processes, such as control and data flow structures and the reliability characteristics of single components (LLVM instructions in this particular case). The DEPM helps to estimate the impact of a fault in a particular instruction on overall system reliability, e.g., to compute the mean number of erroneous values in a critical system output during a given operation time. The feasibility of the method has been demonstrated on several case studies, which also revealed several limitations of the current control flow model.
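Estimating the control flow transition probabilities from an instrumented run amounts to counting observed basic-block transitions and normalizing per source block. The sketch below shows that counting step on a plain list of block IDs; it is illustrative only, since the actual method instruments LLVM IR rather than processing Python traces.

```python
from collections import Counter, defaultdict

def transition_probabilities(trace):
    """Estimate P(dst | src) from an execution trace of basic-block IDs."""
    counts = defaultdict(Counter)
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1
    return {
        src: {dst: n / sum(c.values()) for dst, n in c.items()}
        for src, c in counts.items()
    }

# An invented trace: a loop body executed three times, then the exit block.
trace = ["entry", "loop", "loop", "loop", "exit"]
probs = transition_probabilities(trace)
# probs["loop"] == {"loop": 2/3, "exit": 1/3}
```

These per-block probabilities are exactly the kind of annotation attached to the control flow structure before the DEPM is generated.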
This paper addresses the improvement of the control flow model using a new customizable heuristic method for the analysis of control flow sequences and their mapping into discrete-time Markov chain models. The method is designed to maintain the required tradeoff between model size and precision.
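One standard way to expose such a size/precision tradeoff (a hypothetical sketch, not necessarily the paper's heuristic) is to make the DTMC state encode the last k visited blocks: k = 1 gives the smallest, least precise chain, while larger k captures longer control flow history at the cost of more states.

```python
from collections import Counter, defaultdict

def build_order_k_chain(trace, k):
    """Map a control flow sequence into a Markov chain whose states are
    the last k visited blocks. Larger k = more precision, more states.
    Illustrative only; not the paper's actual mapping heuristic."""
    counts = defaultdict(Counter)
    for i in range(k, len(trace)):
        state = tuple(trace[i - k:i])
        counts[state][trace[i]] += 1
    return {
        state: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for state, c in counts.items()
    }

trace = ["a", "b", "a", "b", "c"]
chain1 = build_order_k_chain(trace, 1)   # coarse: P(next | current block)
chain2 = build_order_k_chain(trace, 2)   # finer: P(next | last two blocks)
```

With k = 1 the chain cannot distinguish the two visits to "b", so it assigns equal probability to "a" and "c"; increasing k refines exactly such ambiguous histories, which is the tradeoff the paper's method tunes.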