Table of Contents

  • METHODS AND TECHNOLOGIES OF BUILDING AN INTELLIGENT SERVICE FOR ENERGY TECHNOLOGY FORECASTING

    pg(s) 223-228

    This article reports an approach and software tools for decision-making support in forecasting energy infrastructure development. The author considers the problem of searching for information in various open sources, the technology of information search, and knowledge detection and classification. The author describes the architecture of the intelligent information system. For experts, this classification and the integrated warehouse simplify the search for the required knowledge.

  • THE EFFICIENCY COMPARISON OF THE PRISM AND STORM PROBABILISTIC MODEL CHECKERS FOR ERROR PROPAGATION ANALYSIS TASKS

    pg(s) 229-231

    The Dual-graph Error Propagation Model (DEPM) is a stochastic framework developed in our lab that captures system properties relevant to error propagation processes, such as control and data flow structures and the reliability characteristics of individual components. The DEPM helps to estimate the impact of a fault in a particular component on overall system reliability. The probability of a system failure, the Mean Time Between Failures, the Mean Time To Repair, and the expected number of failures are the quantitative results of DEPM-based analysis. In addition, the DEPM provides access to the powerful techniques of Probabilistic Model Checking (PMC) that allow the evaluation of advanced, highly customizable, time-related reliability properties, e.g., “the probability of a system failure during certain time units after the occurrence of a certain combination of errors in certain system components”.

    For this reason, the DEPM models are automatically transformed into discrete-time Markov chain (DTMC) models and analyzed using the state-of-the-art probabilistic model checker PRISM. However, the promising new model checker Storm has recently been released.

    This paper presents the results of an efficiency comparison of these two model checkers based on an automatically generated set of DTMC models that are typical of error propagation analysis tasks. Several computation engines supported by both model checkers have been taken into account. The paper also gives a general description of the DEPM workflow, with a focus on the PMC interface.
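As a rough illustration of the kind of time-bounded property quoted above, the following Python sketch computes the probability that a small DTMC is absorbed in a failure state within a given number of steps. The three-state chain and its transition probabilities are invented for the example and are not a model from the paper; PRISM and Storm perform this kind of transient analysis far more efficiently on large state spaces.

```python
# Illustrative three-state DTMC (invented numbers, not a model from the
# paper): 0 = ok, 1 = error occurred, 2 = system failure (absorbing).
P = [
    [0.90, 0.09, 0.01],  # transitions from "ok"
    [0.50, 0.30, 0.20],  # transitions from "error"
    [0.00, 0.00, 1.00],  # "failure" is absorbing
]

def failure_within(P, start=0, target=2, steps=50):
    """Pr(the chain is absorbed in `target` within `steps` steps)."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[s] * P[s][t] for s in range(len(P)))
                for t in range(len(P))]
    return dist[target]

# Because "failure" is absorbing, the probability grows with the time bound.
print(failure_within(P, steps=10) < failure_within(P, steps=100))  # → True
```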

  • MODEL REDUCTION ALGORITHM FOR FAST NEUTRALITY TESTS AND FAULT LOCALIZATION OF SIMULINK MODELS

    pg(s) 232-235

    A minor change to a Simulink model can have unexpected consequences, so the model usually has to be rerun and tested, which increases development cost and time. Compared with the reference model, only the changed parts of the updated model can cause a failure at the outputs. A two-stage model reduction algorithm is therefore designed to isolate the changed parts, which speeds up the processes of neutrality testing and fault localization. The first reduction is based on the changed parts; the second is based on the bad outputs. The changed parts and the bad outputs are the blocks of interest of the reduction: the blocks related to the blocks of interest are retained, and the others are deleted. The thesis proposes a way of converting the Simulink model into a digraph based on extended data dependence in order to find the related blocks. After the model reduction, the faults are located with the help of signal comparison.
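The two-stage reduction can be sketched as reachability over the block-dependence digraph. The graph, the changed block, and the bad output below are hypothetical; parsing the Simulink model and extracting extended data dependence are omitted.

```python
from collections import deque

# Hypothetical block-dependence digraph: an edge points from a block to
# the blocks that consume its signal.
graph = {
    "In1": ["Gain", "Delay"],
    "Gain": ["Sum"],
    "Delay": ["Sum"],
    "Sum": ["Out1", "Out2"],
    "Out1": [], "Out2": [],
}

def reachable(graph, seeds):
    """All blocks reachable from `seeds` by following edges (BFS)."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def reverse(graph):
    """Reverse every edge, so BFS walks toward signal producers."""
    rg = {n: [] for n in graph}
    for n, outs in graph.items():
        for m in outs:
            rg[m].append(n)
    return rg

# Stage 1: keep blocks affected by the changed part ("Gain").
stage1 = reachable(graph, {"Gain"})
# Stage 2: of those, keep blocks that can influence the bad output ("Out1").
stage2 = stage1 & reachable(reverse(graph), {"Out1"})
print(sorted(stage2))  # → ['Gain', 'Out1', 'Sum']
```

All other blocks can be deleted before rerunning the test, since they can neither be affected by the change nor influence the failing output.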

  • USING THE FUZZY CONTROLLER TO CONTROL PROCESS PARAMETERS

    pg(s) 236-239

    In the course of writing this article, the goal was to develop an algorithm for setting up an adaptive fuzzy controller with a double rule base. To achieve this goal, a fuzzy controller was designed to control the temperature of the steam leaving the boiler. The model of the boiler BKZ-75-39GMA was considered as the boiler. In the course of the work, the fuzzy controller was synthesized and adapted. On the basis of the work done, an algorithm for adapting the fuzzy controller was determined.
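A single step of such a temperature controller can be sketched in Python. The membership functions, the rules, and all numeric ranges below are invented for illustration; the paper's actual double rule base and adaptation algorithm are not reproduced.

```python
# Toy sketch of one Mamdani-style fuzzy control step (illustrative only).

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Map a steam-temperature error (degrees) to a valve command in [-1, 1]."""
    # Fuzzify: degree to which the error is "negative", "zero", "positive".
    neg = tri(error, -20, -10, 0)
    zero = tri(error, -10, 0, 10)
    pos = tri(error, 0, 10, 20)
    # Rules: negative error -> open (+1), zero -> hold (0), positive -> close (-1).
    # Defuzzify with a weighted average of the rule outputs.
    w = neg + zero + pos
    return (neg * 1.0 + zero * 0.0 + pos * -1.0) / w if w else 0.0

print(fuzzy_control(-5.0))  # → 0.5
```

Adapting the controller then amounts to tuning the membership breakpoints and rule outputs against the boiler model's response.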

  • STREAM HANDLING LARGE VOLUME DOCUMENTS IN SITUATIONALLY-ORIENTED DATABASES

    pg(s) 240-244

    The article discusses the streaming processing of large volumes of data in situation-oriented databases (SODB). Traditionally, an SODB provides cached processing of heterogeneous data. This article considers work with large files that do not fit entirely in memory but contain a number of similar fragments that can be processed in portions. Portions of data are extracted from the input stream, processed in the buffer, and then sent to the output stream. Three variants of implementing this scheme are considered within the framework of the hierarchical situational model (HSM). The tools offered at the conceptual level are illustrated by the example of processing XML data using PHP software tools such as XMLReader, XMLWriter, and DOM.
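The portion-wise scheme can be sketched in Python, using the standard library's streaming XML parser as a stand-in for the PHP XMLReader/XMLWriter pair; the record format below is invented for the example.

```python
import io
import xml.etree.ElementTree as ET

# Input stream with a number of similar fragments ("rec" records).
src = io.BytesIO(b"<log>"
                 b"<rec id='1'>foo</rec><rec id='2'>bar</rec>"
                 b"<rec id='3'>baz</rec>"
                 b"</log>")
out = io.StringIO()  # stand-in for the output stream

# Pull one fragment at a time from the input stream, process it in the
# buffer, write the result, then free it so memory stays bounded.
for event, elem in ET.iterparse(src, events=("end",)):
    if elem.tag == "rec":
        out.write(f"{elem.get('id')}:{elem.text.upper()};")
        elem.clear()  # discard the processed fragment

print(out.getvalue())  # → 1:FOO;2:BAR;3:BAZ;
```

The whole document is never materialized in memory, which is the point of the portion-wise variant for files larger than the available buffer.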

  • THE MULTIPROCESSOR CONTROL SYSTEM FOR THE DYNAMICS OF MULTIPLY CONNECTED TECHNOLOGICAL ELECTRIC DRIVES

    pg(s) 248-250

    The problems of constructing practically realizable adaptive control algorithms for interconnected electric drives with incomplete measurement of the state are considered. Certain results of the industrial development of adaptive controllers for technological electric drives that provide high efficiency under the strengthened influence of parametric and vibration disturbances are reported. Models are developed for them, with the help of which the synthesis of a multiprocessor control system invariant to errors of both parametric and structural identification is carried out.

  • CLUSTER ANALYSIS TO IDENTIFY POTENTIAL EMPLOYERS OF UNIVERSITY GRADUATES

    pg(s) 255-258

    The results of a cluster analysis are presented, the purpose of which was the verification of the database of industrial enterprises of the Republic of Bashkortostan to identify potential employers of graduates in the enlarged group of specialties and directions 27.03.00, “Control in Technical Systems”. Solving this problem will make it possible to interact with potential consumers of graduates at the stage of developing higher-education programs, which will ensure high-quality training of highly qualified personnel in accordance with the requests of enterprises and organizations of the region and, accordingly, the further employment of graduates. The work analyzes the conditions for the formation of clusters and presents a data sample for intellectual analysis and the results of component and cluster analysis.
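The clustering step can be sketched with a minimal k-means on made-up two-dimensional enterprise features; the paper's real data set, feature space, and number of clusters are not reproduced here.

```python
import random

# Invented 2-D feature vectors for six enterprises (e.g., scaled head
# count vs. share of control-systems vacancies) - illustrative only.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # small enterprises
          (8.0, 7.5), (7.6, 8.2), (8.3, 7.9)]   # large enterprises

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to the nearest center, recompute."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(points, k=2)
print(sorted(len(c) for c in clusters))  # → [3, 3]
```

On well-separated data like this, the two clusters recover the small and large enterprise groups regardless of the initial centers.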

  • JUSTIFICATION OF VOLUME OF OUTPUT DEPENDING ON PRODUCT ENVIRONMENTAL CRITERION BY NONLINEAR MATHEMATICAL PROGRAMMING

    pg(s) 262-265

    The article provides the economic formulation, mathematical formalization, and practical implementation of assortment planning of output on the basis of the financial profitability of each product, the environmental profitability of each product, and the period of receipt of funds for each sold product. The mathematical tools of formalization and implementation are nonlinear mathematical programming. The study was carried out on the material of one of the key enterprises of the Republic of Bashkortostan.

  • SCENARIO APPROACH FOR ANALYZING EXTREME SITUATIONS IN ENERGY FROM A CYBERSECURITY PERSPECTIVE

    pg(s) 266-269

    The article describes an approach based on Bayesian networks for constructing probabilistic scenarios to simulate extreme situations in the energy sector. The paper substantiates the feasibility of using Bayesian networks to simulate energy security threats caused by the implementation of cyber threats. The main components of the scenario and their interrelations are described. The main stages of modeling extreme situations in the energy sector are determined. The place of the scenario approach within the main stages of identifying critical objects in the energy sector is identified.
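A scenario of this kind can be sketched as a tiny Bayesian chain evaluated by enumeration. The structure and all probabilities below are invented for illustration and are not taken from the paper.

```python
# Invented three-node chain: Attack -> Breach -> Outage.
P_attack = 0.3                               # P(cyber attack attempted)
P_breach_given = {True: 0.4, False: 0.0}     # P(breach | attack)
P_outage_given = {True: 0.5, False: 0.05}    # P(outage | breach)

def p_outage():
    """Marginal P(outage) by enumerating all attack/breach combinations."""
    total = 0.0
    for attack in (True, False):
        pa = P_attack if attack else 1 - P_attack
        for breach in (True, False):
            pb = P_breach_given[attack] if breach else 1 - P_breach_given[attack]
            total += pa * pb * P_outage_given[breach]
    return total

print(round(p_outage(), 4))  # → 0.104
```

Real scenario models have many more nodes, so dedicated Bayesian-network inference engines replace this brute-force enumeration.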

  • METHODS AND TOOLS FOR PROCESSING SEMI-STRUCTURED DATA ON THE EXAMPLE OF ACCOUNTING FOR EDUCATION IN THE SELECTION OF PERSONNEL IN THE IT INDUSTRY

    pg(s) 270-279

    This article is devoted to the analysis of documents that contain requirements and underlie the development of educational programs, their structures, and the methods and tools that facilitate the automated processing of the data contained in the documents and the ability to make decisions using the results obtained. In terms of content, the task is to identify patterns between a given list of educational standards, possible types of professional activity in the IT field, and key technologies, using automated data-processing tools.

  • INTELLIGENT DECISION-MAKING SUPPORT IN ENERGY AND ECOLOGY IN VIEW OF THE QUALITY OF LIFE

    pg(s) 280-283

    The article examines the results of an international project carried out with the support of the EAPI Fund together with researchers from Belarus and Armenia. The project aims to develop methods and technologies for assessing the impact of energy on the geoecology of the region. The article is devoted to the development of tools for intelligent decision-making support in this field.

  • FEATURE EXTRACTION FROM IMAGES WITH USE OF CONVOLUTIONAL NEURAL NETWORKS: APPLICATION TO SECURING PERSONAL DATA

    pg(s) 284-287

    The task of recognizing bank cards in images is considered in this paper. The task is motivated by the premoderation of images in social networks in order to avoid leakage of personal data; an example of such data is a bank card number. A detection algorithm based on convolutional neural networks is applied to this task. A number of experiments were conducted to develop an optimal neural network architecture that takes into account the speed and accuracy of recognition. In the course of the experiments, a recognition accuracy of 91% and a processing speed of 25 images per second were achieved.