ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES".
No. 9. Vol. 22. 2016


A. B. Barsky, Professor, e-mail: arkbarsk@mail.ru, Moscow State University of Railway Engineering (MIIT)

Simulation of Inductive Reasoning Using PROLOG Inference

In accordance with the two paradigms of artificial intelligence, the expert paradigm and the learning paradigm, the possibility of constructing models of deductive and inductive human reasoning on the basis of PROLOG inference is studied. While the simulation of deductive reasoning is well studied and is the main purpose of the language, a model of inductive reasoning, that is, of the formation of new knowledge, is offered for the first time. Its essence is as follows. From a knowledge base consisting of facts and rules, all possible deductive inference chains are built. Among them, complete and preferably recurring structures are detected and singled out. The variables involved are replaced by abstract representations to form a generalized pattern of the selected structure. This yields hypotheses describing new concepts. The hypothetical concepts are named and become new rules that extend the knowledge base. At the same time, the descriptions of the new rules extend the underlying conceptual logical neural network, making it possible to work with fuzzy data. Successful, consistent application of the new rules in practice should confirm their high reliability.
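The chain-abstraction idea can be illustrated with a toy sketch (in Python rather than PROLOG, and not the author's system): ground inference chains are enumerated, constants are replaced by canonical variables, and the most frequent abstract shape becomes a candidate new rule. The facts and the `parent` relation are invented for illustration.

```python
# Toy sketch of inductive rule formation: enumerate ground inference
# chains, abstract constants to variables, count repeated chain shapes.
from collections import Counter

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann"),
         ("parent", "ann", "joe"), ("parent", "bob", "liz")}

def derive_chains(facts):
    """All two-step chains where the middle constant matches."""
    chains = []
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                chains.append((("parent", x, y1), ("parent", y1, z)))
    return chains

def abstract(chain):
    """Replace concrete constants by canonical variables V0, V1, ..."""
    mapping, out = {}, []
    for pred, *args in chain:
        new_args = []
        for a in args:
            mapping.setdefault(a, f"V{len(mapping)}")
            new_args.append(mapping[a])
        out.append((pred, *new_args))
    return tuple(out)

shapes = Counter(abstract(c) for c in derive_chains(facts))
# The most frequent abstract shape is the hypothesis for a new rule,
# e.g. newconcept(V0, V2) :- parent(V0, V1), parent(V1, V2).
for shape, n in shapes.most_common(1):
    print(shape, n)
```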
Keywords: artificial intelligence paradigms, deductive and inductive reasoning, logical neural network, PROLOG, logical chains

P. 643–648


V. A. Marenko, PhD, Associate Professor, Senior Scientist, Sobolev Institute of Mathematics of the Russian Academy of Sciences, e-mail: marenko@ofim.oscsbras.ru

Cognitive Modelling for the Study of Interethnic Relations

The purpose of this work is the study of interethnic relations in the youth environment. The research tasks are a short analysis of the scientific literature, a questionnaire survey among students, and the application of cognitive modeling techniques.
At the initial stage of the research, the problem field is formed from basic factors: interethnic integration, partnership, religion, assimilation, mentality, and external conditions. The problem field is formalized as a weighted directed graph whose vertices are the basic factors. The arcs have a direction and a "weight" assigned by experts; the arc weights are coordinated using mathematical statistics.
The algorithm of the computational experiment is constructed using numerical methods. The essence of the calculation is as follows. Perturbations are introduced at vertices of the graph, and the propagation of the "wave of perturbations" along various paths in the graph is observed. The first calculations revealed undesirable phenomena, linear and exponential resonance, which arose from expert errors. Following theoretical recommendations, the graph was restructured into a "rose" shape: the "petals" of the rose increase stability and balance the structure of the graph. The recommended transformations led to a stable structure. The computational experiment showed that strengthening of religion weakens the processes of assimilation, partnership, and interethnic integration, while improvement of external conditions promotes interethnic integration.
In future work it is planned to consider not only the weights of the arcs but also the values of the variables at the graph's vertices. These variables can be linguistic; they play an important role in decision making based on approximate reasoning. An example of determining the value of an output variable from an input variable is given.
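The propagation of a perturbation over a weighted directed graph can be sketched as a simple pulse process. The factor names, arc weights, and number of steps below are illustrative only, not the experts' actual cognitive map.

```python
# Hypothetical pulse process on a signed weighted digraph (cognitive map):
# a unit perturbation injected at one vertex travels along weighted arcs.
import numpy as np

factors = ["religion", "assimilation", "partnership", "integration"]
# W[i, j] = weight of the arc from factor i to factor j (made-up values).
W = np.array([
    [0.0, -0.6, -0.4, -0.5],   # religion weakens the other processes
    [0.0,  0.0,  0.3,  0.4],
    [0.0,  0.0,  0.0,  0.5],
    [0.0,  0.0,  0.0,  0.0],
])

def pulse_process(W, start, steps=5):
    """Propagate a unit perturbation for a few steps; return cumulative state."""
    n = W.shape[0]
    p = np.zeros(n)
    p[start] = 1.0              # initial pulse
    x = p.copy()
    for _ in range(steps):
        p = p @ W               # pulse travels along the arcs
        x += p                  # accumulate vertex values
    return x

x = pulse_process(W, factors.index("religion"))
print(dict(zip(factors, np.round(x, 3))))
```

With these (illustrative) weights the accumulated values of assimilation, partnership, and integration all turn negative, mirroring the qualitative conclusion reported above.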
Keywords: cognitive map, graph, cognitive model, linguistic variable, interethnic integration

P. 649–654


N. N. Svetushkov, Associate Professor, e-mail: svt.n.n@mail.ru, Moscow Aviation Institute (National Research University), Moscow, Russia

Simplified Mathematical Model for Temperature Fields Calculation in Detonation Combustion

This paper investigates the possibility of calculating the heat load on the combustion chamber wall under detonation combustion, which is important for the development of industrial detonation engines, including those with spin detonation. For the calculation, a simplified mathematical model based on a parabolic partial differential equation is proposed, which reflects the main features of detonation propagation in the combustion chamber. The article presents the derivation of this equation in detail and states the main assumptions used in its construction. To solve the resulting equations numerically, the authors used a new approach, the method of strings, based on an integral representation of the heat equation. The article shows the advantages of the proposed approach and notes that its use avoids "nonphysical" oscillations in the numerical solution in the presence of large temperature gradients. For the calculations, a previously developed software environment was modified; it allows the creation of two-dimensional models of combustion chambers in accordance with their configuration and the setting of initial and boundary conditions. The results show that by varying the free parameters of the model one can change the shape of the plume and the character of the temperature field in the vicinity of the detonation wave to achieve better agreement with experimental results. The article concludes with an assessment of the effectiveness of this approach in terms of minimizing computational cost and with a series of numerical experiments on optimizing the combustion chamber design. The developed code can be useful for engineers who calculate thermal loads in detonation combustion chambers.
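For orientation, the kind of equation the simplified model rests on can be sketched with a standard explicit finite-difference scheme for the 1-D parabolic heat equation u_t = a·u_xx. This is illustrative only and is not the authors' method of strings; grid sizes and the initial hot spot are invented.

```python
# Illustrative explicit scheme for the 1-D heat equation (not the paper's
# "method of strings"): a sharp hot spot diffuses under fixed boundaries.
import numpy as np

def heat_explicit(u0, a=1.0, dx=0.01, dt=4e-5, steps=500):
    r = a * dt / dx**2            # stability of the explicit scheme: r <= 0.5
    assert r <= 0.5, "explicit scheme unstable"
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        u[0] = u[-1] = 0.0        # fixed-temperature boundaries
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.where(np.abs(x - 0.5) < 0.05, 1.0, 0.0)   # hot spot: large gradient
u = heat_explicit(u0)
print(u.max())                    # peak temperature decays as heat spreads
```

Note that sharp fronts like this initial condition are exactly where naive schemes can produce the "nonphysical" oscillations the abstract mentions; the integral-representation approach is proposed to avoid them.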
Keywords: thermal load, detonation wave, conservation laws, heat equation, mathematical modeling software

P. 655—659


P. Sh. Geidarov, PhD, Associate Professor, Institute of Control Systems of the Azerbaijan National Academy of Sciences

Algorithm for the Shortest Route Based on the Selected Set of Routes

This paper describes an algorithm for determining the shortest public transport route based on a selected set of routes. The algorithm is presented through successive refinements that extend its capabilities. First, an algorithm is described for determining the shortest route with a single transfer and without using the geographical coordinates of the area. On the basis of this underlying algorithm, an algorithm with two or more transfers is then examined. The possibilities of extending the functionality of the algorithm to different modes of transport, with geo-referenced locations, are also described, together with software and structural means of speeding up the algorithm.
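A common way to model shortest routes with transfers (shown here as a hypothetical illustration, not the paper's algorithm) is Dijkstra's algorithm over (stop, line) states, with a fixed time penalty for changing line. The network, stop names, and times below are made up.

```python
# Shortest public-transport route with transfer penalties via Dijkstra
# over (stop, line) states. Network data are illustrative.
import heapq
import itertools

# edges: stop -> list of (next_stop, line, minutes)
network = {
    "A": [("B", "bus1", 5), ("C", "bus2", 7)],
    "B": [("D", "bus1", 6), ("C", "bus3", 2)],
    "C": [("D", "bus2", 4)],
    "D": [],
}
TRANSFER = 10  # minutes of penalty for changing line

def shortest_route(network, src, dst):
    ctr = itertools.count()                    # tie-breaker for the heap
    pq = [(0, next(ctr), src, None, [src])]    # (cost, _, stop, line, path)
    best = {}
    while pq:
        cost, _, stop, line, path = heapq.heappop(pq)
        if stop == dst:
            return cost, path
        if best.get((stop, line), float("inf")) <= cost:
            continue
        best[(stop, line)] = cost
        for nxt, nline, t in network[stop]:
            extra = TRANSFER if line is not None and nline != line else 0
            heapq.heappush(pq, (cost + t + extra, next(ctr),
                                nxt, nline, path + [nxt]))
    return float("inf"), []

print(shortest_route(network, "A", "D"))   # -> (11, ['A', 'B', 'D'])
```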
Keywords: shortest route, urban transport, public transport, transport in Baku, Intelligent Transport Management Center

P. 660—668


G. A. Melnikov, Postgraduate Student, grmel89@gmail.com, T. A. Melnikov, Master Student, temmelnik@gmail.com, V. V. Gubarev, PhD, Professor, Department of Computer Engineering, Novosibirsk State Technical University

Regression Tree Pruning Algorithms: an Overview and Empirical Comparison

Regression trees belong to a very important class of regression models: they split the feature space into segments, build a specialized local model for each of them, and thereby achieve visualizable, easily interpretable, and accurate piecewise models. As with classification trees, choosing the right size of the tree is one of the key issues of regression tree induction. Unfortunately, this issue has received very little attention; the majority of works focus on the development of data-splitting algorithms.
The first part of the paper gives an overview and systematization of existing regression tree pruning algorithms. These algorithms can be divided into two standard groups: pre-pruning and post-pruning. Most authors follow best practices from classification tree induction and use cost-complexity pruning or design their own post-pruning algorithms. Only a few works use pre-pruning, and only two algorithms from this group are worth noting here: the first uses a validation dataset to estimate generalization error, and the second is based on the Chow test. In the second part of the paper, we conduct an empirical comparison of five key pruning algorithms on three indicators: running time, adequacy, and complexity of the obtained models. The experimental results show that, unlike the case of classification trees, pre-pruning algorithms are preferable to post-pruning algorithms in regression tree induction. The former are much less time-consuming: induction time decreased on average by a factor of three. In addition, the pre-pruning algorithms induce models that in most cases have better adequacy, with complexity comparable to that of the best post-pruning models.
Future work should focus on the development of pre-pruning algorithms for regression tree induction. Of particular interest, in our opinion, is the adaptation of statistical tests and information criteria for model selection.
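The validation-set variant of pre-pruning mentioned above can be sketched in miniature (this is an illustrative toy, not any of the five algorithms compared in the paper): a node is split only if the split reduces error on a held-out validation sample. Data are synthetic.

```python
# Toy regression tree with validation-set pre-pruning: a split is kept
# only if it lowers SSE on held-out data; otherwise the node stays a leaf.
import numpy as np

rng = np.random.default_rng(0)

def sse(y):
    return ((y - y.mean()) ** 2).sum() if len(y) else 0.0

def best_split(x, y):
    best = None
    for t in np.unique(x)[1:]:
        s = sse(y[x < t]) + sse(y[x >= t])
        if best is None or s < best[1]:
            best = (t, s)
    return best

def grow(x, y, xv, yv, depth=0, max_depth=3):
    leaf = y.mean()
    split = best_split(x, y) if depth < max_depth and len(np.unique(x)) > 1 else None
    if split is None:
        return leaf
    t, _ = split
    # pre-pruning test: accept the split only if validation SSE improves
    val_leaf = ((yv - leaf) ** 2).sum()
    l, r = xv < t, xv >= t
    val_split = ((yv[l] - y[x < t].mean()) ** 2).sum() + \
                ((yv[r] - y[x >= t].mean()) ** 2).sum()
    if val_split >= val_leaf:
        return leaf
    return (t,
            grow(x[x < t], y[x < t], xv[l], yv[l], depth + 1, max_depth),
            grow(x[x >= t], y[x >= t], xv[r], yv[r], depth + 1, max_depth))

def predict(tree, xi):
    while isinstance(tree, tuple):
        t, left, right = tree
        tree = left if xi < t else right
    return tree

# synthetic step function with noise; separate training and validation sets
x = rng.uniform(0, 1, 200);  y = (x > 0.5).astype(float) + rng.normal(0, 0.1, 200)
xv = rng.uniform(0, 1, 100); yv = (xv > 0.5).astype(float) + rng.normal(0, 0.1, 100)
tree = grow(x, y, xv, yv)
print(predict(tree, 0.2), predict(tree, 0.8))
```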
Keywords: data mining, machine learning, non-linear regression, piecewise models, model trees, regression trees, regression tree pruning

P. 669–675


I. S. Pechenko, Research Associate, e-mail: ivan.pechenko@intel.com
ZAO "Intel AO"

Specification Representation Forms: Issues and Ways for Automatic Processing

The complexity of computer systems has grown dramatically over the past thirty years. This growth in system design complexity increases the importance of creating full and accurate system specifications, whose quality greatly affects the success of the product design.
The requirements engineering process consists of several parts: requirements elicitation, requirements analysis, specification creation, specification validation and verification, and requirements management.
A specification can consist of many documents written in different languages, for example natural languages, structural and flow diagrams, spreadsheets, and formal languages. The whole specification creation process can be characterized as a transition from natural language requirements obtained from stakeholders to a more formal specification. Natural language requirements are commonly inaccurate, inconsistent, and incomplete, and cannot be automatically processed. Formal language specifications are the most accurate and unambiguous system descriptions; they also have another great advantage: they can be analyzed using formal methods. Despite this, usually only a small part of a system specification is written in formal languages, because they are hard to learn and to read and have many semantic restrictions. Diagrams and spreadsheets sit somewhere between natural and formal languages: they have a formal syntax but also carry additional data in natural language.
Diagrams are one of the most common and convenient forms of semi-formal specifications. UML is the most prevalent visual language in requirements engineering; there are both structural and behavioral types of UML diagrams. Specifications in UML usually contain high-level information and cannot be directly analyzed using formal methods. Instead, UML diagrams are usually modified, supplemented with additional data, and then translated into some formal language. Many recent studies use this flow for diagram analysis.
Spreadsheets are another common form of semi-formal specifications. They usually serve as a detailed description of the system structure and can be analyzed with formal methods only if the structure and syntax of the spreadsheet are fixed. There are two common directions in spreadsheet specification studies. One group of works considers spreadsheet validation capabilities; they use semi-automatic validation because, if the spreadsheet structure is not fixed, formal validation methods cannot be applied. Another group of studies considers making spreadsheets correct by construction using spreadsheet templates.
Keywords: requirements, specification, validation, natural language, formal language, automatic processing, UML diagram, automatic translation, spreadsheet, spreadsheet template

P. 676–683


A. V. Vishnekov, DSc., Professor, avishnekov@hse.ru, E. M. Ivanova, PhD, Associate Professor, emivanova@hse.ru, National Research University "Higher School of Economics" (MIEM NRU HSE)

Automation of the Smart-Education Trajectory Choice

The basic elements of Smart education are considered in the article: university facilities; types of information sources; the way the educational process is organized; the size of the educational group; the technology of information presentation; the technology of organizing current and final knowledge control; the pace of knowledge presentation adapted to a particular student; the sequence and number of simultaneously studied subjects; and student preferences. The questions of trajectory choice in the Smart education environment are discussed, and the authors give an approximate content of a possible education trajectory. The article offers a technique for education trajectory choice based on decision-making support methods. This technique allows the use of both experts' knowledge and experience in various fields of organizing and ensuring the educational process, and the requirements of students. Both group and individual decision-making methods are considered, and the authors justify the choice of method for the task. The article shows an example of applying the minimum distance method for coordinating experts' opinions while choosing among alternative educational trajectories, and an example of applying the hierarchy analysis method for choosing one of the alternative educational trajectories recommended by the experts. The offered technique makes it possible to automate the choice of the most rational education trajectory and to reduce decision-making time and costs.
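The core computation of the hierarchy analysis method (AHP) can be sketched as follows; the pairwise comparison matrix for three hypothetical trajectories is invented, and the geometric-mean approximation of the principal eigenvector is used.

```python
# AHP sketch: priority vector for three hypothetical education
# trajectories from an (invented) pairwise comparison matrix.
import numpy as np

# A[i, j] = how strongly trajectory i is preferred over j (Saaty scale)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

w = A.prod(axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
w = w / w.sum()                            # normalized priority vector
print(np.round(w, 3))                      # highest weight = recommended choice
```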
Keywords: Smart-education, education trajectory, decision-making support methods, minimum distance method

P. 684–691


A. V. Maltsev, Ph. D. (Mathematics), Senior Researcher, avmaltcev@mail.ru, M. V. Mikhaylyuk, Dr. Sc. (Mathematics), Head of Department, mix@niisi.ras.ru, P. Yu. Timokhin, Scientific Researcher, webpismo@yahoo.de, M. A. Torgashev, Ph. D. (Mathematics), Head of Sector, mtorg@mail.ru, Scientific Research Institute of System Analysis (Russian Academy of Sciences)

The Technology of Video Stream Separation by Means of Multiple Kinect Devices in Distance Education

One of the important issues in the field of distance education is transmitting a high-quality video stream over the network in real time. In this paper we propose a technology based on the use of several Microsoft Kinect devices, in which the initial video stream is separated into three new streams: the first contains only the board, the second the lecturer, and the third a mask for selecting the lecturer. Separate compression, transmission, and merging of these streams into one on the user's computer allows us to achieve a lower bitrate compared to direct transmission of the source video. The proposed compression methods work in real time and provide a high compression ratio for video transmission over data networks with low bandwidth. Using multiple Kinect devices in our technology significantly increases the angle of view compared to a single device.
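The merge step on the user's computer can be sketched as mask-based compositing (an illustrative NumPy fragment, not the paper's codec): the decoded lecturer stream is placed over the decoded board stream wherever the transmitted mask is set. Frame sizes and pixel values are synthetic.

```python
# Mask-based merging of the three decoded streams into one output frame.
import numpy as np

h, w = 4, 6
board = np.full((h, w, 3), 200, dtype=np.uint8)     # decoded board stream
lecturer = np.full((h, w, 3), 30, dtype=np.uint8)   # decoded lecturer stream
mask = np.zeros((h, w), dtype=bool)                 # decoded selection mask
mask[1:3, 2:4] = True                               # lecturer occupies this region

frame = np.where(mask[..., None], lecturer, board)  # merged output frame
print(frame[0, 0, 0], frame[1, 2, 0])               # -> 200 30
```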
Keywords: video stream, segmentation, Kinect, distance education, depth map, compression, real-time

P. 692–698


N. T. Abdullayev1, Associate Professor, e-mail: a.namik46@mail.ru, N. Ya. Mamedov2, Associate Professor, e-mail: mr.nuraddin47@mail.ru, G. S. Agayeva1, Researcher, e-mail: gunel_asoa@yahoo.com
1Azerbaijan Technical University,
2Azerbaijan State University of Oil and Industry

Application of a Spectral Analysis Method for the Differential Diagnosis of Diseases of the Digestive Tract Organs

The possibility of using the proposed spectral analysis algorithm on measured signals for the differential diagnosis of the functional condition of the gastrointestinal tract organs is considered. Electrogastroenterographic signals are investigated for the normal state and for ulcerative lesions of the stomach and duodenum. For the differential diagnosis of these diseases, average values of the complex Fourier coefficients are used, and the dependences of these coefficients on the harmonic number of the studied signal are given. Structures of artificial neural networks for the classification of the samples are offered.
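The feature-extraction step can be sketched as follows (an illustration, not the authors' algorithm): mean magnitudes of the first Fourier coefficients over a set of synthetic slow-wave signals, which could then feed a classifier. The 0.05 Hz component (about 3 cycles per minute) and noise level are invented stand-ins for real electrogastroenterographic data.

```python
# Mean Fourier-coefficient magnitudes over a set of synthetic signals.
import numpy as np

rng = np.random.default_rng(1)
fs, n = 1.0, 100                           # sampling rate (Hz), samples
t = np.arange(n) / fs

def spectrum(signal, n_harmonics=10):
    c = np.fft.rfft(signal) / len(signal)  # complex Fourier coefficients
    return np.abs(c[1:n_harmonics + 1])    # magnitudes of first harmonics

# synthetic "normal" signals dominated by a 0.05 Hz slow wave plus noise
signals = [np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 0.1, n)
           for _ in range(20)]
mean_spectrum = np.mean([spectrum(s) for s in signals], axis=0)
print(np.round(mean_spectrum, 3))          # peak at the 5th harmonic
```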
Keywords: digestive tract, electrogastroenterographic signals, spectral analysis, Fourier coefficients, neural network

P. 699–703


V. P. Kulagin, Professor, Head of Research Laboratory, e-mail: vkulagi@hse.ru, Moscow Institute of Electronics and Mathematics, Higher School of Economics, Moscow, Russia; A. I. Ivanov, Associate Professor, Head of Laboratory, e-mail: ivan@pniei.penza.ru, JSC "Penza Research Electrotechnical Institute", Penza, Russia; Yu. I. Serikova, 1st-year Undergraduate, e-mail: gosh64@mail.ru, FBGOU VPO "Penza State University", Penza, Russia

Correction of Methodical and Random Components of Errors in Calculating Correlation Coefficients on Small Samples of Biometric Data

The paper shows that calculating expectation values, standard deviations, and correlation coefficients on small samples gives appreciable errors, and that the error of the correlation coefficient far exceeds those of the expectation value and standard deviation. Such errors arise from the quantization of the underlying continuous data when it is represented by a small sample. We give probability distribution plots of the quantization errors and of the errors arising when correlation coefficients are calculated on small samples. Relating the correlation errors to the test sample size makes it possible to perform simulation modelling of several variables under the condition of equal mutual correlation. A table lists the expected values of the correlation coefficients obtained from test samples of different sizes; these values indicate the presence of significant systematic errors in the evaluation of the correlation coefficients. Small samples show an appreciable systematic error, which decreases rapidly as the sample size grows. The paper argues for correcting the systematic error in additive and/or multiplicative form, and considers two approaches to the analytic description of the correction data obtained from the detected systematic error. The first approach approximates the rows of the correction data by hyperbolic curves with a fractional exponent. The second approach approximates the columns of the correction data using the analytical function describing the beta distribution.
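The systematic (methodical) error the paper quantifies can be reproduced by a small simulation (illustrative parameters, not the paper's biometric data): the sample correlation coefficient of bivariate normal data with true correlation 0.5 is biased on small samples, and the bias shrinks as the sample grows.

```python
# Simulated bias of the sample correlation coefficient on small samples.
import numpy as np

rng = np.random.default_rng(42)
true_r = 0.5
cov = [[1.0, true_r], [true_r, 1.0]]

def mean_corr_bias(n, trials=5000):
    """Mean of sample correlations over many trials, minus the true value."""
    est = np.empty(trials)
    for i in range(trials):
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        est[i] = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
    return est.mean() - true_r

biases = {n: mean_corr_bias(n) for n in (8, 64)}
print({n: round(b, 4) for n, b in biases.items()})   # bias shrinks with n
```

The classical expansion E[r] ≈ ρ − ρ(1 − ρ²)/(2n) predicts a negative bias of about −0.023 at n = 8, which is what the simulation recovers.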
Keywords: methodological error, correlation coefficient, small sample, biometric data processing

P. 705–710


E. D. Aved'yan1, 2, 3, Senior Research Fellow1, Deputy Head2, Professor3, T. T. L. Le3, Postgraduate Student,
1Department of Advanced Research and Special Projects of the Federal State Autonomous Research Institution CIT&S, Moscow, Russia
2The Neural Network Technology Centre of the International Centre of Informatics and Electronics, Moscow, Russia
3Department of Radio Engineering and Cybernetics of the Moscow Institute of Physics & Technology (State University), Moscow, Russia

A Two-Level System for the Detection of DoS Attacks and Their Components Based on CMAC Neural Networks

The results of applying a system of CMAC neural networks (NN CMAC) to the detection of DoS attacks and their components are given. The system uses all the data in the KDD Cup 99 database and consists of two levels. The first level detects DoS attacks using a trained NN CMAC. The second level separates the detected DoS attacks into all six components (Back, Neptune, Land, Pod, Teardrop, and Smurf) using six trained NN CMAC. The miss rate for DoS attacks and the false alarm rate at the first level do not exceed 0.2 %.
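For readers unfamiliar with CMAC, a minimal sketch follows (illustrative only, not the paper's trained detectors): a 1-D CMAC with several overlapping tilings, trained by LMS to fit a target function. The paper's system uses such networks, trained on KDD Cup 99 features, as attack classifiers; here the target is simply a sine wave.

```python
# Minimal 1-D CMAC: overlapping coarse tilings with LMS weight updates.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0, lr=0.2):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.width = lo, (hi - lo) / (n_tiles - 1)
        self.w = np.zeros((n_tilings, n_tiles + 1))
        self.lr = lr

    def _cells(self, x):
        # each tiling is offset by a fraction of the tile width
        offs = (np.arange(self.n_tilings) / self.n_tilings) * self.width
        idx = ((x - self.lo + offs) / self.width).astype(int)
        return np.clip(idx, 0, self.n_tiles)

    def predict(self, x):
        # output = sum of the active cell weights, one per tiling
        return self.w[np.arange(self.n_tilings), self._cells(x)].sum()

    def train(self, x, y):
        # LMS: spread the error correction equally over the active cells
        err = y - self.predict(x)
        self.w[np.arange(self.n_tilings), self._cells(x)] += \
            self.lr * err / self.n_tilings

net = CMAC()
rng = np.random.default_rng(3)
for x in rng.uniform(0, 1, 10000):
    net.train(x, np.sin(2 * np.pi * x))       # fit an illustrative target
print(round(net.predict(0.25), 2))            # close to sin(pi/2) = 1
```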
Keywords: detection of DoS attacks, CMAC neural network, attacks, KDD Cup 99 database, components of DoS attacks

P. 711–718

