System analysis
In real-world networks, many problems reduce to finding the All-Pairs Shortest Paths (APSP) and their distances in a graph. Solving the large-scale APSP problem on modern multi-processor (multi-core) systems is key for various application domains. The computational cost of solving the problem is high; therefore, in many cases approximate solutions are considered acceptable. Blocked APSP algorithms are a promising approach that can efficiently exploit many processors (cores) and their caches in parallel. At the same time, to the best of our knowledge, all blocked algorithms of the Floyd-Warshall family use blocks of equal sizes, which limits their applicability. In this paper we propose new blocked algorithms which divide the input graph into unequal subgraphs and divide the matrix of distances between pairs of vertices into blocks of unequal sizes. The algorithms represent dense subgraphs by adjacency matrices, and sparse subgraphs and the connections between them by adjacency lists. This approach allows algorithms of the Floyd-Warshall family to be used together with algorithms of the Dijkstra family, and it can be applied to large graphs decomposed into dense (cluster) and sparse subgraphs. A new heterogeneous algorithm can significantly reduce the computation time of blocks depending on the block type and size. The contribution of the paper is a new family of blocked APSP algorithms that handle blocks of unequal sizes while preserving and extending the advantages of the state-of-the-art algorithms operating on blocks of equal sizes. The proposed algorithms are implemented as single- and multi-threaded parallel applications for multi-core systems.
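For orientation, the sketch below shows the classical blocked Floyd-Warshall scheme on which this family builds, written so that the partition of the vertex set may consist of ranges of unequal sizes. It is a minimal Python/NumPy illustration under our own naming (fw_update, blocked_apsp, parts), not the authors' implementation, and it omits the Dijkstra-based handling of sparse blocks.

```python
import numpy as np

def fw_update(d, rows, cols, ks):
    """Relax the sub-block d[rows, cols] through intermediate vertices ks
    (the Floyd-Warshall core: d[i,j] = min(d[i,j], d[i,k] + d[k,j]))."""
    for k in ks:
        d[np.ix_(rows, cols)] = np.minimum(
            d[np.ix_(rows, cols)],
            d[np.ix_(rows, [k])] + d[np.ix_([k], cols)],
        )

def blocked_apsp(d, parts):
    """Blocked Floyd-Warshall over an *unequal* partition of the vertex set.

    d     : (n, n) distance matrix (np.inf where no edge, 0 on the diagonal)
    parts : list of index ranges of arbitrary sizes,
            e.g. [range(0, 50), range(50, 120), range(120, 135)]
    """
    for m, km in enumerate(parts):
        fw_update(d, km, km, km)              # diagonal block first
        for a, pa in enumerate(parts):
            if a != m:
                fw_update(d, pa, km, km)      # cross blocks in column m
                fw_update(d, km, pa, km)      # cross blocks in row m
        for a, pa in enumerate(parts):
            for b, pb in enumerate(parts):
                if a != m and b != m:
                    fw_update(d, pa, pb, km)  # peripheral blocks
    return d
```

The point of the unequal partition is that parts can follow the graph's cluster structure instead of a fixed block size.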
The time complexity of an algorithm is the number of elementary operations it performs. Taking into account the features of programming languages, the authors propose a methodology for calculating this complexity measure in the specific language of implementation, provide formulas for calculating the theoretical complexity, and give rules for calculating the experimental complexity of a program in C#.
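To make the theoretical/experimental distinction concrete, a program can be instrumented with an operation counter and the measured count compared against a closed-form formula. The paper develops this for C#; the following is only a hypothetical Python illustration of the idea, with bubble sort as the instrumented algorithm.

```python
def bubble_sort_counted(a):
    """Bubble sort that counts comparisons and swaps as elementary operations."""
    ops = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            ops += 1                       # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                ops += 1                   # one swap
    return ops

# Theoretical worst case: n(n-1)/2 comparisons + n(n-1)/2 swaps = n(n-1).
for n in (10, 100, 1000):
    measured = bubble_sort_counted(list(range(n, 0, -1)))  # reversed input
    print(n, measured, n * (n - 1))        # measured count matches the formula
```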
The paper presents a classification of methods for land surface image segmentation. Approaches such as template matching, machine learning and deep neural networks, as well as the use of knowledge about the analyzed objects, are considered. The specifics of applying vegetation indices to the segmentation of satellite image data are discussed, and the advantages and disadvantages of each approach are noted. The results reported by the authors of methods that have appeared over the last 10 years are systematized, which will help interested readers orient themselves faster and form ideas for further research.
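As a concrete example of a vegetation index, the widely used NDVI is computed per pixel from the near-infrared and red bands, and a basic segmentation can be obtained by thresholding it. A minimal sketch; the threshold of 0.3 is illustrative, as real studies tune it per scene and sensor:

```python
import numpy as np

def ndvi_mask(nir, red, threshold=0.3):
    """Segment vegetation pixels by thresholding the NDVI.

    nir, red  : float arrays of near-infrared and red reflectance
    threshold : illustrative cut-off separating vegetation from background
    """
    ndvi = (nir - red) / (nir + red + 1e-12)  # small term avoids division by zero
    return ndvi > threshold                   # boolean vegetation mask
```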
Management of technical objects
Mathematical models for controlling centrally oriented and non-centrally oriented rotary wheels of a mobile robot are considered. Based on an analysis of the kinematics of the mobile robot, the dependences of the rotation angles of the rear and front free wheels on the angular velocities of the right and left differentially driven drive wheels were obtained. For a specific mobile robot with given kinematic parameters, graphs of the dependence of the rotation angle of the free wheels on the turning radius of each wheel, and graphs of the dependence of the rotation angle of the free wheels on the angular velocities of the drive wheels, were constructed. The results obtained made it possible to establish a relationship between the rotation angle of the caster wheels and the design characteristics of the robot. Restricting the angular velocities of the drive wheels to a range consistent with the limiting values of the caster wheel rotation angles makes it possible to use the obtained mathematical models to increase the stability of the mobile robot's movement.
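For reference, the standard kinematics of a differential drive platform with a passively aligning caster wheel leads to relations of the kind studied in the paper. The notation below is ours (b is the track width between the drive wheels, L the caster's longitudinal offset from the drive axle, ω_r and ω_l the angular velocities of the right and left drive wheels), not necessarily the paper's:

```latex
% Turning radius of the drive-axle midpoint (equal wheel radii assumed):
R = \frac{b}{2}\,\frac{\omega_r + \omega_l}{\omega_r - \omega_l},
\qquad \omega_r \neq \omega_l .
% A caster wheel on the centerline at longitudinal offset L passively
% aligns with its local velocity, so its rotation angle is
\alpha = \arctan\frac{L}{R}
       = \arctan\!\left(\frac{2L}{b}\,\frac{\omega_r - \omega_l}{\omega_r + \omega_l}\right).
```

In this form a limiting caster angle directly bounds the admissible ratio of the drive wheel velocities, which is consistent with the stability argument above.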
From the automation point of view, incident detection algorithms can be divided into two categories: automatic and non-automatic. Automatic algorithms are those that identify an incident automatically from traffic flow data received from traffic detectors, while manual algorithms or procedures rely on reports from human witnesses. By functional characteristics, incident detection algorithms are divided into algorithms for highways and algorithms for street networks. By data acquisition method, they fall into three groups: algorithms using data from stationary vehicle detectors (inductive loops, radars, video cameras, etc.); algorithms using mobile sensors (Bluetooth, Wi-Fi, RFID, GPS and GLONASS sensors, toll system transponders, etc.); and algorithms that use information from drivers (GSM communications, navigation services, Internet applications, etc.). This article discusses algorithms that use data from stationary vehicle detectors. The disadvantages of such algorithms include the need to install and operate transport detectors (inductive, video, etc.), which interferes with traffic flow and sometimes leads to temporary traffic closures. The locations of the vehicle detectors, the spacing between them and their number are critical for detecting an incident on a particular section of the highway; however, installing stationary detectors along the entire length of a highway is extremely labor- and capital-intensive. In addition, inductive vehicle detectors, which are mainly used to determine traffic flow parameters on highways, are unreliable and often fail, which makes incident detection on the affected section of the road ineffective. The advantages of the algorithms under consideration include their reliability and accuracy in identifying incidents, proven over decades, which gives them an undoubted edge over algorithms that use mobile sensors or information from drivers.
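As one concrete instance of this class, the classical California-type algorithms compare occupancies from adjacent stationary detector stations. A minimal sketch; the variable names are ours, and the thresholds t1, t2, t3 are site-specific calibration constants:

```python
def california_check(occ_up, occ_down, occ_down_prev, t1, t2, t3):
    """California-style incident test on detector station occupancies.

    occ_up        : current occupancy at the upstream station
    occ_down      : current occupancy at the downstream station
    occ_down_prev : downstream occupancy a few intervals earlier
    """
    occdf = occ_up - occ_down                         # spatial occupancy difference
    occrdf = occdf / occ_up if occ_up > 0 else 0.0    # relative spatial difference
    docctd = ((occ_down_prev - occ_down) / occ_down_prev
              if occ_down_prev > 0 else 0.0)          # relative temporal drop downstream
    return occdf > t1 and occrdf > t2 and docctd > t3  # all tests must fire
```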
Data processing and decision-making
The objective of the article is to propose a method for the complex recognition of Parkinson's disease using machine learning, based on markers from voice analysis and from changes in patient movements, evaluated on known data sets. A time-frequency (wavelet) function and the Meyer cepstral coefficient function are used. The KNN algorithm and a two-layer neural network were used for training and testing on publicly available datasets of speech changes and of movement slowing in Parkinson's disease. A Bayesian optimizer was also used to tune the hyperparameters of the KNN algorithm. The constructed models achieved accuracies of 94.7 % and 96.2 % on the dataset of speech changes in patients with Parkinson's disease and the dataset of patients' movement slowing, respectively. The recognition results are close to the state of the art. The proposed technique is intended for use in a subsystem for IT diagnostics of nervous diseases.
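A minimal sketch of the classification stage only, where X and y stand for the extracted feature matrix and diagnosis labels of a public dataset; scikit-learn's grid search is substituted here as a stand-in for the Bayesian hyperparameter optimizer used in the paper:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tune_knn(X, y):
    """Fit a KNN classifier with cross-validated hyperparameter search."""
    search = GridSearchCV(
        make_pipeline(StandardScaler(), KNeighborsClassifier()),
        param_grid={
            "kneighborsclassifier__n_neighbors": [1, 3, 5, 7, 9],
            "kneighborsclassifier__weights": ["uniform", "distance"],
        },
        cv=5,
        scoring="accuracy",
    )
    search.fit(X, y)
    return search.best_estimator_, search.best_score_
```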
Experimental studies of the electroencephalograms of an operator exposed to electromagnetic noise radiation in the Wi-Fi range were carried out. Electroencephalograms were recorded in the standard leads Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Fpz, Fz, Cz, Pz, Oz. Quantitative parameters of the operator's emotional state, expressed by the power spectral density of the rhythmic components of the brain, as well as information parameters such as sample entropy, fractal dimension and Lempel-Ziv complexity, averaged over 10 subjects, were analyzed. It is shown that when exposed to the radiation the operator experiences depression: the trend of changes in the spectral power density of the theta, alpha and gamma rhythms, the fractal dimension, the Lempel-Ziv complexity and the sample entropy in most electroencephalogram leads coincides with the trend of changes in these parameters reported in the scientific literature for depression. It is also established that the operator experiences fear, which is indicated by an increase in the fractal dimension of the electroencephalograms by no more than 0.4 % relative to the background.
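Of the information parameters listed, Lempel-Ziv complexity is the easiest to make concrete: the signal is binarized (here against its median, one standard choice) and the number of distinct phrases in the resulting sequence is counted. A minimal sketch of the classic Kaspar-Schuster procedure:

```python
import numpy as np

def lz_complexity(signal):
    """Lempel-Ziv (LZ76) complexity of one EEG channel."""
    s = (np.asarray(signal) > np.median(signal)).astype(int).tolist()
    n = len(s)
    if n < 2:
        return n
    c, l, i, k, k_max = 1, 1, 0, 1, 1  # phrases, prefix length, pointer, match length
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1                      # current phrase still reproducible
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                  # not reproducible from any earlier start
                c += 1                  # -> close the phrase
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c                            # often normalized by n / log2(n)
```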
Information technologies in education
The article is devoted to the study of ways to optimize a mathematical model of the structure of the educational process. The structure of the educational process is formalized as an objective function comprising the sum of the numbers of hours provided by the curriculum for the various types of training sessions. For each class type, a weighting coefficient is introduced which characterizes the relative effectiveness of that class type. It is proposed to determine the numerical values of the weighting coefficients by the analytic hierarchy process, based on expert assessments given by appointed specialists. The task is to maximize the objective function characterizing the overall effectiveness of the educational process. The restrictions imposed on the structure of the educational process form a system of linear inequalities that account for the budget of study time allocated to the academic discipline, financial restrictions on the remuneration of the teaching staff, and financial restrictions associated with maintaining the educational and material base, purchasing software, and other expenses. Thus, the optimization of the educational process is reduced to a linear programming problem, which in this case is solved by the simplex method using standard software available in various computing environments. A dual problem is also formulated to determine the time and financial resources required for a given distribution of teaching hours by class type. The example given in the article, implemented in the Mathcad environment, clearly demonstrated the efficiency of the developed methodology.
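A minimal sketch of the resulting linear program with purely illustrative numbers: x[i] is the number of hours of class type i (say, lectures, seminars, labs), w are the AHP-derived effectiveness weights, and the constraints cap the total study time and the payroll cost. SciPy's linprog stands in here for the Mathcad simplex routine mentioned in the article:

```python
from scipy.optimize import linprog

w = [0.5, 0.3, 0.2]          # effectiveness weights per class type (illustrative)
cost = [40.0, 25.0, 30.0]    # cost per hour of each class type (illustrative)

res = linprog(
    c=[-wi for wi in w],     # maximize w.x by minimizing -w.x
    A_ub=[[1, 1, 1], cost],  # total-hours row and total-cost row
    b_ub=[120, 4000],        # time budget (hours) and money budget
    bounds=[(10, None)] * 3, # at least 10 hours of each class type
    method="highs",
)
print(res.x, -res.fun)       # optimal hour distribution and its effectiveness
```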