
Neuroevolutionary reinforcing learning of neural networks

https://doi.org/10.21122/2309-4923-2021-4-16-24

Abstract

The article presents the results of combining four different types of neural network learning: evolutionary, reinforcement, deep, and extrapolating. The last two serve primarily to reduce the dimensionality of the system's input signal and to simplify its training in terms of computational complexity.
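The role of the dimensionality-reducing front-end described above can be illustrated with a minimal sketch. This is not the authors' method: a fixed random linear projection stands in for the deep/extrapolating front-end, and all names and sizes here are illustrative assumptions.

```python
import numpy as np

# Hypothetical front-end: a fixed linear encoder that compresses a
# high-dimensional input signal (e.g. a flattened 28x28 image) into a
# short feature vector before it reaches the controller network.
# A random projection is only a placeholder for the learned front-end.
rng = np.random.default_rng(0)

IN_DIM, CODE_DIM = 784, 16

W_enc = rng.normal(0.0, 1.0 / np.sqrt(IN_DIM), size=(CODE_DIM, IN_DIM))

def encode(x):
    """Reduce the input signal's dimensionality for the controller."""
    return np.tanh(W_enc @ x)

x = rng.random(IN_DIM)   # stand-in input signal
code = encode(x)
print(code.shape)        # (16,)
```

The controller then operates on the 16-dimensional code rather than the raw 784-dimensional signal, which is what lowers the computational cost of its training.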

In the presented work, the neural network structure of the controller of the modeled system is formed in the course of an evolutionary process, taking into account currently known structural and developmental features of self-learning systems found in living nature. Constructing the network in this way makes it possible to bypass the specific limitations of models created by recombining already known neural network topologies.
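The evolutionary shaping of the controller by a reward signal can be sketched, under strong simplifying assumptions, as a (1+λ) evolution strategy over the controller's weights. This is an illustrative toy, not the authors' algorithm: the "environment" reward is a synthetic function, and the population size, mutation scale, and generation count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(w):
    # Toy stand-in for the environment's reward signal: highest when
    # the controller weights match a hidden target vector.
    target = np.array([0.5, -0.3, 0.8, 0.1])
    return -np.sum((w - target) ** 2)

parent = np.zeros(4)  # initial controller weights
for generation in range(200):
    # Mutation: perturb the parent into a small offspring population.
    offspring = [parent + 0.1 * rng.normal(size=4) for _ in range(8)]
    # Selection with elitism: keep the best-rewarded candidate.
    parent = max(offspring + [parent], key=reward)

print(np.round(parent, 1))  # should drift close to the hidden target
```

In the article's setting the same selection pressure also acts on the network's structure, not only its weights; extending the sketch to structural mutations (adding/removing neurons and connections) is what distinguishes neuroevolution proper from plain weight evolution.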

About the Authors

Y. A. Bury
Belarusian State University of Informatics and Radioelectronics
Belarus

Yaraslau A. Bury, Assistant and Post-graduate Student, Electronic Computing Machines Department, BSUIR

Minsk



D. I. Samal
Belarusian State University of Informatics and Radioelectronics
Belarus

Dmitry I. Samal, Ph.D., Associate Professor, Software for Information Technologies Department, BSUIR; Associate Professor, Software Engineering Department, BSTU

Minsk



References

1. Hizhnjakov, J. N. Algorithms of fuzzy, neural and neural-fuzzy control in real-time systems / J. N. Hizhnjakov. – Perm: PNIPU, 2013. – 160 p.

2. Sutton, R. S. Reinforcement Learning / R. S. Sutton, A. G. Barto. – M.: BINOM, 2017. – 399 p.

3. Rutkovskaja, D. Neural networks, genetic algorithms and fuzzy systems / D. Rutkovskaja, M. Pilin'skij, L. Rutkovskij. – M.: Gorjachaja linija – Telekom, 2013. – 384 p.

4. Bury, Y. A. Extrapolating training of neural networks / Y. A. Bury, D. I. Samal // Informatics. – 2019. – Vol. 16, № 1. – P. 86–92.

5. THE MNIST DATABASE of handwritten digits [Electronic resource]. – Access mode: http://yann.lecun.com/exdb/mnist/. – Access date: 14.08.2019.

6. Official site of Caffe [Electronic resource]. – Access mode: http://caffe.berkeleyvision.org. – Access date: 14.08.2019.

7. Bury, Y. A. Application of configuration coding of the input signal in convolutional neural networks for recognition of handwritten characters / Y. A. Bury, D. I. Samal. – Minsk: BSUIR, BigDATA, 2019. – P. 366–371.

8. The Street View House Numbers (SVHN) Dataset [Electronic resource]. – Access mode: http://ufldl.stanford.edu/house-numbers. – Access date: 14.08.2019.

9. Hajkin, S. Neural networks. Full course / S. Hajkin. – M., SPb., Kiev: Vil'jams, 2006. – 1104 p.

10. Plotnikov, A. D. Mathematical programming / A. D. Plotnikov. – Minsk: Novoe znanie, 2007. – 171 p.

11. Nikolenko, S. Deep learning. Immersion in the world of neural networks / S. Nikolenko, A. Kadurin, E. Arhangelskaya. – SPb.: Piter Publ., 2018. – 480 p.



For citations:


Bury Y.A., Samal D.I. Neuroevolutionary reinforcing learning of neural networks. «System analysis and applied information science». 2021;(4):16-24. (In Russ.) https://doi.org/10.21122/2309-4923-2021-4-16-24



This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2309-4923 (Print)
ISSN 2414-0481 (Online)