ABSTRACTS OF ARTICLES OF THE JOURNAL "INFORMATION TECHNOLOGIES".
No. 2. Vol. 30. 2024

DOI: 10.17587/it.30.85-90

K. Sh. Gurbanova, Chief Specialist,
Training-Innovation Centre, Institute of Information Technology of the Ministry of Science and Education of the Azerbaijan Republic, Baku, Azerbaijan

Research of Stages, Types of Modeling and Methods of Gesture Recognition

In modern society, where methods of communication between people are developing rapidly, the problems of communication with the outside world and of the social integration of people with disabilities cannot be set aside. For people with hearing and speech impairments, gesture is the only means of communication. The research provides information on static and dynamic gestures that allow people with communication problems to exchange information. It is noted that the problem of automatic gesture recognition is solved by various mathematical methods, algorithms and computer systems. The advantages and disadvantages of 2D and 3D models for sequential recognition of hand gestures are shown. It is noted that the creation of a national electronic database that recognizes the national dactyl alphabet and gestures of the Republic of Azerbaijan is an urgent problem. The well-known methods of this category are described: the artificial neural network method, the hidden Markov model method, the random forest method, and the method of marking hand gestures with gloves painted in markers of different colors. The creation of a national electronic database that recognizes the national dactyl alphabet and gestures is proposed, which would partially remove the existing barriers in education and in communication between people.
Keywords: sign language, information technology, gesture modeling, gesture recognition methods, gesture recognition algorithm

P. 85-90
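As a purely illustrative aside (not taken from the article), the following minimal Python sketch shows how a static gesture classifier based on the random forest method, one of the approaches the abstract lists, might be set up. The feature layout (21 hand landmarks with x, y, z coordinates), the class count, and all identifiers are assumptions, and synthetic data stands in for real landmark features that would normally be extracted from video.

# Illustrative sketch only, not the article's method: classifying static hand
# gestures with a random forest over flattened hand-landmark coordinates.
# All constants and names below are assumptions made for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_SAMPLES = 600      # number of synthetic training examples (assumption)
N_LANDMARKS = 21     # typical hand-skeleton landmark count (assumption)
N_CLASSES = 10       # e.g. ten dactyl-alphabet letters (assumption)

# Synthetic stand-in for landmark features: (x, y, z) per landmark, flattened.
X = rng.normal(size=(N_SAMPLES, N_LANDMARKS * 3))
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# On random data the accuracy stays near chance; with real landmark features
# extracted from video it would reflect actual recognition quality.
print("held-out accuracy:", clf.score(X_test, y_test))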

References

  1. Dvorina N. G. The use of interactive computer technology for recognition of gestures and speech in practical classes in a foreign language, Internet journal "Science Studies", 2015, no. 3(7), pp. 1—15, available at: http://naukovedenie.ru/ (date of access: 12/11/2022) (in Russian).
  2. Hongyi L., Lihui W. Gesture recognition for AI interaction: from theory to recent advances, available at: https://integral-russia.ru/2021/07/02/raspoznavanie-zhestov-dlya-vzaimodejstviya-s-ii-ot-teorii-k-poslednim-dostizheniyam/ (date of access: 01/18/2023).
  3. Siryak R. V. Technologies for identification and recognition of gestures, Bulletin of the Volodymyr Dahl East Ukrainian National University, 2017, no. 8 (238), pp. 79—85.
  4. Parasuraman R., Sheridan T. B., Wickens C. D. A model for types and levels of human interaction with automation, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 2000, no. 3 (30), pp. 286—297.
  5. Enikeev D. G., Mustafina S. A. Using the leap motion controller for applied systems of sign language recognition, Information technologies. Problems and solutions, 2019, no. 1(6), pp. 50—54 (in Russian).
  6. Rozaliev V. L., Agafonov G. V., Kirichenko M. I. Automated selection of human hands for recognition of sign speech, V Intern. sci.-tech. conf. Open semantic technologies for designing intelligent systems, Feb 19—21, 2015, pp. 565—570 (in Russian).
  7. Wu Y., Huang T. S. Vision-based gesture recognition: A review, Gesture-Based Communication in Human-Computer Interaction: International Gesture Workshop, GW'99 Gif-sur-Yvette, France, March 17-19, 1999 Proceedings, 1999, pp. 103—115.
  8. Tomasi C., Petrov S., Sastry A. 3D tracking = classification + interpolation, Proc. Ninth IEEE International Conference on Computer Vision (Nice, France, 2003), IEEE Computer Society, 2003, pp. 1441—1448.
  9. Ryumin D. Method of automatic video analysis of hand movements and gesture recognition in human-machine interfaces, Scientific and technical bulletin of information technologies, mechanics and optics, 2020, no. 4(20), pp. 525—531 (in Russian).
  10. Murakami K., Taguchi H. Gesture Recognition using Recurrent Neural Networks, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Reaching through Technology (CHI '91), ACM, 1991, pp. 237—242.
  11. Ognev I. V., Paramonov P. A. Speech recognition by methods of hidden Markov models in an associative oscillatory environment, Technical sciences. Informatics, computer technology, 2013, no. 3 (27), pp. 115—126.
  12. Bobick A., Davis J. An appearance-based representation of action, International Conference on Pattern Recognition, 1996, pp. 307—312.
  13. Shotton J., Fitzgibbon A., Cook M., Sharp T., Finocchio M., Moore R., Kipman A., Blake A. Real-Time Human Pose Recognition in Parts from Single Depth Images, In Proc. CVPR, 2011, pp. 1297—1304.
  14. Pugeault N., Bowden R. Spelling It Out: Real-Time ASL Fingerspelling Recognition, In Proceedings of the 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision, jointly with ICCV'2011, 2011, pp. 1114—1119.
  15. Wang R. Y., Popović J. Real-time hand-tracking with a color glove, ACM Trans. Graph., 2009, no. 3 (28), pp. 1—8.
  16. National strategy for the development of the information society in the Republic of Azerbaijan for 2014—2020, available at: http://president.az/articles/11312 (date of access: 11.02.2023).
  17. Projects prepared under the guidance of the Institute's staff took first and second places in the "Scientists of Tomorrow" competition, available at: https://ict.az/az/news/5352 (date of access: 10/15/2022).
  18. Krivonos Yu. G., Krak Yu. V., Barmak A. V., Shkilnyuk D. V. Design and identification of elements of gesture communication, Cybernetics and system analysis, 2013, no. 2(49), pp. 3—14 (in Russian).

