Enhanced Human Hitting Movement Recognition Using Motion History Image and Approximated Ellipse Techniques

(1) * I Gede Susrama Mas Diyasa (Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia)
(2) Made Hanindia P (Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia)
(3) Mohd Zamri (University Malaysia Pahang Al-Sultan Abdullah, Malaysia)
(4) Agussalim Agussalim (Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia)
(5) Sayyidah Humairah (Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia)
(6) Denisa Septalian A (Universitas Pembangunan Nasional “Veteran” Jawa Timur, Indonesia)
(7) Faikul Umam (Universitas Trunojoyo Madura, Indonesia)
*corresponding author

Abstract


Recognition of human hitting movements in specific sporting contexts such as boxing remains a difficult task, because existing systems rely on manual observation, which is error-prone and often inaccurate. This study presents an automated system that detects a specific hitting movement, commonly known as a punch, from video input using image processing techniques. The system employs Motion History Image (MHI) to model motion trajectories and combines it with other parameters to reconstruct movements that have a temporal component. CCTV cameras positioned at different viewpoints (front, back, left, and right) enable the system to identify several types of punches, including Jab, Hook, Uppercut, and Combination punches. The key contribution of this work is the combination of MHI with the approximated ellipse method, whose integration is computationally faster than more sophisticated systems that require considerable computation time. The system classifies the C_motion, Sigma Theta, and Sigma Rho parameters to distinguish hitting from non-hitting movements. Evaluation on a dataset captured from multiple viewpoints shows that the system performs well, achieving 93 percent accuracy in detecting both hitting and non-hitting motion. These results demonstrate the system’s advantage over manual observation-based detection methods. This study paves the way for real-time applications such as sports analysis, security surveillance, and healthcare, which require greater efficiency and accuracy in human movement assessment. Future work may focus on improving the recognition of slower movements and adapting the system to more dynamic conditions.
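
As a concrete illustration of the pipeline the abstract describes, the sketch below builds a Motion History Image by time-stamping pixels that changed between frames and letting stale stamps decay, then approximates the active motion region with an ellipse derived from the covariance of its pixel coordinates, from which orientation and spread measures in the spirit of Sigma Theta and Sigma Rho can be read off. This is a minimal Python/NumPy sketch assembled from the abstract alone, not the authors' implementation; the function names, difference threshold, and decay window are illustrative assumptions.

    import numpy as np

    def update_mhi(mhi, prev_gray, curr_gray, timestamp, duration=0.5, thresh=30):
        # mhi: float array (H x W), initialized to zeros, holding per-pixel
        # timestamps of the most recent motion; duration is the decay window (s).
        diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
        mhi[diff > thresh] = timestamp          # stamp freshly moved pixels
        mhi[mhi < timestamp - duration] = 0.0   # forget motion older than the window
        return mhi

    def ellipse_parameters(mhi):
        # Fit an ellipse to the active motion region via the covariance of
        # its pixel coordinates (the "approximated ellipse" idea).
        ys, xs = np.nonzero(mhi)
        if xs.size < 2:
            return None                         # not enough motion to fit
        cov = np.cov(np.stack((xs, ys)))        # 2 x 2 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negatives
        theta = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis orientation
        return theta, np.sqrt(eigvals[1]), np.sqrt(eigvals[0])

In a full system, update_mhi would run once per CCTV frame, and the trajectory of the returned angle and spreads, together with an overall motion-quantity measure playing the role of C_motion, would feed the hitting versus non-hitting classifier.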


Keywords


Human Hitting Movement; Recognition; Motion History Image; Motion Detection; Video Analysis; Approximated Ellipse

   

DOI

https://doi.org/10.31763/ijrcs.v5i1.1599
      





Copyright (c) 2024 I Gede Susrama Mas Diyasa, Made Hanindia P, Mahardhika Pratama, Agus salim, Faikul Umam

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 



International Journal of Robotics and Control Systems
e-ISSN: 2775-2658
Website: https://pubs2.ascee.org/index.php/IJRCS
Email: ijrcs@ascee.org
Organized by: Association for Scientific Computing Electronics and Engineering (ASCEE); Peneliti Teknologi Teknik Indonesia; Department of Electrical Engineering, Universitas Ahmad Dahlan; and Kuliah Teknik Elektro
Published by: Association for Scientific Computing Electronics and Engineering (ASCEE)
Office: Jalan Janti, Karangjambe 130B, Banguntapan, Bantul, Daerah Istimewa Yogyakarta, Indonesia