Comparative Analysis of YOLOv8 Segmentation for Gait Performance in Individuals with Lower Limb Disabilities

(1) Resty Wulanningrum (1. Department of Electrical Engineering and Informatics, Faculty of Engineering, Universitas Negeri Malang, Semarang Street 5, Malang, 65145, East Java, Indonesia; 2. Department of Informatics Engineering, University of Nusantara PGRI Kediri, Kampus 2, Mojoroto Gang I, No. 6, Mojoroto, Kediri, East Java, Indonesia)
(2) * Anik Nur Handayani (Universitas Negeri Malang, Indonesia)
(3) Heru Wahyu Herwanto (Universitas Negeri Malang, Indonesia)
*corresponding author

Abstract


This research aims to develop a model for gait pattern segmentation that distinguishes normal individuals from individuals with lower limb disabilities. Walking is the act of moving from one place to another, and individuals with physical limitations of the legs exhibit walking patterns that differ from those of individuals without such limitations. This study classifies gait into three categories: individuals with disabilities who use assistive devices (crutches), individuals with disabilities who do not use assistive devices, and normal individuals. The study involved 10 subjects, consisting of 2 individuals with assistive devices, 3 individuals without assistive devices, and 5 normal individuals. The research was conducted in three main stages: image database creation, data annotation, and model training and segmentation using YOLOv8-seg. The test results show that the YOLOv8L-seg model reached convergence at the 23rd epoch under the fourth scenario when recognizing the walking patterns of the three categories. Research on the walking patterns of people with disabilities nevertheless faces several obstacles, such as the subjects' lack of confidence or emotional discomfort during data collection, which was therefore conducted at locations chosen by the subjects. In addition, YOLOv8-seg showed consistent performance across the five model variants used, achieving a maximum mAP50 of 0.995 for both box and mask.
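As a rough illustration of the workflow summarized above (image database creation, annotation, then training and segmentation with YOLOv8-seg), the following Python sketch uses the Ultralytics YOLOv8 API. It is a minimal sketch only: the dataset file gait.yaml, the class names, the epoch count, and the sample file names are assumptions for illustration, not the authors' actual configuration.

# Minimal sketch, assuming an annotated dataset described by "gait.yaml"
# with three classes (e.g., crutches, no_aid, normal); file names and
# hyperparameters are illustrative placeholders.
from ultralytics import YOLO

# Load a pretrained YOLOv8-Large instance segmentation checkpoint.
model = YOLO("yolov8l-seg.pt")

# Train on the annotated gait images; epochs and image size are placeholders.
model.train(data="gait.yaml", epochs=50, imgsz=640)

# Validate and print box and mask mAP50, the metrics reported in the abstract.
metrics = model.val()
print(f"box mAP50:  {metrics.box.map50:.3f}")
print(f"mask mAP50: {metrics.seg.map50:.3f}")

# Run segmentation inference on a new walking sequence.
results = model.predict("walking_sample.mp4", save=True)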

Keywords


Disabilities; Gait; Pattern Recognition; YOLOv8

   

DOI

https://doi.org/10.31763/ijrcs.v5i1.1731
      

Copyright (c) 2024 Resty Wulanningrum, Anik Nur Handayani, Heru Wahyu Herwanto

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

 

