Determination of living quarters clutter for caregiver support

Stephen Karungaru* (Computer Science Department, Tokushima University, Japan)
*corresponding author

Abstract


Providing enough health caregivers for an aging population has become increasingly challenging. To alleviate this problem, there is a growing demand to automate certain household monitoring tasks, especially for elderly persons living independently, to reduce the number of scheduled caregiver visits. Moreover, gathering crucial data about functional, cognitive, and social health status using AI technology is essential for monitoring daily physical activities at home. This paper proposes a system that determines a room's cleanliness (degree of clutter) to decide whether a caregiver visit is required. A YOLOv5-based method is applied to recognize objects in the room, including clothes, utensils, and other household items. However, because of background noise interference in the rooms and insufficient feature extraction in YOLOv5, several improvements are proposed to increase detection accuracy. An ECA (Efficient Channel Attention) module is added to the network's backbone to focus on relevant feature information, reducing the missed-detection rate. The initial anchor box clustering is improved by replacing K-means with the K-means++ algorithm, enabling more effective adaptation to changing room views. The EIoU (Enhanced Intersection over Union) regression loss function is introduced to speed up convergence and improve accuracy. Room clutter is then determined using a set of rules that compare the detection results with prior information from a clean room using IoU. An evaluation by nine subjects across 31 rooms was used to demonstrate the effectiveness of the proposed system. Compared to the original YOLOv5 algorithm, the proposed method achieved better performance.
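
The abstract states that clutter is determined by rules that compare YOLOv5 detections against prior information from a clean room using IoU, but the rules themselves are not given here. The following is a minimal Python sketch of one plausible comparison of that kind; the function names, the single IoU threshold, and the clutter-score definition are illustrative assumptions, not the paper's actual procedure.

    def iou(box_a, box_b):
        """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def clutter_score(detected_boxes, clean_room_boxes, iou_threshold=0.5):
        """Fraction of detected objects whose best IoU against the clean-room
        reference boxes falls below the threshold, i.e. objects that appear
        out of place (a hypothetical rule, not the paper's exact one)."""
        if not detected_boxes:
            return 0.0
        out_of_place = sum(
            1 for det in detected_boxes
            if max((iou(det, ref) for ref in clean_room_boxes), default=0.0) < iou_threshold
        )
        return out_of_place / len(detected_boxes)

    # Hypothetical usage: detections would come from the (improved) YOLOv5 model,
    # and the reference boxes from a stored image of the tidy room.
    detections = [(10, 10, 50, 50), (200, 120, 260, 180)]
    clean_reference = [(12, 8, 52, 48)]
    print(f"clutter score: {clutter_score(detections, clean_reference):.2f}")  # 0.50

In this sketch, one of the two detections overlaps a clean-room reference box well (IoU above 0.5) and is treated as in place, while the other has no overlap and counts toward the clutter score; the actual decision rules in the paper may combine such scores with object classes or counts.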

Keywords


Room Clutter; YOLOv5; Caregiver support; Elderly living

   

DOI

https://doi.org/10.31763/sitech.v5i1.1459
      





Copyright (c) 2024 Stephen Karungaru

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
Science in Information Technology Letters
ISSN 2722-4139
Published by Association for Scientific Computing Electrical and Engineering (ASCEE)
W : http://pubs2.ascee.org/index.php/sitech
E : sitech@ascee.org, andri@ascee.org, andri.pranolo.id@ieee.org

