University of Oulu

Siirtola, P., Röning, J. Context-aware incremental learning-based method for personalized human activity recognition. J Ambient Intell Human Comput (2021). https://doi.org/10.1007/s12652-020-02808-z

Context-aware incremental learning-based method for personalized human activity recognition

Author: Siirtola, Pekka1; Röning, Juha1
Organizations: 1Biomimetics and Intelligent Systems Group, University of Oulu, P.O. Box 4500, 90014 Oulu, Finland
Format: article
Version: published version
Access: open
Online Access: PDF Full Text (PDF, 1.2 MB)
Persistent link: http://urn.fi/urn:nbn:fi-fe2021060232713
Language: English
Published: Springer Nature, 2021
Publish Date: 2021-06-02
Description:

Abstract

This study introduces an ensemble-based personalized human activity recognition method relying on incremental learning, a technique for continuous learning that can not only learn from streaming data but also adapt to different contexts and to changes in context. This adaptation is based on a novel weighting approach that gives a larger weight to those base models of the ensemble that are most suitable to the current context. In this article, contexts are different body positions of the inertial sensors. The experiments are performed in two scenarios: (S1) adapting the model to a known context and (S2) adapting the model to a previously unknown context. In both scenarios, the models also had to adapt to the data of a previously unknown person, as the initial user-independent dataset did not include any data from the studied user. In the experiments, the proposed ensemble-based approach is compared to a non-weighted personalization method relying on an ensemble-based classifier and to a static user-independent model. Both ensemble models are evaluated using three different base classifiers (linear discriminant analysis, quadratic discriminant analysis, and classification and regression tree). The results show that the proposed ensemble method performs much better than the non-weighted ensemble model for personalization in both scenarios, no matter which base classifier is used. Moreover, the proposed method outperforms user-independent models. In scenario 1, the balanced-accuracy error rate was 13.3% for the user-independent model, 13.8% for the non-weighted personalization method, and 6.4% for the proposed method. The difference is even larger in scenario 2, where the error rates are 36.6%, 36.9%, and 14.1%, respectively. In addition, F1 scores also show that the proposed method performs much better than the rival methods in both scenarios. Moreover, as a side result, it was noted that the presented method can also be used to recognize the body position of the sensor.
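The abstract describes weighting the ensemble's base models according to their suitability to the current context (sensor body position). The sketch below only illustrates that general idea in Python with scikit-learn; it is not the method from the article. The class name, the probability-averaging rule, and the way the weights would be obtained are all assumptions made here for illustration.

```python
# Illustrative sketch (not the authors' algorithm): a context-weighted ensemble
# where each base model has a weight reflecting how well it fits the current
# context, e.g. a weight proportional to its accuracy on recently labeled data
# from that context.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis  # example base classifier


class ContextWeightedEnsemble:
    """Combine base models by weighted averaging of their class probabilities.

    Assumes every base model was trained on the same set of class labels,
    so their predict_proba outputs share the same column order.
    """

    def __init__(self, base_models, weights):
        self.base_models = list(base_models)
        self.weights = np.asarray(weights, dtype=float)
        self.weights /= self.weights.sum()  # normalize so the weights sum to 1

    def predict(self, X):
        classes = self.base_models[0].classes_
        # Weighted sum of class-probability estimates; larger weights mean a
        # better match to the current context.
        proba = sum(w * m.predict_proba(X)
                    for m, w in zip(self.base_models, self.weights))
        return classes[np.argmax(proba, axis=1)]
```

In an incremental-learning setting, new base models could be trained on consecutive chunks of streaming data and added to `base_models`, while the weights are re-estimated whenever the context appears to change; the uniform-weight special case corresponds to the non-weighted personalization baseline mentioned above.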


Series: Journal of ambient intelligence and humanized computing
ISSN: 1868-5137
ISSN-E: 1868-5145
ISSN-L: 1868-5137
Volume: Online First
Issue: Online First
Pages: 1 - 15
DOI: 10.1007/s12652-020-02808-z
OADOI: https://oadoi.org/10.1007/s12652-020-02808-z
Type of Publication: A1 Journal article – refereed
Field of Science: 213 Electronic, automation and communications engineering, electronics
Subjects:
Funding: This research is supported by the Business Finland funding for Reboot IoT Factory-project (http://www.rebootiotfactory.fi). Authors are also thankful for Infotech Oulu.
Copyright information: © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.