DYNAMIC ADAPTATION OF RECOGNITION ALGORITHMS ON WEARABLES WITH MINIMAL HUMAN SUPERVISION
Rokni Dezfooli, Seyed Ali
Wearable sensors utilize machine learning algorithms to infer important events, such as the behavioral routines and health status of their end-users, from time-series sensor data. A major obstacle to the large-scale utilization of these systems is that their machine learning algorithms are designed based on labeled training data collected, and feature representations engineered, in controlled environments. As a result, the algorithms need to be retrained from scratch in new contexts, such as when the on-body location of the wearable sensor changes or when the system is used by a new user. This approach has limited the scalability of wearable technologies because (i) the retraining process places a significant burden on end-users and system designers to collect and label large amounts of training sensor data, and (ii) wearables are deployed in the highly dynamic environments of end-users, whose context undergoes continual change. These changes result in a sudden decline in the performance of recognition models trained in controlled settings. To address this challenge, this thesis proposes solutions that dynamically adapt trained recognition models to new contexts with no or minimal human supervision. First, Share-n-Learn is introduced to automatically detect and learn physical sensor contexts from a repository of shared expert models and activate the most accurate one for the current context without interacting with the user. In the absence of shared models, the thesis provides solutions to enable a newly added sensor by autonomously training its machine learning algorithms in real time with no human supervision. While one solution is dedicated to target sensors with a static context, another method is presented for sensors with a dynamically changing context.
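Share-n-Learn's model-activation step can be sketched as follows. This is an illustrative assumption, not the thesis's exact algorithm: the gating criterion shown here (mean prediction confidence over a buffer of recent unlabeled windows) and all names such as `ShareNLearnGate` are hypothetical stand-ins for how a repository of shared expert models might be screened without user interaction.

```python
import numpy as np


class ShareNLearnGate:
    """Hypothetical sketch of expert-model activation: given a
    repository of models trained for different sensor contexts,
    pick the one that is most confident on recent unlabeled
    windows, without querying the user for labels."""

    def __init__(self, experts):
        # experts: dict mapping context name -> fitted classifier
        # exposing predict_proba (scikit-learn-style interface)
        self.experts = experts

    def activate(self, windows):
        # windows: (n_windows, n_features) array of recent sensor data
        best_name, best_score = None, -np.inf
        for name, model in self.experts.items():
            proba = model.predict_proba(windows)      # class probabilities
            score = np.mean(np.max(proba, axis=1))    # mean top-class confidence
            if score > best_score:
                best_name, best_score = name, score
        # return the context label and the activated expert model
        return best_name, self.experts[best_name]
```

Confidence-based gating is only one plausible proxy for context match; the thesis's detector may use a dedicated context-recognition model instead.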
These solutions measure the inherent correlation between observations made by an existing sensor view, for which trained algorithms exist, and the new sensor view, for which an algorithm needs to be developed. Additionally, to learn features that are reusable across contexts, TransNet, a deep learning framework, is introduced that learns efficient representations from raw sensor data and quickly reconfigures the underlying model in new domains with minimal supervision. Finally, structured prediction algorithms are utilized to transfer high-level knowledge among related contexts.
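The autonomous-training idea above, in which an existing sensor view labels a newly added one, can be sketched under simplifying assumptions: the two views produce time-aligned windows, the source model exposes a scikit-learn-style `predict_proba`, and low-confidence pseudo-labels are dropped via a hypothetical threshold `min_conf` to limit label noise. The function name and threshold are illustrative, not from the thesis.

```python
import numpy as np


def autolabel_new_sensor(source_windows, target_windows, source_model,
                         min_conf=0.8):
    """Use a trained source-view model to pseudo-label time-aligned
    windows from a new target sensor, keeping only confident labels.
    Returns a (noisy) labeled set on which a target-side recognition
    model can then be trained with no human supervision."""
    proba = source_model.predict_proba(source_windows)
    labels = np.argmax(proba, axis=1)   # predicted activity per window
    conf = np.max(proba, axis=1)        # confidence of each prediction
    keep = conf >= min_conf             # filter out uncertain windows
    return target_windows[keep], labels[keep]
```

A target model fitted on the returned pairs inherits the source view's knowledge; the correlation between simultaneous observations is what makes the transferred labels meaningful.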