Lik Hang Lee, Ngo Yan Yeung, Tristan Braud, Tong Li, Xiang Su, and Pan Hui. 2020. Force9: Force-assisted Miniature Keyboard on Smart Wearables. In Proceedings of the 2020 International Conference on Multimodal Interaction (ICMI '20). Association for Computing Machinery, New York, NY, USA, 232–241. DOI:https://doi.org/10.1145/3382507.3418827
Force9: Force-assisted miniature keyboard on smart wearables
|Author:||Lee, Lik-Hang1,2; Yeung, Ngo-Yan1; Braud, Tristan1; Li, Tong; Su, Xiang; Hui, Pan|
1Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
2Center for Ubiquitous Computing, The University of Oulu, Finland
3Department of Computer Science, The University of Helsinki, Finland
|Online Access:||PDF Full Text (PDF, 4.9 MB)|
|Persistent link:||http://urn.fi/urn:nbn:fi-fe202102255952|
|Publisher:||Association for Computing Machinery|
|Publish Date:||2021-02-25|
Smartwatches and other wearables are characterized by small-scale touchscreens that complicate the interaction with content. In this paper, we present Force9, the first optimized miniature keyboard leveraging force-sensitive touchscreens on wrist-worn computers. Force9 enables character selection in an ambiguous layout by analyzing the trade-off between interaction space and the ease of force-assisted interaction. We argue that dividing the screen’s pressure range into three contiguous force levels is sufficient to differentiate characters for fast and accurate text input. Our pilot study captures and calibrates the ability of users to perform force-assisted touches on miniature-sized keys on touchscreen devices. We then optimize the keyboard layout considering the goodness of character pairs (with regard to the selected English corpus) under the force-based configuration and the users’ familiarity with the QWERTY layout. We finally evaluate the performance of the trimetric optimized Force9 layout, and achieve an average of 10.18 WPM by the end of the final session. Compared to other state-of-the-art approaches, Force9 allows for single-gesture character selection without additional sensors.
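The core idea described in the abstract, quantizing touch pressure into three contiguous force levels so that a single press disambiguates among the characters sharing one miniature key, can be sketched as follows. This is an illustrative sketch only: the thresholds and the two-key layout below are assumptions for demonstration, not values from the paper.

```python
# Illustrative sketch of force-assisted character selection:
# each ambiguous key hosts up to three characters, and the
# normalized touch pressure, quantized into three levels,
# picks one of them. Thresholds and layout are hypothetical.

FORCE_THRESHOLDS = (0.33, 0.66)  # assumed cut points in [0, 1]

# Hypothetical ambiguous keys, one character per force level.
KEYMAP = {
    "key1": ("a", "b", "c"),
    "key2": ("d", "e", "f"),
}

def force_level(pressure: float) -> int:
    """Map a normalized pressure reading in [0, 1] to level 0, 1, or 2."""
    low, high = FORCE_THRESHOLDS
    if pressure < low:
        return 0
    if pressure < high:
        return 1
    return 2

def select_character(key: str, pressure: float) -> str:
    """Resolve an ambiguous key press into a single character."""
    return KEYMAP[key][force_level(pressure)]

print(select_character("key1", 0.2))  # light touch -> a
print(select_character("key2", 0.9))  # hard press  -> f
```

In the actual system, the per-level thresholds would be calibrated per user (as in the paper's pilot study) rather than fixed constants, and the character-to-level assignment would come from the layout optimization over the English corpus.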
|Pages:||232 - 241|
ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction, Virtual Event, Netherlands, October 2020
International Conference on Multimodal Interaction
|Type of Publication:||A4 Article in conference proceedings|
|Field of Science:||113 Computer and information sciences|
The authors would like to acknowledge the 5G-VIIMA and REBOOT Finland IoT Factory projects funded by Business Finland, the 5GEAR project and the 6G Flagship project funded by the Academy of Finland (Decision No. 318927), and project 16214817 from the Research Grants Council of Hong Kong. We would also like to thank the reviewers for their generous suggestions.
|Academy of Finland Grant Number:||318927 (Academy of Finland funding decision)|
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction, virtual event Netherlands, October 2020, https://doi.org/10.1145/3382507.3418827.