MCU-based isolated appealing words detecting method with AI techniques |
|
Author: | Ye, Liang1,2,3; Li, Yue3,4; Dong, Wenjing1; |
Organizations: |
1Department of Information and Communication Engineering, Harbin Institute of Technology, Harbin 150080, China
2Health and Wellness Measurement Research Group, OPEM Unit, University of Oulu, 90014 Oulu, Finland
3Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security, Harbin 150080, China
4Electrical Engineering School, Heilongjiang University, Harbin 150080, China
5Physiological Signal Analysis Team, University of Oulu, 90014 Oulu, Finland |
Format: | article |
Version: | accepted version |
Access: | open |
Persistent link: | http://urn.fi/urn:nbn:fi-fe2020042822804 |
Language: | English |
Published: | Springer Nature, 2019 |
Publish Date: | 2020-07-05 |
Description: |
Abstract
Campus bullying has attracted increasing attention in recent years. Analysis of typical campus bullying events shows that victims often cry "help" or use other appealing or begging words; therefore, by applying artificial-intelligence-based speech recognition, bullying events can be detected in time and measures taken to avoid further harm. The main purpose of this study is to help guardians discover campus bullying promptly through real-time monitoring of bullying-related keywords, so that countermeasures can be taken immediately to minimize the harm. Based on a Sunplus MCU and speech recognition technology, using MFCC acoustic features and an efficient DTW classifier, we realized the detection of common campus-bullying vocabulary for a specific speaker's voice. After repeated experiments, and by combining the voice-signal-processing functions of the Sunplus MCU, the recognition procedure for specific isolated words was completed. On the basis of speaker-specific isolated word detection, we achieved an average recognition accuracy of 99% for appealing words spoken by the dedicated speaker, while the misrecognition rate for other words and other speakers was very low.
|
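The abstract describes matching MFCC feature sequences of spoken keywords against stored templates with a dynamic time warping (DTW) classifier. A minimal sketch of the DTW distance at the core of such a matcher is shown below (the function name and toy feature vectors are illustrative; the paper's MCU implementation and exact cost function are not specified here):

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two feature sequences.

    a, b: arrays of shape (T, D) -- T frames of D-dimensional
    features (e.g. per-frame MFCC vectors).
    """
    n, m = len(a), len(b)
    # Accumulated-cost matrix with an extra border row/column of inf.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
            # Extend the cheapest of the three allowed warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: a time-stretched copy of a keyword template scores
# closer than an unrelated sequence, which is what makes DTW robust
# to speaking-rate variation.
template = np.array([[0.0], [1.0], [2.0], [3.0]])
stretched = np.array([[0.0], [0.0], [1.0], [2.0], [2.0], [3.0]])
other = np.array([[5.0], [5.0], [5.0], [5.0]])
assert dtw_distance(template, stretched) < dtw_distance(template, other)
```

In a keyword spotter, an incoming utterance's MFCC sequence would be compared against every stored keyword template and accepted when the minimum DTW distance falls below a threshold.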
Series: |
Lecture notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering |
ISSN: | 1867-8211 |
ISSN-E: | 1867-822X |
ISSN-L: | 1867-8211 |
ISBN: | 978-3-030-22971-9 |
ISBN Print: | 978-3-030-22970-2 |
Pages: | 300 - 308 |
DOI: | 10.1007/978-3-030-22971-9_26 |
OADOI: | https://oadoi.org/10.1007/978-3-030-22971-9_26 |
Host publication: |
Artificial intelligence for communications and networks : First EAI International Conference, AICON 2019, Harbin, China, May 25–26, 2019 Proceedings, Part II |
Host publication editor: |
Han, Shuai; Ye, Liang; Meng, Weixiao |
Conference: |
Artificial Intelligence for Communications and Networks : EAI International Conference |
Type of Publication: |
A4 Article in conference proceedings |
Field of Science: |
113 Computer and information sciences |
Funding: |
This work was supported by the National Natural Science Foundation of China under Grant No. 61602127, the Basic scientific research project of Heilongjiang Province under Grant No. KJCXZD201704, and the Key Laboratory of Police Wireless Digital Communication, Ministry of Public Security under Grant No. 2018JYWXTX01. The authors would like to thank those people who have helped with these experiments. |
Copyright information: |
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019. This is a post-peer-review, pre-copyedit version of an article published in Artificial intelligence for communications and networks : First EAI International Conference, AICON 2019, Harbin, China, May 25–26, 2019 Proceedings, Part II. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-22971-9_26. |