© 2018 IEEE. According to the World Health Organization (WHO), approximately 328 million adults and 32 million children worldwide have hearing loss, and the number of people with such impairment grew from 42 million in 1985 to about 360 million in 2011. However, most multimedia and web content, whether educational or recreational, is not accessible to persons with hearing loss. In this context, this paper presents an interactive system aimed at automatically generating video summaries and synchronizing subtitles for persons with hearing loss. Our proposal relies on an educational platform (MOODLE) and Natural Language Processing (NLP) to provide an environment fully configurable by these persons. The module that generates the video summaries uses techniques such as Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA), whereas the synchronization module is based on forced alignment between audio streams and text. To validate our environment, we tested our approach on 15 videos, obtaining a score of 80% for three criteria related to the summary content: understandability, concordance, and context appropriateness.
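To illustrate the LSA-based summarization the abstract mentions, the following is a minimal sketch of extractive summarization via SVD of a term-sentence matrix. The toy transcript, whitespace tokenizer, and summary length are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def lsa_summary(sentences, k=2):
    """Return the k sentences that score highest along the first
    latent topic of the term-sentence matrix (classic LSA scoring)."""
    # Vocabulary over lowercased, whitespace-tokenized words.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    # Term-frequency matrix: rows = terms, columns = sentences.
    A = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in s.lower().split():
            A[index[w], j] += 1
    # SVD: rows of Vt weight each sentence against a latent topic.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = np.abs(Vt[0])                 # salience along the top topic
    top = sorted(np.argsort(scores)[-k:])  # keep original sentence order
    return [sentences[j] for j in top]

transcript = [
    "The lecture introduces hearing loss statistics worldwide.",
    "Subtitles must be synchronized with the audio stream.",
    "Hearing loss affects millions of adults and children.",
    "The speaker thanks the audience at the end.",
]
print(lsa_summary(transcript, k=2))
```

A production system would instead work on full video transcripts with TF-IDF weighting and more topics, but the core idea (rank sentences by their projection onto dominant latent topics) is the same.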
State: Published - 6 Nov 2018
Conference: Proceedings of the 2018 IEEE 25th International Conference on Electronics, Electrical Engineering and Computing, INTERCON 2018
Duration: 6 Nov 2018 → …