Speech and Multimodal Interfaces Laboratory

Verkholyak Oxana Vladimirovna

Soğancıoğlu G., Verkholyak O., Kaya H., Fedotov D., Cadée T., Salah A., Karpov A. Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition // Proceedings of INTERSPEECH-2020, ISCA, 2020. pp. 2097-2101.
Kaya H., Verkholyak O., Markitantov M., Karpov A. Combining Clustering and Functionals based Acoustic Feature Representations for Classification of Baby Sounds // Companion Publication of the 2020 International Conference on Multimodal Interaction ICMI'20 Companion, WoCBU'20 Workshop Proceedings, ACM, 2020. pp. 509-513.
Dvoynikova A., Verkholyak O., Karpov A. Emotion Recognition and Sentiment Analysis of Extemporaneous Speech Transcriptions in Russian // Lecture Notes in Computer Science, Springer LNAI 12335, SPECOM 2020. 2020. pp. 136-144.
Dvoynikova A., Verkholyak O., Karpov A. Analytical review of methods for identifying emotions in text data // CEUR Workshop Proceedings, 3rd International Conference on R. Piotrowski's Readings in Language Engineering and Applied Linguistics PRLEAL-2019, vol. 2552, 2020, pp. 8-21.
Dvoynikova A., Verkholyak O., Karpov A. Sentiment analysis of spoken language using a method based on tonal dictionaries // Almanac of Scientific Works of Young Scientists of ITMO University. 2020. vol. 3. pp. 75-80.
Verkholyak O., Karpov A. Chapter 4 "Automatic Analysis of Emotionally Colored Speech" in the monograph "Child Speech Portrait with Typical and Atypical Development" / Lyakso E., Frolova O., Grechaniy S., Matveev Yu., Verkholyak O., Karpov A. St. Petersburg: Publishing and Printing Association of Higher Educational Institutions, 2020. 204 p. ISBN 978-5-91155-096-7.


Verkholyak O., Fedotov D., Kaya H., Zhang Y., Karpov A. Hierarchical Two-Level Modelling of Emotional States in Spoken Dialog Systems // Proceedings of the 44th IEEE International Conference on Acoustics, Speech, and Signal Processing ICASSP-2019, Brighton, UK, 2019. pp. 6700-6704.