Speech and Multimodal Interfaces Laboratory

Verkholyak Oxana Vladimirovna

Position: Junior researcher
Qualification: PhD student

Publications

2019

Verkholyak O., Fedotov D., Kaya H., Zhang Y., Karpov A. Hierarchical Two-Level Modelling of Emotional States in Spoken Dialog Systems. In Proc. 44th IEEE International Conference on Acoustics, Speech, and Signal Processing ICASSP-2019, Brighton, UK, 2019, pp. 6700-6704.
Fedotov D.V., Verkholyak O.V., Karpov A.A. Contextual continuous recognition of emotions in Russian speech using recurrent neural networks. Proceedings of the 8th Interdisciplinary Seminar “Analysis of Conversational Russian Speech” AR3-2019, St. Petersburg, St. Petersburg State University, 2019, pp. 96-99.
Verkholyak O.V., Kaya H., Karpov A.A. Modeling Short-Term and Long-Term Dependencies of the Speech Signal for Paralinguistic Emotion Classification // SPIIRAS Proceedings, Issue 62, No. 1, 2019, pp. 30-56.
Markitantov M., Verkholyak O. Automatic Recognition of Speaker Age and Gender Based on Deep Neural Networks // Lecture Notes in Computer Science, Springer LNAI 11658, SPECOM 2019, 2019, pp. 327-336.

2018

Verkholyak O. An Analytical Review of Textual Corpora in Russian for Automatic Sentiment Analysis // Proc. Conference "Information Technologies in Control" (ITU-2018), 2-4 October 2018, St. Petersburg, pp. 548-553.
Kaya H., Fedotov D., Yesilkanat A., Verkholyak O., Zhang Y., Karpov A. LSTM based Cross-corpus and Cross-task Acoustic Emotion Recognition. In Proc. 19th International Conference INTERSPEECH-2018, Hyderabad, India, ISCA, 2018, pp. 521-525.

2017

Verkholyak O., Karpov A. Combining Utterance-Level and Frame-Level Feature Representations for Emotion Classification from Speech. In Proc. IEEE International Symposium "Video and Audio Signal Processing in the Context of Neurotechnologies", SPCN-2017, 2017, p. 31.