Speech and Multimodal Interfaces Laboratory

Verkholyak Oxana Vladimirovna

PhD student

Fedotov D.V., Verkholyak O.V., Karpov A.A. Contextual continuous recognition of emotions in Russian speech using recurrent neural networks // Proc. 8th Interdisciplinary Seminar "Analysis of Conversational Russian Speech" (AR3-2019), St. Petersburg, St. Petersburg State University, 2019, pp. 96-99.
Verkholyak O.V., Kaya H., Karpov A.A. Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification // SPIIRAS Proceedings, 2019, Issue 62, No. 1, pp. 30-56.
Markitantov M., Verkholyak O. Automatic Recognition of Speaker Age and Gender Based on Deep Neural Networks // Proc. International Conference on Speech and Computer (SPECOM 2019), Lecture Notes in Computer Science, Springer, LNAI vol. 11658, 2019, pp. 327-336.


Verkholyak O. An analytical review of textual corpora in Russian for automatic sentiment analysis // Proc. Conference "Information Technologies in Control" (ITU-2018), St. Petersburg, 2-4 October 2018, pp. 548-553.
Kaya H., Fedotov D., Yesilkanat A., Verkholyak O., Zhang Y., Karpov A. LSTM based Cross-corpus and Cross-task Acoustic Emotion Recognition // Proc. 19th International Conference INTERSPEECH-2018, Hyderabad, India, ISCA, 2018, pp. 521-525.


Verkholyak O., Karpov A. Combining utterance-level and frame-level feature representations for emotion classification from speech // Proc. IEEE International Symposium "Video and Audio Signal Processing in the Context of Neurotechnologies" (SPCN-2017), 2017, p. 31.
Verkholyak O., Karpov A. Combined feature representation for emotion classification from Russian speech // Proc. 6th International Conference on Artificial Intelligence and Natural Language (AINL-2017), St. Petersburg, Communications in Computer and Information Science, Springer, vol. 789, 2017, pp. 68-73.