Speech and Multimodal Interfaces Laboratory

Ryumina Elena Vitalevna
Position: Junior researcher
Qualification: PhD student

Publications

2023

Ryumina E., Ryumin D., Markitantov M., Kaya H., Karpov A. Multimodal Personality Traits Assessment (MuPTA) Corpus: The Impact of Spontaneous and Read Speech // In Proc. of the 24th International Conference INTERSPEECH-2023. 2023. pp. 4049–4053.
Ryumin D., Ryumina E., Ivanko D. EMOLIPS: Towards Reliable Emotional Speech Lip-Reading // Mathematics. 2023. vol. 11(23). no. 4787.
Ryumina E., Markitantov M., Karpov A. Multi-Corpus Learning for Audio–Visual Emotions and Sentiment Recognition // Mathematics. 2023. vol. 11(16). no. 3519.
Ryumin D., Ivanko D., Ryumina E. Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices // Sensors. 2023. vol. 23(4). no. 2284.
Axyonov A.A., Ryumina E.V., Ryumin D.A., Ivanko D.V., Karpov A.A. Neural network-based method for visual recognition of driver’s voice commands using attention mechanism // Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 2023. vol. 23. no. 4. pp. 767–775.
Ryumina E., Karpov A. Impact of Visual Modalities in Multimodal Personality and Affective Computing // The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2023. vol. 48. pp. 217–224.
Ivanko D., Ryumina E., Ryumin D., Axyonov A., Kashevnik A., Karpov A. EMO-AVSR: Two-Level Approach for Audio-Visual Emotional Speech Recognition // In Proc. of the 25th International Conference on Speech and Computer SPECOM-2023. Lecture Notes in Computer Science. LNAI. 2023. vol. 14338. pp. 18–31.