Speech and Multimodal Interfaces Laboratory

Markitantov Maxim Viktorovich

Position
Senior researcher
Qualification
PhD

Publications

2024

Dresvyanskiy D., Markitantov M., Yu J., Kaya H., Karpov A. Multi-modal Arousal and Valence Estimation under Noisy Conditions // IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2024. pp. 4773–4783.
Ryumina E., Markitantov M., Ryumin D., Kaya H., Karpov A. Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion // IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2024. pp. 4752–4760.
Ryumina E., Markitantov M., Ryumin D., Karpov A. OCEAN-AI framework with EmoFormer cross-hemiface attention approach for personality traits assessment // Expert Systems with Applications. 2024. vol. 239. ID 122441.
Ryumina E., Markitantov M., Ryumin D., Karpov A. Gated Siamese Fusion Network based on multimodal deep and hand-crafted features for personality traits assessment // Pattern Recognition Letters. 2024. vol. 185. pp. 45–51.
Ivanko D., Ryumin D., Markitantov M. End-to-End Visual Speech Recognition for Human-Robot Interaction // In Proc. of the AIP Conference. 2024. vol. 3021. pp. 82–90.

2023

Ryumina E., Ryumin D., Markitantov M., Kaya H., Karpov A. Multimodal Personality Traits Assessment (MuPTA) Corpus: The Impact of Spontaneous and Read Speech // In Proc. of the 24th International Conference INTERSPEECH-2023. 2023. pp. 4049–4053.
Ryumina E., Markitantov M., Karpov A. Multi-Corpus Learning for Audio–Visual Emotions and Sentiment Recognition // Mathematics. 2023. vol. 11(16). ID 3519.